1. 17 Aug, 2007 1 commit
    • [MATH-EMU]: Fix underflow exception reporting. · 40584961
      David S. Miller authored
      The underflow exception cases were wrong.
      
      This is one weird area of IEEE 754 handling in that the underflow
      behavior changes based upon whether underflow is enabled in the
      trap-enable mask of the FPU control register.  As a specific case,
      the Sparc V9 manual gives us the following description:
      
      --------------------
      If UFM = 0:     Underflow occurs if a nonzero result is tiny and a
                      loss of accuracy occurs.  Tininess may be detected
                      before or after rounding.  Loss of accuracy may be
                      either a denormalization loss or an inexact result.
      
      If UFM = 1:     Underflow occurs if a nonzero result is tiny.
                      Tininess may be detected before or after rounding.
      --------------------
      
      What this amounts to in the packing case is that, if we go
      subnormal, we set underflow if any of the following are true (see
      the sketch after this list):

      1) rounding sets inexact
      2) we ended up rounding back up to normal (this is the case where
         we set the exponent to 1 and the fraction to zero); this should
         set inexact too
      3) underflow is set in the trap-enable mask of the FPU control
         register
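
      As a minimal sketch of that decision (every name below, e.g.
      pack_subnormal_flags() and ufm_enabled, is a hypothetical stand-in,
      not the actual math-emu packing code):

        struct fp_flags { int inexact; int underflow; };

        /* Decide the inexact/underflow flags when packing a result whose
         * intermediate value went subnormal, per the three cases above. */
        static struct fp_flags pack_subnormal_flags(int went_subnormal,
                                                    int rounding_was_inexact,
                                                    int rounded_up_to_normal,
                                                    int ufm_enabled)
        {
                struct fp_flags f = { rounding_was_inexact, 0 };

                if (went_subnormal) {
                        if (rounded_up_to_normal)
                                f.inexact = 1;      /* case 2 sets inexact too */
                        if (f.inexact || ufm_enabled)
                                f.underflow = 1;    /* cases 1-3 */
                }
                return f;
        }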
      
      The initially discovered example was "DBL_MIN / 16.0", which
      incorrectly generated an underflow.  It should not, unless underflow
      is set in the trap-enable mask of the FPU control register.
      
      Another example, "0x0.0000000000001p-1022 / 16.0", should signal
      both inexact and underflow.  The CPU implementations and the
      IEEE 754 literature are very clear about this.  This is case #2
      above.
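
      A small user-space check of those two cases (a sketch only, not the
      original test case; it assumes a C99 toolchain with <fenv.h>, is
      linked with -lm, and is built without optimization so the divisions
      actually happen at run time):

        #include <fenv.h>
        #include <float.h>
        #include <stdio.h>

        /* Divide by 16.0 and report the accrued IEEE exception flags. */
        static void check(const char *name, double x)
        {
                volatile double r;

                feclearexcept(FE_ALL_EXCEPT);
                r = x / 16.0;
                printf("%s / 16.0 = %a  inexact=%d underflow=%d\n", name, r,
                       !!fetestexcept(FE_INEXACT), !!fetestexcept(FE_UNDERFLOW));
        }

        int main(void)
        {
                /* Exact subnormal result: expect no inexact, no underflow. */
                check("DBL_MIN", DBL_MIN);
                /* Smallest subnormal: expect both inexact and underflow. */
                check("0x0.0000000000001p-1022", 0x0.0000000000001p-1022);
                return 0;
        }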
      
      However, if underflow is set in the trap-enable mask, only underflow
      should be set and reported as a trap.  That is handled properly by the
      prioritization logic in
      
      arch/sparc{,64}/math-emu/math.c:record_exception().
      
      Based upon a report and test case from Jakub Jelinek.
      Signed-off-by: David S. Miller <davem@davemloft.net>
  2. 16 Aug, 2007 5 commits
  3. 15 Aug, 2007 2 commits
  4. 10 Aug, 2007 4 commits
    • Robert Reif
    • SLUB: Fix format specifier in Documentation/vm/slabinfo.c · ac078602
      Jesper Juhl authored
      There is a small problem in Documentation/vm/slabinfo.c: the code
      uses "%d" in a printf() call to print an 'unsigned long'.  This
      patch corrects it to use "%lu" instead.
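
      As a minimal illustration of the class of bug being fixed (this is
      not the actual slabinfo.c code):

        #include <stdio.h>

        int main(void)
        {
                unsigned long objects = 3000000000UL;  /* does not fit in int */

                /* printf("%d\n", objects);  mismatched: "%d" expects an int */
                printf("%lu\n", objects);    /* "%lu" matches unsigned long  */
                return 0;
        }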
      Signed-off-by: Jesper Juhl <jesper.juhl@gmail.com>
      Signed-off-by: Christoph Lameter <clameter@sgi.com>
    • SLUB: Fix dynamic dma kmalloc cache creation · 1ceef402
      Christoph Lameter authored
      Dynamic DMA kmalloc cache creation can run into trouble if a
      GFP_ATOMIC allocation is the first one performed for a certain size
      of DMA kmalloc slab.
      
      - Move the adding of the slab to sysfs into a workqueue
        (sysfs does GFP_KERNEL allocations)
      - Do not call kmem_cache_destroy() (uses slub_lock)
      - Only acquire the slub_lock once and, if we cannot wait, do a
        trylock (see the sketch after these notes).
      
        This introduces a slight risk of the first kmalloc(x, GFP_DMA|GFP_ATOMIC)
        for a range of sizes failing due to another process holding the slub_lock.
        However, we only need to acquire the spinlock once in order to establish
        each power of two DMA kmalloc cache. The possible conflict is with the
        slub_lock taken during slab management actions (create / remove slab cache).
      
        It is rather typical that a driver will first fill its buffers
        using GFP_KERNEL allocations, which will wait until the slub_lock
        can be acquired.  Drivers will also create their slab caches
        first, outside of an atomic context, before starting to use
        atomic kmallocs from an interrupt context.
      
        If there are any failures, they will occur early after boot or
        when multiple drivers are loaded concurrently.  Drivers can
        already accommodate GFP_ATOMIC failures for other reasons;
        retries will then create the slab.
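
        A hedged sketch of that pattern follows; dma_kmalloc_lock and
        create_dma_kmalloc_cache() are hypothetical stand-ins (the real
        lock is slub_lock in mm/slub.c, and a mutex is used here purely
        for illustration):

          #include <linux/gfp.h>
          #include <linux/mutex.h>
          #include <linux/slab.h>

          static DEFINE_MUTEX(dma_kmalloc_lock);   /* stand-in for slub_lock */

          /* Hypothetical helper that sets up the power-of-two DMA cache. */
          struct kmem_cache *create_dma_kmalloc_cache(size_t size);

          struct kmem_cache *dma_kmalloc_cache(size_t size, gfp_t flags)
          {
                  struct kmem_cache *s;

                  if (flags & __GFP_WAIT) {
                          /* GFP_KERNEL callers may simply sleep for the lock. */
                          mutex_lock(&dma_kmalloc_lock);
                  } else if (!mutex_trylock(&dma_kmalloc_lock)) {
                          /*
                           * GFP_ATOMIC caller while the lock is held elsewhere:
                           * fail this allocation; a retry will find (or create)
                           * the cache once the lock is free.
                           */
                          return NULL;
                  }

                  s = create_dma_kmalloc_cache(size);
                  mutex_unlock(&dma_kmalloc_lock);
                  return s;
          }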
      Signed-off-by: Christoph Lameter <clameter@sgi.com>
    • SLUB: Remove checks for MAX_PARTIAL from kmem_cache_shrink · fcda3d89
      Christoph Lameter authored
      The MAX_PARTIAL checks were supposed to be an optimization.  However,
      slab shrinking is a manually triggered process, either by running
      slabinfo or by the kernel calling kmem_cache_shrink.
      
      If one really wants to shrink a slab then all operations should be done
      regardless of the size of the partial list. This also fixes an issue that
      could surface if the number of partial slabs was initially above
      MAX_PARTIAL in kmem_cache_shrink and later dropped below MAX_PARTIAL
      through the
      elimination of empty slabs on the partial list (rare). In that case a few
      slabs may be left off the partial list (and only be put back when they
      are empty).
      Signed-off-by: Christoph Lameter <clameter@sgi.com>
  5. 09 Aug, 2007 28 commits