    /*
     * SLUB: A slab allocator that limits cache line use instead of queuing
     * objects in per cpu and per node lists.
     *
     * The allocator synchronizes using per slab locks and only
     * uses a centralized lock to manage a pool of partial slabs.
     *
     * (C) 2007 SGI, Christoph Lameter <clameter@sgi.com>
     */
    
    #include <linux/mm.h>
    #include <linux/module.h>
    #include <linux/bit_spinlock.h>
    #include <linux/interrupt.h>
    #include <linux/bitops.h>
    #include <linux/slab.h>
    #include <linux/seq_file.h>
    #include <linux/cpu.h>
    #include <linux/cpuset.h>
    #include <linux/mempolicy.h>
    #include <linux/ctype.h>
    #include <linux/kallsyms.h>
    #include <linux/memory.h>
    
    /*
     * Lock order:
     *   1. slab_lock(page)
     *   2. slab->list_lock
     *
     *   The slab_lock protects operations on the objects of a particular
     *   slab and its metadata in the page struct. If the slab lock
     *   has been taken then no allocations nor frees can be performed
     *   on the objects in the slab nor can the slab be added or removed
     *   from the partial or full lists since this would mean modifying
     *   the page struct of the slab.
     *
     *   The list_lock protects the partial and full list on each node and
     *   the partial slab counter. If taken then no new slabs may be added to or
     *   removed from the lists, nor may the number of partial slabs be modified.
     *   (Note that the total number of slabs is an atomic value that may be
     *   modified without taking the list lock).
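     *
     *   For illustration, the per-slab lock is realized as a bit spinlock on
     *   the page's flags word. A minimal sketch follows; the authoritative
     *   definitions of slab_lock()/slab_unlock() appear later in this file:
     *
     *	static inline void slab_lock(struct page *page)
     *	{
     *		bit_spin_lock(PG_locked, &page->flags);
     *	}
     *
     *	static inline void slab_unlock(struct page *page)
     *	{
     *		bit_spin_unlock(PG_locked, &page->flags);
     *	}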
     *
     *   The list_lock is a centralized lock and thus we avoid taking it as
     *   much as possible. As long as SLUB does not have to handle partial
     *   slabs, operations can continue without any centralized lock. F.e.
     *   slabs, operations can continue without any centralized lock. E.g.
     *   allocating a long series of objects that fill up slabs does not require
     *   the list lock.
     *
     *   The lock order is sometimes inverted when we are trying to get a slab
     *   off a list. We take the list_lock and then look for a page on the list
     *   to use. While we do that objects in the slabs may be freed. We can
     *   only operate on the slab if we have also taken the slab_lock. So we use
     *   a slab_trylock() on the slab. If trylock was successful then no frees
     *   can occur anymore and we can use the slab for allocations etc. If the
     *   slab_trylock() does not succeed then frees are in progress in the slab and
     *   we must stay away from it for a while since we may cause a bouncing
     *   cacheline if we try to acquire the lock. So go onto the next slab.
     *   If all pages are busy then we may allocate a new slab instead of reusing
     *   a partial slab. A new slab has no one operating on it and thus there is
     *   no danger of cacheline contention.
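     *
     *   A minimal sketch of this inverted-order walk (simplified from the
     *   partial list handling later in this file; the freeze step and the
     *   not-found case are omitted):
     *
     *	spin_lock(&n->list_lock);
     *	list_for_each_entry(page, &n->partial, lru) {
     *		if (slab_trylock(page)) {
     *			/* No frees can race with us now. */
     *			list_del(&page->lru);
     *			n->nr_partial--;
     *			break;	/* use this slab, still locked */
     *		}
     *		/* Busy slab: skip it rather than bounce its cacheline. */
     *	}
     *	spin_unlock(&n->list_lock);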
     *
     *   Interrupts are disabled during allocation and deallocation in order to
     *   make the slab allocator safe to use in the context of an irq. In addition
     *   interrupts are disabled to ensure that kernel preemption cannot move
     *   us to another processor while we are handling per-cpu slabs.
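     *
     *   The fast paths therefore take the following shape (a simplified
     *   sketch assuming this version's per-cpu slab array; slab_alloc()
     *   and slab_free() below carry the full details):
     *
     *	local_irq_save(flags);
     *	/* Neither an irq nor preemption can now interleave with us. */
     *	page = s->cpu_slab[smp_processor_id()];
     *	... allocate from or free to the cpu slab ...
     *	local_irq_restore(flags);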
     *
     * SLUB assigns one slab for allocation to each processor.
     * Allocations only occur from these slabs called cpu slabs.
     *
     * Slabs with free elements are kept on a partial list and during regular