- Mar 29, 2011
Mike Frysinger authored
Recent vm changes brought in a new function which the core procfs code utilizes. So implement it for nommu systems too to avoid link failures.

Signed-off-by: Mike Frysinger <vapier@gentoo.org>
Signed-off-by: David Howells <dhowells@redhat.com>
Tested-by: Simon Horman <horms@verge.net.au>
Tested-by: Ithamar Adema <ithamar.adema@team-embedded.nl>
Acked-by: Greg Ungerer <gerg@uclinux.org>
-
- Mar 28, 2011
David Howells authored
per_cpu_ptr_to_phys() uses VMALLOC_START and VMALLOC_END to determine if an address is in the vmalloc() region or not. This is incorrect on NOMMU as there is no real vmalloc() capability (vmalloc() is emulated by kmalloc()). The correct way to do this is to use is_vmalloc_addr(). This encapsulates the vmalloc() region test in MMU mode and just returns 0 in NOMMU mode.

On FRV in NOMMU mode, the percpu compilation fails without this patch:

  mm/percpu.c: In function 'per_cpu_ptr_to_phys':
  mm/percpu.c:1011: error: 'VMALLOC_START' undeclared (first use in this function)
  mm/percpu.c:1011: error: (Each undeclared identifier is reported only once
  mm/percpu.c:1011: error: for each function it appears in.)
  mm/percpu.c:1012: error: 'VMALLOC_END' undeclared (first use in this function)
  mm/percpu.c:1018: warning: control reaches end of non-void function

Signed-off-by: David Howells <dhowells@redhat.com>
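A minimal sketch of the fix, assuming the simplified shape of per_cpu_ptr_to_phys() in mm/percpu.c at the time (the real function handles more cases):

```c
phys_addr_t per_cpu_ptr_to_phys(void *addr)
{
	/*
	 * is_vmalloc_addr() performs the VMALLOC_START/VMALLOC_END
	 * range test in MMU mode and simply returns 0 on NOMMU, so
	 * the undefined symbols are never referenced there.
	 */
	if (!is_vmalloc_addr(addr))
		return __pa(addr);

	return page_to_phys(vmalloc_to_page(addr));
}
```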
-
- Mar 25, 2011
Dave Chinner authored
Protect the inode writeback list with a new global lock inode_wb_list_lock and use it to protect the list manipulations and traversals. This lock replaces the inode_lock as the inodes on the list can be validity checked while holding the inode->i_lock and hence the inode_lock is no longer needed to protect the list.

Signed-off-by: Dave Chinner <dchinner@redhat.com>
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
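A rough sketch of the resulting pattern when walking a writeback list; the list and field names (wb->b_io, i_wb_list) follow the kernel of that era, but the exact call sites differ:

```c
spin_lock(&inode_wb_list_lock);
list_for_each_entry(inode, &wb->b_io, i_wb_list) {
	/* per-inode validity check now uses i_lock, not inode_lock */
	spin_lock(&inode->i_lock);
	if (inode->i_state & (I_FREEING | I_WILL_FREE)) {
		spin_unlock(&inode->i_lock);
		continue;
	}
	/* ... process the inode for writeback ... */
	spin_unlock(&inode->i_lock);
}
spin_unlock(&inode_wb_list_lock);
```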
-
Dave Chinner authored
Protect inode state transitions and validity checks with the inode->i_lock. This enables us to make inode state transitions independently of the inode_lock and is the first step to peeling away the inode_lock from the code.

This requires that __iget() is done atomically with i_state checks during list traversals so that we don't race with another thread marking the inode I_FREEING between the state check and grabbing the reference.

Also remove the unlock_new_inode() memory barrier optimisation required to avoid taking the inode_lock when clearing I_NEW. Simplify the code by simply taking the inode->i_lock around the state change and wakeup. Because the wakeup is no longer tricky, remove the wake_up_inode() function and open code the wakeup where necessary.

Signed-off-by: Dave Chinner <dchinner@redhat.com>
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
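A sketch of the atomic check-and-grab this requires inside an inode list walk:

```c
/* inside a list traversal: */
spin_lock(&inode->i_lock);
if (inode->i_state & (I_FREEING | I_WILL_FREE)) {
	/* cannot take a reference: the inode is going away */
	spin_unlock(&inode->i_lock);
	continue;
}
__iget(inode);	/* reference taken under the same i_lock as the check */
spin_unlock(&inode->i_lock);
```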
-
David Rientjes authored
Commit ddd588b5 ("oom: suppress nodes that are not allowed from meminfo on oom kill") moved lib/show_mem.o out of lib/lib.a, which resulted in build warnings on all architectures that implement their own versions of show_mem():

  lib/lib.a(show_mem.o): In function `show_mem':
  show_mem.c:(.text+0x1f4): multiple definition of `show_mem'
  arch/sparc/mm/built-in.o:(.text+0xd70): first defined here

The fix is to remove __show_mem() and add its argument to show_mem() in all implementations to prevent this breakage. Architectures that implement their own show_mem() actually don't do anything with the argument yet, but they could be made to filter nodes that aren't allowed in the current context in the future just like the generic implementation.

Reported-by: Stephen Rothwell <sfr@canb.auug.org.au>
Reported-by: James Bottomley <James.Bottomley@hansenpartnership.com>
Suggested-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: David Rientjes <rientjes@google.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
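The interface change, sketched; SHOW_MEM_FILTER_NODES is the filter bit the generic implementation introduced:

```c
/* before: lib code defined __show_mem(unsigned int filter) plus a
 * show_mem(void) wrapper, colliding with arch-defined show_mem() */

/* after: one symbol everywhere */
void show_mem(unsigned int filter)
{
	/*
	 * Arch implementations ignore 'filter' for now; the generic
	 * one uses SHOW_MEM_FILTER_NODES to skip nodes that are not
	 * allowed in the current allocation context.
	 */
}
```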
-
- Mar 24, 2011
Christoph Lameter authored
It turns out that the cmpxchg16b emulation has to access vmalloced percpu memory with interrupts disabled. If the memory has never been touched before then the fault necessary to establish the mapping will not occur and the kernel will fail on boot.

Fix that by reusing the CONFIG_PREEMPT code that writes the cpu number into a field on every cpu. Writing to the per-cpu area beforehand causes the mapping to be established before we get to the cmpxchg16b emulation.

Tested-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Christoph Lameter <cl@linux.com>
Signed-off-by: Pekka Enberg <penberg@kernel.org>
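A sketch modeled on the SLUB fix of that era: the per-cpu init loop that CONFIG_PREEMPT builds already ran is now done unconditionally (cpu_slab, tid and init_tid() as in mm/slub.c):

```c
static void init_kmem_cache_cpus(struct kmem_cache *s)
{
	int cpu;

	/*
	 * Write to every CPU's per-cpu slab data once at init time so
	 * the vmalloc'ed percpu mapping is faulted in before the
	 * cmpxchg16b emulation touches it with interrupts disabled.
	 */
	for_each_possible_cpu(cpu)
		per_cpu_ptr(s->cpu_slab, cpu)->tid = init_tid(cpu);
}
```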
-
Thomas Gleixner authored
On Thu, 24 Mar 2011, Ingo Molnar wrote:

  > RIP: 0010:[<ffffffff810570a9>] [<ffffffff810570a9>] get_next_timer_interrupt+0x119/0x260

That's a typical timer crash, but you were unable to debug it with debugobjects because commit d3f661d6 broke those.

Cc: Christoph Lameter <cl@linux.com>
Tested-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Pekka Enberg <penberg@kernel.org>
-
Olaf Hering authored
crash_dump: export is_kdump_kernel to modules, consolidate elfcorehdr_addr, setup_elfcorehdr and saved_max_pfn

The Xen PV drivers in a crashed HVM guest can not connect to the dom0 backend drivers because both frontend and backend drivers are still in connected state. To run the connection reset function only in case of a crashdump, the is_kdump_kernel() function needs to be available for the PV driver modules.

Consolidate elfcorehdr_addr, setup_elfcorehdr and saved_max_pfn into kernel/crash_dump.c. Also export elfcorehdr_addr to make is_kdump_kernel() usable for modules.

Leave 'elfcorehdr' as early_param(). This changes powerpc from __setup() to early_param(). It adds an address range check from x86 also on ia64 and powerpc.

[akpm@linux-foundation.org: additional #includes]
[akpm@linux-foundation.org: remove elfcorehdr_addr export]
[akpm@linux-foundation.org: fix for Tejun's mm/nobootmem.c changes]

Signed-off-by: Olaf Hering <olaf@aepfle.de>
Cc: Russell King <rmk@arm.linux.org.uk>
Cc: "Luck, Tony" <tony.luck@intel.com>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Paul Mundt <lethal@linux-sh.org>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
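For reference, the exported test is a trivial comparison against the ELF core header address; roughly, from include/linux/crash_dump.h of that era:

```c
extern unsigned long long elfcorehdr_addr;
#define ELFCORE_ADDR_MAX	(-1ULL)

/* elfcorehdr_addr is set by the early 'elfcorehdr=' parameter in a
 * kdump kernel, otherwise it stays at ELFCORE_ADDR_MAX */
static inline int is_kdump_kernel(void)
{
	return (elfcorehdr_addr != ELFCORE_ADDR_MAX) ? 1 : 0;
}
```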
-
David Rientjes authored
When a memcg is oom and current has already received a SIGKILL, then give it access to memory reserves with a higher scheduling priority so that it may quickly exit and free its memory.

This is identical to the global oom killer and is done even before checking for panic_on_oom: a pending SIGKILL here while panic_on_oom is selected is guaranteed to have come from userspace; the thread only needs access to memory reserves to exit and thus we don't unnecessarily panic the machine until the kernel has no last resort to free memory.

Signed-off-by: David Rientjes <rientjes@google.com>
Cc: Balbir Singh <balbir@linux.vnet.ibm.com>
Cc: Daisuke Nishimura <nishimura@mxp.nes.nec.co.jp>
Acked-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
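A minimal sketch of the check added to the memcg OOM path, mirroring the global OOM killer (boost_dying_task_prio() existed in the OOM killer of that era; exact placement may differ):

```c
/* a pending SIGKILL here came from userspace; the task only needs
 * memory reserves and a priority boost to exit quickly */
if (fatal_signal_pending(current)) {
	set_thread_flag(TIF_MEMDIE);
	boost_dying_task_prio(current, NULL);
	return;
}
```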
-
KAMEZAWA Hiroyuki authored
fs/fuse/dev.c::fuse_try_move_page() does:

  (1) remove a page by ->steal()
  (2) re-add the page to page cache
  (3) link the page to LRU if it was not on LRU at (1)

This implies the page is _on_ LRU when it's added to the radix-tree. So, the page is added to a memory cgroup while it's on LRU, because LRU is lazy and no one flushes it. This is the same behavior as SwapCache and needs special care:

  - remove the page from LRU before overwriting pc->mem_cgroup.
  - add the page to LRU after overwriting pc->mem_cgroup.

And we need to take care of the pagevec. If PageLRU(page) is set before we add the PCG_USED bit, the page will not be added to the memcg's LRU (for a short period). So, regardless of the PageLRU(page) value before commit_charge(), we need to check PageLRU(page) after commit_charge().

Addresses https://bugzilla.kernel.org/show_bug.cgi?id=30432

Signed-off-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Reviewed-by: Johannes Weiner <hannes@cmpxchg.org>
Acked-by: Daisuke Nishimura <nishimura@mxp.nes.nec.co.jp>
Cc: Miklos Szeredi <miklos@szeredi.hu>
Cc: Balbir Singh <balbir@in.ibm.com>
Reported-by: Daniel Poelzleithner <poelzi@poelzi.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
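A rough, purely illustrative sketch of the ordering this imposes around commit_charge(); the real fix added dedicated memcg helpers for this, and LRU manipulation happens under zone->lru_lock:

```c
/* illustrative ordering only, not the actual helpers */
spin_lock_irqsave(&zone->lru_lock, flags);
if (PageLRU(page))
	del_page_from_lru(zone, page);	/* off LRU before ownership flips */
spin_unlock_irqrestore(&zone->lru_lock, flags);

pc->mem_cgroup = memcg;			/* what commit_charge() does */
SetPageCgroupUsed(pc);

spin_lock_irqsave(&zone->lru_lock, flags);
if (PageLRU(page))			/* re-check: a pagevec drain may
					 * have linked the page meanwhile */
	add_page_to_lru_list(zone, page, page_lru(page));
spin_unlock_irqrestore(&zone->lru_lock, flags);
```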
-
Michal Hocko authored
KAMEZAWA Hiroyuki noted that free_pages_cgroup doesn't have to check for PageReserved because we never store the array on reserved pages (neither alloc_pages_exact nor vmalloc use those pages). So we can replace the check by a BUG_ON.

Signed-off-by: Michal Hocko <mhocko@suse.cz>
Acked-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: Balbir Singh <balbir@in.ibm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
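The change itself is tiny; sketched in the page_cgroup array freeing path:

```c
/* before: quietly skip reserved pages */
if (PageReserved(page))
	return;

/* after: the array is never placed on reserved pages, since neither
 * alloc_pages_exact() nor vmalloc() returns them, so hitting one here
 * would be a programming error */
BUG_ON(PageReserved(page));
```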
-
Michal Hocko authored
Currently we are allocating a single page_cgroup array per memory section (stored in mem_section->base) when CONFIG_SPARSEMEM is selected. This is correct but a memory-inefficient solution, because the allocated memory (unless we fall back to vmalloc) is not kmalloc-friendly:

  - 32b: 16384 entries (20B per entry) fit into 327680B, so the 524288B slab cache is used
  - 32b with PAE: 131072 entries with 2621440B fit into 4194304B
  - 64b: 32768 entries (40B per entry) fit into the 2097152B cache

This is ~37% wasted space per memory section, and it sums up for the whole memory. On an x86_64 machine it is something like 6MB per 1GB of RAM.

We can reduce the internal fragmentation by using alloc_pages_exact, which allocates PAGE_SIZE-aligned blocks, so we will get down to <4kB wasted memory per section, which is much better. We still need a fallback to vmalloc because we have no guarantees that we will have a continuous memory of that size (order-10) later on during the hotplug events.

[hannes@cmpxchg.org: do not define unused free_page_cgroup() without memory hotplug]

Signed-off-by: Michal Hocko <mhocko@suse.cz>
Cc: Dave Hansen <dave@linux.vnet.ibm.com>
Acked-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: Balbir Singh <balbir@in.ibm.com>
Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
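A sketch of the allocation helper this yields, close to the mm/page_cgroup.c version of the time (node bookkeeping details elided):

```c
static void *__meminit alloc_page_cgroup(size_t size, int nid)
{
	void *addr;

	/* PAGE_SIZE-aligned blocks: <4kB waste per section instead of
	 * ~37% with power-of-two slab caches */
	addr = alloc_pages_exact(size, GFP_KERNEL | __GFP_NOWARN);
	if (addr)
		return addr;

	/* memory hotplug may ask for order-10 runs that are no longer
	 * available, so keep a vmalloc fallback */
	if (node_state(nid, N_HIGH_MEMORY))
		addr = vmalloc_node(size, nid);
	else
		addr = vmalloc(size);

	return addr;
}
```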
-
Andrew Morton authored
  mm/memcontrol.c: In function 'mem_cgroup_force_empty':
  mm/memcontrol.c:2280: warning: 'flags' may be used uninitialized in this function

It's a false positive.

Cc: Balbir Singh <balbir@linux.vnet.ibm.com>
Cc: Daisuke Nishimura <nishimura@mxp.nes.nec.co.jp>
Cc: Greg Thelen <gthelen@google.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Johannes Weiner authored
The statistic counters are in units of pages; there is no reason to make them 64-bit wide on 32-bit machines. Make them native words. Since they are signed, this leaves 31 bits on 32-bit machines, which can represent roughly 8TB assuming a page size of 4k.

[akpm@linux-foundation.org: coding-style fixes]

Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
Signed-off-by: Greg Thelen <gthelen@google.com>
Acked-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Acked-by: Balbir Singh <balbir@linux.vnet.ibm.com>
Cc: Daisuke Nishimura <nishimura@mxp.nes.nec.co.jp>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Johannes Weiner authored
For increasing and decreasing per-cpu cgroup usage counters it makes sense to use signed types, as single per-cpu values might go negative during updates. But this is not the case for only-ever-increasing event counters.

All the counters have been signed 64-bit so far, which was enough to count events even with the sign bit wasted.

This patch:

  - divides s64 counters into signed usage counters and unsigned monotonically increasing event counters.
  - converts unsigned event counters into 'unsigned long' rather than 'u64'. This matches the type used by the /proc/vmstat event counters.

The next patch narrows the signed usage counters type (on 32-bit CPUs, that is).

Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
Signed-off-by: Greg Thelen <gthelen@google.com>
Acked-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Acked-by: Balbir Singh <balbir@linux.vnet.ibm.com>
Cc: Daisuke Nishimura <nishimura@mxp.nes.nec.co.jp>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
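Sketching both patches together as the per-cpu statistics structure (field names as in mm/memcontrol.c; the narrowing to long comes from the follow-up patch above):

```c
struct mem_cgroup_stat_cpu {
	/* usage counters are deltas that can transiently go negative,
	 * so they stay signed; a native word instead of s64 still
	 * leaves 31 usable bits on 32-bit, i.e. ~8TB of 4k pages */
	long count[MEM_CGROUP_STAT_NSTATS];
	/* event counters only ever increase; unsigned long matches
	 * the /proc/vmstat counters */
	unsigned long events[MEM_CGROUP_EVENTS_NSTATS];
};
```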
-
Johannes Weiner authored
There is no clear pattern when we pass a page count and when we pass a byte count that is a multiple of PAGE_SIZE. We never charge or uncharge subpage quantities, so convert it all to page counts.

Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
Acked-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: Daisuke Nishimura <nishimura@mxp.nes.nec.co.jp>
Cc: Balbir Singh <balbir@linux.vnet.ibm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
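Illustratively, the unit change at a charge call site (the exact signatures varied across the series; this only shows the bytes-to-pages conversion):

```c
/* before: byte counts, always a multiple of PAGE_SIZE */
ret = __mem_cgroup_try_charge(mm, gfp_mask, &memcg, oom, PAGE_SIZE);

/* after: plain page counts */
ret = __mem_cgroup_try_charge(mm, gfp_mask, &memcg, oom, 1);
```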
-
Johannes Weiner authored
We never uncharge subpage quantities.

Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
Acked-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: Daisuke Nishimura <nishimura@mxp.nes.nec.co.jp>
Cc: Balbir Singh <balbir@linux.vnet.ibm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Johannes Weiner authored
We never keep subpage quantities in the per-cpu stock.

Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
Acked-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: Daisuke Nishimura <nishimura@mxp.nes.nec.co.jp>
Cc: Balbir Singh <balbir@linux.vnet.ibm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Johannes Weiner authored
We have two charge cancelling functions: one takes a page count, the other a page size. The second one just divides the parameter by PAGE_SIZE and then calls the first one. This is trivial, no need for an extra function.

Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
Acked-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: Daisuke Nishimura <nishimura@mxp.nes.nec.co.jp>
Cc: Balbir Singh <balbir@linux.vnet.ibm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Johannes Weiner authored
The reclaim_param_lock is only taken around single reads and writes to integer variables and is thus superfluous. Drop it.

Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
Acked-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: Daisuke Nishimura <nishimura@mxp.nes.nec.co.jp>
Cc: Balbir Singh <balbir@linux.vnet.ibm.com>
Reviewed-by: Minchan Kim <minchan.kim@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Johannes Weiner authored
page_cgroup_zoneinfo() will never return NULL for a charged page, remove the check for it in mem_cgroup_get_reclaim_stat_from_page().

Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
Acked-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: Daisuke Nishimura <nishimura@mxp.nes.nec.co.jp>
Cc: Balbir Singh <balbir@linux.vnet.ibm.com>
Reviewed-by: Minchan Kim <minchan.kim@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Johannes Weiner authored
In struct page_cgroup, we have a full word for flags but only a few are reserved. Use the remaining upper bits to encode, depending on configuration, the node or the section, to enable page_cgroup-to-page lookups without a direct pointer. This saves a full word for every page in a system with memory cgroups enabled.

Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
Acked-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: Daisuke Nishimura <nishimura@mxp.nes.nec.co.jp>
Cc: Balbir Singh <balbir@linux.vnet.ibm.com>
Cc: Minchan Kim <minchan.kim@gmail.com>
Cc: Randy Dunlap <randy.dunlap@oracle.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
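A sketch of the resulting lookup in the sparsemem case, following the series' lookup_cgroup_page(); the flat-memory variant encodes the node id instead of the section number:

```c
/* the array id lives in the otherwise unused upper flag bits */
static unsigned long page_cgroup_array_id(struct page_cgroup *pc)
{
	return (pc->flags >> PCG_ARRAYID_OFFSET) & PCG_ARRAYID_MASK;
}

struct page *lookup_cgroup_page(struct page_cgroup *pc)
{
	struct mem_section *section;

	section = __nr_to_section(page_cgroup_array_id(pc));
	/* the per-section base is biased so that pc - base == pfn */
	return pfn_to_page(pc - section->page_cgroup);
}
```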
-
Johannes Weiner authored
The per-cgroup LRU lists string up 'struct page_cgroup's. To get from those structures to the page they represent, a lookup is required. Currently, the lookup is done through a direct pointer in struct page_cgroup, so a lot of functions down the callchain do this lookup by themselves instead of receiving the page pointer from their callers.

The next patch removes this pointer, however, and the lookup is no longer that straight-forward. In preparation for that, this patch only leaves the non-optional lookups when coming directly from the LRU list and passes the page down the stack.

Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
Acked-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: Daisuke Nishimura <nishimura@mxp.nes.nec.co.jp>
Cc: Balbir Singh <balbir@linux.vnet.ibm.com>
Cc: Minchan Kim <minchan.kim@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Johannes Weiner authored
It is one logical function, no need to have it split up. Also, get rid of some checks from the inner function that ensured the sanity of the outer function.

Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
Acked-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Acked-by: Daisuke Nishimura <nishimura@mxp.nes.nec.co.jp>
Cc: Balbir Singh <balbir@linux.vnet.ibm.com>
Cc: Minchan Kim <minchan.kim@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Johannes Weiner authored
Instead of passing a whole struct page_cgroup to this function, let it take only what it really needs from it: the struct mem_cgroup and the page. This has the advantage that reading pc->mem_cgroup is now done at the same place where the ordering rules for this pointer are enforced and explained. It is also in preparation for removing the pc->page backpointer.

Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
Acked-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: Daisuke Nishimura <nishimura@mxp.nes.nec.co.jp>
Cc: Balbir Singh <balbir@linux.vnet.ibm.com>
Cc: Minchan Kim <minchan.kim@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Johannes Weiner authored
This patch series removes the direct page pointer from struct page_cgroup, which saves 20% of per-page memcg memory overhead (Fedora and Ubuntu enable memcg per default, openSUSE apparently too). The node id or section number is encoded in the remaining free bits of pc->flags which allows calculating the corresponding page without the extra pointer.

I ran, what I think is, a worst-case microbenchmark that just cats a large sparse file to /dev/null, because it means that walking the LRU list on behalf of per-cgroup reclaim and looking up pages from page_cgroups is happening constantly and at a high rate. But it made no measurable difference. A profile reported a 0.11% share of the new lookup_cgroup_page() function in this benchmark.

This patch: All callsites check PCG_USED before passing pc->mem_cgroup, so the latter is never NULL.

Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
Acked-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: Daisuke Nishimura <nishimura@mxp.nes.nec.co.jp>
Acked-by: Balbir Singh <balbir@linux.vnet.ibm.com>
Cc: Minchan Kim <minchan.kim@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Daisuke Nishimura authored
Add checks at allocating or freeing a page whether the page is used (iow, charged) from the viewpoint of memcg. This check may be useful in debugging a problem, and we did similar checks before the commit 52d4b9ac ("memcg: allocate all page_cgroup at boot"). This patch adds some overhead at allocating or freeing memory, so it's enabled only when CONFIG_DEBUG_VM is enabled.

Signed-off-by: Daisuke Nishimura <nishimura@mxp.nes.nec.co.jp>
Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
Acked-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: Balbir Singh <balbir@linux.vnet.ibm.com>
Cc: Minchan Kim <minchan.kim@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Johannes Weiner authored
The page_cgroup array is set up before even fork is initialized. I seriously doubt that this code executes before the array is alloc'd.

Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
Acked-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: Daisuke Nishimura <nishimura@mxp.nes.nec.co.jp>
Cc: Balbir Singh <balbir@linux.vnet.ibm.com>
Cc: Minchan Kim <minchan.kim@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Johannes Weiner authored
No callsite ever passes a NULL pointer for a struct mem_cgroup * to the committing function. There is no need to check for it.

Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
Acked-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: Daisuke Nishimura <nishimura@mxp.nes.nec.co.jp>
Cc: Balbir Singh <balbir@linux.vnet.ibm.com>
Cc: Minchan Kim <minchan.kim@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Johannes Weiner authored
These definitions have been unused since '4b3bde4c memcg: remove the overhead associated with the root cgroup'.

Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
Acked-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: Daisuke Nishimura <nishimura@mxp.nes.nec.co.jp>
Cc: Balbir Singh <balbir@linux.vnet.ibm.com>
Cc: Minchan Kim <minchan.kim@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Johannes Weiner authored
Since transparent huge pages, checking whether memory cgroups are below their limits is no longer enough, but the actual amount of chargeable space is important.

To not have more than one limit-checking interface, replace memory_cgroup_check_under_limit() and memory_cgroup_check_margin() with a single memory_cgroup_margin() that returns the chargeable space and leaves the comparison to the callsite.

Soft limits are now checked the other way round, by using the already existing function that returns the amount by which soft limits are exceeded: res_counter_soft_limit_excess().

Also remove all the corresponding functions on the res_counter side that are now no longer used.

Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
Acked-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: Daisuke Nishimura <nishimura@mxp.nes.nec.co.jp>
Acked-by: Balbir Singh <balbir@linux.vnet.ibm.com>
Cc: Minchan Kim <minchan.kim@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
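A sketch of the new helper on the res_counter side, essentially res_counter_margin() under the usual counter-lock convention:

```c
static inline unsigned long long res_counter_margin(struct res_counter *cnt)
{
	unsigned long long margin;
	unsigned long flags;

	/* report the chargeable space left instead of answering a
	 * boolean under/over-limit question */
	spin_lock_irqsave(&cnt->lock, flags);
	margin = cnt->limit - cnt->usage;
	spin_unlock_irqrestore(&cnt->lock, flags);
	return margin;
}
```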
-
Johannes Weiner authored
Soft limit reclaim continues until the usage is below the current soft limit, but the documented semantics are actually that soft limit reclaim will push usage back until the soft limits are met again.

Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
Acked-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: Daisuke Nishimura <nishimura@mxp.nes.nec.co.jp>
Acked-by: Balbir Singh <balbir@linux.vnet.ibm.com>
Cc: Minchan Kim <minchan.kim@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
KAMEZAWA Hiroyuki authored
Remove initialization of a variable in the caller of a memory cgroup function. Actually, it's the return value of the memcg function, but it's initialized in the caller.

Some memory cgroup code uses the following style to bring the result of a start function to the end function, for avoiding races:

  mem_cgroup_start_A(&(*ptr))
  /* Something very complicated can happen here. */
  mem_cgroup_end_A(*ptr)

In some calls, *ptr should be initialized to NULL by the caller. But it's ugly. This patch fixes that by having *ptr initialized by the _start function.

Signed-off-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Acked-by: Johannes Weiner <hannes@cmpxchg.org>
Acked-by: Daisuke Nishimura <nishimura@mxp.nes.nec.co.jp>
Cc: Balbir Singh <balbir@linux.vnet.ibm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
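Sketched with the placeholder names from the message above:

```c
/* before: every caller had to remember the NULL initialization */
struct mem_cgroup *memcg = NULL;
mem_cgroup_start_A(&memcg);
/* ... something very complicated ... */
mem_cgroup_end_A(memcg);

/* after: the _start function owns the initialization */
void mem_cgroup_start_A(struct mem_cgroup **ptr)
{
	*ptr = NULL;
	/* ... set *ptr only on success ... */
}
```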
-
- Mar 23, 2011
Stephen Wilson authored
Provide an alternative to access_process_vm that allows the caller to obtain a reference to the supplied mm_struct.

Signed-off-by: Stephen Wilson <wilsons@start.ca>
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
-
Stephen Wilson authored
Introduce an internal helper __access_remote_vm and base access_process_vm on top of it. This new method may be called with a NULL task_struct if page fault accounting is not desired. This code will be shared with a new address space accessor that is independent of task_struct.

Signed-off-by: Stephen Wilson <wilsons@start.ca>
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
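Taken together, the two patches above layer the API roughly as in mm/memory.c:

```c
/* task-independent accessor: the caller supplies (and pins) the mm */
int access_remote_vm(struct mm_struct *mm, unsigned long addr,
		     void *buf, int len, int write)
{
	/* NULL task: skip page-fault accounting */
	return __access_remote_vm(NULL, mm, addr, buf, len, write);
}

int access_process_vm(struct task_struct *tsk, unsigned long addr,
		      void *buf, int len, int write)
{
	struct mm_struct *mm = get_task_mm(tsk);
	int ret;

	if (!mm)
		return 0;

	ret = __access_remote_vm(tsk, mm, addr, buf, len, write);
	mmput(mm);
	return ret;
}
```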
-
Stephen Wilson authored
We now check if a requested user page overlaps a gate vma using the supplied mm instead of the supplied task. The given task is now used solely for accounting purposes and may be NULL.

Signed-off-by: Stephen Wilson <wilsons@start.ca>
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
-
Stephen Wilson authored
Now that gate vma's are referenced with respect to a particular mm and not a particular task it only makes sense to propagate the change to this predicate as well.

Signed-off-by: Stephen Wilson <wilsons@start.ca>
Reviewed-by: Michel Lespinasse <walken@google.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
-
Stephen Wilson authored
Morally, the question of whether an address lies in a gate vma should be asked with respect to an mm, not a particular task. Moreover, dropping the dependency on task_struct will help make existing and future operations on mm's more flexible and convenient.

Signed-off-by: Stephen Wilson <wilsons@start.ca>
Reviewed-by: Michel Lespinasse <walken@google.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
-
Stephen Wilson authored
Morally, the presence of a gate vma is more an attribute of a particular mm than a particular task. Moreover, dropping the dependency on task_struct will help make both existing and future operations on mm's more flexible and convenient.

Signed-off-by: Stephen Wilson <wilsons@start.ca>
Reviewed-by: Michel Lespinasse <walken@google.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
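Across this series, the predicates and accessors change signature roughly as follows:

```c
/* before: gate-area questions were asked of a task */
struct vm_area_struct *get_gate_vma(struct task_struct *tsk);
int in_gate_area(struct task_struct *tsk, unsigned long addr);

/* after: asked of an mm; a task_struct is kept only where it is
 * still needed for accounting */
struct vm_area_struct *get_gate_vma(struct mm_struct *mm);
int in_gate_area(struct mm_struct *mm, unsigned long addr);
```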
-
Cesar Eduardo Barros authored
A conflict between 52c50567 ("mm: swap: unlock swapfile inode mutex before closing file on bad swapfiles") and 83ef99be ("sys_swapon: remove did_down variable") caused a double unlock of the inode mutex (once in bad_swap: before the filp_close, once at the end just before returning). The patch which added the extra unlock cleared did_down to avoid unlocking twice, but the other patch removed the did_down variable.

To fix, set inode to NULL after the first unlock, since it will be used after that point only for the final unlock.

While checking this patch, I found a path which could unlock without locking, in case the same inode was added as a swapfile twice. To fix, move the setting of the inode variable further down, to just before claim_swapfile, which will lock the inode before doing anything else.

Cc: Mel Gorman <mgorman@suse.de>
Cc: Hugh Dickins <hughd@google.com>
Cc: Eric B Munson <emunson@mgebm.net>
Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Cesar Eduardo Barros <cesarb@cesarb.net>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
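A simplified sketch of the fixed error path in sys_swapon() (labels as in mm/swapfile.c; the cleanup details are elided):

```c
bad_swap:
	/* ... undo partial setup ... */

	/* unlock before filp_close(), then clear 'inode' so the common
	 * exit below cannot unlock a second time */
	if (inode && S_ISREG(inode->i_mode)) {
		mutex_unlock(&inode->i_mutex);
		inode = NULL;
	}
	filp_close(swap_file, NULL);
out:
	/* ... */
	if (inode && S_ISREG(inode->i_mode))
		mutex_unlock(&inode->i_mutex);
	return error;
```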
-