  1. Mar 24, 2006
    • The comment describing how MS_ASYNC works in msync.c is confusing · 16538c40
      Amos Waterland authored
      because of a typo.  This patch just changes "my" to "by", which I
      believe was the original intent.
      
      Signed-off-by: Adrian Bunk <bunk@stusta.de>
    • [PATCH] msync(): use do_fsync() · 8f2e9f15
      Andrew Morton authored
      
      No need to duplicate all that code.
      
      Cc: Hugh Dickins <hugh@veritas.com>
      Cc: Nick Piggin <nickpiggin@yahoo.com.au>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
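
      As a hedged userspace illustration of what sharing the fsync path
      implies (the file name and size below are invented for the demo):
      after this change, msync(MS_SYNC) on a MAP_SHARED file mapping and
      fsync() on the descriptor funnel into the same do_fsync() helper, so
      both write back the dirtied page and wait for it.

      	#include <fcntl.h>
      	#include <stdio.h>
      	#include <string.h>
      	#include <sys/mman.h>
      	#include <unistd.h>

      	int main(void)
      	{
      		int fd = open("/tmp/msync-demo", O_RDWR | O_CREAT | O_TRUNC, 0600);
      		char *p;

      		if (fd < 0 || ftruncate(fd, 4096) < 0) {
      			perror("setup");
      			return 1;
      		}
      		p = mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
      		if (p == MAP_FAILED) {
      			perror("mmap");
      			return 1;
      		}
      		memcpy(p, "hello", 5);	/* dirty the page through the mapping */

      		/* Equivalent durability either way, now that both paths
      		 * share do_fsync(): */
      		if (msync(p, 4096, MS_SYNC) < 0)	/* ...or: fsync(fd) */
      			perror("msync");

      		munmap(p, 4096);
      		close(fd);
      		return 0;
      	}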
    • [PATCH] msync: fix return value · 676758bd
      Andrew Morton authored
      
      msync() does a strange thing.  Essentially:
      
      	vma = find_vma();
      	for ( ; ; ) {
      		if (!vma)
      			return -ENOMEM;	/* reached whenever the walk runs
      					   off the end of the VMA list */
      		...
      		vma = vma->vm_next;
      	}
      
      so an msync() request which starts within or before a valid VMA and which ends
      within or beyond the final VMA will incorrectly return -ENOMEM.
      
      Fix.
      
      Cc: Hugh Dickins <hugh@veritas.com>
      Cc: Nick Piggin <nickpiggin@yahoo.com.au>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
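
      A hedged check for the behaviour being fixed (path and sizes are
      invented): a fully mapped range whose end coincides with the end of
      its last VMA should sync and return 0; with the loop above, the walk
      could step past the final VMA, see NULL, and report -ENOMEM even
      though every page in the range had been handled.

      	#include <errno.h>
      	#include <fcntl.h>
      	#include <stdio.h>
      	#include <string.h>
      	#include <sys/mman.h>
      	#include <unistd.h>

      	int main(void)
      	{
      		size_t len = 2 * 4096;
      		int fd = open("/tmp/msync-ret-demo", O_RDWR | O_CREAT | O_TRUNC, 0600);
      		char *p;

      		if (fd < 0 || ftruncate(fd, len) < 0) {
      			perror("setup");
      			return 1;
      		}
      		p = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
      		if (p == MAP_FAILED) {
      			perror("mmap");
      			return 1;
      		}
      		p[0] = 1;
      		p[len - 1] = 1;

      		/* The request covers the mapping exactly; 0 is correct. */
      		if (msync(p, len, MS_SYNC) == 0)
      			printf("ok\n");
      		else
      			printf("unexpected errno %d (%s)\n", errno, strerror(errno));
      		return 0;
      	}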
    • [PATCH] msync(MS_SYNC): don't hold mmap_sem while syncing · 707c21c8
      Andrew Morton authored
      
      It seems bad to hold mmap_sem while performing synchronous disk I/O.  Alter
      the msync(MS_SYNC) code so that the lock is released while we sync the file.
      
      Cc: Hugh Dickins <hugh@veritas.com>
      Cc: Nick Piggin <nickpiggin@yahoo.com.au>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
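
      The locking idea generalises beyond the kernel; a hedged pthread
      analogy (all names here are invented): hold the lock only long enough
      to pin down what must be written, then do the slow synchronous I/O
      with the lock dropped so other threads can keep faulting, mapping and
      unmapping.

      	#include <pthread.h>
      	#include <unistd.h>

      	static pthread_mutex_t map_lock = PTHREAD_MUTEX_INITIALIZER;
      	static int backing_fd = -1;	/* invented stand-in for the mapped file */

      	static void sync_without_holding_lock(void)
      	{
      		int fd;

      		pthread_mutex_lock(&map_lock);
      		fd = backing_fd;	/* snapshot under the lock; the kernel
      					 * analogue takes a get_file() reference */
      		pthread_mutex_unlock(&map_lock);

      		/* Synchronous I/O runs with the lock released, mirroring
      		 * the commit's release of mmap_sem around the file sync. */
      		if (fd >= 0)
      			fsync(fd);
      	}

      	int main(void)
      	{
      		sync_without_holding_lock();	/* no-op here: backing_fd is -1 */
      		return 0;
      	}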
    • [PATCH] msync(): perform dirty page levelling · 9c50823e
      Andrew Morton authored
      
      It seems sensible to perform dirty page throttling in msync: as the application
      dirties pages we can kick off pdflush early, or even force the msync() caller
      to perform writeout, or even throttle the msync() caller.
      
      The main effect of this is to start disk writeback earlier if we've just
      discovered that a large amount of pagecache has been dirtied.  (Otherwise it
      wouldn't happen for up to five seconds, until the next time pdflush wakes up.)
      
      It will also cause the page-dirtying process to get penalised for
      dirtying those pages, rather than whacking someone else with the problem.
      
      We should do this for munmap() and possibly even exit(), too.
      
      We drop the mmap_sem while performing the dirty page balancing.  It doesn't
      seem right to hold mmap_sem for that long.
      
      Note that this patch only affects MS_ASYNC.  MS_SYNC will be syncing all the
      dirty pages anyway.
      
      We note that msync(MS_SYNC) does a full-file-sync inside mmap_sem, and always
      has.  We can fix that up...
      
      The patch also tightens up the mmap_sem coverage in sys_msync(): no point in
      taking it while we perform the incoming arg checking.
      
      Cc: Hugh Dickins <hugh@veritas.com>
      Cc: Nick Piggin <nickpiggin@yahoo.com.au>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
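
      From userspace, the behaviour this enables looks roughly like the
      following sketch (file name, chunk size and count are arbitrary): an
      application that dirties a large file mapping and issues
      msync(MS_ASYNC) as it goes now starts writeback, and may itself be
      throttled, at each call instead of waiting for pdflush.

      	#include <fcntl.h>
      	#include <stdio.h>
      	#include <string.h>
      	#include <sys/mman.h>
      	#include <unistd.h>

      	#define CHUNK	(1UL << 20)	/* 1 MiB per pass, arbitrary */
      	#define CHUNKS	64

      	int main(void)
      	{
      		int fd = open("/tmp/levelling-demo", O_RDWR | O_CREAT | O_TRUNC, 0600);
      		char *p;
      		unsigned long i;

      		if (fd < 0 || ftruncate(fd, CHUNK * CHUNKS) < 0) {
      			perror("setup");
      			return 1;
      		}
      		p = mmap(NULL, CHUNK * CHUNKS, PROT_READ | PROT_WRITE,
      			 MAP_SHARED, fd, 0);
      		if (p == MAP_FAILED) {
      			perror("mmap");
      			return 1;
      		}
      		for (i = 0; i < CHUNKS; i++) {
      			memset(p + i * CHUNK, 0xab, CHUNK);
      			/* With this patch, each MS_ASYNC call may kick off
      			 * writeback early or throttle us, levelling the
      			 * dirty pagecache as we go. */
      			msync(p + i * CHUNK, CHUNK, MS_ASYNC);
      		}
      		return 0;
      	}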
  2. Jan 09, 2006
  3. Nov 28, 2005
    • mm: re-architect the VM_UNPAGED logic · 6aab341e
      Linus Torvalds authored
      
      This replaces the (in my opinion horrible) VM_UNPAGED logic with very
      explicit support for a "remapped page range" aka VM_PFNMAP.  It allows a
      VM area to contain an arbitrary range of page table entries that the VM
      never touches, and never considers to be normal pages.
      
      Any user of "remap_pfn_range()" automatically gets this new
      functionality, and doesn't even have to mark the pages reserved or
      indeed mark them any other way.  It just works.  As a side effect, doing
      mmap() on /dev/mem works for arbitrary ranges.
      
      Sparc update from David in the next commit.
      
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
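
      For a driver, the "it just works" claim covers the usual pattern
      below; a minimal, hedged kernel sketch (the physical base address and
      names are invented, and this is not a complete driver): one
      remap_pfn_range() call in the mmap file operation, with no page
      marking of any kind.

      	#include <linux/fs.h>
      	#include <linux/mm.h>

      	#define DEMO_PHYS_BASE	0x90000000UL	/* invented device address */

      	static int demo_mmap(struct file *file, struct vm_area_struct *vma)
      	{
      		unsigned long size = vma->vm_end - vma->vm_start;

      		/* remap_pfn_range() now flags the area VM_PFNMAP itself,
      		 * so the VM never treats these ptes as normal pages - no
      		 * SetPageReserved, no other marking needed. */
      		return remap_pfn_range(vma, vma->vm_start,
      				       DEMO_PHYS_BASE >> PAGE_SHIFT,
      				       size, vma->vm_page_prot);
      	}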
  4. Nov 22, 2005
    • [PATCH] unpaged: VM_UNPAGED · 0b14c179
      Hugh Dickins authored
      
      Although we tend to associate VM_RESERVED with remap_pfn_range, quite a few
      drivers set VM_RESERVED on areas which are then populated by nopage.  The
      PageReserved removal in 2.6.15-rc1 changed VM_RESERVED not to free pages in
      zap_pte_range, without changing those drivers not to set it: so their pages
      just leak away.
      
      Let's not change miscellaneous drivers now: introduce VM_UNPAGED at the core,
      to flag the special areas where the ptes may have no struct page, or if they
      have then it's not to be touched.  Replace most instances of VM_RESERVED in
      core mm by VM_UNPAGED.  Force it on in remap_pfn_range, and the sparc and
      sparc64 io_remap_pfn_range.
      
      Revert addition of VM_RESERVED to powerpc vdso, it's not needed there.  Is it
      needed anywhere?  It still governs the mm->reserved_vm statistic, and special
      vmas not to be merged, and areas not to be core dumped; but could probably be
      eliminated later (the drivers are probably specifying it because in 2.4 it
      kept swapout off the vma, but in 2.6 we work from the LRU, which these pages
      don't get on).
      
      Use the VM_SHM slot for VM_UNPAGED, and define VM_SHM to 0: it serves no
      purpose whatsoever, and should be removed from drivers when we clean up.
      
      Signed-off-by: Hugh Dickins <hugh@veritas.com>
      Acked-by: William Irwin <wli@holomorphy.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
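
      What the flag means to core mm, in a hedged fragment (the surrounding
      pte-walking loop is invented, not the literal patch): wherever a
      walker is about to interpret a pte as a normal page, VM_UNPAGED now
      gates it.

      	if (vma->vm_flags & VM_UNPAGED)
      		continue;	/* ptes here may have no struct page,
      				 * or one that must not be touched */
      	page = pfn_to_page(pte_pfn(*pte));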
  5. Oct 30, 2005
    • [PATCH] mm: pte_offset_map_lock loops · 705e87c0
      Hugh Dickins authored
      
      Convert those common loops using page_table_lock on the outside and
      pte_offset_map within to use just pte_offset_map_lock within instead.
      
      These all hold mmap_sem (some exclusively, some not), so at no level can a
      page table be whipped away from beneath them.  But whereas pte_alloc loops
      tested with the "atomic" pmd_present, these loops are testing with pmd_none,
      which on i386 PAE tests both lower and upper halves.
      
      That's now unsafe, so add a cast into pmd_none to test only the vital lower
      half: we lose a little sensitivity to a corrupt middle directory, but not
      enough to worry about.  It appears that i386 and UML were the only
      architectures vulnerable in this way, and pgd and pud are no problem.
      
      Signed-off-by: Hugh Dickins <hugh@veritas.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
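
      The shape of the conversion, sketched on an invented walker (names
      are illustrative, not the literal patch): the mm-wide page_table_lock
      held around the whole walk becomes a per-table lock taken inside it
      via pte_offset_map_lock and dropped with pte_unmap_unlock.

      	#include <linux/mm.h>

      	static void walk_ptes_sketch(struct mm_struct *mm, pmd_t *pmd,
      				     unsigned long addr, unsigned long end)
      	{
      		spinlock_t *ptl;
      		pte_t *pte;

      		/* Replaces spin_lock(&mm->page_table_lock) +
      		 * pte_offset_map(): the lock covering this page table
      		 * comes back through ptl. */
      		pte = pte_offset_map_lock(mm, pmd, addr, &ptl);
      		do {
      			/* ... inspect or modify *pte here ... */
      		} while (pte++, addr += PAGE_SIZE, addr != end);
      		pte_unmap_unlock(pte - 1, ptl);
      	}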
    • [PATCH] core remove PageReserved · b5810039
      Nick Piggin authored
      
      Remove PageReserved() calls from core code by tightening VM_RESERVED
      handling in mm/ to cover PageReserved functionality.
      
      PageReserved special casing is removed from get_page and put_page.
      
      All setting and clearing of PageReserved is retained, and it is now flagged
      in the page_alloc checks to help ensure we don't introduce any refcount
      based freeing of Reserved pages.
      
      MAP_PRIVATE, PROT_WRITE of VM_RESERVED regions is tentatively being
      deprecated.  We never completely handled it correctly anyway, and it can
      be reintroduced in future if required (Hugh has a proof of concept).
      
      Once PageReserved() calls are removed from kernel/power/swsusp.c, and all
      arch/ and driver code, the Set and Clear calls, and the PG_reserved bit can
      be trivially removed.
      
      Last real user of PageReserved is swsusp, which uses PageReserved to
      determine whether a struct page points to valid memory or not.  This still
      needs to be addressed (a generic page_is_ram() should work).
      
      A last caveat: the ZERO_PAGE is now refcounted and managed with rmap (and
      thus mapcounted and counted towards shared rss).  These writes to the struct
      page could cause excessive cacheline bouncing on big systems.  There are a
      number of ways this could be addressed if it is an issue.
      
      Signed-off-by: Nick Piggin <npiggin@suse.de>
      
      Refcount bug fix for filemap_xip.c
      
      Signed-off-by: Carsten Otte <cotte@de.ibm.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
    • [PATCH] mm: msync_pte_range progress · 0c942a45
      Hugh Dickins authored
      
      Use latency breaking in msync_pte_range like that in copy_pte_range, instead
      of the ugly CONFIG_PREEMPT filemap_msync alternatives.
      
      Signed-off-by: Hugh Dickins <hugh@veritas.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
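
      The borrowed latency-breaking pattern looks roughly like this hedged
      fragment (slotted into a pte walk like the one sketched above; the
      batch size of 64 is illustrative): instead of a separate
      CONFIG_PREEMPT variant of the loop, bail out every so often if
      someone needs the CPU or the lock.

      	int progress = 0;

      	do {
      		if (progress >= 64) {
      			progress = 0;
      			/* Same escape hatch copy_pte_range uses: drop
      			 * out so the caller can release the lock,
      			 * reschedule, and resume the walk. */
      			if (need_resched() || need_lockbreak(ptl))
      				break;
      		}
      		progress++;
      		/* ... sync one pte ... */
      	} while (pte++, addr += PAGE_SIZE, addr != end);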
    • [PATCH] mm/msync.c cleanup · b57b98d1
      OGAWA Hirofumi authored
      
      This is not actually a problem, but sync_page_range() is used as an
      exported function by filesystems.

      The msync_xxx naming is more readable, at least to me.
      
      Signed-off-by: OGAWA Hirofumi <hirofumi@mail.parknet.co.jp>
      Acked-by: Hugh Dickins <hugh@veritas.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
  6. Jun 22, 2005
    • [PATCH] msync: check pte dirty earlier · b4955ce3
      Abhijit Karmarkar authored
      
      It's common practice to msync a large address range regularly, in which
      often only a few ptes have actually been dirtied since the previous pass.
      
      sync_pte_range then goes much faster if it tests whether pte is dirty
      before locating and accessing each struct page cacheline; and it is hardly
      slowed by ptep_clear_flush_dirty repeating that test in the opposite case,
      when every pte actually is dirty.
      
      But beware, s390's pte_dirty always says false, since its dirty bit is kept
      in the storage key, located via the struct page address.  So skip this
      optimization in its case: use a pte_maybe_dirty macro which just says true
      if page_test_and_clear_dirty is implemented.
      
      Signed-off-by: Abhijit Karmarkar <abhijitk@veritas.com>
      Signed-off-by: Hugh Dickins <hugh@veritas.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
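
      A hedged sketch of the resulting fast path (the helper name is
      invented, and a real version would also need pfn validity checks):
      the pte is tested first, and the struct page cacheline is only
      touched for ptes that might actually be dirty.

      	static void sync_one_pte_sketch(struct vm_area_struct *vma,
      					unsigned long addr, pte_t *pte)
      	{
      		struct page *page;

      		if (!pte_present(*pte))
      			return;
      		/* Cheap test on the pte before touching the struct page
      		 * cacheline.  On s390 the dirty bit lives in the storage
      		 * key, so pte_maybe_dirty simply says true there and the
      		 * old path is taken. */
      		if (!pte_maybe_dirty(*pte))
      			return;

      		page = pfn_to_page(pte_pfn(*pte));
      		if (ptep_clear_flush_dirty(vma, addr, pte))
      			set_page_dirty(page);
      	}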
  7. Apr 16, 2005
    • Linux-2.6.12-rc2 · 1da177e4
      Linus Torvalds authored
      Initial git repository build. I'm not bothering with the full history,
      even though we have it. We can create a separate "historical" git
      archive of that later if we want to, and in the meantime it's about
      3.2GB when imported into git - space that would just make the early
      git days unnecessarily complicated, when we don't have a lot of good
      infrastructure for it.
      
      Let it rip!
      v2.6.12-rc2