Commit ee146245 authored by David Rientjes, committed by Linus Torvalds

fs, jfs: remove slab object constructor


Mempools based on slab caches with object constructors are risky because
element allocation can happen either from the slab cache itself, meaning
the constructor is properly called before returning, or from the mempool
reserve pool, meaning the constructor is not called before returning,
depending on the allocation context.
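
As an illustration only (the cache, constructor, and pool names below are
hypothetical, not taken from this patch), the risky combination looks
roughly like this:

	/* slab cache whose constructor is expected to set up every object */
	cache = kmem_cache_create("example_cache", sizeof(struct example),
				  0, 0, example_ctor);

	/* mempool backed by that cache */
	pool = mempool_create_slab_pool(MIN_POOL_ELEMENTS, cache);

	/*
	 * Elements handed out from the mempool's reserve do not go through
	 * example_ctor(), so the constructor's guarantees cannot be relied on.
	 */
	obj = mempool_alloc(pool, GFP_NOFS);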

For this reason, we should disallow creating mempools based on slab caches
that have object constructors.  Callers of mempool_alloc() will be
responsible for properly initializing the returned element.

Then, it doesn't matter whether the element came from the slab cache or
the mempool reserve pool.
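
A minimal sketch of that pattern, using the same hypothetical names as
above: create the cache without a constructor and have the caller
initialize the element after mempool_alloc() returns, so the result is
identical whichever pool the element came from:

	cache = kmem_cache_create("example_cache", sizeof(struct example),
				  0, 0, NULL);
	pool = mempool_create_slab_pool(MIN_POOL_ELEMENTS, cache);

	obj = mempool_alloc(pool, GFP_NOFS);
	if (obj)
		example_init(obj);	/* explicit initialization in the caller */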

The only occurrence of a mempool being based on a slab cache with an
object constructor in the tree is in fs/jfs/jfs_metapage.c.  Remove it and
properly initialize the element in alloc_metapage().

At the same time, META_free is never used, so remove it as well.

Signed-off-by: David Rientjes <rientjes@google.com>
Acked-by: Dave Kleikamp <dave.kleikamp@oracle.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Sebastian Ott <sebott@linux.vnet.ibm.com>
Cc: Mikulas Patocka <mpatocka@redhat.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
parent 4db0c3c2
fs/jfs/jfs_metapage.c
@@ -183,30 +183,23 @@ static inline void remove_metapage(struct page *page, struct metapage *mp)
 
 #endif
 
-static void init_once(void *foo)
-{
-	struct metapage *mp = (struct metapage *)foo;
-
-	mp->lid = 0;
-	mp->lsn = 0;
-	mp->flag = 0;
-	mp->data = NULL;
-	mp->clsn = 0;
-	mp->log = NULL;
-	set_bit(META_free, &mp->flag);
-	init_waitqueue_head(&mp->wait);
-}
-
 static inline struct metapage *alloc_metapage(gfp_t gfp_mask)
 {
-	return mempool_alloc(metapage_mempool, gfp_mask);
+	struct metapage *mp = mempool_alloc(metapage_mempool, gfp_mask);
+
+	if (mp) {
+		mp->lid = 0;
+		mp->lsn = 0;
+		mp->data = NULL;
+		mp->clsn = 0;
+		mp->log = NULL;
+		init_waitqueue_head(&mp->wait);
+	}
+	return mp;
 }
 
 static inline void free_metapage(struct metapage *mp)
 {
-	mp->flag = 0;
-	set_bit(META_free, &mp->flag);
-
 	mempool_free(mp, metapage_mempool);
 }
 
@@ -216,7 +209,7 @@ int __init metapage_init(void)
 	 * Allocate the metapage structures
 	 */
 	metapage_cache = kmem_cache_create("jfs_mp", sizeof(struct metapage),
-					   0, 0, init_once);
+					   0, 0, NULL);
 	if (metapage_cache == NULL)
 		return -ENOMEM;
 
fs/jfs/jfs_metapage.h
@@ -48,7 +48,6 @@ struct metapage {
 
 /* metapage flag */
 #define META_locked	0
-#define META_free	1
 #define META_dirty	2
 #define META_sync	3
 #define META_discard	4