Commit a667cb7a authored by Linus Torvalds

Merge branch 'akpm' (patches from Andrew)

Merge misc updates from Andrew Morton:

 - a few misc things

 - the rest of MM

 - remove flex_arrays, replace with new simple radix-tree implementation

* emailed patches from Andrew Morton <akpm@linux-foundation.org>: (38 commits)
  Drop flex_arrays
  sctp: convert to genradix
  proc: commit to genradix
  generic radix trees
  selinux: convert to kvmalloc
  md: convert to kvmalloc
  openvswitch: convert to kvmalloc
  of: fix kmemleak crash caused by imbalance in early memory reservation
  mm: memblock: update comments and kernel-doc
  memblock: split checks whether a region should be skipped to a helper function
  memblock: remove memblock_{set,clear}_region_flags
  memblock: drop memblock_alloc_*_nopanic() variants
  memblock: memblock_alloc_try_nid: don't panic
  treewide: add checks for the return value of memblock_alloc*()
  swiotlb: add checks for the return value of memblock_alloc*()
  init/main: add checks for the return value of memblock_alloc*()
  mm/percpu: add checks for the return value of memblock_alloc*()
  sparc: add checks for the return value of memblock_alloc*()
  ia64: add checks for the return value of memblock_alloc*()
  arch: don't memset(0) memory returned by memblock_alloc()
  ...
parents cb1d150d 586187d7
Showing changed files with 113 additions and 267 deletions
===================================
Using flexible arrays in the kernel
===================================
Large contiguous memory allocations can be unreliable in the Linux kernel.
Kernel programmers will sometimes respond to this problem by allocating
pages with :c:func:`vmalloc()`. This solution is not ideal, though. On 32-bit
systems, memory from vmalloc() must be mapped into a relatively small address
space; it's easy to run out. On SMP systems, the page table changes required
by vmalloc() allocations can require expensive cross-processor interrupts on
all CPUs. And, on all systems, use of space in the vmalloc() range increases
pressure on the translation lookaside buffer (TLB), reducing the performance
of the system.
In many cases, the need for memory from vmalloc() can be eliminated by piecing
together an array from smaller parts; the flexible array library exists to make
this task easier.
A flexible array holds an arbitrary (within limits) number of fixed-sized
objects, accessed via an integer index. Sparse arrays are handled
reasonably well. Only single-page allocations are made, so memory
allocation failures should be relatively rare. The down sides are that the
arrays cannot be indexed directly, individual object size cannot exceed the
system page size, and putting data into a flexible array requires a copy
operation. It's also worth noting that flexible arrays do no internal
locking at all; if concurrent access to an array is possible, then the
caller must arrange for appropriate mutual exclusion.
The creation of a flexible array is done with :c:func:`flex_array_alloc()`::

    #include <linux/flex_array.h>

    struct flex_array *flex_array_alloc(int element_size,
                                        unsigned int total,
                                        gfp_t flags);

The individual object size is provided by ``element_size``, while ``total`` is the
maximum number of objects which can be stored in the array. The flags
argument is passed directly to the internal memory allocation calls. With
the current code, using flags to ask for high memory is likely to lead to
notably unpleasant side effects.
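As a quick illustration, creating an array with room for up to 1024
sixteen-byte records might look like this (a minimal sketch; the
``struct record`` type and the surrounding init function are invented
for the example)::

    #include <linux/flex_array.h>

    struct record {
            u64 id;
            u64 value;
    };

    static struct flex_array *records;

    static int __init records_init(void)
    {
            /* Room for 1024 records; GFP_KERNEL allocations may sleep. */
            records = flex_array_alloc(sizeof(struct record), 1024,
                                       GFP_KERNEL);
            if (!records)
                    return -ENOMEM;
            return 0;
    }
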
It is also possible to define flexible arrays at compile time with::

    DEFINE_FLEX_ARRAY(name, element_size, total);

This macro will result in a definition of an array with the given name; the
element size and total will be checked for validity at compile time.
Storing data into a flexible array is accomplished with a call to
:c:func:`flex_array_put()`::

    int flex_array_put(struct flex_array *array, unsigned int element_nr,
                       void *src, gfp_t flags);

This call will copy the data from src into the array, in the position
indicated by ``element_nr`` (which must be less than the maximum specified when
the array was created). If any memory allocations must be performed, flags
will be used. The return value is zero on success, a negative error code
otherwise.
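Continuing the invented ``records`` sketch from above, storing one element
looks like::

    struct record r = { .id = 42, .value = 7 };
    int err;

    err = flex_array_put(records, 42, &r, GFP_KERNEL);
    if (err)
            return err;     /* zero on success, negative error otherwise */
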
There might possibly be a need to store data into a flexible array while
running in some sort of atomic context; in this situation, sleeping in the
memory allocator would be a bad thing. That can be avoided by using
``GFP_ATOMIC`` for the flags value, but, often, there is a better way. The
trick is to ensure that any needed memory allocations are done before
entering atomic context, using :c:func:`flex_array_prealloc()`::

    int flex_array_prealloc(struct flex_array *array, unsigned int start,
                            unsigned int nr_elements, gfp_t flags);

This function will ensure that memory for the elements indexed in the range
defined by ``start`` and ``nr_elements`` has been allocated. Thereafter, a
``flex_array_put()`` call on an element in that range is guaranteed not to
block.
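A sketch of that pattern, with the lock and the indices invented for
illustration::

    static DEFINE_SPINLOCK(records_lock);

    /* Process context: make sure elements 0..63 are backed by pages. */
    err = flex_array_prealloc(records, 0, 64, GFP_KERNEL);
    if (err)
            return err;

    spin_lock(&records_lock);       /* atomic from here on */
    /* Guaranteed not to block: storage for this range already exists. */
    flex_array_put(records, 13, &r, GFP_ATOMIC);
    spin_unlock(&records_lock);
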
Getting data back out of the array is done with :c:func:`flex_array_get()`::

    void *flex_array_get(struct flex_array *fa, unsigned int element_nr);

The return value is a pointer to the data element, or NULL if that
particular element has never been allocated.
Note that it is possible to get back a valid pointer for an element which
has never been stored in the array. Memory for array elements is allocated
one page at a time; a single allocation could provide memory for several
adjacent elements. Flexible array elements are normally initialized to the
value ``FLEX_ARRAY_FREE`` (defined as 0x6c in <linux/poison.h>), so errors
involving that number probably result from use of unstored array entries.
Note that, if array elements are allocated with ``__GFP_ZERO``, they will be
initialized to zero and this poisoning will not happen.
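Reading back the element stored in the sketch above therefore means
checking for both failure modes::

    struct record *p = flex_array_get(records, 42);

    if (!p)
            return -ENOENT; /* no page was ever allocated for this index */
    /*
     * p may still point at FLEX_ARRAY_FREE poison if this slot was never
     * stored; the caller must track which entries hold valid data.
     */
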
Individual elements in the array can be cleared with
:c:func:`flex_array_clear()`::

    int flex_array_clear(struct flex_array *array, unsigned int element_nr);

This function will set the given element to ``FLEX_ARRAY_FREE`` and return
zero. If storage for the indicated element is not allocated for the array,
``flex_array_clear()`` will return ``-EINVAL`` instead. Note that clearing an
element does not release the storage associated with it; to reduce the
allocated size of an array, call :c:func:`flex_array_shrink()`::

    int flex_array_shrink(struct flex_array *array);

The return value will be the number of pages of memory actually freed.
This function works by scanning the array for pages containing nothing but
``FLEX_ARRAY_FREE`` bytes, so (1) it can be expensive, and (2) it will not work
if the array's pages are allocated with ``__GFP_ZERO``.
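For instance, still using the invented ``records`` array (which was not
allocated with ``__GFP_ZERO``)::

    int freed;

    flex_array_clear(records, 42);          /* re-poison the element */
    freed = flex_array_shrink(records);     /* free all-FLEX_ARRAY_FREE pages */
    pr_debug("released %d pages\n", freed);
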
It is possible to remove all elements of an array with a call to
:c:func:`flex_array_free_parts()`::

    void flex_array_free_parts(struct flex_array *array);

This call frees all elements, but leaves the array itself in place.
Freeing the entire array is done with :c:func:`flex_array_free()`::

    void flex_array_free(struct flex_array *array);

As of this writing, there are no users of flexible arrays in the mainline
kernel. The functions described here are also not exported to modules;
that will probably be fixed when somebody comes up with a need for it.
Flexible array functions
------------------------
.. kernel-doc:: include/linux/flex_array.h
=================================
Generic radix trees/sparse arrays
=================================
.. kernel-doc:: include/linux/generic-radix-tree.h
:doc: Generic radix trees/sparse arrays
generic radix tree functions
----------------------------
.. kernel-doc:: include/linux/generic-radix-tree.h
:functions:
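For comparison, here is a rough sketch of the same storage problem using
the generic radix tree API documented above; the names and error handling
are illustrative, and the kernel-doc is authoritative::

    #include <linux/generic-radix-tree.h>

    struct record {
            u64 id;
            u64 value;
    };

    static GENRADIX(struct record) records;

    static int store_record(size_t idx, const struct record *r)
    {
            struct record *p;

            /* Allocates any needed interior nodes; may sleep. */
            p = genradix_ptr_alloc(&records, idx, GFP_KERNEL);
            if (!p)
                    return -ENOMEM;
            *p = *r;        /* elements are addressed directly */
            return 0;
    }

Note that, unlike flex_array_put(), genradix_ptr_alloc() returns a pointer
into the tree, so no separate copy step is needed; genradix_ptr() returns
NULL for indices that were never allocated, and genradix_free() tears the
whole structure down.
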
@@ -28,6 +28,7 @@ Core utilities
    errseq
    printk-formats
    circular-buffers
+   generic-radix-tree
    memory-allocation
    mm-api
    gfp_mask-from-fs-io
@@ -331,7 +331,10 @@ cia_prepare_tbia_workaround(int window)
         long i;

         /* Use minimal 1K map. */
-        ppte = memblock_alloc_from(CIA_BROKEN_TBIA_SIZE, 32768, 0);
+        ppte = memblock_alloc(CIA_BROKEN_TBIA_SIZE, 32768);
+        if (!ppte)
+                panic("%s: Failed to allocate %u bytes align=0x%x\n",
+                      __func__, CIA_BROKEN_TBIA_SIZE, 32768);
         pte = (virt_to_phys(ppte) >> (PAGE_SHIFT - 1)) | 1;

         for (i = 0; i < CIA_BROKEN_TBIA_SIZE / sizeof(unsigned long); ++i)
@@ -83,6 +83,9 @@ mk_resource_name(int pe, int port, char *str)
         sprintf(tmp, "PCI %s PE %d PORT %d", str, pe, port);
         name = memblock_alloc(strlen(tmp) + 1, SMP_CACHE_BYTES);
+        if (!name)
+                panic("%s: Failed to allocate %zu bytes\n", __func__,
+                      strlen(tmp) + 1);
         strcpy(name, tmp);

         return name;

@@ -118,6 +121,9 @@ alloc_io7(unsigned int pe)
         }

         io7 = memblock_alloc(sizeof(*io7), SMP_CACHE_BYTES);
+        if (!io7)
+                panic("%s: Failed to allocate %zu bytes\n", __func__,
+                      sizeof(*io7));
         io7->pe = pe;
         raw_spin_lock_init(&io7->irq_lock);
@@ -34,6 +34,9 @@ alloc_pci_controller(void)
         struct pci_controller *hose;

         hose = memblock_alloc(sizeof(*hose), SMP_CACHE_BYTES);
+        if (!hose)
+                panic("%s: Failed to allocate %zu bytes\n", __func__,
+                      sizeof(*hose));

         *hose_tail = hose;
         hose_tail = &hose->next;

@@ -44,7 +47,13 @@ alloc_pci_controller(void)
 struct resource * __init
 alloc_resource(void)
 {
-        return memblock_alloc(sizeof(struct resource), SMP_CACHE_BYTES);
+        void *ptr = memblock_alloc(sizeof(struct resource), SMP_CACHE_BYTES);
+
+        if (!ptr)
+                panic("%s: Failed to allocate %zu bytes\n", __func__,
+                      sizeof(struct resource));
+
+        return ptr;
 }

 SYSCALL_DEFINE3(pciconfig_iobase, long, which, unsigned long, bus,

@@ -54,7 +63,7 @@ SYSCALL_DEFINE3(pciconfig_iobase, long, which, unsigned long, bus,
         /* from hose or from bus.devfn */
         if (which & IOBASE_FROM_HOSE) {
                 for (hose = hose_head; hose; hose = hose->next)
                         if (hose->index == bus)
                                 break;
                 if (!hose)
@@ -393,6 +393,9 @@ alloc_pci_controller(void)
         struct pci_controller *hose;

         hose = memblock_alloc(sizeof(*hose), SMP_CACHE_BYTES);
+        if (!hose)
+                panic("%s: Failed to allocate %zu bytes\n", __func__,
+                      sizeof(*hose));

         *hose_tail = hose;
         hose_tail = &hose->next;

@@ -403,7 +406,13 @@ alloc_pci_controller(void)
 struct resource * __init
 alloc_resource(void)
 {
-        return memblock_alloc(sizeof(struct resource), SMP_CACHE_BYTES);
+        void *ptr = memblock_alloc(sizeof(struct resource), SMP_CACHE_BYTES);
+
+        if (!ptr)
+                panic("%s: Failed to allocate %zu bytes\n", __func__,
+                      sizeof(struct resource));
+
+        return ptr;
 }
@@ -80,6 +80,9 @@ iommu_arena_new_node(int nid, struct pci_controller *hose, dma_addr_t base,
                        " falling back to system-wide allocation\n",
                        __func__, nid);
                 arena = memblock_alloc(sizeof(*arena), SMP_CACHE_BYTES);
+                if (!arena)
+                        panic("%s: Failed to allocate %zu bytes\n", __func__,
+                              sizeof(*arena));
         }

         arena->ptes = memblock_alloc_node(sizeof(*arena), align, nid);

@@ -87,13 +90,22 @@ iommu_arena_new_node(int nid, struct pci_controller *hose, dma_addr_t base,
                 printk("%s: couldn't allocate arena ptes from node %d\n"
                        " falling back to system-wide allocation\n",
                        __func__, nid);
-                arena->ptes = memblock_alloc_from(mem_size, align, 0);
+                arena->ptes = memblock_alloc(mem_size, align);
+                if (!arena->ptes)
+                        panic("%s: Failed to allocate %lu bytes align=0x%lx\n",
+                              __func__, mem_size, align);
         }

 #else /* CONFIG_DISCONTIGMEM */

         arena = memblock_alloc(sizeof(*arena), SMP_CACHE_BYTES);
-        arena->ptes = memblock_alloc_from(mem_size, align, 0);
+        if (!arena)
+                panic("%s: Failed to allocate %zu bytes\n", __func__,
+                      sizeof(*arena));
+        arena->ptes = memblock_alloc(mem_size, align);
+        if (!arena->ptes)
+                panic("%s: Failed to allocate %lu bytes align=0x%lx\n",
+                      __func__, mem_size, align);

 #endif /* CONFIG_DISCONTIGMEM */
@@ -293,7 +293,7 @@ move_initrd(unsigned long mem_limit)
         unsigned long size;

         size = initrd_end - initrd_start;
-        start = memblock_alloc_from(PAGE_ALIGN(size), PAGE_SIZE, 0);
+        start = memblock_alloc(PAGE_ALIGN(size), PAGE_SIZE);
         if (!start || __pa(start) + size > mem_limit) {
                 initrd_start = initrd_end = 0;
                 return NULL;
@@ -181,8 +181,7 @@ static void init_unwind_hdr(struct unwind_table *table,
  */
 static void *__init unw_hdr_alloc_early(unsigned long sz)
 {
-        return memblock_alloc_from_nopanic(sz, sizeof(unsigned int),
-                                           MAX_DMA_ADDRESS);
+        return memblock_alloc_from(sz, sizeof(unsigned int), MAX_DMA_ADDRESS);
 }

 static void *unw_hdr_alloc(unsigned long sz)
@@ -124,6 +124,10 @@ static noinline pte_t * __init alloc_kmap_pgtable(unsigned long kvaddr)
         pmd_k = pmd_offset(pud_k, kvaddr);

         pte_k = (pte_t *)memblock_alloc_low(PAGE_SIZE, PAGE_SIZE);
+        if (!pte_k)
+                panic("%s: Failed to allocate %lu bytes align=0x%lx\n",
+                      __func__, PAGE_SIZE, PAGE_SIZE);
+
         pmd_populate_kernel(&init_mm, pmd_k, pte_k);
         return pte_k;
 }
@@ -867,6 +867,9 @@ static void __init request_standard_resources(const struct machine_desc *mdesc)
                 boot_alias_start = phys_to_idmap(start);
                 if (arm_has_idmap_alias() && boot_alias_start != IDMAP_INVALID_ADDR) {
                         res = memblock_alloc(sizeof(*res), SMP_CACHE_BYTES);
+                        if (!res)
+                                panic("%s: Failed to allocate %zu bytes\n",
+                                      __func__, sizeof(*res));
                         res->name = "System RAM (boot alias)";
                         res->start = boot_alias_start;
                         res->end = phys_to_idmap(end);

@@ -875,6 +878,9 @@ static void __init request_standard_resources(const struct machine_desc *mdesc)
                 }

                 res = memblock_alloc(sizeof(*res), SMP_CACHE_BYTES);
+                if (!res)
+                        panic("%s: Failed to allocate %zu bytes\n", __func__,
+                              sizeof(*res));
                 res->name = "System RAM";
                 res->start = start;
                 res->end = end;
@@ -205,7 +205,11 @@ phys_addr_t __init arm_memblock_steal(phys_addr_t size, phys_addr_t align)
         BUG_ON(!arm_memblock_steal_permitted);

-        phys = memblock_alloc_base(size, align, MEMBLOCK_ALLOC_ANYWHERE);
+        phys = memblock_phys_alloc(size, align);
+        if (!phys)
+                panic("Failed to steal %pa bytes at %pS\n",
+                      &size, (void *)_RET_IP_);
+
         memblock_free(phys, size);
         memblock_remove(phys, size);
@@ -721,7 +721,13 @@ EXPORT_SYMBOL(phys_mem_access_prot);
 static void __init *early_alloc(unsigned long sz)
 {
-        return memblock_alloc(sz, sz);
+        void *ptr = memblock_alloc(sz, sz);
+
+        if (!ptr)
+                panic("%s: Failed to allocate %lu bytes align=0x%lx\n",
+                      __func__, sz, sz);
+
+        return ptr;
 }

 static void *__init late_alloc(unsigned long sz)

@@ -994,6 +1000,9 @@ void __init iotable_init(struct map_desc *io_desc, int nr)
                 return;

         svm = memblock_alloc(sizeof(*svm) * nr, __alignof__(*svm));
+        if (!svm)
+                panic("%s: Failed to allocate %zu bytes align=0x%zx\n",
+                      __func__, sizeof(*svm) * nr, __alignof__(*svm));

         for (md = io_desc; nr; md++, nr--) {
                 create_mapping(md);

@@ -1016,6 +1025,9 @@ void __init vm_reserve_area_early(unsigned long addr, unsigned long size,
         struct static_vm *svm;

         svm = memblock_alloc(sizeof(*svm), __alignof__(*svm));
+        if (!svm)
+                panic("%s: Failed to allocate %zu bytes align=0x%zx\n",
+                      __func__, sizeof(*svm), __alignof__(*svm));

         vm = &svm->vm;
         vm->addr = (void *)addr;
@@ -208,6 +208,7 @@ static void __init request_standard_resources(void)
         struct memblock_region *region;
         struct resource *res;
         unsigned long i = 0;
+        size_t res_size;

         kernel_code.start = __pa_symbol(_text);
         kernel_code.end = __pa_symbol(__init_begin - 1);

@@ -215,9 +216,10 @@ static void __init request_standard_resources(void)
         kernel_data.end = __pa_symbol(_end - 1);

         num_standard_resources = memblock.memory.cnt;
-        standard_resources = memblock_alloc_low(num_standard_resources *
-                                                sizeof(*standard_resources),
-                                                SMP_CACHE_BYTES);
+        res_size = num_standard_resources * sizeof(*standard_resources);
+        standard_resources = memblock_alloc_low(res_size, SMP_CACHE_BYTES);
+        if (!standard_resources)
+                panic("%s: Failed to allocate %zu bytes\n", __func__, res_size);

         for_each_memblock(memory, region) {
                 res = &standard_resources[i++];
@@ -40,6 +40,11 @@ static phys_addr_t __init kasan_alloc_zeroed_page(int node)
         void *p = memblock_alloc_try_nid(PAGE_SIZE, PAGE_SIZE,
                                          __pa(MAX_DMA_ADDRESS),
                                          MEMBLOCK_ALLOC_KASAN, node);
+        if (!p)
+                panic("%s: Failed to allocate %lu bytes align=0x%lx nid=%d from=%llx\n",
+                      __func__, PAGE_SIZE, PAGE_SIZE, node,
+                      __pa(MAX_DMA_ADDRESS));
+
         return __pa(p);
 }

@@ -48,6 +53,11 @@ static phys_addr_t __init kasan_alloc_raw_page(int node)
         void *p = memblock_alloc_try_nid_raw(PAGE_SIZE, PAGE_SIZE,
                                              __pa(MAX_DMA_ADDRESS),
                                              MEMBLOCK_ALLOC_KASAN, node);
+        if (!p)
+                panic("%s: Failed to allocate %lu bytes align=0x%lx nid=%d from=%llx\n",
+                      __func__, PAGE_SIZE, PAGE_SIZE, node,
+                      __pa(MAX_DMA_ADDRESS));
+
         return __pa(p);
 }
@@ -103,6 +103,8 @@ static phys_addr_t __init early_pgtable_alloc(void)
         void *ptr;

         phys = memblock_phys_alloc(PAGE_SIZE, PAGE_SIZE);
+        if (!phys)
+                panic("Failed to allocate page table page\n");

         /*
          * The FIX_{PGD,PUD,PMD} slots may be in active use, but the FIX_PTE
@@ -237,6 +237,10 @@ static void __init setup_node_data(int nid, u64 start_pfn, u64 end_pfn)
                 pr_info("Initmem setup node %d [<memory-less node>]\n", nid);

         nd_pa = memblock_phys_alloc_try_nid(nd_size, SMP_CACHE_BYTES, nid);
+        if (!nd_pa)
+                panic("Cannot allocate %zu bytes for node %d data\n",
+                      nd_size, nid);
+
         nd = __va(nd_pa);

         /* report and initialize */
@@ -138,6 +138,10 @@ void __init coherent_mem_init(phys_addr_t start, u32 size)
         dma_bitmap = memblock_alloc(BITS_TO_LONGS(dma_pages) * sizeof(long),
                                     sizeof(long));
+        if (!dma_bitmap)
+                panic("%s: Failed to allocate %zu bytes align=0x%zx\n",
+                      __func__, BITS_TO_LONGS(dma_pages) * sizeof(long),
+                      sizeof(long));
 }

 static void c6x_dma_sync(struct device *dev, phys_addr_t paddr, size_t size,