- Jul 06, 2022
-
Ariel D'Alessandro authored
The build-amd64 job will generate the core.efi binary artifact that can PXE boot on UEFI x86_64 platforms.

Signed-off-by: Ariel D'Alessandro <ariel.dalessandro@collabora.com>
-
- Jul 04, 2022
-
Glenn Washburn authored
Signed-off-by: Glenn Washburn <development@efficientek.com>
Reviewed-by: Daniel Kiper <daniel.kiper@oracle.com>
-
Mathieu Desnoyers authored
There are no users left of version_find_latest(), version_test_gt(), and version_test_numeric(). Remove those unused helper functions. Using those helper functions is what caused the quadratic sorting performance issues in the first place, so removing them is a net win.

Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Reviewed-by: Robbie Harwood <rharwood@redhat.com>
Reviewed-by: Daniel Kiper <daniel.kiper@oracle.com>
-
Mathieu Desnoyers authored
The current implementation of the 10_kfreebsd script implements its menu items sorting in bash with a quadratic algorithm, calling "sed", "sort", "head", and "grep" to compare versions between individual lines, which is annoyingly slow for kernel developers who can easily end up with 50-100 kernels in their boot partition.

This fix is ported from the 10_linux script, which has a similar quadratic code pattern.

Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Cc: debian-bsd@lists.debian.org
Reviewed-by: Daniel Kiper <daniel.kiper@oracle.com>
-
Mathieu Desnoyers authored
The current implementation of the 10_hurd script implements its menu items sorting in bash with a quadratic algorithm, calling "sed", "sort", "head", and "grep" to compare versions between individual lines, which is annoyingly slow for kernel developers who can easily end up with 50-100 kernels in their boot partition.

This fix is ported from the 10_linux script, which has a similar quadratic code pattern.

Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Cc: Samuel Thibault <samuel.thibault@ens-lyon.org>
Tested-by: Samuel Thibault <samuel.thibault@ens-lyon.org>
Reviewed-by: Daniel Kiper <daniel.kiper@oracle.com>
-
Mathieu Desnoyers authored
The current implementation of the 20_linux_xen script implements its menu items sorting in bash with a quadratic algorithm, calling "sed", "sort", "head", and "grep" to compare versions between individual lines, which is annoyingly slow for kernel developers who can easily end up with 50-100 kernels in their boot partition.

This fix is ported from the 10_linux script, which has a similar quadratic code pattern.

Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Cc: xen-devel@lists.xenproject.org
Tested-by: Jason Andryuk <jandryuk@gmail.com>
Reviewed-by: Daniel Kiper <daniel.kiper@oracle.com>
-
Mathieu Desnoyers authored
The current implementation of the 10_linux script implements its menu items sorting in bash with a quadratic algorithm, calling "sed", "sort", "head", and "grep" to compare versions between individual lines, which is annoyingly slow for kernel developers who can easily end up with 50-100 kernels in /boot.

As an example, on an Intel(R) Core(TM) i7-8650U CPU @ 1.90GHz, running:

    /usr/sbin/grub-mkconfig > /dev/null

With 44 kernels in /boot, this command takes 10-15 seconds to complete. After this fix, the same command runs in 5 seconds.

With 116 kernels in /boot, this command takes 40 seconds to complete. After this fix, the same command runs in 8 seconds.

For reference, the quadratic algorithm here is:

    while [ "x$list" != "x" ] ; do                 <--- outer loop
      linux=`version_find_latest $list`
          version_find_latest()
            for i in "$@" ; do                     <--- inner loop
              version_test_gt()
                fork+exec sed
                version_test_numeric()
                  version_sort
                    fork+exec sort
                  fork+exec head -n 1
                fork+exec grep
      list=`echo $list | tr ' ' '\n' | fgrep -vx "$linux" | tr '\n' ' '`
          tr
          fgrep
          tr

So all commands executed under version_test_gt() are executed O(n^2) times where n is the number of kernel images in /boot.

Here is the improved algorithm proposed:

- Prepare a list with all the relevant information for ordering by a single sort(1) execution. This is done by renaming ".old" suffixes to " 1" and by suffixing all other files with " 2", thus making sure the ".old" entries will follow the non-old entries in reverse-sorted order.
- Call version_reverse_sort on the list (sort -r -V): a single execution of sort(1). For instance, GNU coreutils' sort will reverse-sort the list in O(n*log(n)) with a merge sort.
- Replace the " 1" suffixes by ".old", and remove the " 2" suffixes.
- Iterate on the reverse-sorted list to output each menu entry item.

Therefore, the algorithm proposed has O(n*log(n)) complexity with GNU coreutils' sort compared to the prior O(n^2) complexity. Moreover, the constant time required for each list entry is much less because sorting is done within a single execution of sort(1) rather than requiring O(n^2) executions of sed(1), sort(1), head(1), and grep(1) in sub-shells.

Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Reviewed-by: Robbie Harwood <rharwood@redhat.com>
Reviewed-by: Daniel Kiper <daniel.kiper@oracle.com>
-
Glenn Washburn authored
Signed-off-by: Glenn Washburn <development@efficientek.com>
Reviewed-by: Patrick Steinhardt <ps@pks.im>
Reviewed-by: Daniel Kiper <daniel.kiper@oracle.com>
-
Glenn Washburn authored
Using the disk read hook mechanism, set up a read hook on the source disk which will read from the given header file during the scan and recovery cryptodisk backend functions. Disk read hooks are executed after the data has been read from the disk. This is okay, because the read hook is given the read buffer before it's sent back to the caller. In this case, the hook can then overwrite the data read from the disk device with data from the header file sent in as the read hook data. This is transparent to the read caller. Since the callers of this function have just opened the source disk, there are no current read hooks, so there's no need to save/restore them nor consider whether they should be called.

This hook assumes that the header is at the start of the volume, which is not the case for some formats (e.g. GELI). So GELI will return an error if a detached header is specified. It also can only be used with formats where the detached header file could be written to the first blocks of the volume and the volume could still be unlocked. So the header file can not be formatted differently from the on-disk header. If these assumptions are not met, detached header file processing must be specially handled in the cryptodisk backend module.

The hook will potentially be called many times by a backend. This is fine because of the assumptions mentioned, and because the read hook reads from absolute offsets and is stateless.

Also add a --header (short -H) option to cryptomount which takes a file argument.

Signed-off-by: Glenn Washburn <development@efficientek.com>
Reviewed-by: Patrick Steinhardt <ps@pks.im>
Reviewed-by: Daniel Kiper <daniel.kiper@oracle.com>
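
To make the substitution idea concrete, here is a minimal standalone sketch in plain C (the struct and function names are hypothetical, not GRUB's actual hook API): the hook runs after the device read, works on absolute byte offsets, keeps no state, and simply overwrites the returned bytes with the detached header's bytes when the read falls inside the header region.

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    /* Hypothetical stand-in for the detached header file, read into memory. */
    struct detached_header {
        const uint8_t *data;   /* bytes of the header file */
        uint64_t       size;   /* length of the header file in bytes */
    };

    /* Called after the disk read has filled 'buf'; may transparently
       replace the returned bytes with data from the detached header. */
    static void
    header_read_hook (uint64_t disk_offset, void *buf, uint64_t len,
                      const struct detached_header *hdr)
    {
        if (disk_offset >= hdr->size)
            return;                       /* read is past the header area */

        uint64_t avail = hdr->size - disk_offset;
        uint64_t n = len < avail ? len : avail;

        memcpy (buf, hdr->data + disk_offset, n);
    }

    int
    main (void)
    {
        uint8_t fake_header[16] = "DETACHED-HEADER";
        struct detached_header hdr = { fake_header, sizeof (fake_header) };

        uint8_t sector[16];
        memset (sector, 0xAA, sizeof (sector));  /* pretend this came off disk */

        header_read_hook (0, sector, sizeof (sector), &hdr);
        printf ("%.15s\n", (char *) sector);     /* prints the header bytes */
        return 0;
    }

Because the hook is stateless and offset-based, it does not matter how many times, or in what order, a backend triggers it.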
-
Glenn Washburn authored
It will be desirable in the future to allow the read hook to modify the data passed back from a read function call on a disk or file. This adds that infrastructure and has no impact on code flow for existing uses of the read hook.

Also changed is that when the read hook callback is called, it can now indicate what error code should be sent back to the read caller.

Signed-off-by: Glenn Washburn <development@efficientek.com>
Reviewed-by: Patrick Steinhardt <ps@pks.im>
Reviewed-by: Daniel Kiper <daniel.kiper@oracle.com>
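
As a rough illustration of the extended hook contract (hypothetical names, not GRUB's actual read hook typedef): the hook receives the freshly read buffer, may rewrite it in place, and its return value becomes the error code the read call reports.

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    typedef int read_err_t;   /* 0 == success, as a stand-in error type */

    /* The hook sees the offset, the buffer that was just filled, and its
       length; it may modify the buffer, and its return value is handed
       back to the read caller as the error code. */
    typedef read_err_t (*read_hook_t) (uint64_t offset, void *buf,
                                       uint64_t len, void *hook_data);

    static read_err_t
    do_read (uint64_t offset, void *buf, uint64_t len,
             read_hook_t hook, void *hook_data)
    {
        memset (buf, 0x5A, len);          /* pretend: read from the device */

        if (hook)
            return hook (offset, buf, len, hook_data);
        return 0;
    }

    static read_err_t
    rewrite_hook (uint64_t offset, void *buf, uint64_t len, void *hook_data)
    {
        (void) offset; (void) hook_data;
        memset (buf, 0x41, len);          /* hook rewrites the returned data */
        return 0;                         /* and chooses the error code */
    }

    int
    main (void)
    {
        char buf[4];
        read_err_t err = do_read (0, buf, sizeof (buf), rewrite_hook, NULL);
        printf ("err=%d first-byte=0x%02X\n", err, (unsigned char) buf[0]);
        return 0;
    }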
-
Glenn Washburn authored
Document the variables net_<interface>_clientid, net_<interface>_clientuuid, lockdown, and shim_lock in the list of special environment variables.

Signed-off-by: Glenn Washburn <development@efficientek.com>
Reviewed-by: Daniel Kiper <daniel.kiper@oracle.com>
-
Patrick Steinhardt authored
Adjust the interface of grub_efi_mm_add_regions() to take a set of GRUB_MM_ADD_REGION_* flags, which for now consists only of the GRUB_MM_ADD_REGION_CONSECUTIVE flag. This allows us to set the function up as a callback for the memory subsystem and have it call out to us in case there are not enough pages available in the current heap.

Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Daniel Axtens <dja@axtens.net>
Reviewed-by: Daniel Kiper <daniel.kiper@oracle.com>
Tested-by: Patrick Steinhardt <ps@pks.im>
-
Patrick Steinhardt authored
The function add_memory_regions() is currently only called on system initialization to allocate a fixed amount of pages. As such, it didn't need to return any errors: in case it failed, we couldn't proceed anyway.

This will change with the upcoming support for requesting more memory from the firmware at runtime, where it doesn't make sense anymore to fail hard. Refactor the function to return an error to prepare for this.

Note that this does not change the behaviour when initializing the memory system because grub_efi_mm_init() knows to call grub_fatal() in case grub_efi_mm_add_regions() returns an error.

Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Daniel Axtens <dja@axtens.net>
Reviewed-by: Daniel Kiper <daniel.kiper@oracle.com>
Tested-by: Patrick Steinhardt <ps@pks.im>
-
Patrick Steinhardt authored
In preparation for supporting runtime allocation of additional memory regions, this patch extracts the function that retrieves the EFI memory map and adds a subset of it to GRUB's own memory regions.

Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Daniel Axtens <dja@axtens.net>
Reviewed-by: Daniel Kiper <daniel.kiper@oracle.com>
Tested-by: Patrick Steinhardt <ps@pks.im>
-
Patrick Steinhardt authored
When initializing the EFI memory subsystem, we will by default request a quarter of the available memory, bounded by a minimum/maximum value. Given that we're about to extend the EFI memory system to dynamically request additional pages from the firmware as required, this scaling of requested memory based on available memory will not make a lot of sense anymore. Remove this logic as a preparatory patch such that we'll instead defer to the runtime memory allocator.

Note that ideally, we'd want to change this after dynamic requesting of pages has been implemented for the EFI platform. But because we'll need to split up initialization of the memory subsystem and the request of pages from the firmware, we'd have to duplicate quite a lot of logic at first, only to remove it again afterwards. This seems quite pointless, so we instead have the patches slightly out of order.

Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Daniel Axtens <dja@axtens.net>
Reviewed-by: Daniel Kiper <daniel.kiper@oracle.com>
Tested-by: Patrick Steinhardt <ps@pks.im>
-
Patrick Steinhardt authored
Currently, all platforms will set up their heap on initialization of the platform code. While this works mostly fine, it poses some limitations on memory management. Most notably, allocating big chunks of memory in the gigabyte range would require us to pre-request that many bytes from the firmware and add them to the heap from the beginning on some platforms like EFI. As this isn't needed for most configurations, it is inefficient and may even negatively impact some use cases when, e.g., chainloading. Nonetheless, allocating big chunks of memory is required sometimes, one example being the upcoming support for the Argon2 key derivation function in LUKS2.

In order to avoid pre-allocating big chunks of memory, this commit implements a runtime mechanism to add more pages to the system. When a given allocation cannot currently be satisfied, we'll call a callback set up by the platform's own memory management subsystem, asking it to add a memory area with at least "n" bytes. If this succeeds, we retry searching for a valid memory region, which should now succeed.

If this fails, we try asking for "n" bytes, possibly spread across multiple regions, in hopes that region merging means we end up with enough memory for things to work out.

Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Daniel Axtens <dja@axtens.net>
Tested-by: Stefan Berger <stefanb@linux.ibm.com>
Reviewed-by: Daniel Kiper <daniel.kiper@oracle.com>
Tested-by: Patrick Steinhardt <ps@pks.im>
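
A minimal standalone sketch of the retry flow described above, in plain C with hypothetical names (request_more_regions, try_alloc_from_regions, and the malloc stub are illustrative, not GRUB's mm internals):

    #include <stddef.h>
    #include <stdio.h>
    #include <stdlib.h>

    /* Hypothetical callback installed by the platform's memory code: asked
       to add region(s) totalling at least 'size' bytes; 'consecutive' asks
       for a single contiguous region. Returns 0 on success. */
    typedef int (*add_region_fn) (size_t size, int consecutive);

    static add_region_fn request_more_regions;   /* set up by the platform */

    static void *try_alloc_from_regions (size_t size);  /* existing path */

    static void *
    alloc_with_growth (size_t size)
    {
        void *p = try_alloc_from_regions (size);
        if (p || !request_more_regions)
            return p;

        /* First ask for one contiguous region of at least 'size' bytes... */
        if (request_more_regions (size, 1) == 0)
        {
            p = try_alloc_from_regions (size);
            if (p)
                return p;
        }

        /* ...then fall back to the same amount spread across regions,
           hoping region merging still yields a large enough block. */
        if (request_more_regions (size, 0) == 0)
            p = try_alloc_from_regions (size);

        return p;
    }

    /* Stub so the sketch runs; the real search lives in the mm subsystem. */
    static void *
    try_alloc_from_regions (size_t size)
    {
        return malloc (size);
    }

    int
    main (void)
    {
        void *p = alloc_with_growth (64);
        printf ("%s\n", p ? "allocated" : "failed");
        free (p);
        return 0;
    }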
-
Patrick Steinhardt authored
In grub_memalign(), there's a commented-out section which would allow for unloading of unneeded modules in cases where there is not enough free memory available to satisfy a request. Given that this code is never compiled in, let's remove it together with grub_dl_unload_unneeded().

Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Daniel Axtens <dja@axtens.net>
Reviewed-by: Daniel Kiper <daniel.kiper@oracle.com>
Tested-by: Patrick Steinhardt <ps@pks.im>
-
Daniel Axtens authored
This is handy for debugging. Enable with "set debug=regions".

Signed-off-by: Daniel Axtens <dja@axtens.net>
Reviewed-by: Daniel Kiper <daniel.kiper@oracle.com>
Tested-by: Patrick Steinhardt <ps@pks.im>
-
Daniel Axtens authored
On x86_64-efi (at least), regions seem to be added from the top down. The mm code will merge a new region with an existing region that comes immediately before the new region. This allows larger allocations to be satisfied than would otherwise be the case.

On powerpc-ieee1275, however, regions are added from the bottom up. So if we add 3x 32MB regions, we can still only satisfy a 32MB allocation, rather than the 96MB allocation we might otherwise be able to satisfy.

* Define 'post_size' as being bytes lost to the end of an allocation due to being given weird sizes from firmware that are not multiples of GRUB_MM_ALIGN.
* Allow merging of regions immediately _after_ existing regions, not just before. As with the other approach, we create an allocated block to represent the new space and then pass it to grub_free() to get the metadata right.

Signed-off-by: Daniel Axtens <dja@axtens.net>
Tested-by: Stefan Berger <stefanb@linux.ibm.com>
Reviewed-by: Daniel Kiper <daniel.kiper@oracle.com>
Tested-by: Patrick Steinhardt <ps@pks.im>
-
- Jun 29, 2022
-
Daniel Axtens authored
grub_mm_region_init() does:

    h = (grub_mm_header_t) (r + 1);

where h is a grub_mm_header_t and r is a grub_mm_region_t. Cells are supposed to be GRUB_MM_ALIGN aligned, but while grub_mm_dump ensures this vs the region header, grub_mm_region_init() does not.

It's better to be explicit than implicit here: rather than changing grub_mm_region_init() to ALIGN_UP(), require that the struct is explicitly a multiple of the header size.

Signed-off-by: Daniel Axtens <dja@axtens.net>
Reviewed-by: Daniel Kiper <daniel.kiper@oracle.com>
Tested-by: Patrick Steinhardt <ps@pks.im>
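
A compile-time version of that requirement can be expressed roughly as below. This is a sketch with a simplified stand-in struct and a made-up MM_ALIGN value, not GRUB's actual grub_mm_region layout: the point is simply that if the region header's size is a multiple of the cell alignment, then the first cell placed at (header + 1) is automatically aligned.

    #include <assert.h>
    #include <stdio.h>

    #define MM_ALIGN 16   /* stand-in for GRUB_MM_ALIGN */

    struct region_header {
        struct region_header *next;
        unsigned long size;
        unsigned long pre_size;
        /* Any padding must make the struct a multiple of MM_ALIGN so that
           the first cell, placed at (header + 1), is itself aligned. */
    } __attribute__ ((aligned (MM_ALIGN)));

    /* The explicit requirement, instead of ALIGN_UP()ing at runtime:
       fail the build if the header size is not a multiple. */
    static_assert (sizeof (struct region_header) % MM_ALIGN == 0,
                   "region header must be a multiple of the cell alignment");

    int
    main (void)
    {
        printf ("sizeof(region header) = %zu\n",
                sizeof (struct region_header));
        return 0;
    }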
-
- Jun 28, 2022
-
Daniel Axtens authored
This breaks the tests on pseries - just restrict it to x86 platforms that don't specify an EFI.

Signed-off-by: Daniel Axtens <dja@axtens.net>
Reviewed-by: Daniel Kiper <daniel.kiper@oracle.com>
-
- Jun 07, 2022
-
Darren Kenny authored
The corpus was generating issues in grub_btrfs_read_logical() when attempting to iterate over stripe entries in the superblock's bootmapping. In most cases the reason for the failure was that the number of stripes in chunk->nstripes exceeded the space statically allocated for the superblock bootmapping. Each stripe entry in the bootmapping block consists of a grub_btrfs_key followed by a grub_btrfs_chunk_stripe.

Another issue that came up was that while calculating the chunk size, in an earlier piece of code in that function, depending on the data provided in the btrfs file system, it would end up calculating a size that was too small to contain even one grub_btrfs_chunk_item, which is obviously invalid too.

Signed-off-by: Darren Kenny <darren.kenny@oracle.com>
Reviewed-by: Daniel Kiper <daniel.kiper@oracle.com>
-
Darren Kenny authored
The fuzzer is generating btrfs file systems that have chunks with invalid combinations of stripes and substripes for the given RAID configurations.

After examining the Linux kernel fs/btrfs/tree-checker.c code, it appears that sub-stripes should only be applied to RAID10, and in that case there should only ever be 2 of them. Similarly, RAID single should only have 1 stripe, and RAID1/1C3/1C4 should have 2, 3 or 4 stripes respectively, matching the redundancy they provide.

Some of the chunks ended up with a size of 0, which grub_malloc() still returned memory for, and that in turn generated ASAN errors later when accessed.

While it would be possible to specifically limit the number of stripes, a more correct test is to compare the size of the chunk item plus the number of stripes multiplied by the size of the chunk stripe structure against the size of the chunk itself.

Signed-off-by: Darren Kenny <darren.kenny@oracle.com>
Reviewed-by: Daniel Kiper <daniel.kiper@oracle.com>
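
A minimal standalone sketch of that size-based validation, using simplified, hypothetical struct layouts rather than GRUB's actual grub_btrfs_chunk_item/grub_btrfs_chunk_stripe definitions:

    #include <stdint.h>
    #include <stdio.h>

    /* Simplified stand-ins for the on-disk btrfs structures. */
    struct chunk_stripe { uint64_t devid, offset; uint8_t uuid[16]; };
    struct chunk_item   { uint64_t length, owner, stripe_len, type;
                          uint16_t nstripes, nsubstripes; };

    /* Rather than capping nstripes directly, verify that the chunk item
       plus its declared stripes actually fit inside the chunk as sized
       on disk. */
    static int
    chunk_stripes_fit (uint64_t chunk_size, uint16_t nstripes)
    {
        if (nstripes == 0)
            return 0;                       /* a chunk needs stripes */

        uint64_t need = sizeof (struct chunk_item)
                        + (uint64_t) nstripes * sizeof (struct chunk_stripe);
        return need <= chunk_size;
    }

    int
    main (void)
    {
        uint64_t ok_size = sizeof (struct chunk_item)
                           + 2 * sizeof (struct chunk_stripe);

        printf ("%d\n", chunk_stripes_fit (ok_size, 2));   /* fits: 1 */
        printf ("%d\n", chunk_stripes_fit (64, 1000));     /* bogus: 0 */
        return 0;
    }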
-
Darren Kenny authored
According to the btrfs code in Linux, the structure of a directory item leaf should be of the form:

    |struct btrfs_dir_item|name|data|

In GRUB, the name length and data length are in the grub_btrfs_dir_item structure's n and m fields respectively. The combined size of the structure, name and data should be less than the allocated memory. A difference from the Linux kernel's struct btrfs_dir_item is that grub_btrfs_dir_item has an extra field for where the name is stored, so we adjust for that too.

Signed-off-by: Darren Kenny <darren.kenny@oracle.com>
Reviewed-by: Daniel Kiper <daniel.kiper@oracle.com>
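
As a rough standalone sketch of that length check (the struct below is hypothetical, only mimicking the n/m fields and the extra in-struct name placeholder described above):

    #include <stdint.h>
    #include <stdio.h>

    /* Hypothetical stand-in: 'n' is the name length, 'm' is the data
       length; 'name' is the extra in-struct field GRUB keeps for where
       the name is stored. */
    struct dir_item {
        uint64_t transid;
        uint16_t m;       /* data length */
        uint16_t n;       /* name length */
        uint8_t  type;
        char     name[1];
    };

    /* |struct dir_item|name|data| must fit in the bytes actually
       allocated, adjusting for the name[] placeholder that sizeof
       already counts. */
    static int
    dir_item_fits (uint32_t allocated, uint16_t name_len, uint16_t data_len)
    {
        uint32_t need = (uint32_t) sizeof (struct dir_item) - 1
                        + name_len + data_len;
        return need <= allocated;
    }

    int
    main (void)
    {
        printf ("%d\n", dir_item_fits (64, 8, 4));      /* plausible: 1 */
        printf ("%d\n", dir_item_fits (32, 60000, 0));  /* oversized: 0 */
        return 0;
    }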
-
Sudhakar Kuppusamy authored
A corrupt f2fs file system might specify a name length which is greater than the maximum name length supported by the GRUB f2fs driver. We will allocate enough memory to store the overly long name, but there are only F2FS_NAME_LEN bytes in the source, so we would read past the end of the source.

While checking directory entries, do not copy a file name with an invalid length.

Signed-off-by: Sudhakar Kuppusamy <sudhakar@linux.ibm.com>
Signed-off-by: Daniel Axtens <dja@axtens.net>
Reviewed-by: Daniel Kiper <daniel.kiper@oracle.com>
-
Sudhakar Kuppusamy authored
A corrupt f2fs filesystem could have a block offset or a bitmap offset that would cause us to read beyond the bounds of the nat bitmap.

Introduce the nat_bitmap_size member in grub_f2fs_data, which holds the size of the nat bitmap. Set the size when loading the nat bitmap in nat_bitmap_ptr(), and catch when an invalid offset would create a pointer past the end of the allocated space. Check against the bitmap size in grub_f2fs_test_bit() to avoid reading past the end of the nat bitmap.

Signed-off-by: Sudhakar Kuppusamy <sudhakar@linux.ibm.com>
Signed-off-by: Daniel Axtens <dja@axtens.net>
Reviewed-by: Daniel Kiper <daniel.kiper@oracle.com>
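
A minimal standalone sketch of the bounds check (hypothetical names and a plain struct, not GRUB's grub_f2fs_data): bit lookups must stay inside the nat bitmap whose size is recorded when the bitmap is loaded.

    #include <stdint.h>
    #include <stdio.h>

    struct f2fs_ctx {
        const uint8_t *nat_bitmap;
        uint32_t       nat_bitmap_size;   /* bytes, recorded at load time */
    };

    static int
    test_bit_checked (const struct f2fs_ctx *c, uint32_t nr)
    {
        uint32_t byte = nr >> 3;

        if (byte >= c->nat_bitmap_size)
            return -1;                    /* out of range: treat as error */

        return (c->nat_bitmap[byte] >> (nr & 7)) & 1;
    }

    int
    main (void)
    {
        uint8_t bitmap[4] = { 0x01, 0x00, 0x00, 0x80 };
        struct f2fs_ctx c = { bitmap, sizeof (bitmap) };

        printf ("%d\n", test_bit_checked (&c, 0));    /* 1 */
        printf ("%d\n", test_bit_checked (&c, 31));   /* 1 */
        printf ("%d\n", test_bit_checked (&c, 999));  /* -1: past the bitmap */
        return 0;
    }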
-
Sudhakar Kuppusamy authored
A corrupt f2fs file system could specify a nat journal entry count that is beyond the maximum NAT_JOURNAL_ENTRIES.

Check the specified nat journal entry count before accessing the array, and throw an error if it is too large.

Signed-off-by: Sudhakar Kuppusamy <sudhakar@linux.ibm.com>
Signed-off-by: Daniel Axtens <dja@axtens.net>
Reviewed-by: Daniel Kiper <daniel.kiper@oracle.com>
-
Daniel Axtens authored
In a similar vein to the previous patch, parse_line() would write a NUL byte past the end of the buffer if there was an HTTP header with a LF rather than a CRLF.

RFC-2616 says:

    Many HTTP/1.1 header field values consist of words separated by LWS
    or special characters. These special characters MUST be in a quoted
    string to be used within a parameter value (as defined in section 3.6).

We don't support quoted sections or continuation lines, etc. If we see an LF that's not part of a CRLF, bail out.

Fixes: CVE-2022-28734

Signed-off-by: Daniel Axtens <dja@axtens.net>
Reviewed-by: Daniel Kiper <daniel.kiper@oracle.com>
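
A standalone sketch of the "bail out on a bare LF" rule (not GRUB's parse_line(); the buffer handling here is deliberately simplified):

    #include <stdio.h>
    #include <string.h>

    /* Scan a received header buffer and reject any line terminated by a
       bare LF that is not preceded by CR. */
    static int
    headers_have_bare_lf (const char *buf, size_t len)
    {
        for (size_t i = 0; i < len; i++)
            if (buf[i] == '\n' && (i == 0 || buf[i - 1] != '\r'))
                return 1;                 /* bail out: malformed header */
        return 0;
    }

    int
    main (void)
    {
        const char good[] = "Content-Length: 42\r\n\r\n";
        const char bad[]  = "Content-Length: 42\n\n";

        printf ("%d\n", headers_have_bare_lf (good, strlen (good)));  /* 0 */
        printf ("%d\n", headers_have_bare_lf (bad, strlen (bad)));    /* 1 */
        return 0;
    }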
-
Daniel Axtens authored
GRUB has special code for handling an HTTP header that is split across two packets. The code tracks the end of line by looking for a "\n" byte. The code for split headers has always advanced the pointer just past the end of the line, whereas the code that handles unsplit headers does not advance the pointer. This extra advance causes the length to be one greater, which breaks an assumption in parse_line(), leading to it writing a NUL byte one byte past the end of the buffer where we reconstruct the line from the two packets.

It's conceivable that an attacker-controlled set of packets could cause this to zero out the first byte of the "next" pointer of the grub_mm_region structure following the current_line buffer.

Do not advance the pointer in the split header case.

Fixes: CVE-2022-28734

Signed-off-by: Daniel Axtens <dja@axtens.net>
Reviewed-by: Daniel Kiper <daniel.kiper@oracle.com>
-
Daniel Axtens authored
It's possible for data->sock to get torn down in TCP error handling. If we unconditionally tear it down again, we will end up doing writes to an offset of the NULL pointer when we go to tear it down again.

Detect if it has been torn down and don't do it again.

Signed-off-by: Daniel Axtens <dja@axtens.net>
Reviewed-by: Daniel Kiper <daniel.kiper@oracle.com>
-
Daniel Axtens authored
On tftp errors, we print a tftp error message taken from the tftp header. However, the tftph pointer is a pointer inside nb, the netbuff. Previously, we were freeing the nb and then dereferencing it. Don't do that; use it first and free it later.

This isn't really _bad_ per se, especially as we're single-threaded, but it trips up fuzzers.

Signed-off-by: Daniel Axtens <dja@axtens.net>
Reviewed-by: Daniel Kiper <daniel.kiper@oracle.com>
-
Daniel Axtens authored
A malicious tftp server can cause UAFs and a double free.

An attempt to read from a network file is handled by grub_net_fs_read(). If the read is at an offset other than the current offset, grub_net_seek_real() is invoked.

In grub_net_seek_real(), if a backwards seek cannot be satisfied from the currently received packets, and the underlying transport does not provide a seek method, then grub_net_seek_real() will close and reopen the network protocol layer. For tftp, the ->close() call goes to tftp_close() and frees the tftp_data_t file->data. The file->data pointer is not nulled out after the free.

If the ->open() call fails, file->data will not be reallocated and will continue to point to a freed memory block. This could happen from a server refusing to send the requisite ack to the new tftp request, for example.

The seek and the read will then fail, but the grub_file continues to exist: the failed seek does not necessarily cause the entire file to be thrown away (e.g. where the file is checked to see if it is gzipped/lzio/xz/etc., a read failure is interpreted as a decompressor passing on the file, not as an invalidation of the entire grub_file_t structure). This means subsequent attempts to read or seek the file will use the old file->data after the free. Eventually, the file will be close()d again and file->data will be freed again.

Mark a net_fs file that doesn't reopen as broken. Do not permit read() or close() on a broken file (seek is not exposed directly to the file API - it is only called as part of read, so this blocks seeks as well).

As an additional defence, null out the ->data pointer if tftp_open() fails. That alone would have led to a simple null pointer dereference rather than a mess of UAFs.

This may affect other protocols; I haven't checked.

Signed-off-by: Daniel Axtens <dja@axtens.net>
Reviewed-by: Daniel Kiper <daniel.kiper@oracle.com>
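
A simplified standalone sketch of the "mark broken, never keep a freed pointer" idea (the struct and functions are hypothetical stand-ins for the net_fs/tftp state, not GRUB's actual types):

    #include <stdio.h>
    #include <stdlib.h>

    struct net_file {
        void *data;     /* protocol state, freed by close() */
        int   broken;   /* set when a reopen during seek fails */
    };

    static void
    proto_close (struct net_file *f)
    {
        free (f->data);
        f->data = NULL;
    }

    static int
    proto_open (struct net_file *f)
    {
        (void) f;
        return -1;      /* pretend: server refuses the new request */
    }

    /* Seek path: close and reopen the protocol layer. On failure, mark
       the file broken instead of leaving a dangling f->data behind. */
    static int
    seek_by_reopen (struct net_file *f)
    {
        proto_close (f);
        if (proto_open (f) != 0)
        {
            f->data = NULL;   /* defence: never keep a freed pointer */
            f->broken = 1;
            return -1;
        }
        return 0;
    }

    static int
    file_read (struct net_file *f)
    {
        if (f->broken || !f->data)
            return -1;        /* refuse to use a broken file */
        return 0;
    }

    int
    main (void)
    {
        struct net_file f = { malloc (16), 0 };

        seek_by_reopen (&f);
        printf ("read -> %d (broken=%d)\n", file_read (&f), f.broken);
        return 0;
    }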
-
Daniel Axtens authored
I don't really understand what's going on here but fuzzing found a bug where we read past the end of check_with. That's a C string, so use grub_strlen() to make sure we don't overread it.

Signed-off-by: Daniel Axtens <dja@axtens.net>
Reviewed-by: Daniel Kiper <daniel.kiper@oracle.com>
-
Daniel Axtens authored
grub_net_dns_lookup() takes as inputs a pointer to an array of addresses ("addresses") for the given name, and a pointer to a number of addresses ("naddresses"). grub_net_dns_lookup() is responsible for allocating "addresses", and the caller is responsible for freeing it if "naddresses" > 0.

The DNS recv_hook will sometimes set and free the addresses array, for example if the packet is too short:

    if (ptr + 10 >= nb->tail)
      {
        if (!*data->naddresses)
          grub_free (*data->addresses);
        grub_netbuff_free (nb);
        return GRUB_ERR_NONE;
      }

Later on, the nslookup command code unconditionally frees the "addresses" array. Normally this is fine: the array is either populated with valid data or is NULL. But in these sorts of error cases it is neither NULL nor valid, and we get a double free.

Only free "addresses" if "naddresses" > 0.

It looks like the other use of grub_net_dns_lookup() is not affected.

Signed-off-by: Daniel Axtens <dja@axtens.net>
Reviewed-by: Daniel Kiper <daniel.kiper@oracle.com>
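
A tiny standalone illustration of the caller-side rule (hypothetical dns_lookup stand-in, not GRUB's grub_net_dns_lookup(); the simulated error path deliberately mimics the "freed inside the recv hook" case):

    #include <stdio.h>
    #include <stdlib.h>

    /* The callee allocates *addresses; the caller may free it only when
       *naddresses > 0. */
    static int
    dns_lookup (const char *name, int **addresses, size_t *naddresses)
    {
        (void) name;
        *naddresses = 0;

        /* Simulate the error path: a too-short reply frees the array and
           leaves *addresses dangling while *naddresses stays 0. */
        *addresses = malloc (4 * sizeof (int));
        free (*addresses);            /* freed inside the "recv hook" */
        return 0;
    }

    int
    main (void)
    {
        int *addresses;
        size_t naddresses;

        dns_lookup ("example.invalid", &addresses, &naddresses);

        /* The fix: gate the caller's free on naddresses instead of
           freeing unconditionally and double-freeing on the error path. */
        if (naddresses > 0)
            free (addresses);

        printf ("naddresses=%zu\n", naddresses);
        return 0;
    }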
-
Daniel Axtens authored
A netbuff shouldn't be too huge. It's bounded by MTU and TCP segment reassembly. If we are asked to create one that is unreasonably big, refuse.

This is a hardening measure: if we hit this code, there's a bug somewhere else that we should catch and fix. This commit:

- stops the bug propagating any further.
- provides a spot to instrument in e.g. fuzzing to try to catch these bugs.

I have put instrumentation (e.g. __builtin_trap() to force a crash) here and have not been able to find any more crashes.

Signed-off-by: Daniel Axtens <dja@axtens.net>
Reviewed-by: Daniel Kiper <daniel.kiper@oracle.com>
-
Daniel Axtens authored
We can receive packets with invalid IP fragmentation information. This can lead to rsm->total_len underflowing and becoming very large.

Then, in grub_netbuff_alloc(), we add to this very large number, which can cause it to overflow and wrap back around to a small positive number. The allocation then succeeds, but the resulting buffer is too small and subsequent operations can write past the end of the buffer.

Catch the underflow here.

Fixes: CVE-2022-28733

Signed-off-by: Daniel Axtens <dja@axtens.net>
Reviewed-by: Daniel Kiper <daniel.kiper@oracle.com>
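
The shape of the check is roughly as below; this is a standalone sketch with invented names, not GRUB's reassembly code. Because the length is unsigned, the comparison must happen before the subtraction.

    #include <stdint.h>
    #include <stdio.h>

    /* total_len is reduced as each fragment header is consumed. If a
       crafted fragment would push it below zero, the unsigned value wraps
       to something huge and the later allocation is sized wrongly; catch
       it before subtracting. */
    static int
    consume_fragment (uint16_t *total_len, uint16_t hdr_len)
    {
        if (*total_len < hdr_len)
            return -1;                 /* would underflow: drop the packet */
        *total_len -= hdr_len;
        return 0;
    }

    int
    main (void)
    {
        uint16_t total_len = 20;

        printf ("%d (total_len=%u)\n",
                consume_fragment (&total_len, 20), (unsigned) total_len);
        printf ("%d (total_len=%u)\n",
                consume_fragment (&total_len, 20), (unsigned) total_len);
        return 0;
    }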
-
Daniel Axtens authored
In some cases attempting to display arbitrary binary strings leads to ASAN splats reading the widthspec array out of bounds.

Check the index. If it would be out of bounds, return a width of 1. I don't know if that's strictly correct, but we're not really expecting great display of arbitrary binary data, and it's certainly not worse than an OOB read.

Signed-off-by: Daniel Axtens <dja@axtens.net>
Reviewed-by: Daniel Kiper <daniel.kiper@oracle.com>
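
A standalone sketch of the bounded lookup (the widthspec table here is invented and the bit layout is only illustrative, not GRUB's actual unidata table):

    #include <stdio.h>

    /* Hypothetical widthspec-style table: one bit per code point telling
       us whether a glyph is single or double width. */
    static const unsigned char widthspec[4] = { 0x00, 0xFF, 0x00, 0xFF };

    static int
    char_width (unsigned long code)
    {
        unsigned long byte = code >> 3;

        if (byte >= sizeof (widthspec))
            return 1;   /* out of range (e.g. binary junk): assume width 1 */

        return (widthspec[byte] >> (code & 7)) & 1 ? 2 : 1;
    }

    int
    main (void)
    {
        printf ("%d\n", char_width (3));        /* in range: width 1 */
        printf ("%d\n", char_width (12));       /* in range: width 2 */
        printf ("%d\n", char_width (123456));   /* out of range: width 1 */
        return 0;
    }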
-
Daniel Axtens authored
Certain 1 px wide images caused a wild pointer write in grub_jpeg_ycrcb_to_rgb(). This was caused because in grub_jpeg_decode_data(), we have the following loop:

    for (; data->r1 < nr1 && (!data->dri || rst);
         data->r1++, data->bitmap_ptr += (vb * data->image_width - hb * nc1) * 3)

We did not check if vb * width >= hb * nc1.

On a 64-bit platform, if that turns out to be negative, it will underflow, be interpreted as unsigned 64-bit, then be added to the 64-bit pointer, so we see data->bitmap_ptr jump, e.g. from 0x6180_0000_0480 to 0x6181_0000_0498: a carry has occurred and this pointer is now far away from any object.

On a 32-bit platform, it will decrement the pointer, creating a pointer that won't crash but will overwrite random data.

Catch the underflow and error out.

Fixes: CVE-2021-3697

Signed-off-by: Daniel Axtens <dja@axtens.net>
Reviewed-by: Daniel Kiper <daniel.kiper@oracle.com>
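
The essence of the check is a comparison before any pointer arithmetic happens. A minimal standalone sketch (plain C, with the loop's operands passed in as parameters rather than taken from GRUB's decoder state):

    #include <stdio.h>

    /* The per-restart-interval pointer advance is
       (vb * image_width - hb * nc1) * 3 bytes. If the subtraction can go
       negative for a degenerate (e.g. 1 px wide) image, reject the file
       before the pointer arithmetic rather than after. */
    static int
    jpeg_stride_ok (unsigned vb, unsigned hb, unsigned image_width,
                    unsigned nc1)
    {
        /* All operands are unsigned, so compare before subtracting. */
        return (unsigned long) vb * image_width >= (unsigned long) hb * nc1;
    }

    int
    main (void)
    {
        printf ("%d\n", jpeg_stride_ok (8, 8, 640, 80));  /* normal: ok */
        printf ("%d\n", jpeg_stride_ok (8, 8, 1, 2));     /* 1 px wide: reject */
        return 0;
    }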
-
Daniel Axtens authored
An invalid file could contain multiple start-of-stream blocks, which would cause us to reallocate and leak our bitmap. Refuse to handle multiple start-of-stream blocks.

Additionally, fix the formatting of a grub_error() call.

Signed-off-by: Daniel Axtens <dja@axtens.net>
Reviewed-by: Daniel Kiper <daniel.kiper@oracle.com>
-
Daniel Axtens authored
Fix a memory leak where an invalid file could cause us to reallocate memory for a huffman table we had already allocated memory for.

Signed-off-by: Daniel Axtens <dja@axtens.net>
Reviewed-by: Daniel Kiper <daniel.kiper@oracle.com>
-