    x86: Pack loops tightly as well · 52648e83
    Ingo Molnar authored
    Packing loops tightly (-falign-loops=1) is beneficial to code size:
    
         text        data    bss     dec              filename
     12566391        1617840 1089536 15273767         vmlinux.align.16-byte
     12224951        1617840 1089536 14932327         vmlinux.align.1-byte
     11976567        1617840 1089536 14683943         vmlinux.align.1-byte.funcs-1-byte
     11903735        1617840 1089536 14611111         vmlinux.align.1-byte.funcs-1-byte.loops-1-byte
    
     Which reduces the size of the kernel by another 0.6%, so the
     total combined size reduction of the alignment-packing
     patches is ~5.5%.
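
     As a quick cross-check (assuming the 0.6% figure refers to the
     text column of the table above):

       (11976567 - 11903735) / 11976567 ~= 0.61%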
    
    The x86 decoder bandwidth and caching arguments laid out in:
    
       be6cb027 ("x86: Align jump targets to 1-byte boundaries")
    
    apply to loop alignment as well.
    
     Furthermore, modern CPU uarchs have a loop cache/buffer that
     is an L0 cache in front of even the uop cache, covering the
     few dozen most recently executed instructions.
    
    This loop cache generally does not have the 16-byte alignment
    restrictions of the uop cache.
    
    Now loop alignment can still be beneficial if:
    
      - a loop is cache-hot and its surroundings are not

      - the loop is so cache-hot that the instruction
        flow becomes x86 decoder bandwidth limited
    
    But loop alignment is harmful if:
    
     - a loop is cache-cold
    
     - a loop's surroundings are cache-hot as well
    
     - two cache-hot loops are close to each other
    
      - the loop fits into the loop cache

      - the code flow is not decoder bandwidth limited
    
    and I'd argue that the latter five scenarios are much
    more common in the kernel, as our hottest loops are
    typically:
    
      - pointer chasing: this should fit into the loop cache
        in most cases and is typically data cache and address
        generation limited (see the sketch after this list)
    
     - generic memory ops (memset, memcpy, etc.): these generally
       fit into the loop cache as well, and are likewise data
       cache limited.
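
     To make the "fits into the loop cache" point concrete, below is a
     minimal, illustrative C sketch (not actual kernel code) of such a
     pointer-chasing loop: the loop body is only a handful of
     instructions, so its starting alignment barely matters, and its
     speed is bounded by data cache misses and address generation
     rather than by decoder bandwidth:

        struct node {
                struct node *next;
                unsigned long val;
        };

        static unsigned long sum_list(const struct node *n)
        {
                unsigned long sum = 0;

                /*
                 * ~3-4 instructions per iteration; each n->next load
                 * feeds the next iteration, so the data cache, not
                 * the decoder, is the bottleneck.
                 */
                while (n) {
                        sum += n->val;
                        n = n->next;
                }
                return sum;
        }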
    
    So this patch packs loop addresses tightly as well.
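
     In practice this presumably boils down to adding the GCC flag to
     the x86 build flags in arch/x86/Makefile, roughly along these
     lines (a sketch; the exact placement and any compiler-capability
     guarding in the Makefile may differ):

        # Pack loops tightly as well:
        KBUILD_CFLAGS += -falign-loops=1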
    
     Acked-by: Denys Vlasenko <dvlasenk@redhat.com>
    Cc: Andy Lutomirski <luto@amacapital.net>
    Cc: Aswin Chandramouleeswaran <aswin@hp.com>
    Cc: Borislav Petkov <bp@alien8.de>
    Cc: Brian Gerst <brgerst@gmail.com>
    Cc: Davidlohr Bueso <dave@stgolabs.net>
    Cc: H. Peter Anvin <hpa@zytor.com>
    Cc: Jason Low <jason.low2@hp.com>
    Cc: Linus Torvalds <torvalds@linux-foundation.org>
    Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
    Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
    Cc: Peter Zijlstra <peterz@infradead.org>
    Cc: Thomas Gleixner <tglx@linutronix.de>
    Cc: Tim Chen <tim.c.chen@linux.intel.com>
    Link: http://lkml.kernel.org/r/20150410123017.GB19918@gmail.com
    
    
     Signed-off-by: Ingo Molnar <mingo@kernel.org>