    sched/fair: Rewrite group_imb trigger · 6263322c
    Peter Zijlstra authored
    
    
    Change the group_imb detection from the old 'load-spike' detector to
    an actual imbalance detector. We set it from the lower domain balance
    pass when it fails to create a balance in the presence of task
    affinities.
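
    As a rough sketch of that mechanism (the struct and function names
    below are simplified stand-ins for illustration, not the actual
    scheduler code), the lower domain's balance pass records its failure
    in shared group state, which the parent domain's statistics pass then
    reads back as group_imb:

      #include <stdbool.h>
      #include <stdio.h>

      /* Shared per-group state, visible to the parent domain's balance pass. */
      struct group_state {
              bool imbalance;   /* set by the child domain, read as group_imb */
      };

      /*
       * Child-domain balance pass: if load should have moved but every
       * candidate task was pinned by affinity, flag it for the parent;
       * otherwise clear any stale flag left by a previous pass.
       */
      static void child_balance(struct group_state *parent, bool some_pinned,
                                long remaining_imbalance)
      {
              if (some_pinned && remaining_imbalance > 0)
                      parent->imbalance = true;   /* could not balance: tell parent */
              else if (parent->imbalance)
                      parent->imbalance = false;  /* balanced fine: clear the flag */
      }

      /* Parent-domain statistics pass: treat the flag as group_imb. */
      static bool group_imb(const struct group_state *gs)
      {
              return gs->imbalance;
      }

      int main(void)
      {
              struct group_state parent = { .imbalance = false };

              child_balance(&parent, true, 1);   /* pinned tasks blocked balancing */
              printf("group_imb = %d\n", group_imb(&parent));   /* prints 1 */

              child_balance(&parent, false, 0);  /* next pass balances fine */
              printf("group_imb = %d\n", group_imb(&parent));   /* prints 0 */
              return 0;
      }

    Clearing the flag when a later pass does balance is what keeps stale
    group_imb state from lingering; whether that is enough to avoid
    re-creating the imbalance is the concern raised further below.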
    
    The advantage is that this should no longer generate the false
    positive group_imb conditions caused by transient load spikes from
    the normal balancing/bulk-wakeup etc. behaviour.
    
    While I haven't actually observed those, they could happen.
    
    I'm not entirely happy with this patch; it somehow feels a little
    fragile.
    
    Nor does it solve the biggest issue I have with the group_imb code; it
    is still a fragile construct in that once we've 'fixed' the imbalance
    we'll not detect the group_imb again and could end up re-creating it.
    
    That said, this patch does seem to preserve behaviour for the
    described degenerate case. In particular on my 2*6*2 wsm-ep:
    
      taskset -c 3-11 bash -c 'for ((i=0;i<9;i++)) do while :; do :; done & done'
    
    ends up with 9 spinners, each on their own CPU; whereas if you disable
    the group_imb code that typically doesn't happen (you'll get one pair
    sharing a CPU most of the time).
    
    Signed-off-by: Peter Zijlstra <peterz@infradead.org>
    Link: http://lkml.kernel.org/n/tip-36fpbgl39dv4u51b6yz2ypz5@git.kernel.org
    
    
    Signed-off-by: Ingo Molnar <mingo@kernel.org>