1. 28 Feb, 2017 2 commits
  2. 08 Oct, 2016 2 commits
    • nmi_backtrace: do a local dump_stack() instead of a self-NMI · 67766489
      Chris Metcalf authored
      Currently on arm there is code that checks whether it should call
      dump_stack() explicitly, to avoid trying to raise an NMI when the
      current context is not preemptible by the backtrace IPI.  Similarly, the
      forthcoming arch/tile support uses an IPI mechanism that does not
      support generating an NMI to self.
      
      Accordingly, move the code that guards this case into the generic
      mechanism, and invoke it unconditionally whenever we want a backtrace of
      the current cpu.  It seems plausible that in all cases, dump_stack()
      will generate better information than generating a stack from the NMI
      handler.  The register state will be missing, but that state is likely
      not particularly helpful in any case.
      
      Or, if we think it is helpful, we should be capturing and emitting the
      current register state in all cases when regs == NULL is passed to
      nmi_cpu_backtrace().
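
      A minimal sketch of the guard being moved into the generic code,
      with illustrative names (not the verbatim patch): if the current cpu
      is in the request mask, dump its stack locally and only raise the
      NMI/IPI for the remaining cpus.

          static void trigger_backtrace_sketch(cpumask_t *mask,
                                               void (*raise)(cpumask_t *mask))
          {
                  if (cpumask_test_cpu(smp_processor_id(), mask)) {
                          /* plausibly better output than a self-NMI */
                          dump_stack();
                          cpumask_clear_cpu(smp_processor_id(), mask);
                  }
                  if (!cpumask_empty(mask))
                          raise(mask);    /* NMI/IPI for remote cpus only */
          }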
      
      Link: http://lkml.kernel.org/r/1472487169-14923-3-git-send-email-cmetcalf@mellanox.com
      
      Signed-off-by: Chris Metcalf <cmetcalf@mellanox.com>
      Tested-by: Daniel Thompson <daniel.thompson@linaro.org> [arm]
      Reviewed-by: Petr Mladek <pmladek@suse.com>
      Acked-by: Aaron Tomlin <atomlin@redhat.com>
      Cc: "Rafael J. Wysocki" <rjw@rjwysocki.net>
      Cc: Russell King <linux@arm.linux.org.uk>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • nmi_backtrace: add more trigger_*_cpu_backtrace() methods · 9a01c3ed
      Chris Metcalf authored
      Patch series "improvements to the nmi_backtrace code" v9.
      
      This patch series modifies the trigger_xxx_backtrace() NMI-based remote
      backtracing code to make it more flexible, and makes a few small
      improvements along the way.
      
      The motivation comes from the task isolation code, where there are
      scenarios where we want to be able to diagnose a case where some cpu is
      about to interrupt a task-isolated cpu.  It can be helpful to see both
      where the interrupting cpu is, and also an approximation of where the
      cpu that is being interrupted is.  The nmi_backtrace framework allows us
      to discover the stack of the interrupted cpu.
      
      I've tested that the change works as desired on tile, and build-tested
      x86, arm, mips, and sparc64.  For x86 I confirmed that the generic
      cpuidle stuff as well as the architecture-specific routines are in the
      new cpuidle section.  For arm, mips, and sparc I just build-tested it
      and made sure the generic cpuidle routines were in the new cpuidle
      section, but I didn't attempt to figure out which the platform-specific
      idle routines might be.  That might be more usefully done by someone
      with platform experience in follow-up patches.
      
      This patch (of 4):
      
      Currently you can only request a backtrace of either all cpus, or all
      cpus but yourself.  It can also be helpful to request a remote backtrace
      of a single cpu, and since we want that, the logical extension is to
      support a cpumask as the underlying primitive.
      
      This change modifies the existing lib/nmi_backtrace.c code to take a
      cpumask as its basic primitive, and modifies the linux/nmi.h code to use
      the new "cpumask" method instead.
      
      The existing clients of nmi_backtrace (arm and x86) are converted to
      using the new cpumask approach in this change.
      
      The other users of the backtracing API (sparc64 and mips) are converted
      to use the cpumask approach rather than the all/allbutself approach.
      The mips code ignored the "include_self" boolean but with this change it
      will now also dump a local backtrace if requested.
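
      A hedged sketch of the resulting API shape (wrapper names and
      signatures are approximations, not the exact header):

          void arch_trigger_cpumask_backtrace(const cpumask_t *mask,
                                              bool exclude_self);

          static inline void trigger_all_cpu_backtrace(void)
          {
                  arch_trigger_cpumask_backtrace(cpu_online_mask, false);
          }

          static inline void trigger_single_cpu_backtrace(int cpu)
          {
                  arch_trigger_cpumask_backtrace(cpumask_of(cpu), false);
          }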
      
      Link: http://lkml.kernel.org/r/1472487169-14923-2-git-send-email-cmetcalf@mellanox.com
      
      Signed-off-by: Chris Metcalf <cmetcalf@mellanox.com>
      Tested-by: Daniel Thompson <daniel.thompson@linaro.org> [arm]
      Reviewed-by: Aaron Tomlin <atomlin@redhat.com>
      Reviewed-by: Petr Mladek <pmladek@suse.com>
      Cc: "Rafael J. Wysocki" <rjw@rjwysocki.net>
      Cc: Russell King <linux@arm.linux.org.uk>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: David Miller <davem@davemloft.net>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  3. 12 Aug, 2016 1 commit
  4. 14 Jun, 2016 1 commit
    • arm: Use _rcuidle for smp_cross_call() tracepoints · 7c64cc05
      Paul E. McKenney authored
      Further testing with false negatives suppressed by commit 293e2421
      ("rcu: Remove superfluous versions of rcu_read_lock_sched_held()")
      identified another unprotected use of RCU from the idle loop.  Because RCU
      actively ignores idle-loop code (for energy-efficiency reasons, among
      other things), using RCU from the idle loop can result in too-short
      grace periods, in turn resulting in arbitrary misbehavior.
      
      The resulting lockdep-RCU splat is as follows:
      
      ------------------------------------------------------------------------
      
      ===============================
      [ INFO: suspicious RCU usage. ]
      4.6.0-rc5-next-20160426+ #1112 Not tainted
      -------------------------------
      include/trace/events/ipi.h:35 suspicious rcu_dereference_check() usage!
      
      other info that might help us debug this:
      
      RCU used illegally from idle CPU!
      rcu_scheduler_active = 1, debug_locks = 0
      RCU used illegally from extended quiescent state!
      no locks held by swapper/0/0.
      
      stack backtrace:
      CPU: 0 PID: 0 Comm: swapper/0 Not tainted 4.6.0-rc5-next-20160426+ #1112
      Hardware name: Generic OMAP4 (Flattened Device Tree)
      [<c0110308>] (unwind_backtrace) from [<c010c3a8>] (show_stack+0x10/0x14)
      [<c010c3a8>] (show_stack) from [<c047fec8>] (dump_stack+0xb0/0xe4)
      [<c047fec8>] (dump_stack) from [<c010dcfc>] (smp_cross_call+0xbc/0x188)
      [<c010dcfc>] (smp_cross_call) from [<c01c9e28>] (generic_exec_single+0x9c/0x15c)
      [<c01c9e28>] (generic_exec_single) from [<c01ca0a0>] (smp_call_function_single_async+0x38/0x9c)
      [<c01ca0a0>] (smp_call_function_single_async) from [<c0603728>] (cpuidle_coupled_poke_others+0x8c/0xa8)
      [<c0603728>] (cpuidle_coupled_poke_others) from [<c0603c10>] (cpuidle_enter_state_coupled+0x26c/0x390)
      [<c0603c10>] (cpuidle_enter_state_coupled) from [<c0183c74>] (cpu_startup_entry+0x198/0x3a0)
      [<c0183c74>] (cpu_startup_entry) from [<c0b00c0c>] (start_kernel+0x354/0x3c8)
      [<c0b00c0c>] (start_kernel) from [<8000807c>] (0x8000807c)
      
      ------------------------------------------------------------------------
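
      A hedged sketch of the kind of one-line change this implies in
      arch/arm/kernel/smp.c (illustrative): the _rcuidle tracepoint
      variant tells RCU to watch the cpu even though it is nominally idle.

          static void smp_cross_call(const struct cpumask *target,
                                     unsigned int ipinr)
          {
                  /* was: trace_ipi_raise(target, ipi_types[ipinr]); */
                  trace_ipi_raise_rcuidle(target, ipi_types[ipinr]);
                  __smp_cross_call(target, ipinr);
          }
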
      Reported-by: Tony Lindgren <tony@atomide.com>
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Tested-by: Tony Lindgren <tony@atomide.com>
      Tested-by: Guenter Roeck <linux@roeck-us.net>
      Cc: Russell King <linux@arm.linux.org.uk>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: <linux-omap@vger.kernel.org>
      Cc: <linux-arm-kernel@lists.infradead.org>
  5. 21 May, 2016 1 commit
    • printk/nmi: generic solution for safe printk in NMI · 42a0bb3f
      Petr Mladek authored
      printk() takes some locks and therefore cannot be used safely in NMI
      context.
      
      The chance of a deadlock is real especially when printing stacks from
      all CPUs.  This particular problem has been addressed on x86 by the
      commit a9edc880 ("x86/nmi: Perform a safe NMI stack trace on all
      CPUs").
      
      The patchset brings two big advantages.  First, it makes the NMI
      backtraces safe on all architectures for free.  Second, it makes all
      NMI messages almost safe on all architectures (the temporary buffer
      is limited, so we should still keep the number of messages in NMI
      context to a minimum).
      
      Note that there already are several messages printed in NMI context:
      WARN_ON(in_nmi()), BUG_ON(in_nmi()), anything being printed out from MCE
      handlers.  These are not easy to avoid.
      
      This patch reuses most of the code and makes it generic.  It is useful
      for all messages and architectures that support NMI.
      
      An alternative printk_func is set when entering NMI context and
      restored when leaving it.  It queues IRQ work to copy the messages
      into the main ring buffer in a safe context.
      
      __printk_nmi_flush() copies all available messages and resets the
      buffer.  A simple cmpxchg operation is then enough to stay
      synchronized with writers.  A spinlock is also used to synchronize
      with other flushers.

      We no longer use seq_buf because it depends on an external lock.  It
      would be hard to make all its supported operations safe for lockless
      use, and it would be confusing and error prone to make only some of
      them safe.
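
      A hedged sketch of the NMI-side write path (field and helper names
      are approximations of the real printk/nmi.c, not the verbatim code):

          struct nmi_seq_buf {
                  atomic_t        len;    /* strictly increasing in NMI */
                  struct irq_work work;   /* flushes to the main buffer */
                  unsigned char   buffer[PAGE_SIZE - sizeof(atomic_t) -
                                         sizeof(struct irq_work)];
          };
          static DEFINE_PER_CPU(struct nmi_seq_buf, nmi_print_seq);

          static int vprintk_nmi(const char *fmt, va_list args)
          {
                  struct nmi_seq_buf *s = this_cpu_ptr(&nmi_print_seq);
                  int add, len;
          again:
                  len = atomic_read(&s->len);
                  if (len >= sizeof(s->buffer))
                          return 0;       /* buffer full: message lost */
                  add = vsnprintf(s->buffer + len,
                                  sizeof(s->buffer) - len, fmt, args);
                  /* a nested NMI may race with us; cmpxchg detects it */
                  if (atomic_cmpxchg(&s->len, len, len + add) != len)
                          goto again;
                  irq_work_queue(&s->work);
                  return add;
          }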
      
      The code is put into separate printk/nmi.c as suggested by Steven
      Rostedt.  It needs a per-CPU buffer and is compiled only on
      architectures that call nmi_enter().  This is achieved by the new
      HAVE_NMI Kconfig flag.
      
      The exceptions are the MN10300 and Xtensa architectures.  We need to
      clean up NMI handling there first.  Let's do it separately.
      
      The patch is heavily based on the draft from Peter Zijlstra, see
      
        https://lkml.org/lkml/2015/6/10/327

      [arnd@arndb.de: printk-nmi: use %zu format string for size_t]
      [akpm@linux-foundation.org: min_t->min - all types are size_t here]
      Signed-off-by: Petr Mladek <pmladek@suse.com>
      Suggested-by: Peter Zijlstra <peterz@infradead.org>
      Suggested-by: Steven Rostedt <rostedt@goodmis.org>
      Cc: Jan Kara <jack@suse.cz>
      Acked-by: Russell King <rmk+kernel@arm.linux.org.uk>	[arm part]
      Cc: Daniel Thompson <daniel.thompson@linaro.org>
      Cc: Jiri Kosina <jkosina@suse.com>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
      Cc: David Miller <davem@davemloft.net>
      Cc: Daniel Thompson <daniel.thompson@linaro.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  6. 01 Mar, 2016 1 commit
    • arch/hotplug: Call into idle with a proper state · fc6d73d6
      Thomas Gleixner authored
      
      
      Let the non-boot cpus call into idle with the corresponding hotplug state, so
      the hotplug core can handle the further bringup. That's a first step to
      convert the boot side of the hotplugged cpus to do all the synchronization
      with the other side through the state machine. For now it'll only start the
      hotplug thread and kick the full bringup of the cpu.
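
      A hedged sketch of what this looks like in an arch's secondary
      startup path (illustrative):

          asmlinkage void secondary_start_kernel(void)
          {
                  /* ... per-arch bringup ... */
                  /* was: cpu_startup_entry(CPUHP_ONLINE); */
                  cpu_startup_entry(CPUHP_AP_ONLINE_IDLE);
          }
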
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: linux-arch@vger.kernel.org
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Rafael Wysocki <rafael.j.wysocki@intel.com>
      Cc: "Srivatsa S. Bhat" <srivatsa@mit.edu>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Arjan van de Ven <arjan@linux.intel.com>
      Cc: Sebastian Siewior <bigeasy@linutronix.de>
      Cc: Rusty Russell <rusty@rustcorp.com.au>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: Tejun Heo <tj@kernel.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Paul McKenney <paulmck@linux.vnet.ibm.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Paul Turner <pjt@google.com>
      Link: http://lkml.kernel.org/r/20160226182341.614102639@linutronix.de
      
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
  7. 22 Dec, 2015 2 commits
  8. 03 Oct, 2015 1 commit
    • ARM: 8439/1: Fix backtrace generation when IPI is masked · 0768330d
      Daniel Thompson authored
      Currently on ARM when <SysRq-L> is triggered from an interrupt handler
      (e.g. a SysRq issued using UART or kbd) the main CPU will wedge for ten
      seconds with interrupts masked before issuing a backtrace for every CPU
      except itself.
      
      The new backtrace code introduced by commit 96f0e003 ("ARM: add
      basic support for on-demand backtrace of other CPUs") does not work
      correctly when run from an interrupt handler because IPI_CPU_BACKTRACE
      is used to generate the backtrace on all CPUs but cannot preempt the
      current calling context.
      
      This can be fixed by detecting that the calling context cannot be
      preempted and issuing the backtrace directly in this case. Issuing
      directly leaves us without any pt_regs to pass to nmi_cpu_backtrace()
      so we also modify the generic code to call dump_stack() when its
      argument is NULL.
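
      A hedged sketch of the shape of the fix (illustrative, not the
      verbatim patch):

          static void raise_nmi(cpumask_t *mask)
          {
                  /*
                   * The IPI cannot preempt this context, so generate the
                   * local backtrace directly; NULL regs means
                   * nmi_cpu_backtrace() falls back to dump_stack().
                   */
                  if (cpumask_test_cpu(smp_processor_id(), mask) &&
                      irqs_disabled())
                          nmi_cpu_backtrace(NULL);
                  smp_cross_call(mask, IPI_CPU_BACKTRACE);
          }
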
      Acked-by: Hillf Danton <hillf.zj@alibaba-inc.com>
      Acked-by: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: Daniel Thompson <daniel.thompson@linaro.org>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
  9. 22 Sep, 2015 2 commits
  10. 25 Aug, 2015 1 commit
  11. 31 Jul, 2015 1 commit
    • ARM: 8392/3: smp: Only expose /sys/.../cpuX/online if hotpluggable · 787047ee
      Stephen Boyd authored
      
      
      Writes to /sys/.../cpuX/online fail if we determine the platform
      doesn't support hotplug for that CPU. Furthermore, if the cpu_die
      op isn't specified the system hangs when we try to offline a CPU
      and it comes right back online unexpectedly. Let's figure this
      stuff out before we make the sysfs nodes so that the online file
      doesn't even exist if it isn't (at least sometimes) possible to
      hotplug the CPU.
      
      Add a new 'cpu_can_disable' op and repoint all 'cpu_disable'
      implementations at it because all implementers use the op to
      indicate if a CPU can be hotplugged or not in a static fashion.
      With PSCI we may need to add a 'cpu_disable' op so that the
      secure OS can be migrated off the CPU we're trying to hotplug.
      In this case, the 'cpu_can_disable' op will indicate that all
      CPUs are hotpluggable by returning true, but the 'cpu_disable' op
      will make a PSCI migration call and occasionally fail, denying
      the hotplug of a CPU. This shouldn't be any worse than x86 where
      we may indicate that all CPUs are hotpluggable but occasionally
      we can't offline a CPU due to check_irq_vectors_for_cpu_disable()
      failing to find a CPU to move vectors to.
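
      A hedged sketch of the resulting split (names follow the commit
      text; details are illustrative):

          struct smp_operations {
                  /* ... */
                  /* static capability: may this cpu ever be unplugged? */
                  bool (*cpu_can_disable)(unsigned int cpu);
                  /* dynamic: called at offline time, may still fail */
                  int  (*cpu_disable)(unsigned int cpu);
          };

          bool platform_can_hotplug_cpu(unsigned int cpu)
          {
                  if (smp_ops.cpu_can_disable)
                          return smp_ops.cpu_can_disable(cpu);
                  /* assumption: without the op, only non-boot cpus */
                  return cpu != 0;
          }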
      
      Cc: Mark Rutland <mark.rutland@arm.com>
      Cc: Nicolas Pitre <nico@linaro.org>
      Cc: Dave Martin <Dave.Martin@arm.com>
      Acked-by: Simon Horman <horms@verge.net.au> [shmobile portion]
      Tested-by: Simon Horman <horms@verge.net.au>
      Cc: Magnus Damm <magnus.damm@gmail.com>
      Cc: <linux-sh@vger.kernel.org>
      Tested-by: Tyler Baker <tyler.baker@linaro.org>
      Cc: Geert Uytterhoeven <geert@linux-m68k.org>
      Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
  12. 17 Jul, 2015 1 commit
    • ARM: add basic support for on-demand backtrace of other CPUs · 96f0e003
      Russell King authored
      
      
      As we now have generic infrastructure to support backtracing of other
      CPUs in the system on lockups, we can start to implement this for ARM.
      Initially, we add an IPI based implementation, as the GIC code needs
      modification to support the generation of FIQ IPIs, and not all ARM
      platforms have the ability to raise a FIQ in the non-secure world.
      
      This provides us with a "best efforts" implementation in the absence
      of FIQs.
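
      A hedged sketch of the IPI-based wiring (illustrative):

          static void raise_nmi(cpumask_t *mask)
          {
                  smp_cross_call(mask, IPI_CPU_BACKTRACE);
          }

          void arch_trigger_all_cpu_backtrace(bool include_self)
          {
                  nmi_trigger_all_cpu_backtrace(include_self, raise_nmi);
          }
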
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
  13. 29 Jun, 2015 1 commit
    • ARM: 8393/1: smp: Fix suspicious RCU usage with ipi tracepoints · 398f7456
      Stephen Boyd authored
      John Stultz reports an RCU splat on boot with ARM ipi trace
      events enabled.
      
      ===============================
      [ INFO: suspicious RCU usage. ]
      4.1.0-rc7-00033-gb5bed2f #153 Not tainted
      -------------------------------
      include/trace/events/ipi.h:68 suspicious rcu_dereference_check() usage!
      
      other info that might help us debug this:
      
      RCU used illegally from idle CPU!
      rcu_scheduler_active = 1, debug_locks = 0
      RCU used illegally from extended quiescent state!
      no locks held by swapper/0/0.
      
      stack backtrace:
      CPU: 0 PID: 0 Comm: swapper/0 Not tainted 4.1.0-rc7-00033-gb5bed2f #153
      Hardware name: Qualcomm (Flattened Device Tree)
      [<c0216b08>] (unwind_backtrace) from [<c02136e8>] (show_stack+0x10/0x14)
      [<c02136e8>] (show_stack) from [<c075e678>] (dump_stack+0x70/0xbc)
      [<c075e678>] (dump_stack) from [<c0215a80>] (handle_IPI+0x428/0x604)
      [<c0215a80>] (handle_IPI) from [<c020942c>] (gic_handle_irq+0x54/0x5c)
      [<c020942c>] (gic_handle_irq) from [<c0766604>] (__irq_svc+0x44/0x7c)
      Exception stack(0xc09f3f48 to 0xc09f3f90)
      3f40:                   00000001 00000001 00000000 c09f73b8 c09f4528 c0a5de9c
      3f60: c076b4f0 00000000 00000000 c09ef108 c0a5cec1 00000001 00000000 c09f3f90
      3f80: c026bf60 c0210ab8 20000113 ffffffff
      [<c0766604>] (__irq_svc) from [<c0210ab8>] (arch_cpu_idle+0x20/0x3c)
      [<c0210ab8>] (arch_cpu_idle) from [<c02647f0>] (cpu_startup_entry+0x2c0/0x5dc)
      [<c02647f0>] (cpu_startup_entry) from [<c099bc1c>] (start_kernel+0x358/0x3c4)
      [<c099bc1c>] (start_kernel) from [<8020807c>] (0x8020807c)
      
      At this point in the IPI handling path we haven't called
      irq_enter() yet, so RCU doesn't know that we're about to exit
      idle and properly warns that we're using RCU from an idle CPU.
      Use trace_ipi_entry_rcuidle() instead of trace_ipi_entry() so
      that RCU is informed about our exit from idle.
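
      A hedged sketch of the change in handle_IPI() (illustrative):

          /* was: trace_ipi_entry(ipi_types[ipinr]); */
          trace_ipi_entry_rcuidle(ipi_types[ipinr]);
          /* ... handle the IPI ... */
          /* was: trace_ipi_exit(ipi_types[ipinr]); */
          trace_ipi_exit_rcuidle(ipi_types[ipinr]);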
      
      Fixes: 365ec7b1 ("ARM: add IPI tracepoints")
      Reported-by: John Stultz <john.stultz@linaro.org>
      Tested-by: John Stultz <john.stultz@linaro.org>
      Acked-by: Steven Rostedt <rostedt@goodmis.org>
      Reviewed-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
  14. 01 Jun, 2015 1 commit
    • ARM: redo TTBR setup code for LPAE · b2c3e38a
      Russell King authored
      
      
      Re-engineer the LPAE TTBR setup code.  Rather than passing some shifted
      address in order to fit in a CPU register, pass either a full physical
      address (in the case of r4, r5 for TTBR0) or a PFN (for TTBR1).
      
      This removes the ARCH_PGD_SHIFT hack, and the last dangerous user of
      cpu_set_ttbr() in the secondary CPU startup code path (which was there
      to re-set TTBR1 to the appropriate high physical address space on
      Keystone2.)
      Tested-by: Murali Karicheri <m-karicheri2@ti.com>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
  15. 02 Apr, 2015 1 commit
    • ARM: 8338/1: kexec: Relax SMP validation to improve DT compatibility · fee3fd4f
      Geert Uytterhoeven authored
      When trying to kexec into a new kernel on a platform where multiple CPU
      cores are present, but no SMP bringup code is available yet, the
      kexec_load system call fails with:
      
          kexec_load failed: Invalid argument
      
      The SMP test added to machine_kexec_prepare() in commit 2103f6cb
      ("ARM: 7807/1: kexec: validate CPU hotplug support") wants to prohibit
      kexec on SMP platforms where it cannot disable secondary CPUs.
      However, this test is too strict: if the secondary CPUs couldn't be
      enabled in the first place, there's no need to disable them later at
      kexec time.  Hence skip the test in the absence of SMP bringup code.
      
      This allows adding all CPU cores to the DTS from the beginning,
      without having to implement SMP bringup first, improving DT
      compatibility.
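
      A hedged sketch of the relaxed check (helper names are
      approximations of the arm implementation):

          int machine_kexec_prepare(struct kimage *image)
          {
                  /* ... image checks ... */
                  /*
                   * Only refuse kexec when secondaries could actually
                   * have been brought up but cannot be taken down again.
                   */
                  if (num_possible_cpus() > 1 &&
                      platform_can_secondary_boot() &&
                      !platform_can_cpu_hotplug())
                          return -EINVAL;
                  return 0;
          }
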
      Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be>
      Acked-by: Stephen Warren <swarren@nvidia.com>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
  16. 04 Jan, 2015 1 commit
  17. 21 Nov, 2014 3 commits
  18. 26 Sep, 2014 1 commit
  19. 13 Sep, 2014 1 commit
  20. 27 Aug, 2014 1 commit
  21. 08 Aug, 2014 1 commit
  22. 19 Mar, 2014 1 commit
  23. 21 Jan, 2014 1 commit
  24. 29 Dec, 2013 1 commit
  25. 09 Nov, 2013 1 commit
    • ARM: 7887/1: Don't smp_cross_call() on UP devices in arch_irq_work_raise() · c682e51d
      Stephen Boyd authored
      If we're running a kernel compiled with SMP_ON_UP=y and the
      hardware only supports UP operation, there isn't any
      smp_cross_call function assigned.  Unfortunately, we call
      smp_cross_call() unconditionally in arch_irq_work_raise() and
      crash the kernel on UP devices.  Check to make sure we're running
      on an SMP device before calling smp_cross_call() here.
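
      A hedged sketch of the guard (illustrative):

          void arch_irq_work_raise(void)
          {
                  /* SMP_ON_UP kernels may be running on UP hardware */
                  if (is_smp())
                          smp_cross_call(cpumask_of(smp_processor_id()),
                                         IPI_IRQ_WORK);
          }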
      
      Unable to handle kernel NULL pointer dereference at virtual address 00000000
      pgd = c0004000
      [00000000] *pgd=00000000
      Internal error: Oops: 80000005 [#1] SMP ARM
      Modules linked in:
      CPU: 0 PID: 1 Comm: swapper/0 Not tainted 3.12.0-rc6-00018-g8d451442-dirty #16
      task: de05b440 ti: de05c000 task.ti: de05c000
      PC is at 0x0
      LR is at arch_irq_work_raise+0x3c/0x48
      pc : [<00000000>]    lr : [<c0019590>]    psr: 60000193
      sp : de05dd60  ip : 00000001  fp : 00000000
      r10: c085e2f0  r9 : de05c000  r8 : c07be0a4
      r7 : de05c000  r6 : de05c000  r5 : c07c5778  r4 : c0824554
      r3 : 00000000  r2 : 00000000  r1 : 00000006  r0 : c0529a58
      Flags: nZCv  IRQs off  FIQs on  Mode SVC_32  ISA ARM Segment kernel
      Control: 10c5387d  Table: 80004019  DAC: 00000017
      Process swapper/0 (pid: 1, stack limit = 0xde05c248)
      Stack: (0xde05dd60 to 0xde05e000)
      dd60: c07b9dbc c00cb2dc 00000001 c08242c0 c08242c0 60000113 c07be0a8 c00b0590
      dd80: de05c000 c085e2f0 c08242c0 c08242c0 c1414c28 c00b07cc de05b440 c1414c28
      dda0: c08242c0 c00b0af8 c0862bb0 c0862db0 c1414cd8 de05c028 c0824840 de05ddb8
      ddc0: 00000000 00000009 00000001 00000024 c07be0a8 c07be0a4 de05c000 c085e2f0
      dde0: 00000000 c004a4b0 00000010 de00d2dc 00000054 00000100 00000024 00000000
      de00: de05c028 0000000a ffff8ae7 00200040 00000016 de05c000 60000193 de05c000
      de20: 00000054 00000000 00000000 00000000 00000000 c004a704 00000000 de05c008
      de40: c07ba254 c004aa1c c07c5778 c0014b70 fa200000 00000054 de05de80 c0861244
      de60: 00000000 c0008634 de05b440 c051c778 20000113 ffffffff de05deb4 c051d0a4
      de80: 00000001 00000001 00000000 de05b440 c082afac de057ac0 de057ac0 de0443c0
      dea0: 00000000 00000000 00000000 00000000 c082afbc de05dec8 c009f2a0 c051c778
      dec0: 20000113 ffffffff 00000000 c016edb0 00000000 000002b0 de057ac0 de057ac0
      dee0: 00000000 c016ee40 c0875e50 de05df2e de057ac0 00000000 00000013 00000000
      df00: 00000000 c016f054 de043600 de0443c0 c008eb38 de004ec0 c0875e50 c008eb44
      df20: 00000012 00000000 00000000 3931f0f8 00000000 00000000 00000014 c0822e84
      df40: 00000000 c008ed2c 00000000 00000000 00000000 c07b7490 c07b7490 c075ab3c
      df60: 00000000 c00701ac 00000002 00000000 c0070160 dffadb73 7bf8edb4 00000000
      df80: c051092c 00000000 00000000 00000000 00000000 00000000 00000000 c0510934
      dfa0: de05aa40 00000000 c051092c c0013ce8 00000000 00000000 00000000 00000000
      dfc0: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
      dfe0: 00000000 00000000 00000000 00000000 00000013 00000000 07efffe5 4dfac6f5
      [<c0019590>] (arch_irq_work_raise+0x3c/0x48) from [<c00cb2dc>] (irq_work_queue+0xe4/0xf8)
      [<c00cb2dc>] (irq_work_queue+0xe4/0xf8) from [<c00b0590>] (rcu_accelerate_cbs+0x1d4/0x1d8)
      [<c00b0590>] (rcu_accelerate_cbs+0x1d4/0x1d8) from [<c00b07cc>] (rcu_start_gp+0x34/0x48)
      [<c00b07cc>] (rcu_start_gp+0x34/0x48) from [<c00b0af8>] (rcu_process_callbacks+0x318/0x608)
      [<c00b0af8>] (rcu_process_callbacks+0x318/0x608) from [<c004a4b0>] (__do_softirq+0x114/0x2a0)
      [<c004a4b0>] (__do_softirq+0x114/0x2a0) from [<c004a704>] (do_softirq+0x6c/0x74)
      [<c004a704>] (do_softirq+0x6c/0x74) from [<c004aa1c>] (irq_exit+0xac/0x100)
      [<c004aa1c>] (irq_exit+0xac/0x100) from [<c0014b70>] (handle_IRQ+0x54/0xb4)
      [<c0014b70>] (handle_IRQ+0x54/0xb4) from [<c0008634>] (omap3_intc_handle_irq+0x60/0x74)
      [<c0008634>] (omap3_intc_handle_irq+0x60/0x74) from [<c051d0a4>] (__irq_svc+0x44/0x5c)
      Exception stack(0xde05de80 to 0xde05dec8)
      de80: 00000001 00000001 00000000 de05b440 c082afac de057ac0 de057ac0 de0443c0
      dea0: 00000000 00000000 00000000 00000000 c082afbc de05dec8 c009f2a0 c051c778
      dec0: 20000113 ffffffff
      [<c051d0a4>] (__irq_svc+0x44/0x5c) from [<c051c778>] (_raw_spin_unlock_irq+0x28/0x2c)
      [<c051c778>] (_raw_spin_unlock_irq+0x28/0x2c) from [<c016edb0>] (proc_alloc_inum+0x30/0xa8)
      [<c016edb0>] (proc_alloc_inum+0x30/0xa8) from [<c016ee40>] (proc_register+0x18/0x130)
      [<c016ee40>] (proc_register+0x18/0x130) from [<c016f054>] (proc_mkdir_data+0x44/0x6c)
      [<c016f054>] (proc_mkdir_data+0x44/0x6c) from [<c008eb44>] (register_irq_proc+0x6c/0x128)
      [<c008eb44>] (register_irq_proc+0x6c/0x128) from [<c008ed2c>] (init_irq_proc+0x74/0xb0)
      [<c008ed2c>] (init_irq_proc+0x74/0xb0) from [<c075ab3c>] (kernel_init_freeable+0x84/0x1c8)
      [<c075ab3c>] (kernel_init_freeable+0x84/0x1c8) from [<c0510934>] (kernel_init+0x8/0x150)
      [<c0510934>] (kernel_init+0x8/0x150) from [<c0013ce8>] (ret_from_fork+0x14/0x2c)
      Code: bad PC value
      
      Fixes: bf18525f ("ARM: 7872/1: Support arch_irq_work_raise() via self IPIs")
      Reported-by: Olof Johansson <olof@lixom.net>
      Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>
      Tested-by: Olof Johansson <olof@lixom.net>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
  26. 07 Nov, 2013 1 commit
    • ARM: 7872/1: Support arch_irq_work_raise() via self IPIs · bf18525f
      Stephen Boyd authored
      
      
      By default, IRQ work is run from the tick interrupt (see
      irq_work_run() in update_process_times()). When we're in full
      NOHZ mode, restarting the tick requires the use of IRQ work and
      if the only place we run IRQ work is in the tick interrupt we
      have an unbreakable cycle. Implement arch_irq_work_raise() via
      self IPIs to break this cycle and get the tick started again.
      Note that we implement this via IPIs which are only available on
      SMP builds. This shouldn't be a problem because full NOHZ is only
      supported on SMP builds anyway.
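
      A hedged sketch of both halves (illustrative):

          void arch_irq_work_raise(void)
          {
                  smp_cross_call(cpumask_of(smp_processor_id()),
                                 IPI_IRQ_WORK);
          }

          /* and, in handle_IPI(): */
          case IPI_IRQ_WORK:
                  irq_enter();
                  irq_work_run();
                  irq_exit();
                  break;
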
      Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>
      Reviewed-by: Kevin Hilman <khilman@linaro.org>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
  27. 11 Oct, 2013 1 commit
    • ARM: mm: Introduce virt_to_idmap() with an arch hook · 4dc9a817
      Santosh Shilimkar authored
      
      
      On some PAE systems (e.g. TI Keystone), memory is above the
      32-bit addressable limit, and the interconnect provides an
      aliased view of parts of physical memory in the 32-bit addressable
      space.  This alias is strictly for boot time usage, and is not
      otherwise usable because of coherency limitations. On such systems,
      the idmap mechanism needs to take this aliased mapping into account.
      
      This patch introduces virt_to_idmap() and an arch function pointer
      which can be populated by platforms that need it.  It also converts
      the necessary idmap spots to the now-available virt_to_idmap().  An
      #ifdef approach was avoided to stay compatible with multi-platform
      builds.

      Most architectures won't touch it, and in that case virt_to_idmap()
      falls back to the existing virt_to_phys() macro.
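
      A hedged sketch of the hook (illustrative):

          unsigned long (*arch_virt_to_idmap)(unsigned long x);

          static inline unsigned long virt_to_idmap(unsigned long x)
          {
                  /* platforms with an aliased boot view install a hook */
                  if (arch_virt_to_idmap)
                          return arch_virt_to_idmap(x);
                  return virt_to_phys((void *)x);
          }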
      
      Cc: Russell King <linux@arm.linux.org.uk>
      Acked-by: Nicolas Pitre <nico@linaro.org>
      Signed-off-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
  28. 23 Sep, 2013 1 commit
    • ARM: SMP: basic IPI triggered completion support · 5135d875
      Nicolas Pitre authored
      
      
      We need a mechanism to let an inbound CPU signal that it is alive
      before even getting into the kernel environment, i.e. from early
      assembly code.  Using an IPI is the simplest way to achieve that.
      
      This adds some basic infrastructure to register a struct completion
      pointer to be "completed" when the dedicated IPI for this task is
      received.
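
      A hedged sketch of the registration/completion pair (illustrative):

          static DEFINE_PER_CPU(struct completion *, cpu_completion);

          int register_ipi_completion(struct completion *completion, int cpu)
          {
                  per_cpu(cpu_completion, cpu) = completion;
                  return IPI_COMPLETION;
          }

          /* IPI handler side: */
          static void ipi_complete(unsigned int cpu)
          {
                  complete(per_cpu(cpu_completion, cpu));
          }
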
      Signed-off-by: Nicolas Pitre <nico@linaro.org>
  29. 02 Sep, 2013 1 commit
  30. 13 Aug, 2013 1 commit
    • ARM: 7807/1: kexec: validate CPU hotplug support · 2103f6cb
      Stephen Warren authored
      
      
      Architectures should fully validate whether kexec is possible as part of
      machine_kexec_prepare(), so that user-space's kexec_load() operation can
      report any problems. Performing validation in machine_kexec() itself is
      too late, since it is not allowed to return.
      
      Prior to this patch, ARM's machine_kexec() was testing after-the-fact
      whether machine_kexec_prepare() was able to disable all but one CPU.
      Instead, modify machine_kexec_prepare() to validate all conditions
      necessary for machine_kexec() to succeed, and BUG if the validation
      succeeded yet disabling the CPUs didn't actually work.
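
      A hedged sketch of the split (the validation condition is an
      illustrative approximation):

          int machine_kexec_prepare(struct kimage *image)
          {
                  /*
                   * Validate here, where kexec_load() can still report
                   * failure to user space.
                   */
                  if (num_possible_cpus() > 1 && !platform_can_cpu_hotplug())
                          return -EINVAL;
                  return 0;
          }

          void machine_kexec(struct kimage *image)
          {
                  /* too late to fail gracefully: we cannot return */
                  BUG_ON(num_online_cpus() > 1);
                  /* ... */
          }
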
      Signed-off-by: Stephen Warren <swarren@nvidia.com>
      Acked-by: "Eric W. Biederman" <ebiederm@xmission.com>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
  31. 14 Jul, 2013 1 commit
    • arm: delete __cpuinit/__CPUINIT usage from all ARM users · 8bd26e3a
      Paul Gortmaker authored
      The __cpuinit type of throwaway sections might have made sense
      some time ago when RAM was more constrained, but now the savings
      do not offset the cost and complications.  For example, the fix in
      commit 5e427ec2 ("x86: Fix bit corruption at CPU resume time")
      is a good example of the nasty type of bugs that can be created
      with improper use of the various __init prefixes.
      
      After a discussion on LKML[1] it was decided that cpuinit should go
      the way of devinit and be phased out.  Once all the users are gone,
      we can then finally remove the macros themselves from linux/init.h.
      
      Note that some harmless section mismatch warnings may result, since
      notify_cpu_starting() and cpu_up() are arch independent (kernel/cpu.c)
      and are flagged as __cpuinit  -- so if we remove the __cpuinit from
      the arch specific callers, we will also get section mismatch warnings.
      As an intermediate step, we intend to turn the linux/init.h cpuinit
      related content into no-ops as early as possible, since that will get
      rid of these warnings.  In any case, they are temporary and harmless.
      
      This removes all the ARM uses of the __cpuinit macros from C code,
      and all __CPUINIT from assembly code.  It also had two ".previous"
      section statements that were paired off against __CPUINIT
      (aka .section ".cpuinit.text") that also get removed here.
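
      A one-line illustration of the mechanical change (the function here
      is just an example):

          /* before: */ static void __cpuinit percpu_timer_setup(void);
          /* after:  */ static void percpu_timer_setup(void);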
      
      [1] https://lkml.org/lkml/2013/5/20/589

      Cc: Russell King <linux@arm.linux.org.uk>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: linux-arm-kernel@lists.infradead.org
      Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
  32. 25 Jun, 2013 2 commits
  33. 17 Jun, 2013 1 commit
    • ARM: 7759/1: decouple CPU offlining from reboot/shutdown · 19ab428f
      Stephen Warren authored
      
      
      Add comments to machine_shutdown()/halt()/power_off()/restart() that
      describe their purpose and/or requirements re: CPUs being active/not.
      
      In machine_shutdown(), replace the call to smp_send_stop() with a call to
      disable_nonboot_cpus(). This completely disables all but one CPU, thus
      satisfying the requirement that only a single CPU be active for kexec.
      Adjust Kconfig dependencies for this change.
      
      In machine_halt()/power_off()/restart(), call smp_send_stop() directly,
      rather than via machine_shutdown(); these functions don't need to
      completely de-activate all CPUs using hotplug, but rather just quiesce
      them.
      
      Remove smp_kill_cpus(), and its call from smp_send_stop().
      smp_kill_cpus() was indirectly calling smp_ops.cpu_kill() without calling
      smp_ops.cpu_die() on the target CPUs first. At least some implementations
      of smp_ops had issues with this; it caused cpu_kill() to hang on Tegra,
      for example. Since smp_send_stop() is only used for shutdown, halt, and
      power-off, there is no need to attempt any kind of CPU hotplug here.
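
      A hedged sketch of the resulting split (illustrative):

          void machine_shutdown(void)
          {
                  disable_nonboot_cpus();  /* hard requirement for kexec */
          }

          void machine_halt(void)
          {
                  local_irq_disable();
                  smp_send_stop();         /* just quiesce the other CPUs */
                  while (1)
                          cpu_relax();
          }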
      
      Adjust Kconfig to reflect that machine_shutdown() (and hence kexec)
      relies upon disable_nonboot_cpus(). However, this alone doesn't guarantee
      that hotplug will work, or even that hotplug is implemented for a
      particular piece of HW that a multi-platform zImage runs on. Hence, add
      error-checking to machine_kexec() to determine whether it did work.
      Suggested-by: Russell King <rmk+kernel@arm.linux.org.uk>
      Signed-off-by: Stephen Warren <swarren@nvidia.com>
      Acked-by: Will Deacon <will.deacon@arm.com>
      Tested-by: Zhangfei Gao <zhangfei.gao@gmail.com>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>