On Fri, Jul 12, 2019, 9:21 AM Yang Liu <yliu@cybertec.com.au> wrote:

Hi meta-freescale gurus,
I am testing a simple VPN (openswan) configuration on a customised
i.MX287 board. The kernel is Linux 5.0.7-fslc+g39f695df6f5f. A few
seconds after VPN traffic starts, the kernel prints the lockdep
warning below on the console; it looks like a deadlock in the
mxs-dcp driver.

[ 4176.961195] ================================
[ 4176.965485] WARNING: inconsistent lock state
[ 4176.969775] 5.0.7-fslc+g39f695df6f5f #1 Not tainted
[ 4176.974667] --------------------------------
[ 4176.978954] inconsistent {SOFTIRQ-ON-W} -> {IN-SOFTIRQ-W} usage.
[ 4176.984984] swapper/0 [HC0[0]:SC1[1]:HE1:SE0] takes:
[ 4176.989973] 6c0f210d (&(&sdcp->lock[i])->rlock){+.?.}, at: mxs_dcp_aes_enqueue+0x4c/0x98
[ 4176.998137] {SOFTIRQ-ON-W} state was registered at:
[ 4177.003049]    _raw_spin_lock+0x28/0x38
[ 4177.006828]    dcp_chan_thread_sha+0x4c/0x2e4
[ 4177.011133]    kthread+0x120/0x138
[ 4177.014475]    ret_from_fork+0x14/0x24
[ 4177.018155]    (null)
[ 4177.020531] irq event stamp: 2503736
[ 4177.024150] hardirqs last  enabled at (2503736): [<c0073f74>] ktime_get_real_seconds+0x90/0xb0
[ 4177.032795] hardirqs last disabled at (2503735): [<c0073f3c>] ktime_get_real_seconds+0x58/0xb0
[ 4177.041438] softirqs last  enabled at (2503728): [<c001f85c>] irq_enter+0x64/0x80
[ 4177.048947] softirqs last disabled at (2503729): [<c001f9c0>] irq_exit+0x148/0x19c
[ 4177.056528]
[ 4177.056528] other info that might help us debug this:
[ 4177.063064]  Possible unsafe locking scenario:
[ 4177.063064]
[ 4177.068991]        CPU0
[ 4177.071446]        ----
[ 4177.073901]   lock(&(&sdcp->lock[i])->rlock);
[ 4177.078281]   <Interrupt>
[ 4177.080910]     lock(&(&sdcp->lock[i])->rlock);
[ 4177.085462]
[ 4177.085462]  *** DEADLOCK ***
[ 4177.085462]
[ 4177.091397] 2 locks held by swapper/0:
[ 4177.095155]  #0: 9c353098 (rcu_read_lock){....}, at: netif_receive_skb_internal+0x28/0x19c
[ 4177.103503]  #1: 9c353098 (rcu_read_lock){....}, at: ip_local_deliver_finish+0x2c/0xb8
[ 4177.111500]
[ 4177.111500] stack backtrace:
[ 4177.115883] CPU: 0 PID: 0 Comm: swapper Not tainted 5.0.7-fslc+g39f695df6f5f #1
[ 4177.123208] Hardware name: Freescale MXS (Device Tree)
[ 4177.128415] [<c0010df8>] (unwind_backtrace) from [<c000e730>] (show_stack+0x10/0x14)
[ 4177.136211] [<c000e730>] (show_stack) from [<c0055a74>] (mark_lock+0x534/0x6f0)
[ 4177.143560] [<c0055a74>] (mark_lock) from [<c005612c>] (__lock_acquire+0x484/0x1870)
[ 4177.151339] [<c005612c>] (__lock_acquire) from [<c0057e54>] (lock_acquire+0xb4/0x178)
[ 4177.159210] [<c0057e54>] (lock_acquire) from [<c068adb8>] (_raw_spin_lock+0x28/0x38)
[ 4177.166996] [<c068adb8>] (_raw_spin_lock) from [<c049e024>] (mxs_dcp_aes_enqueue+0x4c/0x98)
[ 4177.175416] [<c049e024>] (mxs_dcp_aes_enqueue) from [<bf0dc52c>] (esp_input+0x1d8/0x300 [esp4])
[ 4177.184190] [<bf0dc52c>] (esp_input [esp4]) from [<c05d9540>] (xfrm_input+0x83c/0xad0)
[ 4177.192171] [<c05d9540>] (xfrm_input) from [<c05c8bd0>] (xfrm4_esp_rcv+0x68/0x120)
[ 4177.199798] [<c05c8bd0>] (xfrm4_esp_rcv) from [<c0564a1c>] (ip_protocol_deliver_rcu+0x7c/0x2cc)
[ 4177.208547] [<c0564a1c>] (ip_protocol_deliver_rcu) from [<c0564cf4>] (ip_local_deliver_finish+0x88/0xb8)
[ 4177.218072] [<c0564cf4>] (ip_local_deliver_finish) from [<c0564e54>] (ip_local_deliver+0x130/0x1a8)
[ 4177.227162] [<c0564e54>] (ip_local_deliver) from [<c0564fe8>] (ip_rcv+0x11c/0x174)
[ 4177.234778] [<c0564fe8>] (ip_rcv) from [<c05139a8>] (__netif_receive_skb_one_core+0x4c/0x6c)
[ 4177.243263] [<c05139a8>] (__netif_receive_skb_one_core) from [<c0519370>] (netif_receive_skb_internal+0x58/0x19c)
[ 4177.253568] [<c0519370>] (netif_receive_skb_internal) from [<c0519f10>] (napi_gro_receive+0x148/0x1d0)
[ 4177.262924] [<c0519f10>] (napi_gro_receive) from [<c042164c>] (fec_enet_rx_napi+0x3d0/0x9b8)
[ 4177.271406] [<c042164c>] (fec_enet_rx_napi) from [<c051a668>] (net_rx_action+0xe4/0x3dc)
[ 4177.279542] [<c051a668>] (net_rx_action) from [<c000a15c>] (__do_softirq+0x134/0x434)
[ 4177.287416] [<c000a15c>] (__do_softirq) from [<c001f9c0>] (irq_exit+0x148/0x19c)
[ 4177.294853] [<c001f9c0>] (irq_exit) from [<c006345c>] (__handle_domain_irq+0x50/0xa8)
[ 4177.302720] [<c006345c>] (__handle_domain_irq) from [<c00099cc>] (__irq_svc+0x6c/0x8c)
[ 4177.310658] Exception stack(0xc0919f40 to 0xc0919f88)
[ 4177.315744] 9f40: 00000001 00000001 00000000 20000013 ffffe000 c09230c4 c09a453f c07e839c
[ 4177.323953] 9f60: c0923060 c7ee8e80 c090aa50 00000000 00000000 c0919f90 c0058020 c000bc30
[ 4177.332147] 9f80: 20000013 ffffffff
[ 4177.335675] [<c00099cc>] (__irq_svc) from [<c000bc30>] (arch_cpu_idle+0x28/0x38)
[ 4177.343115] [<c000bc30>] (arch_cpu_idle) from [<c0047078>] (do_idle+0x8c/0xec)
[ 4177.350374] [<c0047078>] (do_idle) from [<c004745c>] (cpu_startup_entry+0xc/0x10)
[ 4177.357909] [<c004745c>] (cpu_startup_entry) from [<c08c9d8c>] (start_kernel+0x3cc/0x474)
[ 4177.460138] NOHZ: local_softirq_pending 08
[ 4177.960322] NOHZ: local_softirq_pending 08
[ 4178.045081] NOHZ: local_softirq_pending 08
[ 4178.126861] NOHZ: local_softirq_pending 08
[ 4178.157288] NOHZ: local_softirq_pending 08
[ 4178.191966] NOHZ: local_softirq_pending 08
[ 4178.218160] NOHZ: local_softirq_pending 08
[ 4178.237144] NOHZ: local_softirq_pending 08
[ 4178.267594] NOHZ: local_softirq_pending 08
[ 4178.286991] NOHZ: local_softirq_pending 08
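
From the splat, the same per-channel spinlock seems to be taken in two
inconsistent contexts: with plain spin_lock() in the DCP worker kthread
(dcp_chan_thread_sha), where softirqs stay enabled, and again from
softirq context via NAPI RX -> esp_input() -> mxs_dcp_aes_enqueue().
If I read drivers/crypto/mxs-dcp.c right, the pattern is roughly the
following (a simplified sketch, not a verbatim copy of the driver):

  /* Process context: per-channel worker kthread, softirqs enabled. */
  spin_lock(&sdcp->lock[chan]);
  arq = crypto_dequeue_request(&sdcp->queue[chan]);
  spin_unlock(&sdcp->lock[chan]);

  /* Softirq context: ESP RX path enqueues on the same channel. */
  spin_lock(&sdcp->lock[actx->chan]);
  ret = crypto_enqueue_request(&sdcp->queue[actx->chan], &req->base);
  spin_unlock(&sdcp->lock[actx->chan]);

If the RX softirq runs while the kthread holds the lock, the softirq
side spins on a lock its own CPU already holds, which is exactly the
{SOFTIRQ-ON-W} -> {IN-SOFTIRQ-W} scenario lockdep reports. If that
reading is correct, I would guess that using the bottom-half-safe
variants in process context would make the usage consistent, e.g.
(untested, just an idea):

  /* Untested: keep softirqs disabled across the process-context
   * critical section so the lock is never held with BHs enabled. */
  spin_lock_bh(&sdcp->lock[chan]);
  arq = crypto_dequeue_request(&sdcp->queue[chan]);
  spin_unlock_bh(&sdcp->lock[chan]);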

Can someone give me some suggestions about this bug?

Best Regards,
yliu