[meta-freescale] Page allocation error on bulk data transfer

Abdul Ahad abdulahad at iwavesystems.com
Wed Feb 18 04:22:16 PST 2015


Hi,

I am running a bulk data transfer test on a Yocto file system on an 
i.MX6 DualLite board with 512 MB of RAM. The file being copied is only 
500 MB. After running the test for 24 hours, I see the following errors 
on the terminal.
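
For reference, the test is essentially a repeated copy of the file, 
roughly along the lines of the sketch below (the paths are placeholders, 
not the exact ones used on the board):

    # Bulk copy test, sketched with placeholder paths.
    # /mnt/src/test.bin is the 500 MB test file.
    while true; do
        cp /mnt/src/test.bin /mnt/dst/test.bin
        sync
        rm /mnt/dst/test.bin
    done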

cp: page allocation failure: order:0, mode:0x200020
[2015-02-03 12:31:11.555] CPU: 1 PID: 2692 Comm: cp Not tainted 3.10.17-1.0.0_ga+g2a69800 #3
[2015-02-03 12:31:11.561] [<80014620>] (unwind_backtrace+0x0/0xf4) from [<80011444>] (show_stack+0x10/0x14)
[2015-02-03 12:31:11.569] [<80011444>] (show_stack+0x10/0x14) from [<8008e230>] (warn_alloc_failed+0xe0/0x118)
[2015-02-03 12:31:11.578] [<8008e230>] (warn_alloc_failed+0xe0/0x118) from [<80091044>] (__alloc_pages_nodemask+0x634/0x890)
..

kworker/u4:1: page allocation failure: order:0, mode:0x200020
[2015-02-03 20:23:58.471] CPU: 1 PID: 9527 Comm: kworker/u4:1 Not tainted 3.10.17-1.0.0_ga+g2a69800 #3
[2015-02-03 20:23:58.478] Workqueue: writeback bdi_writeback_workfn (flush-8:0)
..

aiurdemux0:sink: page allocation failure: order:0, mode:0x200020
[2015-02-04 01:10:56.110] CPU: 0 PID: 23438 Comm: aiurdemux0:sink Not tainted 3.10.17-1.0.0_ga+g2a69800 #3
[2015-02-04 01:10:56.117] [<80014620>] (unwind_backtrace+0x0/0xf4) from [<80011444>] (show_stack+0x10/0x14)
..

Is there any way to prevent these page allocation errors?
I have tried the workaround specified in the sabresd Linux release 
notes, but without any success. The workaround suggests running the 
command "echo 1 > /proc/sys/vm/drop_caches".

Thank you,
Regards,
Abdul Ahad
