On Sun, 2021-04-18 at 00:17 +0200, Gmane Admin wrote:
> On 14-04-2021 at 06:59, Richard Purdie wrote:
>> On Tue, 2021-04-13 at 21:14 -0400, Randy MacLeod wrote:
>>> On 2021-04-11 12:19 p.m., Alexander Kanavin wrote:
>>>> make already has a -l option for limiting new instances if the load
>>>> average is too high, so it's only natural to add a RAM limiter too.
>>>>
>>>>   -l [N], --load-average[=N], --max-load[=N]
>>>>       Don't start multiple jobs unless load is below N.
>>>>
>>>> In any case, patches welcome :)
>>>
>>> During today's Yocto technical call (1), we talked about approaches to
>>> limiting the system load and avoiding swap and/or OOM events. Here's
>>> what (little!) I recall from the discussion, 9 busy hours later.
>>>
>>> In the short run, instead of independently maintaining changes to
>>> configurations to limit parallelism or xz memory usage, etc., we
>>> could develop an optional common include file where such limits are
>>> shared across the community.
>>>
>>> I tried PARALLEL_MAKE_nodejs = "-j 1" from local.conf, but that
>>> didn't work.
>>
>> It would need to be:
>>
>> PARALLEL_MAKE_pn-nodejs = "-j 1"
>
> So I watched it run for a while. It compiles with g++ and as at about
> 0.5 GB per thread, which is OK. At the end it runs ld, which takes 4 GB,
> and it tries to run 4 of them in parallel. Then swapping becomes so
> heavy that the desktop becomes unresponsive. As I mentioned before, ssh
> from another machine lets me STOP one of them, allowing the remaining
> ones to complete, and then CONT the last one.
>
> I have worked around it for now by creating a bbappend for nodejs with
> only:
>
> PARALLEL_MAKE = "-j 2"

If that works, the override above should also work. You do need the "pn-"
prefix to the recipe name though.
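For reference, the two fixes discussed in the thread can sit together in local.conf; a sketch with illustrative, untuned values, combining the "pn-" per-recipe override with make's -l load limiter (PARALLEL_MAKE is passed to make verbatim, so -l can ride along with -j):

```conf
# local.conf sketch -- values are illustrative, not tuned recommendations.
# Global default: up to 8 jobs, but don't start new ones while the load
# average is above 8 (make's -l flag, per Alexander's point).
PARALLEL_MAKE = "-j 8 -l 8"

# Per-recipe override needs the "pn-" prefix, as Richard notes.
# nodejs runs several ~4 GB ld processes at the end, so cap it harder.
PARALLEL_MAKE_pn-nodejs = "-j 2"
```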
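The STOP/CONT rescue described above can also be done from a script; a minimal sketch, using a background sleep as a stand-in for one of the runaway ld processes (finding the real PID, e.g. with pgrep over ssh, is left out):

```shell
# Freeze one memory-hungry process so its siblings can finish, then resume it.
# A background 'sleep' stands in for the real ld here.
sleep 60 &
pid=$!

kill -STOP "$pid"                        # freeze: no CPU use, RAM stays mapped
sleep 0.2
state_stopped=$(ps -o stat= -p "$pid" | tr -d ' ')
echo "after STOP: $state_stopped"        # process state starts with 'T' (stopped)

kill -CONT "$pid"                        # resume once the others have finished
sleep 0.2
state_resumed=$(ps -o stat= -p "$pid" | tr -d ' ')
echo "after CONT: $state_resumed"        # back to 'S' (sleeping)

kill "$pid" 2>/dev/null
wait "$pid" 2>/dev/null || true
```

Note that SIGSTOP only stops the process from running; its memory stays resident (or swapped out), which is exactly why it lets the other linkers finish.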
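As for Alexander's RAM-limiter idea: no such option exists in make today, so this is purely a hypothetical sketch of the gating logic (function names are my own), polling MemAvailable from /proc/meminfo the way -l polls the load average. Linux-only:

```python
import time


def parse_mem_available_kb(meminfo_text):
    """Extract the MemAvailable value (in kB) from /proc/meminfo text."""
    for line in meminfo_text.splitlines():
        if line.startswith("MemAvailable:"):
            return int(line.split()[1])  # second field is the kB value
    raise ValueError("MemAvailable not found in meminfo text")


def mem_available_kb():
    """Read current MemAvailable from the live /proc/meminfo (Linux-only)."""
    with open("/proc/meminfo") as f:
        return parse_mem_available_kb(f.read())


def wait_for_ram(min_free_kb, poll_s=1.0):
    """Block until at least min_free_kb of RAM is available,
    i.e. the point where a hypothetical 'RAM -l' would allow a new job."""
    while mem_available_kb() < min_free_kb:
        time.sleep(poll_s)


# e.g. require ~4 GB free before launching another ld-sized job:
# wait_for_ram(4 * 1024 * 1024)
```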