On Sun, 2021-04-18 at 00:17 +0200, Gmane Admin wrote:
Hi, On 14-04-2021 at 06:59, Richard Purdie wrote:
On Tue, 2021-04-13 at 21:14 -0400, Randy MacLeod wrote:
On 2021-04-11 12:19 p.m., Alexander Kanavin wrote:
make already has a -l option for limiting new instances if the load average is too high, so it's only natural to add a RAM limiter too.
-l [N], --load-average[=N], --max-load[=N] Don't start multiple jobs unless load is below N.
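As a quick sketch of the -l behaviour described above (the Makefile, file path, and -j/-l values here are arbitrary demo choices, not anything from the thread):

```shell
# One-line throwaway Makefile; a recipe after ';' needs no tab
echo 'all: ; @echo done' > /tmp/loadavg-demo.mk
# Allow up to 8 parallel jobs, but only start new ones while the
# 1-minute load average is below 4 (make always runs at least one job)
make -f /tmp/loadavg-demo.mk -j 8 -l 4
```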
In any case, patches welcome :)
During today's Yocto technical call (1), we talked about approaches to limiting the system load and avoiding swap and/or OOM events. Here's what (little!) I recall from the discussion, 9 busy hours later.
In the short run, instead of independently maintaining configuration changes to limit parallelism, xz memory usage, etc., we could develop an optional common include file where such limits are shared across the community.
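A minimal sketch of what such a shared include might carry (the file name and values are hypothetical; XZ_MEMLIMIT is assumed here to be the oe-core variable governing xz compression memory):

```
# conf/memory-limits.inc -- hypothetical community-shared limits
# Cap xz compression memory usage
XZ_MEMLIMIT = "30%"
# Per-recipe parallelism limits collected from the community
PARALLEL_MAKE_pn-nodejs = "-j 2"
```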
I tried PARALLEL_MAKE_nodejs = "-j 1" from local.conf but that didn't work.
It would need to be:
PARALLEL_MAKE_pn-nodejs = "-j 1"
So I watched it run for a while. It compiles with g++ and as at about 0.5 GB per thread, which is OK. At the end it runs ld, which takes about 4 GB, and it tries to run four of them in parallel. Then swapping becomes so heavy that the desktop becomes unresponsive. As I mentioned before, ssh-ing in from another machine lets me STOP one of them, allowing the remaining ones to complete, and then CONT the last one.
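The STOP/CONT trick above can be demonstrated with SIGSTOP/SIGCONT, using a `sleep` process as a stand-in for one of the memory-hungry ld processes (a stopped process no longer competes for the CPU, and its pages can be swapped out without thrashing):

```shell
# Stand-in for one of the parallel ld processes
sleep 60 &
pid=$!
kill -STOP "$pid"       # pause it; the process enters the stopped (T) state
ps -o stat= -p "$pid"   # state string starts with "T" while paused
kill -CONT "$pid"       # resume it once the other links have finished
kill "$pid"             # clean up the demo process
```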
I worked around it for now by creating a bbappend for nodejs containing only PARALLEL_MAKE = "-j 2".
If that works, the override above should also work. You do need the "pn-" prefix to the recipe name though.
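For completeness, the bbappend workaround might look like this (the layer path and version wildcard are illustrative, not from the thread):

```
# meta-mylayer/recipes-devtools/nodejs/nodejs_%.bbappend
# Only two link jobs at once: each ld peaks around 4 GB here
PARALLEL_MAKE = "-j 2"
```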