This has been discussed several times in the past - the conclusion usually is that this first needs to be addressed upstream in make and ninja, i.e. they need to support throttling new compiler instances when memory gets tight. Once that is available, bitbake can follow by throttling its own tasks as well.

Alex
Yes, and the make project doesn't care, because make is called with -j 16, so that is what it does.
So here's my pitch: bitbake can stop processes spawned by make, because it knows that it started make on 4 recipes, each with -j 16. The individual makes don't know about each other.
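For illustration only, here is a rough sketch (not actual bitbake code) of the mechanism: if the parent starts each make in its own process group, it can later freeze and thaw the whole compile tree, g++ children included, with SIGSTOP/SIGCONT on the group. The helper names and the hard-coded -j 16 are made up for the example.

    #!/usr/bin/env python3
    # Sketch: launch make in its own process group so the whole tree
    # can be paused and resumed from the parent.
    import os
    import signal
    import subprocess

    def start_make(build_dir):
        # start_new_session=True puts make in its own session/process group,
        # so group-wide signals do not hit the parent.
        return subprocess.Popen(["make", "-j", "16"], cwd=build_dir,
                                start_new_session=True)

    def pause(proc):
        # Freezes make and every compiler it has spawned.
        os.killpg(os.getpgid(proc.pid), signal.SIGSTOP)

    def resume(proc):
        # Lets the whole group run again.
        os.killpg(os.getpgid(proc.pid), signal.SIGCONT)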
And normally it's not an issue (compiling C), but sometimes when compiling C++ it explodes (I saw 4 GB of RAM in use for one g++, and there were 4 of them).
On Sun, 11 Apr 2021 at 17:23, Gmane Admin <gley-yocto@m.gmane-mx.org> wrote:

My build machine has 8 cores + HT, so bitbake enthusiastically assumes 16. However, I have (only?) 16 GB of RAM (+ 24 GB swap space). 16 GB of RAM has always been more than enough with 4 cores + HT, but now building certain recipes (nodejs, but rust will probably do it as well) eats up more memory than there actually is RAM, causing excessive swapping. In fact, the swapping is so excessive that just this single recipe largely determines the total image build time. However, restricting bitbake (or actually parallel make) to use only 4 cores + HT also sounds like a waste.

I know this has been looked at in the past, but I think it needs fundamental resolving (other than "hey, why not just add another stick of memory").

What I do manually when I run into swapping so bad that my desktop becomes unresponsive is ssh in from another machine and then stop (not kill) the processes that use the most memory. These then get swapped out, but not back in, allowing the remaining tasks to complete without swapping. Then, when sufficient memory becomes free again, I continue the stopped processes.

Isn't this something that could be added to bitbake to automate using memory efficiently?
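As a thought experiment, that manual stop/continue workflow could be automated along these lines, assuming the supervisor knows the PIDs of the tasks it spawned. This is a hedged sketch, not anything bitbake provides today; the 1 GiB threshold, 2-second poll, and function names are arbitrary illustrative choices.

    #!/usr/bin/env python3
    # Sketch: when MemAvailable drops below a threshold, SIGSTOP the tracked
    # task with the largest resident set; SIGCONT it again once memory recovers.
    import os
    import signal
    import time

    def mem_available_kib():
        # MemAvailable is the kernel's estimate of memory usable without swapping.
        with open("/proc/meminfo") as f:
            for line in f:
                if line.startswith("MemAvailable:"):
                    return int(line.split()[1])
        return 0

    def rss_kib(pid):
        # Resident set size of one process; 0 if it has already exited.
        try:
            with open(f"/proc/{pid}/status") as f:
                for line in f:
                    if line.startswith("VmRSS:"):
                        return int(line.split()[1])
        except FileNotFoundError:
            pass
        return 0

    def throttle(pids, low_kib=1024 * 1024, poll=2):
        stopped = []
        while True:
            avail = mem_available_kib()
            running = [p for p in pids if p not in stopped]
            if avail < low_kib and running:
                # Memory is tight: freeze the biggest consumer so it swaps
                # out once and stays out.
                victim = max(running, key=rss_kib)
                os.kill(victim, signal.SIGSTOP)
                stopped.append(victim)
            elif stopped and avail > 2 * low_kib:
                # Memory has recovered: let the most recently stopped task continue.
                os.kill(stopped.pop(), signal.SIGCONT)
            time.sleep(poll)

Stopping whole process groups (as in the earlier sketch) rather than single PIDs would be closer to what bitbake could actually do, since it is the one that launched each make.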