On Thu, 2020-03-19 at 08:05 +0000, Mikko Rapeli wrote:
> Once this is done, IO still happens when anything calls sync() or fsync(), and the worst offenders are package management tools. In Yocto builds, package manager flushes to disk are always useless, since the rootfs images are going to be compressed and the originals wiped by rm_work anyway. I've tried to hook the eatmydata library into the build, which turns sync() and fsync() calls into no-ops, but I've still failed to cover all the tools and processes called during the build from Python code. For shell based tasks, something like the sketch below does it:
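(Mikko's original snippet is not included in the quote; what follows is a plausible sketch, assuming eatmydata's usual LD_PRELOAD mechanism and a host-specific library path, not the exact hook from his build:)

    # Hypothetical sketch: preload libeatmydata so that sync()/fsync()
    # calls made by shell tasks become no-ops. The library path is an
    # assumption and varies per host distro.
    export LD_PRELOAD="/usr/lib/x86_64-linux-gnu/libeatmydata.so${LD_PRELOAD:+ $LD_PRELOAD}"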
Doesn't pseudo intercept and stop these sync calls already? It's supposed to, so if it's not, we should fix that.
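(One rough way to check, as a sketch: assuming the build's pseudo binary is on PATH and its environment (PSEUDO_PREFIX and friends) is set up the way bitbake sets it, strace should show no sync-family syscalls reaching the kernel if pseudo's libc-level interception is working:)

    # Trace sync-family syscalls under pseudo; an empty trace means the
    # LD_PRELOAD interception worked and nothing reached the kernel.
    strace -f -e trace=sync,fsync,fdatasync,syncfs \
        pseudo sh -c 'touch testfile; sync'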
> The effect is clearly visible during a build when using Performance Co-Pilot (pcp) or similar tools to monitor CPU, memory, IO and network IO. RAM usage as page cache grows until limits are hit, and only then do writes to disk start, except for the Python image classes... Hints to fix this are welcome!
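(For reference, a minimal way to watch this live, assuming the pcp packages are installed on the build host:)

    # pmstat prints load average, memory, swap, block IO and CPU
    # utilization once per interval (here every 5 seconds), which makes
    # the "page cache grows, then writes burst" pattern easy to spot.
    pmstat -t 5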
> From my experience monitoring our builds, there is a lot of optimization potential for better build times. CPUs are under-utilized during bitbake recipe parsing
Recipe parsing should hit 100% CPU; it's one of the few places we can do that.
> ... and during the fetch, configure, package and rootfs tasks.
Sadly these tasks are much harder.
> Memory is not fully utilized either, since IO through sync()/fsync() happens everywhere.