On Wed, Mar 18, 2020 at 10:56:50PM +0000, Ross Burton wrote:
> On 18/03/2020 14:09, Mike Looijmans wrote:
> > Hard disk speed has very little impact on your build time. It helps with
> > the "setscene" parts, but doesn't affect actual compile time at all. I
> > recall someone did a build purely from RAM disks on a rig, and it was only
> > about 1 minute faster on a one-hour build compared to rotating disks.
> My build machine has lots of RAM and I do builds in a 32GB tmpfs with
> rm_work (and no, I don't build webkit, which would make this impractical).
> As you say, with sufficient RAM the build speed is practically the same as
> on disks due to the caching (especially if you tune the mount options), so
> I'd definitely spend money on more RAM instead of super-fast disks. I just
> prefer doing tmpfs builds because it saves my spinning rust. :)
An alternative to a tmpfs with a hard size limit is to keep file system caches
in memory as long as possible and only start writing to disk when the page cache
gets too full. This scales but still uses all the RAM available. Here's how to do it:
$ cat /etc/sysctl.d/99-build_server_fs_ops_to_memory.conf
# fs cache can use 90% of memory before system starts io to disk,
# keep as much as possible in RAM
vm.dirty_background_bytes = 0
vm.dirty_background_ratio = 90
# keep stuff for 12h in memory before writing to disk,
# allows reusing data as much as possible between builds
vm.dirty_expire_centisecs = 4320000
vm.dirtytime_expire_seconds = 432000
# allow single process to use 60% of system RAM for file caches, e.g. image build
vm.dirty_bytes = 0
vm.dirty_ratio = 60
# disable periodic background writes, only write when running out of RAM
vm.dirty_writeback_centisecs = 0
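These settings apply at boot; to load them immediately after dropping the file in
place (assuming a systemd-style /etc/sysctl.d layout as above):

$ sudo sysctl --system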
Once this is done, IO still happens whenever anything calls sync() or fsync(),
and the worst offenders are package management tools. In Yocto builds these
flushes to disk are pointless, since the rootfs images are going to be compressed
and the originals wiped by rm_work anyway.
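To see what is still forcing writes out, it helps to trace the sync family of
syscalls on the workers; a rough sketch (picking bitbake-worker as the process
to attach to is just an illustration of whatever is busy at that moment):

$ sudo strace -f -tt -e trace=sync,fsync,fdatasync,syncfs -p $(pgrep -f bitbake-worker | head -n1)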
I've tried to hook the eatmydata library into the build, which turns sync() and
fsync() calls into no-ops, but I've still failed to catch all the tools and
processes called from python code during the build. For shell-based tasks this does it:
$ export LD_LIBRARY_PATH=/usr/lib/libeatmydata
$ export LD_PRELOAD=libeatmydata.so
$ grep -rn LD_PRELOAD conf/local.conf
conf/local.conf:305:BB_HASHBASE_WHITELIST_append = " LD_PRELOAD"
conf/local.conf:306:BB_HASHCONFIG_WHITELIST_append = " LD_PRELOAD"
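For reference, roughly the same setup can be kept in local.conf alone by exporting
the variables into the task environments (a sketch, assuming libeatmydata lives in
/usr/lib/libeatmydata on the build host; exported variables reach processes spawned
by tasks, but as said above this still doesn't catch everything started from python code):

export LD_LIBRARY_PATH = "/usr/lib/libeatmydata"
export LD_PRELOAD = "libeatmydata.so"
BB_HASHBASE_WHITELIST_append = " LD_PRELOAD LD_LIBRARY_PATH"
BB_HASHCONFIG_WHITELIST_append = " LD_PRELOAD LD_LIBRARY_PATH"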
The effect is clearly visible during the build when using Performance Co-Pilot (pcp)
or similar tools to monitor CPU, memory, IO and network IO. RAM usage for the page
cache grows until the limits are hit, and only then do writes to disk
start, except for the python image classes... Hints to fix this are welcome!
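For the monitoring itself, something like this does the job (pcp metric names
quoted from memory; pminfo shows what a given host actually provides):

$ pmrep -t 5s mem.util.dirty mem.util.cached disk.all.write_bytes kernel.all.load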
From what I've seen monitoring our builds, there is a lot of optimization
potential for better build times. CPUs are under-utilized during bitbake recipe
parsing, fetch, configure, package and rootfs tasks. Memory is not fully utilized
either, since IO through sync()/fsync() happens everywhere and because ext4 and
other file systems do background writes by default. Only do_compile() tasks saturate
all CPUs, and, when linking lots of C++, all of the RAM as well. On top of that,
dependencies between various recipes and tasks leave large gaps in CPU utilization too.
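Those gaps are easy to visualize by enabling buildstats and rendering the result
afterwards; a sketch, assuming a poky checkout next to the build directory and the
default tmp/buildstats location (exact options can be checked with --help):

In conf/local.conf:
INHERIT += "buildstats"

After the build:
$ ../poky/scripts/pybootchartgui/pybootchartgui.py -f svg tmp/buildstats/<build timestamp>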
-Mikko