Re: what to expect from distributed sstate cache?


Mans Zigher <mans.zigher@...>
 

Hi,

Thanks for the input. Regarding Docker: we build the Docker image
ourselves and use the same image on all nodes, so shouldn't the
containers be identical when each node starts them?
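
As a sketch, one way to be certain every node runs byte-identical
image contents is to pull by immutable digest instead of by tag
(the registry name and digest below are placeholders):

    docker pull registry.example.com/yocto-build@sha256:3b1c0f...
    docker run --rm -it -v /srv/build:/build \
        registry.example.com/yocto-build@sha256:3b1c0f... /bin/bash

A tag like :latest can silently point at different images on
different nodes; a digest cannot.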

Thanks,

On Wed, 27 May 2020 at 11:16, <Mikko.Rapeli@...> wrote:


Hi,

On Wed, May 27, 2020 at 10:58:55AM +0200, Mans Zigher wrote:
This is maybe more related to BitBake, but I will start by posting it
here. I am trying for the first time to make use of a distributed
sstate cache, and I am getting some unexpected results, so I wanted to
hear whether my expectations are wrong. Everything works as expected
when a build node is using an sstate cache from itself: I do a clean
build and upload the sstate cache from that build to our mirror. If I
then do a complete build using the mirror, I get a 99% hit rate, which
is what I would expect. But if I then start a build on a different
node using the same cache, I am only getting a 16% hit rate.

I am running the builds inside Docker, so the environment should be
identical. We have several build nodes in our CI; they were actually
cloned and all of them have the same hardware. They all run the builds
in Docker, but it looks like they cannot share the sstate cache and
still get a 99% hit rate. This suggests to me that the sstate cache
hit rate is node dependent, so a cache cannot actually be shared
between different nodes, which is not what I expected. I have not been
able to find any information about this limitation. Any clarification
regarding what to expect from the sstate cache would be appreciated.
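
For reference, the nodes are pointed at the mirror with something
like the following in conf/local.conf (the server URL here is a
placeholder, not our real mirror):

    SSTATE_MIRRORS ?= "file://.* http://sstate.example.com/PATH;downloadfilename=PATH"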
We do something similar, except we rsync an sstate mirror to the build
nodes from the latest release before a build (and topics from gerrit
are merged onto the latest release too, to avoid the sstate cache and
build tree getting too far out of sync).
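
A minimal sketch of that sync step, assuming an rsync daemon on the
mirror host (hostname and paths are made up):

    rsync -a --delete \
        rsync://sstate.example.com/sstate-mirror/ \
        /srv/sstate-cache/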

bitbake-diffsigs can tell you why things get rebuilt. The answers
should be there.
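
For example, to see why a given task's signature changed between its
last two runs (recipe and task here are just examples):

    bitbake-diffsigs -t zlib do_compile

or compare two sigdata/siginfo files from the stamps directories of
the two nodes directly (paths below are placeholders):

    bitbake-diffsigs tmp/stamps/<arch>/zlib/<old>.do_compile.sigdata.<hash1> \
                     tmp/stamps/<arch>/zlib/<new>.do_compile.sigdata.<hash2>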

Also note that docker images are not reproducible by default
and might end up having different patch versions of openssl etc.
depending on who built them and when. One way to work around this
is to use e.g. snapshot.debian.org repos for Debian containers,
with a timestamped state of the full package repo used to generate
the container. I've done something similar but manually, on top of
debootstrap, to create a build rootfs tarball for lxc.
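
As a sketch, the container's sources.list would pin every apt line to
a single snapshot timestamp (the date below is arbitrary):

    deb http://snapshot.debian.org/archive/debian/20200527T000000Z/ buster main
    deb http://snapshot.debian.org/archive/debian/20200527T000000Z/ buster-updates main

Since snapshot Release files are old, apt may also need
-o Acquire::Check-Valid-Until=false when running apt-get update.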

Hope this helps,

-Mikko
