Re: Switching between multiple DISTROs without "contamination"


Nicolas Jeker

Thanks Martin and Mike for your explanations and tips.

So, I've done a lot of testing today and it seems I simplified the
example in my first email a bit too much. The example as-is works fine
when switching DISTROs as far as I can tell. The problem only arises
when wildcards are used.

Changing my initial example like this should trigger the behaviour I've
initially described:

SRC_URI:append:mydistro-dev = " file://application-dbg.service"

do_install() {
    # ...snip...
    # systemd service
    install -d ${D}${systemd_system_unitdir}
    install -m 0644 ${WORKDIR}/*.service ${D}${systemd_system_unitdir}
}

do_install:append:mydistro-dev() {
    # debug systemd services
    install -d ${D}${systemd_system_unitdir}
    install -m 0644 ${WORKDIR}/application-dbg.service ${D}${systemd_system_unitdir}
}

Notice the *.service in do_install.

From my testing, this is how contamination happens:

1) Build with 'DISTRO=mydistro bitbake application'. All tasks for the
recipe are run and the directories in WORKDIR are populated, including
the "application.service" file.
2) Build with 'DISTRO=mydistro-dev bitbake application'. do_unpack is
rerun and places the additional "application-dbg.service" file in
WORKDIR.
3) Switching back to 'mydistro' will get the recipe from sstate cache,
which works fine.
4) Changing application.bb and rebuilding with 'DISTRO=mydistro bitbake
application' reruns do_install (as expected). This leads to do_install
picking up the additional "application-dbg.service" file left behind by
the invocation in step 2), so it ends up in the package.
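For what it's worth, one way to avoid this kind of leftover-file pickup
(just a sketch, untested; SERVICE_FILES is a made-up variable name) is
to drop the wildcard and install only explicitly listed files, with the
-dev override extending the list:

```
SERVICE_FILES = "application.service"
SERVICE_FILES:append:mydistro-dev = " application-dbg.service"

SRC_URI:append:mydistro-dev = " file://application-dbg.service"

do_install() {
    # ...snip...
    install -d ${D}${systemd_system_unitdir}
    # install only the files listed for this DISTRO; stale files left in
    # WORKDIR by a previous build are never matched
    for f in ${SERVICE_FILES}; do
        install -m 0644 ${WORKDIR}/$f ${D}${systemd_system_unitdir}
    done
}
```

That way the set of installed files is fully determined by the task
signature, not by whatever happens to be lying around in WORKDIR.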

Mike, Martin: Do you remember in which cases you encountered problems
when sharing the build directory?

On Tue, 2022-07-12 at 09:15 -0700, Khoi Dinh Trinh wrote:
Thank you Nicolas for asking this question, since I will probably run
into this issue soon if not for this email thread. The answers so far
have been very helpful, but I want to clarify why the package doesn't
get rebuilt. From my understanding, Yocto should rerun a task when its
signature changes, and since do_install has an override on
mydistro-dev, shouldn't the content and thus the signature of
do_install change when switching distro, causing Yocto to rerun it?
As far as I can tell, all the relevant tasks are rerun correctly when
something is changed. By "relevant" I mean only the tasks whose
signatures actually differ between the two distros.

The specific issue I experienced is due to the WORKDIR not being
cleaned between different task invocations and the recipe (probably
wrongly) relying on wildcards to gather files, see example above.

I have a lot of tasks with overrides, not just on DISTRO but on other
things like MACHINE or custom variables, so I want to understand the
rebuild mechanism as best I can.
There's surely someone more knowledgeable here that could clarify the
inner workings of this mechanism a lot better than me.

Best,
Khoi Trinh

On Tue, Jul 12, 2022 at 8:05 AM Mike Looijmans
<mike.looijmans@...> wrote:
Quick answer: Don't build multiple distros in one build directory.
It's really a bummer that it's not reliably possible to switch between
DISTROs inside one build directory.

You might get away with setting TMPDIR = "tmp-${DISTRO}" to give each
its own.
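In local.conf, that suggestion could look something like this (a
sketch; the TOPDIR-relative path is an assumption, adjust to taste):

```
# one TMPDIR per distro, so build artifacts never mix
TMPDIR = "${TOPDIR}/tmp-${DISTRO}"
```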

But I'd rather advise to set up two separate builds and just point the
downloads and sstate-cache to the same location. It'll be faster than
the TMPDIR option.
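A minimal local.conf fragment for that two-build-directories setup
might look like this (the shared paths are made up, pick your own):

```
# shared across all build directories; safe to reuse between distros
DL_DIR = "/srv/yocto/downloads"
SSTATE_DIR = "/srv/yocto/sstate-cache"
```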

Or figure out how to put the difference in the IMAGE only. Then you can
just build both images (in parallel, woot), which is faster, more
convenient and saves on disk space.
As I'm currently working with something close to what you describe, I
think I'll try to stay away from multiple DISTROs if possible and
improve on what I'm already doing.


What I often do is have my-application.bb generate a
my-application-utils package that only gets installed in the "dev"
image but not in the production one, which only installs
"my-application".
This is probably what Martin meant with his A.bb and A-xx.bb example.
It's so far one of the best approaches I've seen, thanks.
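As a rough sketch of that split (package split and file names are
assumptions based on the example in this thread), my-application.bb
could carve the debug service into its own package, and only the dev
image would install it:

```
# in my-application.bb: split the debug service into ${PN}-utils
PACKAGES =+ "${PN}-utils"
FILES:${PN}-utils = "${systemd_system_unitdir}/application-dbg.service"

# in the dev image recipe only:
IMAGE_INSTALL:append = " my-application-utils"
```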


You could also create a "my-application-dev.bb" recipe that includes
my-application.bb and just changes what it needs to be different.
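For example, a my-application-dev.bb along these lines (a sketch only;
it assumes both recipes live in the same directory and reuses the
service file names from earlier in the thread):

```
require my-application.bb

# same sources, plus the debug service
SRC_URI += "file://application-dbg.service"

do_install:append() {
    install -m 0644 ${WORKDIR}/application-dbg.service ${D}${systemd_system_unitdir}
}
```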




Met vriendelijke groet / kind regards,

Mike Looijmans
System Expert


TOPIC Embedded Products B.V.
Materiaalweg 4, 5681 RJ Best
The Netherlands

T: +31 (0) 499 33 69 69
E: mike.looijmans@...
W: www.topic.nl

Please consider the environment before printing this e-mail
On 12-07-2022 15:37, Nicolas Jeker via lists.yoctoproject.org wrote:
Hi all,

I'm currently using an additional layer and image to differentiate
between a release and development build (enabling NFS, SSH, root login,
etc.). To create a development build, I manually add the layer to
bblayers.conf. This works quite well, but feels a bit clumsy to
integrate into a CI/CD pipeline.

Per these past discussions here [1][2], I'm now trying to migrate to
multiple DISTROs, something like "mydistro" and "mydistro-dev".

While migrating some of the changes, I discovered that I run into
caching(?) issues. I have a recipe for an internal application and want
to include additional systemd service files in the development image.

What I did was this:

Added "application-dbg.service" to recipes-internal/application/files

Adapted application.bb recipe:

SRC_URI:append:mydistro-dev = " file://application-dbg.service"

do_install() {
      # ...snip...
      # systemd service
      install -d ${D}${systemd_system_unitdir}
      install -m 0644 ${WORKDIR}/application.service ${D}${systemd_system_unitdir}
}

do_install:append:mydistro-dev() {
      # debug systemd services
      install -d ${D}${systemd_system_unitdir}
      install -m 0644 ${WORKDIR}/application-dbg.service ${D}${systemd_system_unitdir}
}


When I run "DISTRO=mydistro-dev bitbake application" followed by
"DISTRO=mydistro bitbake application", the debug service file is still
present in the package. This seems to be caused by the "image"
directory in the recipe WORKDIR not being cleaned between subsequent
do_install runs. Is this expected behaviour? What's the best solution?

Kind regards,
Nicolas

[1]: https://lore.kernel.org/yocto/CAH9dsRiArf_9GgQS4hCg5=J_Jk6cd3eiGaOiQd788+iSLTuU+g@mail.gmail.com/
[2]: https://lore.kernel.org/yocto/VI1PR0602MB3549F83AC93A53785DE48677D3FD9@VI1PR0602MB3549.eurprd06.prod.outlook.com/


--
Mike Looijmans


