
Yocto Project Newcomer & Unassigned Bugs - Help Needed

Stephen Jolley
 

All,

 

The triage team is starting to collect and classify bugs which a newcomer to the project would be able to work on, in a way that means people can find them. They're listed on the triage page under the appropriate heading:

https://wiki.yoctoproject.org/wiki/Bug_Triage#Newcomer_Bugs

Please also review how to submit a patch to OpenEmbedded: https://www.openembedded.org/wiki/How_to_submit_a_patch_to_OpenEmbedded and how to create a Bugzilla account: https://bugzilla.yoctoproject.org/createaccount.cgi

The idea is these bugs should be straightforward for a person without deep experience of the project to work on. If anyone can help, please take ownership of the bug and send patches! If anyone needs help or advice, there are people on IRC who can likely provide it, and some of the more experienced contributors will likely be happy to help too.

 

Also, the triage team meets weekly and does its best to handle the bugs reported into Bugzilla. The number of people attending that meeting has fallen, as has the number of people available to help fix bugs. One of the things we hear from users is that they don't know how to help. We (the triage team) are therefore going to start reporting on the currently 356 unassigned or newcomer bugs.

 

We're hoping people may be able to spare some time now and again to help out with these. Bugs are split into two types: "true bugs", where things don't work as they should, and "enhancements", which are features we'd want to add to the system. There are also roughly four different "priority" classes right now: "3.2", "3.3", "3.99" and "Future", with the more pressing/urgent issues being in "3.2" and then "3.3".

 

Please review this link and if a bug is something you would be able to help with either take ownership of the bug, or send me (sjolley.yp.pm@...) an e-mail with the bug number you would like and I will assign it to you (please make sure you have a Bugzilla account).  The list is at: https://wiki.yoctoproject.org/wiki/Bug_Triage_Archive#Unassigned_or_Newcomer_Bugs

 

Thanks,

 

Stephen K. Jolley

Yocto Project Program Manager

Cell: (208) 244-4460

Email: sjolley.yp.pm@...

 


Re: bitbake controlling memory use

Gmane Admin
 

On 12-04-2021 10:13, Robert Berger@... wrote:
Hi,
My comments are in-line.
On 11/04/2021 18:23, Gmane Admin wrote:
My build machine has 8 cores + HT so bitbake enthusiastically assumes 16. However I have (only?) 16GB of RAM (+24GB swap space).

16GB of RAM has always been more than enough with 4 cores + HT, but now building certain recipes (nodejs, but rust will probably do it as well) eats up more memory than there actually is RAM, causing excessive swapping.
The RAM usage depends heavily on what you are building ;) - It could be some crazy C++ stuff ;)
Yeah, C++. But apparently it's during the LTO phase that it eats my memory.

Is Linux your build host?
Yup.

Then you can use cgroups to limit resources like memory.
So then it would just fail the build?

Do you use a build container? It uses cgroups underneath.
Nope.

e.g. docker:
https://docs.docker.com/config/containers/resource_constraints/
Regards,
Robert


Re: bitbake controlling memory use

Gmane Admin
 

Hi,

On 12-04-2021 04:25, ChenQi wrote:
I think you can write a script to do all the WATCH-STOP-CONTINUE stuff.
e.g.
memwatch bitbake core-image-minimal
Options could be added.
e.g.
memwatch --interval 5 --logfile test.log bitbake core-image-minimal
This script first becomes a session leader, then forks to start the 'bitbake' command as its child process.
Then, every few seconds (say 10s by default), it checks the current memory usage and, according to its policy, determines whether to stop or continue some processes.
For stopping the process, you can first get all its child processes by simply using the 'ps' command.
e.g.
$ ps o vsize,comm,pid -s 28303 | sort -n -r
317284 emacs           12883
 28912 ps              36302
 26248 sort            36303
 21432 man             24797
 17992 bash            28303
  9852 pager           24807
   VSZ COMMAND           PID
Then skip the bitbake processes, stop the first one that uses the largest memory, record its PID.
For continuing processes, just resume them according to the records. Maybe using FILO by default?
Best Regards,
Chen Qi
Yeah, so we would have memwatch as a babysitter.

It would be nicer to have it built into bitbake, but this would work too.

On 04/11/2021 11:23 PM, Gmane Admin wrote:
My build machine has 8 cores + HT so bitbake enthusiastically assumes 16. However I have (only?) 16GB of RAM (+24GB swap space).

16GB of RAM has always been more than enough with 4 cores + HT, but now building certain recipes (nodejs, but rust will probably do it as well) eats up more memory than there actually is RAM, causing excessive swapping.

In fact the swapping is so excessive that just this single recipe largely determines the total image build time. However, restricting bitbake (or actually parallel make) to use only 4 cores + HT also sounds like a waste.

I know this has been looked at in the past, but I think it needs fundamental resolving (other than, hey, why not just add another stick of memory).

What I do manually when I run into swapping so bad my desktop becomes unresponsive is ssh from another machine and then stop (not kill) the processes that use the most memory.

These then get swapped out, but not back in, allowing the remaining tasks to complete without swapping. Then when sufficient memory becomes free again I continue the stopped processes.

Isn't this something that could be added to bitbake to automate using memory efficiently?




[meta-security][PATCH] Clearly define clang toolchain in Parsec recipes

Anton Antonov
 

Signed-off-by: Anton Antonov <Anton.Antonov@...>
---
.../recipes-parsec/parsec-service/parsec-service_0.7.0.bb | 4 ++--
meta-parsec/recipes-parsec/parsec-tool/parsec-tool_0.3.0.bb | 3 +--
2 files changed, 3 insertions(+), 4 deletions(-)

diff --git a/meta-parsec/recipes-parsec/parsec-service/parsec-service_0.7.0.bb b/meta-parsec/recipes-parsec/parsec-service/parsec-service_0.7.0.bb
index b3f7b21..0e14955 100644
--- a/meta-parsec/recipes-parsec/parsec-service/parsec-service_0.7.0.bb
+++ b/meta-parsec/recipes-parsec/parsec-service/parsec-service_0.7.0.bb
@@ -10,8 +10,8 @@ SRC_URI += "crate://crates.io/parsec-service/${PV} \
file://parsec-tmpfiles.conf \
"

-DEPENDS = "clang-native tpm2-tss"
-INSANE_SKIP_${PN} += "dev-deps"
+DEPENDS = "tpm2-tss"
+TOOLCHAIN = "clang"

CARGO_BUILD_FLAGS += " --features all-providers,cryptoki/generate-bindings,tss-esapi/generate-bindings"

diff --git a/meta-parsec/recipes-parsec/parsec-tool/parsec-tool_0.3.0.bb b/meta-parsec/recipes-parsec/parsec-tool/parsec-tool_0.3.0.bb
index 939e771..35c65c0 100644
--- a/meta-parsec/recipes-parsec/parsec-tool/parsec-tool_0.3.0.bb
+++ b/meta-parsec/recipes-parsec/parsec-tool/parsec-tool_0.3.0.bb
@@ -7,8 +7,7 @@ inherit cargo
SRC_URI += "crate://crates.io/parsec-tool/${PV} \
"

-DEPENDS = "clang-native"
-INSANE_SKIP_${PN} += "dev-deps"
+TOOLCHAIN = "clang"

do_install() {
install -d ${D}/${bindir}
--
2.20.1


Re: #golang: go fetches dependencies in compile phase

Alexander Kanavin
 

I'd suggest you place that tarball into some artefact storage, and have it listed in SRC_URI. Then the standard Yocto mechanism for fetching, checksumming, caching and unpacking tarballs kicks in, so you only need to make sure the tree is set up correctly after unpacking, maybe with some simple post-processing.

Alex
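As a rough sketch of Alex's suggestion, such a vendor tarball could be listed in the recipe alongside the main sources. The artefact URL, the subdir parameter and the checksum below are purely illustrative assumptions and would need adapting to the actual recipe layout:

```conf
# Hypothetical artefact storage URL - adjust host and paths to your setup
SRC_URI += "https://artifacts.example.com/${PN}-vendor-${PV}.tar.gz;name=vendor;subdir=${BP}/src/${GO_IMPORT}"
SRC_URI[vendor.sha256sum] = "<checksum of the vendor tarball>"
```

The fetcher then caches and checksums the tarball like any other source, and do_unpack drops the vendor tree into place, so only post-unpack tree layout may need fixing up.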


On Mon, 12 Apr 2021 at 16:02, Juergen Landwehr <juergen.landwehr@...> wrote:
Hi Alex,

OK, understood.

If the "local download cache path" is well-known (this is not by any chance $DL_DIR?), then we could create a tar from the vendor directory (which is created when you call "go mod vendor" and contains all the downloaded dependencies) and put this tar file into the download cache.

Before actually calling "go mod vendor", we would first check if there is a tar file for the vendor directory in the download cache and, if so, simply unpack the tar.

Does this make sense, or am I too naive and this is just nonsense?

Jürgen



#golang: go fetches dependencies in compile phase

Juergen Landwehr
 

Hi Alex,

OK, understood.

If the "local download cache path" is well-known (this is not by any chance $DL_DIR?), then we could create a tar from the vendor directory (which is created when you call "go mod vendor" and contains all the downloaded dependencies) and put this tar file into the download cache.

Before actually calling "go mod vendor", we would first check if there is a tar file for the vendor directory in the download cache and, if so, simply unpack the tar.

Does this make sense, or am I too naive and this is just nonsense?

Jürgen


Re: #golang: go fetches dependencies in compile phase

Alexander Kanavin
 

On Mon, 12 Apr 2021 at 13:47, Juergen Landwehr <juergen.landwehr@...> wrote:
But dependency management in go is not as arbitrary as it may seem. Dependencies and their versions are stored in "go.mod". To ensure reproducible builds, hashes for each dependency and version are stored in "go.sum". Both files are in git, and together with a local golang proxy this should ensure reproducible builds, right?

Reproducibility means anyone can run a build at any point in the future even if the upstream repositories are gone, so all inputs must be stored in a local download cache, which is the other thing SRC_URI guarantees, in addition to verifying integrity of the inputs.

Alex


#golang: go fetches dependencies in compile phase

Juergen Landwehr
 

Hi Robert,

thanks for your thoughts.

I see your point and the last thing I want is "NOT reproducible builds".

But dependency management in go is not as arbitrary as it may seem. Dependencies and their versions are stored in "go.mod". To ensure reproducible builds, hashes for each dependency and version are stored in "go.sum". Both files are in git, and together with a local golang proxy this should ensure reproducible builds, right?

To ensure that licenses are valid and did not change over time, we developed a little tool that goes through all dependencies and creates the following output, which we insert into our recipe:

LICENSE = "Apache-2.0 & MIT & MIT & BSD-2-Clause & BSD-3-Clause & ...

LIC_FILES_CHKSUM = " \
file://${S}/src/${GO_IMPORT}/vendor/github.com/coreos/go-oidc/LICENSE;md5=d2794c0df5b907fdace235a619d80314 \
file://${S}/src/${GO_IMPORT}/vendor/github.com/go-playground/locales/LICENSE;md5=3ccbda375ee345400ad1da85ba522301 \
file://${S}/src/${GO_IMPORT}/vendor/github.com/go-playground/universal-translator/LICENSE;md5=2e2b21ef8f61057977d27c727c84bef1 \
file://${S}/src/${GO_IMPORT}/vendor/github.com/godbus/dbus/v5/LICENSE;md5=09042bd5c6c96a2b9e45ddf1bc517eed \
file://${S}/src/${GO_IMPORT}/vendor/github.com/golang/gddo/LICENSE;md5=4c728948788b1a02f33ae4e906546eef \
...
This is a manual step, and whenever dependencies change we have to create this list again and add it to the recipe, but it contains all the required license information and makes it visible in the recipe.

It might pollute the recipe a bit, but luckily we do not have thousands of dependencies like some npm projects, so I think it is still manageable. And it is much less work than defining a recipe for each dependency.

So we would
* guarantee reproducible builds
* use the programming language (in our case golang) to handle dependency management
* ensure that licenses are visible and correct

I mean it is not perfect and still a compromise, but it should work without breaking reproducible builds or causing license issues, right?
What do you think?

Regards,
Jürgen

PS: Thanks for the link. I was not aware of this discussion (it is quite a bit to read:))


Re: #golang: go fetches dependencies in compile phase

Robert Berger
 

On 09/04/2021 16:53, Juergen Landwehr wrote:
Hi,
we are developing an application in go and use the "go-mod" bb-class from poky. This works fine.
However, the problem is that the dependencies in go.mod are now fetched in the compile phase and not in the fetch phase.
Alexander (whom you cc'ed here as well) wrote a long time ago on the mailing list[1] about this stuff.

We have a couple of issues with those "modern" languages like go, which behave quite differently from good old Unix stuff. So the paradigm on which the whole Bitbake/OpenEmbedded/Yocto system was built does not quite apply for those.

Don't get me wrong, you can also do stupidities like git clone via make or cmake, but this is most likely not the way those tools should be used.

1) "Guaranteed __NOT__ reproducible builds"(C)

Go pulls in dependencies by itself, which might explain why it's in the compile phase and not in the fetch phase.

This leads to many interesting issues, like "guaranteed __NOT__ reproducible builds"(C). The problem is that you don't have any influence on the dependencies of dependencies. Someone changes a dependency somewhere on github and the party starts.

1.1) One sane solution to "please give me always the same input" is what Konrad already mentioned in combination with a local golang proxy.

2) Open Source License Compliance

If you pull in all those funny dependencies and also link things statically, like with go, Open Source License Compliance is getting very interesting.

People typically just use the license of the top-level "main" entry point. In reality you would need to check the whole dependency tree for licenses. You should also check whether it's legally allowed to combine the licenses, and if so, what the implications of GPLvx, LGPLv2, or LGPLv3 without exceptions are for your product.

[1] https://www.yoctoproject.org/pipermail/yocto/2017-March/035028.html

Regards,

Robert


#golang: go fetches dependencies in compile phase

Juergen Landwehr
 

Hello Konrad,

thanks for your reply.

It is interesting that you mention vendoring, because we tried this approach as well. We started to develop a "go-mod-vendor.bbclass", which retrieves the
application source code in "do_fetch_append" and then calls "go mod vendor".

The big problem was that in that phase there is not yet a go executable available for the recipe, so we downloaded our own go version, which makes this approach a bit ugly.

But there is already a go version installed on the build host, right? In "poky/meta/recipes-devtools/go" I found a "go-native_1.14.bb" recipe.
So can we use this go executable somehow, by adding a dependency (e.g. DEPENDS = "go-native") and then using a path to the go executable that is guaranteed to work?

The other alternative (defining a recipe for each dependent go module) sounds really painful, especially as we have quite a few dependencies, which have dependencies as well.
I see the point that doing it that way makes it more visible and more Yocto-like, but thinking about it gives me headaches already :)

Thanks again.
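Regarding the go-native idea above: a recipe-level sketch might look like the fragment below. The task-dependency line and the sysroot path are assumptions about how go-native stages its binary, not verified against the class:

```conf
# Sketch only: make the native go toolchain available to fetch-time vendoring
DEPENDS += "go-native"
do_fetch[depends] += "go-native:do_populate_sysroot"
# the staged binary would then be invoked as:
#   ${STAGING_BINDIR_NATIVE}/go mod vendor
```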


Re: bitbake controlling memory use

Mike Looijmans
 

Met vriendelijke groet / kind regards,

Mike Looijmans
System Expert


TOPIC Embedded Products B.V.
Materiaalweg 4, 5681 RJ Best
The Netherlands

T: +31 (0) 499 33 69 69
E: mike.looijmans@...
W: www.topic.nl

Please consider the environment before printing this e-mail
On 11-04-2021 17:23, Gmane Admin via lists.yoctoproject.org wrote:
My build machine has 8 cores + HT so bitbake enthusiastically assumes 16. However I have (only?) 16GB of RAM (+24GB swap space).
Been there, done that (8/16 cores with 8GB RAM).

The major cause is that bitbake will not just spawn 16 compiler threads, it will actually spawn up to 16 "do_compile" tasks which may spawn 16 compiler processes each; thus you'll be running a whopping 256 compilers. Bitbake may spawn a total of n^2 processes by default, with "n" the detected number of cores.

The workaround I used was to limit the number of bitbake threads but leave the make threads at 16. Since most software these days is using parallel make, the impact on the build time of reducing the bb threads to 8 or 4 is negligible.
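That workaround translates to two lines in local.conf (the exact numbers are machine-dependent and purely illustrative):

```conf
# Fewer concurrent bitbake tasks, but each make still gets all cores
BB_NUMBER_THREADS = "4"
PARALLEL_MAKE = "-j 16"
```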

As for translating this into a bitbake feature, an implementation that comes to mind is to add a "weight" to each task. Most tasks would get a weight of just "1", but a (multithreaded) compile would be weighted "8" (some formula involving the number of CPUs) or so. This would stop bitbake spawning more processes early, so that you don't get into the n^2 problem, and the number of processes that bitbake will spawn will be close to linear again instead of quadratic as it is now.

Recipes that have excessive memory loads (usually C++ template meta-programming) can then just increase the weighting for the compile task.


--
Mike Looijmans


Re: bitbake controlling memory use

Robert Berger
 

Hi,

My comments are in-line.

On 11/04/2021 18:23, Gmane Admin wrote:
My build machine has 8 cores + HT so bitbake enthusiastically assumes 16. However I have (only?) 16GB of RAM (+24GB swap space).
16GB of RAM has always been more than enough with 4 cores + HT, but now building certain recipes (nodejs, but rust will probably do it as well) eats up more memory than there actually is RAM, causing excessive swapping.
The RAM usage depends heavily on what you are building ;) - It could be some crazy C++ stuff ;)

Is Linux your build host?

Then you can use cgroups to limit resources like memory.

Do you use a build container? It uses cgroups underneath.

e.g. docker:

https://docs.docker.com/config/containers/resource_constraints/

Regards,

Robert


Re: bitbake controlling memory use

Chen Qi
 

I think you can write a script to do all the WATCH-STOP-CONTINUE stuff.
e.g.
memwatch bitbake core-image-minimal
Options could be added.
e.g.
memwatch --interval 5 --logfile test.log bitbake core-image-minimal

This script first becomes a session leader, then forks to start the 'bitbake' command as its child process.
Then, every few seconds (say 10s by default), it checks the current memory usage and, according to its policy, determines whether to stop or continue some processes.

For stopping the process, you can first get all its child processes by simply using the 'ps' command.
e.g.
$ ps o vsize,comm,pid -s 28303 | sort -n -r
317284 emacs           12883
 28912 ps              36302
 26248 sort            36303
 21432 man             24797
 17992 bash            28303
  9852 pager           24807
   VSZ COMMAND           PID

Then skip the bitbake processes, stop the first one that uses the largest memory, record its PID.

For continuing processes, just resume them according to the records. Maybe using FILO by default?

Best Regards,
Chen Qi

On 04/11/2021 11:23 PM, Gmane Admin wrote:
My build machine has 8 cores + HT so bitbake enthusiastically assumes 16. However I have (only?) 16GB of RAM (+24GB swap space).

16GB of RAM has always been more than enough with 4 cores + HT, but now building certain recipes (nodejs, but rust will probably do it as well) eats up more memory than there actually is RAM, causing excessive swapping.

In fact the swapping is so excessive that just this single recipe largely determines the total image build time. However, restricting bitbake (or actually parallel make) to use only 4 cores + HT also sounds like a waste.

I know this has been looked at in the past, but I think it needs fundamental resolving (other than, hey, why not just add another stick of memory).

What I do manually when I run into swapping so bad my desktop becomes unresponsive is ssh from another machine and then stop (not kill) the processes that use the most memory.

These then get swapped out, but not back in, allowing the remaining tasks to complete without swapping. Then when sufficient memory becomes free again I continue the stopped processes.

Isn't this something that could be added to bitbake to automate using memory efficiently?
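The WATCH-STOP-CONTINUE babysitter Chen Qi describes can be sketched in Python. This is a minimal illustration, not an existing tool: the memwatch policy, the 90% threshold and the helper names are all assumptions:

```python
import os
import signal
import subprocess
import time

def pick_victim(ps_lines, skip=("bitbake",)):
    """From `ps o vsize,comm,pid` output lines, return (vsize, pid) of the
    largest-memory process whose command is not in `skip`, or None."""
    best = None
    for line in ps_lines:
        parts = line.split()
        # skip the header line and anything malformed
        if len(parts) != 3 or not parts[0].isdigit():
            continue
        vsz, comm, pid = int(parts[0]), parts[1], int(parts[2])
        if comm in skip:
            continue
        if best is None or vsz > best[0]:
            best = (vsz, pid)
    return best

def memory_usage_percent():
    """Used-memory percentage from /proc/meminfo (Linux only)."""
    info = {}
    with open("/proc/meminfo") as f:
        for line in f:
            key, value = line.split(":", 1)
            info[key] = int(value.split()[0])
    return 100.0 * (1 - info["MemAvailable"] / info["MemTotal"])

def babysit(session_pid, interval=10, threshold=90.0):
    """Every `interval` seconds, SIGSTOP the biggest non-bitbake process in
    the session while memory usage exceeds `threshold` percent; resume
    stopped PIDs FILO when usage drops again."""
    stopped = []
    while True:
        time.sleep(interval)
        ps = subprocess.run(
            ["ps", "o", "vsize,comm,pid", "-s", str(session_pid)],
            capture_output=True, text=True).stdout.splitlines()
        if memory_usage_percent() > threshold:
            victim = pick_victim(ps)
            if victim:
                os.kill(victim[1], signal.SIGSTOP)
                stopped.append(victim[1])
        elif stopped:
            os.kill(stopped.pop(), signal.SIGCONT)  # FILO resume
```

Run against the sample ps output from the thread, pick_victim would select emacs (the largest non-bitbake entry) as the first process to stop.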







Re: bitbake controlling memory use

Alexander Kanavin
 

make already has a -l option for limiting new instances if the load average is too high, so it's only natural to add a RAM limiter too.

  -l [N], --load-average[=N], --max-load[=N]
                              Don't start multiple jobs unless load is below N.

In any case, patches welcome :)

Alex
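If make's load limiter is enough for a given machine, it can already be applied today through local.conf; the -l value below is illustrative (it caps new jobs when the load average exceeds it):

```conf
PARALLEL_MAKE = "-j 16 -l 12"
```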


On Sun, 11 Apr 2021 at 18:08, Gmane Admin <gley-yocto@m.gmane-mx.org> wrote:
On 11-04-2021 17:55, Alexander Kanavin wrote:
> On Sun, 11 Apr 2021 at 17:49, Gmane Admin <gley-yocto@m.gmane-mx.org
> <mailto:gley-yocto@m.gmane-mx.org>> wrote:
>
>     Yes, and make project doesn't care, because make is called with -j
>     16 so
>     that is what it does.
>
>     So here's my pitch: bitbake can stop processes spawned by make, because
>     it knows that it started make on 4 recipes, each with -j 16. The
>     individual makes don't know about each other.
>
>
> And neither they should. They can simply abstain from spawning new
> compilers if used RAM is, say, at 90% total. Then bitbake does not have
> to get involved in babysitting those makes.
>
> Alex
Bitbake does a lot of babysitting anyway :-) And is pretty good at it too.

To me, fixing make et al. is more work and less effective than adding a
feature to bitbake. The only way to know how much memory each spawned
compiler will use is to let it run. And then it's too late.

This memory issue is all over our ecosystem and nobody cares (kernel,
make etc.). The only thing moving is systemd's OOM killer, which will
arrive and start killing processes. So that will just stop our builds
from completing.

Yeah, I prefer a babysitter over a child murderer :-)

Ferry





Re: bitbake controlling memory use

Gmane Admin
 

On 11-04-2021 17:55, Alexander Kanavin wrote:
On Sun, 11 Apr 2021 at 17:49, Gmane Admin <gley-yocto@m.gmane-mx.org <mailto:gley-yocto@m.gmane-mx.org>> wrote:
Yes, and make project doesn't care, because make is called with -j
16 so
that is what it does.
So here's my pitch: bitbake can stop processes spawned by make, because
it knows that it started make on 4 recipes, each with -j 16. The
individual makes don't know about each other.
And neither they should. They can simply abstain from spawning new compilers if used RAM is, say, at 90% total. Then bitbake does not have to get involved in babysitting those makes.
Alex
Bitbake does a lot of babysitting anyway :-) And is pretty good at it too.

To me, fixing make et al. is more work and less effective than adding a feature to bitbake. The only way to know how much memory each spawned compiler will use is to let it run. And then it's too late.

This memory issue is all over our ecosystem and nobody cares (kernel, make etc.). The only thing moving is systemd's OOM killer, which will arrive and start killing processes. So that will just stop our builds from completing.

Yeah, I prefer a babysitter over a child murderer :-)

Ferry


Re: bitbake controlling memory use

Alexander Kanavin
 

On Sun, 11 Apr 2021 at 17:49, Gmane Admin <gley-yocto@m.gmane-mx.org> wrote:
Yes, and make project doesn't care, because make is called with -j 16 so
that is what it does.

So here's my pitch: bitbake can stop processes spawned by make, because
it knows that it started make on 4 recipes, each with -j 16. The
individual makes don't know about each other.

And neither they should. They can simply abstain from spawning new compilers if used RAM is, say, at 90% total. Then bitbake does not have to get involved in babysitting those makes.

Alex


Re: bitbake controlling memory use

Gmane Admin
 

On 11-04-2021 17:27, Alexander Kanavin wrote:
This has been discussed several times in the past - the conclusion usually is that first this needs to be addressed upstream in make and ninja, e.g. so that they support throttling new compiler instances when memory gets tight. Once that is available, bitbake can follow by throttling its own tasks as well.
Alex
Yes, and make project doesn't care, because make is called with -j 16 so that is what it does.

So here's my pitch: bitbake can stop processes spawned by make, because it knows that it started make on 4 recipes, each with -j 16. The individual makes don't know about each other.

And normally it's not an issue (compiling C), but sometimes when compiling C++ it explodes (I saw 4GB of RAM in use for one g++, and there were 4 of them).

On Sun, 11 Apr 2021 at 17:23, Gmane Admin <gley-yocto@m.gmane-mx.org <mailto:gley-yocto@m.gmane-mx.org>> wrote:
My build machine has 8 cores + HT so bitbake enthusiastically assumes
16. However I have (only?) 16GB of RAM (+24GB swap space).
16GB of RAM has always been more than enough with 4 cores + HT, but now
building certain recipes (nodejs, but rust will probably do it as well)
eats up more memory than there actually is RAM, causing excessive
swapping.
In fact the swapping is so excessive that just this single recipe
largely determines the total image build time. However, restricting
bitbake (or actually parallel make) to use only 4 cores + HT also
sounds like a waste.
I know this has been looked at in the past, but I think it needs
fundamental resolving (other than, hey, why not just add another
stick of memory).
What I do manually when I run into swapping so bad my desktop becomes
unresponsive is ssh from another machine and then stop (not kill) the
processes that use the most memory.
These then get swapped out, but not back in, allowing the remaining
tasks to complete without swapping. Then when sufficient memory becomes
free again I continue the stopped processes.
Isn't this something that could be added to bitbake to automate using
memory efficiently?


Re: bitbake controlling memory use

Alexander Kanavin
 

This has been discussed several times in the past - the conclusion usually is that first this needs to be addressed upstream in make and ninja, e.g. so that they support throttling new compiler instances when memory gets tight. Once that is available, bitbake can follow by throttling its own tasks as well.

Alex


On Sun, 11 Apr 2021 at 17:23, Gmane Admin <gley-yocto@m.gmane-mx.org> wrote:
My build machine has 8 cores + HT so bitbake enthusiastically assumes
16. However I have (only?) 16GB of RAM (+24GB swap space).

16GB of RAM has always been more than enough with 4 cores + HT, but now
building certain recipes (nodejs, but rust will probably do it as well)
eats up more memory than there actually is RAM, causing excessive swapping.

In fact the swapping is so excessive that just this single recipe
largely determines the total image build time. However, restricting
bitbake (or actually parallel make) to use only 4 cores + HT also
sounds like a waste.

I know this has been looked at in the past, but I think it needs
fundamental resolving (other than, hey, why not just add another
stick of memory).

What I do manually when I run into swapping so bad my desktop becomes
unresponsive is ssh from another machine and then stop (not kill) the
processes that use the most memory.

These then get swapped out, but not back in, allowing the remaining
tasks to complete without swapping. Then when sufficient memory becomes
free again I continue the stopped processes.

Isn't this something that could be added to bitbake to automate using
memory efficiently?





bitbake controlling memory use

Gmane Admin
 

My build machine has 8 cores + HT so bitbake enthusiastically assumes 16. However I have (only?) 16GB of RAM (+24GB swap space).

16GB of RAM has always been more than enough with 4 cores + HT, but now building certain recipes (nodejs, but rust will probably do it as well) eats up more memory than there actually is RAM, causing excessive swapping.

In fact the swapping is so excessive that just this single recipe largely determines the total image build time. However, restricting bitbake (or actually parallel make) to use only 4 cores + HT also sounds like a waste.

I know this has been looked at in the past, but I think it needs fundamental resolving (other than, hey, why not just add another stick of memory).

What I do manually when I run into swapping so bad my desktop becomes unresponsive is ssh from another machine and then stop (not kill) the processes that use the most memory.

These then get swapped out, but not back in, allowing the remaining tasks to complete without swapping. Then when sufficient memory becomes free again I continue the stopped processes.

Isn't this something that could be added to bitbake to automate using memory efficiently?


Re: #golang: go fetches dependencies in compile phase

Konrad Weihmann <kweihmann@...>
 

Well, AFAIK your observation is correct; go-mod.bbclass doesn't work in a BB_NO_NETWORK-safe manner.
What you could do is provide all the dependencies manually and add them via DEPENDS, which should grant the expected behavior.

The bad thing about go is that chances of circular dependencies of go modules are relatively high.

I've also seen a few approaches using vendoring to escape this, but I'm afraid none of that code is well tested or publicly available.

If you'd ask me, I would go the long painful road of providing each required go module as a recipe of its own and injecting it into the final recipe - this also makes it pretty visible what you're pulling into your build in terms of licensing and so on.

On 09.04.21 15:53, Juergen Landwehr wrote:
Hi,
we are developing an application in go and use the "go-mod" bb-class from poky. This works fine.
However, the problem is, that the dependencies in go.mod are now fetched in the compile phase and not in the fetch phase.
This is not allowed in our project setup and kind of contradicts the Yocto paradigmn of having a fetch and a compile phase.
Is there a way to avoid this or some other solution that we could use?
Thanks,
Jürgen
