Re: bitbake controlling memory use
Hi Alex,

On 03-01-2023 at 16:37, Alexander Kanavin wrote:
> On Tue, 3 Jan 2023 at 16:24, Alexander Kanavin via lists.yoctoproject.org <alex.kanavin=gmail.com@...> wrote:
> Sorry guys, but this is venturing into the ridiculous. If you embark on building a whole Linux distribution, you need to start with the correct tools for it. I mean, one can use a virtual machine inside a Windows laptop if that's all they've got. But then you should provide them with an sstate cache server, which fulfils the same role as binary package feeds on traditional Linux distros in that case.
> Alex

You are completely right for professional use cases. Another way would be to give ssh access to a sufficiently large build server. But the Yocto Project (to me) is also the correct tool for smaller (hobbyist?) projects. These smaller projects don't build the world of packages and are typically built on a private laptop (bwugh) or desktop. Some of these packages do require a lot of resources: nodejs is one, rust another. That's not the fault of Yocto of course. What these projects show is that bitbake is a bit enthusiastic in some cases and tries to build multiple packages simultaneously that are already hard to build by themselves. I think it's a fundamental, but not necessarily high-priority, problem.

I know you don't want to spend time on this. So don't. Others may have an interest or a tip to improve Yocto in this area. I'm just trying to get an old patch from Richard Purdie to work (http://git.yoctoproject.org/cgit.cgi/poky-contrib/commit/?h=rpurdie/wipqueue4&id=d66a327fb6189db5de8bc489859235dcba306237) with later fixes by Trevor Gamblin. Or alternatively something based on https://github.com/olsner/jobclient.
Re: Remove kernel image and modules from rootfs
On Mon, Jan 02, 2023 at 05:11:35PM +0100, Quentin Schulz wrote: but this is now deprecated for kirkstone and should be done this way:
RRECOMMENDS:${KERNEL_PACKAGE_NAME}-base = ""
This makes sense, I'll send a patch updating the documentation to reflect this change. I thought we had already discussed this and someone sent a patch, but it doesn't seem so :/
Thank you :-)

> So I believe you need to add:
> MACHINE_EXTRA_RRECOMMENDS:beaglebone-yocto = ""
> MACHINE_ESSENTIAL_EXTRA_RDEPENDS:remove:beaglebone-yocto = "kernel-image kernel-devicetree"
> RRECOMMENDS:${KERNEL_PACKAGE_NAME}-base = ""
> to your local.conf

Dear Quentin, this is correct. It worked this way. I admit, I was not aware that the config snippets I copied need to be modified for local.conf in such a way that the machine name has to be appended!

> I suggest you create your own machine configuration file which requires beaglebone-yocto.conf where you'll be able to set:
> MACHINE_EXTRA_RRECOMMENDS = ""
> MACHINE_ESSENTIAL_EXTRA_RDEPENDS = ""
> RRECOMMENDS:${KERNEL_PACKAGE_NAME}-base = ""

Meanwhile I did something similar: I cloned the machine beaglebone-yocto into my tree with those modifications on top, and this works too. Thanks for your enormously useful hint about the modification needed for local.conf, and for the tip to create a machine, inherit beaglebone-yocto and modify those three variables.

> You can check the value of a variable by running bitbake-getvar -r virtual/kernel MACHINE_EXTRA_RRECOMMENDS for example.

Thank you, very useful. Meanwhile I found out that for my purpose it could be useful to do something like CORE_IMAGE_BASE_INSTALL:remove = " packagegroup-core-boot packagegroup-base-extended" in the image config, which might be even more useful for my use case. I can now approach my goal with local.conf, an additional machine, or the CORE_IMAGE_BASE_INSTALL modification.

One question though: can the MACHINE variable only be modified in local.conf (the reference manual glossary does not mention other places)? If I go with the additional-machine approach, I am searching for a way to build different images in my distro based on different machines. Is that possible?

Kind Regards
Konstantin Kletschke

--
INSIDE M2M GmbH
Konstantin Kletschke
Berenbosteler Straße 76 B
30823 Garbsen

Telefon: +49 (0) 5137 90950136
Mobil: +49 (0) 151 15256238
Fax: +49 (0) 5137 9095010

konstantin.kletschke@...
http://www.inside-m2m.de

Geschäftsführung: Michael Emmert, Ingo Haase, Dr. Fred Könemann, Derek Uhlig
HRB: 111204, AG Hannover
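A minimal sketch of the machine-clone approach discussed above; the layer path and machine name here are illustrative assumptions, e.g. meta-insidem2m/conf/machine/beaglebone-docker.conf:

require conf/machine/beaglebone-yocto.conf

MACHINE_EXTRA_RRECOMMENDS = ""
MACHINE_ESSENTIAL_EXTRA_RDEPENDS = ""
RRECOMMENDS:${KERNEL_PACKAGE_NAME}-base = ""

As for setting MACHINE outside local.conf: with poky's default environment passthrough it can usually be set per invocation, so different machines (and hence different images) can be built from the same tree:

MACHINE=beaglebone-docker bitbake insidem2m-s
MACHINE=beaglebone-yocto bitbake insidem2m-s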
Yocto Project Status 3 January 2023 (WW01)
Current Dev Position: YP 4.2 M2
Next Deadline: 23rd January 2023 YP 4.2 M2 Build
Next Team Meetings:
Key Status/Updates:
- YP 4.2 M1 and YP 4.0.6 were released.
- Builds on the autobuilder are problematic at present due to connection losses with the controller machine. We understand the cause and will be migrating the controller to a new server/location in the next maintenance window to address this.
- A number of invasive bitbake changes have merged including a change to add threading to bitbake’s server/cooker. This will allow us to attempt to resolve various UI/server hang related open bugs and to improve the user experience, e.g. Ctrl+C handling. If people do see issues with hangs, please report them and include the tail end of the bitbake-cookerdaemon.log file.
- The bitbake cache changes to optionally include more hash debugging information did merge. This currently triggers only for bitbake -S operations, but once that mode is triggered for a memory-resident bitbake, it is “sticky” and will remain set until it unloads. We have seen some OOM issues on autobuilder workers which may be related to this; more debugging is needed to check on bitbake’s memory usage.
- Automatic bitbake function library dependency code has also merged and is active. This should let us migrate class code to become library code for performance improvements.
- A bug in the layer.conf variable changes was identified which was causing re-parsing when it wasn’t necessary. A fix has been merged to address that.
- The OE TSC decided in favour of adding export flag expansion despite the performance impact, so that patch will be queued.
- We merged a number of neat rust improvements which add on-target cargo support, and added automated QA tests for this functionality so we can ensure it continues to work and doesn’t regress. Thanks Alex Kiernan, we hope to see more series like this one!
- CVE levels in master have improved but still need a little work.
- The autobuilder maintenance window is moving from Friday to Monday.
- We have a growing number of bugs in bugzilla, any help with them is appreciated.
Ways to contribute:
- As people are likely aware, the project has a number of components which are either unmaintained, or have people with little to no time trying to keep them alive. These components include: patchtest, layerindex, devtool, toaster, wic, oeqa, autobuilder, CROPs containers, pseudo and more. Many have open bugs. Help is welcome in trying to better look after these components!
- There are bugs identified as possible for newcomers to the project: https://wiki.yoctoproject.org/wiki/Newcomers
- There are bugs that are currently unassigned for YP 4.2. See: https://wiki.yoctoproject.org/wiki/Bug_Triage#Medium.2B_4.2_Unassigned_Enhancements.2FBugs
- We’d welcome new maintainers for recipes in OE-Core. Please see the list at: http://git.yoctoproject.org/cgit.cgi/poky/tree/meta/conf/distro/include/maintainers.inc and discuss with the existing maintainer, or ask on the OE-Core mailing list. We will likely move a chunk of these to “Unassigned” soon to help facilitate this.
- Help is very much welcome in trying to resolve our autobuilder intermittent issues. You can see the list of failures we’re continuing to see by searching for the “AB-INT” tag in bugzilla: https://bugzilla.yoctoproject.org/buglist.cgi?quicksearch=AB-INT.
- Help us resolve CVE issues: CVE metrics
YP 4.2 Milestone Dates:
- YP 4.2 M1 is released
- YP 4.2 M2 build date 2023/01/23
- YP 4.2 M2 Release date 2023/02/03
- YP 4.2 M3 build date 2023/02/20
- YP 4.2 M3 Release date 2023/03/03
- YP 4.2 M4 build date 2023/04/03
- YP 4.2 M4 Release date 2023/04/28
Upcoming dot releases:
- YP 4.0.6 is released
- YP 4.1.2 build date 2023/01/09
- YP 4.1.2 Release date 2023/01/20
- YP 3.1.22 build date 2023/01/16
- YP 3.1.22 Release date 2023/01/27
- YP 4.0.7 build date 2023/01/30
- YP 4.0.7 Release date 2023/02/10
- YP 3.1.23 build date 2023/02/13
- YP 3.1.23 Release date 2023/02/24
- YP 4.0.8 build date 2023/02/27
- YP 4.0.8 Release date 2023/03/10
- YP 4.1.3 build date 2023/03/06
- YP 4.1.3 Release date 2023/03/17
- YP 3.1.24 build date 2023/03/20
- YP 3.1.24 Release date 2023/03/31
- YP 4.0.9 build date 2023/04/10
- YP 4.0.9 Release date 2023/04/21
- YP 4.1.4 build date 2023/05/01
- YP 4.1.4 Release date 2023/05/13
- YP 3.1.25 build date 2023/05/08
- YP 3.1.25 Release date 2023/05/19
- YP 4.0.10 build date 2023/05/15
- YP 4.0.10 Release date 2023/05/26
Tracking Metrics:
The Yocto Project’s technical governance is through its Technical Steering Committee, more information is available at: https://wiki.yoctoproject.org/wiki/TSC
The Status reports are now stored on the wiki at: https://wiki.yoctoproject.org/wiki/Weekly_Status
[If anyone has suggestions for other information you’d like to see on this weekly status update, let us know!]
Thanks,
Stephen K. Jolley, Yocto Project Program Manager
Cell: (208) 244-4460, Email: sjolley.yp.pm@...
Re: bitbake controlling memory use
On Tue, Jan 3, 2023 at 3:29 PM Ferry Toth <fntoth@...> wrote: On 03-01-2023 at 15:18, Alexander Kanavin wrote:
> I have to note that even the most expensive 16 GB RAM module is less
> than 100 Euro, and can be obtained for half that much. Surely you
> value your time more than that?
Of course. And if I didn't, I could lower `PARALLEL_MAKE`.
But I have also seen bitbake attempting to build nodejs, nodejs-native
and binutils in parallel.
I think by default there are 4 tasks, each with 16 threads, and each thread could require 1 GB RAM. Would you say that in such a case Yocto requires 64 GB RAM?
And with increasing #cores it gets worse.
Not that it is too expensive to throw hardware at it, but it seems to be
a fundamental problem that I would like to resolve. In my spare time of
course.
Hi Ferry,
don't get discouraged from working on jobserver-related issues by some remarks. Yes, your HW configuration is a bit extreme, but similar issues are easily reproducible on beefier HW as well (I see it often enough on a 32c/64t AMD 3970X with 128 GB RAM).
bitbake's pressure regulation helps a bit, but it's often triggered too late and is also a bit too coarse-grained, given that a single chromium do_compile task can take over an hour.
I agree that a make jobserver (or the similar solution proposed for ninja by Andrei: https://gitlab.eclipse.org/eclipse/oniro-core/oniro/-/merge_requests/196/commits ) should be the more efficient solution, as it will hold off additional "shorter" jobs only during the "critical" time when some other jobs take all resources, while bitbake's pressure regulation cannot know that triggering the nodejs + nodejs-native + chromium do_compile tasks might end badly, because the pressure was still low when these 3 were queued for execution.
Also check this thread about pressure regulation in bitbake-devel:
Cheers,
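For reference, the pressure regulation discussed here is configured via bitbake variables in local.conf along these lines; the threshold values below are illustrative assumptions, not recommendations:

# Hold back new tasks when /proc/pressure deltas exceed these limits
BB_PRESSURE_MAX_MEMORY = "10000"
BB_PRESSURE_MAX_CPU = "500000"
BB_PRESSURE_MAX_IO = "10000"

As noted above, this only throttles the start of new tasks; it cannot preempt tasks that were already queued while pressure was still low.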
Re: bitbake controlling memory use
On Tue, 3 Jan 2023 at 16:24, Alexander Kanavin via lists.yoctoproject.org <alex.kanavin=gmail.com@...> wrote: Sorry guys, but this is venturing into the ridiculous. If you embark on building a whole Linux distribution, you need to start with the correct tools for it. I mean, one can use a virtual machine inside a Windows laptop if that's all they've got. But then you should provide them with an sstate cache server, which fulfils the same role as binary package feeds on traditional Linux distros in that case. Alex
Re: bitbake controlling memory use
On Tue, 3 Jan 2023 at 16:12, Ferry Toth <fntoth@...> wrote: Yes, and I note that many people trying to build my image (meta-intel-edison) do so from a virtual machine as they only have a Windows host. Sorry guys, but this is venturing into the ridiculous. If you embark on building a whole Linux distribution, you need to start with the correct tools for it. Alex
Re: bitbake controlling memory use
Hi Janne,

On 03-01-2023 at 16:03, Janne Kiiskilä wrote:
> Hi,
> Isn't the easiest solution to do the actual full build in steps, rather than building the whole thing at once? So, instead of doing a full
> bitbake <image>
> do
> bitbake nodejs
> bitbake binutils
> bitbake rust (if you happen to need it)

Sure, that is easy enough. Setting PARALLEL_MAKE is easy enough too. I was hoping to find a fundamental solution. Yeah, building rust is a resource hog too; I didn't build that for a long time.

> and only after that do
> bitbake <image>
> That way you can control the memory load and build it piecemeal to avoid out-of-memory situations.
> For cheapness of memory - it's not always that straightforward: a) some cloud build machines might go insanely up in price if you add memory (it usually then adds cores too), or b) some laptops etc. might simply not have any memory slots available, so you can't expand even if you want to.

Yes, and I note that many people trying to build my image (meta-intel-edison) do so from a virtual machine as they only have a Windows host. And then there are even more limitations.

> Best Regards,
> Janne Kiiskilä
-----Original Message----- From: yocto@... <yocto@...> On Behalf Of Ferry Toth via lists.yoctoproject.org Sent: Tuesday, 3 January 2023 16:41 To: Quentin Schulz <quentin.schulz@...>; Alexander Kanavin <alex.kanavin@...> Cc: yocto@... Subject: Re: [yocto] bitbake controlling memory use
Hi Quentin,
On 03-01-2023 at 15:36, Quentin Schulz wrote:
Hi Ferry,
On 1/3/23 15:29, Ferry Toth wrote:
Hi Alex,
On 03-01-2023 at 15:18, Alexander Kanavin wrote:
I have to note that even the most expensive 16 GB RAM module is less than 100 Euro, and can be obtained for half that much. Surely you value your time more than that? Of course. And if I didn't, I could lower `PARALLEL_MAKE`.
But I have also seen bitbake attempting to build nodejs, nodejs-native and binutils in parallel.
I think by default there are 4 tasks, each with 16 threads, and each thread could require 1 GB RAM. Would you say that in such a case Yocto requires 64 GB RAM?
And with increasing #cores it gets worse.
Not that it is too expensive to throw hardware at it, but it seems to be a fundamental problem that I would like to resolve. In my spare time of course.
Just to add that bitbake now supports pressure thresholds (since the Kirkstone release I believe): https://docs.yoctoproject.org/bitbake/bitbake-user-manual/bitbake-user-manual-ref-variables.html#term-BB_PRESSURE_MAX_MEMORY
If your recipes put enough pressure on RAM before one or two of nodejs, nodejs-native and binutils get scheduled, it would prevent that. However, I believe if the timing is just right (unfortunate) and there's not enough pressure when all three recipes' do_compile tasks start, they would all start and you would have the same issue.
Exactly
Cheers, Quentin
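When the pressure timing misses like this, a blunt fallback is clamping the known memory hogs per recipe in local.conf; a sketch with assumed job counts:

# Global defaults
BB_NUMBER_THREADS = "8"
PARALLEL_MAKE = "-j 16"
# Clamp recipes whose compile jobs each need ~1 GB
PARALLEL_MAKE:pn-nodejs = "-j 4"
PARALLEL_MAKE:pn-nodejs-native = "-j 4"

This doesn't stop bitbake from scheduling the hogs concurrently; it only caps how many make jobs each one spawns.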
Re: bitbake controlling memory use
Hi,
Isn't the easiest solution to do the actual full build in steps, rather than building the whole thing at once? So, instead of doing a
full bitbake <image>
Do
bitbake nodejs
bitbake binutils
bitbake rust (if you happen to need it)
and only after that do
bitbake <image>
That way you can control the memory load and build it piecemeal to avoid out-of-memory situations.
For cheapness of memory - it's not always that straightforward: a) some cloud build machines might go insanely up in price if you add memory (it usually then adds cores too), or b) some laptops etc. might simply not have any memory slots available, so you can't expand even if you want to.
Best Regards,
Janne Kiiskilä
-----Original Message----- From: yocto@... <yocto@...> On Behalf Of Ferry Toth via lists.yoctoproject.org Sent: Tuesday, 3 January 2023 16:41 To: Quentin Schulz <quentin.schulz@...>; Alexander Kanavin <alex.kanavin@...> Cc: yocto@... Subject: Re: [yocto] bitbake controlling memory use Hi Quentin, On 03-01-2023 at 15:36, Quentin Schulz wrote: Hi Ferry,
On 1/3/23 15:29, Ferry Toth wrote:
Hi Alex,
On 03-01-2023 at 15:18, Alexander Kanavin wrote:
I have to note that even the most expensive 16 GB RAM module is less than 100 Euro, and can be obtained for half that much. Surely you value your time more than that? Of course. And if I didn't, I could lower `PARALLEL_MAKE`.
But I have also seen bitbake attempting to build nodejs, nodejs-native and binutils in parallel.
I think by default there are 4 tasks, each with 16 threads, and each thread could require 1 GB RAM. Would you say that in such a case Yocto requires 64 GB RAM?
And with increasing #cores it gets worse.
Not that it is too expensive to throw hardware at it, but it seems to be a fundamental problem that I would like to resolve. In my spare time of course.
Just to add that bitbake now supports pressure thresholds (since the Kirkstone release I believe): https://docs.yoctoproject.org/bitbake/bitbake-user-manual/bitbake-user-manual-ref-variables.html#term-BB_PRESSURE_MAX_MEMORY
If your recipes put enough pressure on RAM before one or two of nodejs, nodejs-native and binutils get scheduled, it would prevent that. However, I believe if the timing is just right (unfortunate) and there's not enough pressure when all three recipes' do_compile tasks start, they would all start and you would have the same issue.
Exactly Cheers, Quentin
Hi Ron, On 12/24/22 20:17, Mistyron wrote: On 2022-12-24 08:38, Markus Volk wrote:
On Sat, 24 Dec 2022 at 08:34:49 -0800, Mistyron <ron.eggler@...> wrote:
> $ grep usb path/to/package.manifest
> libusb-1.0-0
> usbutils
> usbutils-python

lsusb is part of usbutils, so it is not explicitly listed, but should be included in your image.

Oh, I think the cleanall didn't wipe the cache sufficiently; after I deleted the contents of sstate-cache/ I get the following:

$ tar -tvf path/to/myimage-20221224185313.rootfs.tar.gz | grep lsusb
lrwxrwxrwx 0/0 0 2018-03-09 04:34 ./usr/bin/lsusb -> /usr/bin/lsusb.usbutils
-rwxr-xr-x 0/0 14266 2018-03-09 04:34 ./usr/bin/lsusb.py
-rwxr-xr-x 0/0 247976 2018-03-09 04:34 ./usr/bin/lsusb.usbutils
-rw-r--r-- 0/0 43 2018-03-09 04:34 ./usr/lib/opkg/alternatives/lsusb

I would expect "$ lsusb" to work now when I copy this to my board (cannot test right now); the package.manifest however still only lists:
libusb-1.0-0
usbutils
usbutils-python
The package.manifest only contains packages, so it's normal that lsusb does not appear there since there's no package *named* lsusb. E.g. there's no lsusb package in Debian, but you can install it on your system via the usbutils package. However, one can check which package provides the lsusb binary by running oe-pkgdata-util find-path '*/lsusb'; it should return usbutils I believe. Cheers, Quentin
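A few related pkgdata queries, for completeness (the expected outputs are assumptions; run these from the build directory after the image has been built):

$ oe-pkgdata-util find-path '*/lsusb'       # which package ships a given file
$ oe-pkgdata-util list-pkg-files usbutils   # all files packaged in usbutils
$ oe-pkgdata-util lookup-recipe usbutils    # which recipe produced the package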
Re: bitbake controlling memory use
Hi Quentin, On 03-01-2023 at 15:36, Quentin Schulz wrote: Hi Ferry,
On 1/3/23 15:29, Ferry Toth wrote:
Hi Alex,
On 03-01-2023 at 15:18, Alexander Kanavin wrote:
I have to note that even the most expensive 16 GB RAM module is less than 100 Euro, and can be obtained for half that much. Surely you value your time more than that? Of course. And if I didn't, I could lower `PARALLEL_MAKE`.
But I have also seen bitbake attempting to build nodejs, nodejs-native and binutils in parallel.
I think by default there are 4 tasks, each with 16 threads, and each thread could require 1 GB RAM. Would you say that in such a case Yocto requires 64 GB RAM?
And with increasing #cores it gets worse.
Not that it is too expensive to throw hardware at it, but it seems to be a fundamental problem that I would like to resolve. In my spare time of course.
Just to add that bitbake now supports pressure thresholds (since the Kirkstone release I believe): https://docs.yoctoproject.org/bitbake/bitbake-user-manual/bitbake-user-manual-ref-variables.html#term-BB_PRESSURE_MAX_MEMORY
If your recipes put enough pressure on RAM before one or two of nodejs, nodejs-native and binutils get scheduled, it would prevent that. However, I believe if the timing is just right (unfortunate) and there's not enough pressure when all three recipes' do_compile tasks start, they would all start and you would have the same issue.
Exactly Cheers, Quentin
Re: bitbake controlling memory use
Hi Ferry, On 1/3/23 15:29, Ferry Toth wrote: Hi Alex, On 03-01-2023 at 15:18, Alexander Kanavin wrote:
I have to note that even the most expensive 16 GB RAM module is less than 100 Euro, and can be obtained for half that much. Surely you value your time more than that? Of course. And if I didn't, I could lower `PARALLEL_MAKE`. But I have also seen bitbake attempting to build nodejs, nodejs-native and binutils in parallel. I think by default there are 4 tasks, each with 16 threads, and each thread could require 1 GB RAM. Would you say that in such a case Yocto requires 64 GB RAM? And with increasing #cores it gets worse. Not that it is too expensive to throw hardware at it, but it seems to be a fundamental problem that I would like to resolve. In my spare time of course.
Just to add that bitbake now supports pressure thresholds (since the Kirkstone release I believe): https://docs.yoctoproject.org/bitbake/bitbake-user-manual/bitbake-user-manual-ref-variables.html#term-BB_PRESSURE_MAX_MEMORY
If your recipes put enough pressure on RAM before one or two of nodejs, nodejs-native and binutils get scheduled, it would prevent that. However, I believe if the timing is just right (unfortunate) and there's not enough pressure when all three recipes' do_compile tasks start, they would all start and you would have the same issue. Cheers, Quentin
Re: bitbake controlling memory use
Hi Alex, On 03-01-2023 at 15:18, Alexander Kanavin wrote: I have to note that even the most expensive 16 GB RAM module is less than 100 Euro, and can be obtained for half that much. Surely you value your time more than that? Of course. And if I didn't, I could lower `PARALLEL_MAKE`. But I have also seen bitbake attempting to build nodejs, nodejs-native and binutils in parallel. I think by default there are 4 tasks, each with 16 threads, and each thread could require 1 GB RAM. Would you say that in such a case Yocto requires 64 GB RAM? And with increasing #cores it gets worse. Not that it is too expensive to throw hardware at it, but it seems to be a fundamental problem that I would like to resolve. In my spare time of course. Alex
On Tue, 3 Jan 2023 at 15:15, Ferry Toth <fntoth@...> wrote:
On 13-06-2021 at 02:38, Randy MacLeod wrote:
On 2021-06-12 12:31 p.m., Ferry Toth wrote:
Hi
On 10-06-2021 at 22:35, Ferry Toth wrote:
Hi,
On 10-06-2021 at 21:06, Trevor Gamblin wrote:
On 2021-06-10 5:22 a.m., Ferry Toth wrote:
Hi Trevor,
Gmane is really messing things up here, sorry about that. I need to create a new thread I'm afraid.
I'd like to try your reworked patch.
But note, I reworked it too (but maybe wrongly). It builds like 90% of my image, but fails building cmake-native. Or more accurately, it fails do_configure while trying to build a small test program. Hi,
I've pushed the patch onto my fork of the poky repo at https://github.com/threexc/poky
Let me know how your testing turns out - I am still running tests as well, but it would be good to know how others' attempts turn out, and more changes could still end up being necessary.
Your patch didn't apply clean on Gatesgarth, but the fix seemed trivial. With this it builds cmake-native fine, thanks!
You can find it here: https://github.com/htot/meta-intel-edison/commit/8abce2f6f752407c7b2831dabf37cc358ce55bc7
I will check if any other build errors occur, and if not, will try to time the image build with and without the patch to compare performance and see if it is worth the effort. It works fine. To measure time I first built https://github.com/htot/meta-intel-edison (gatesgarth), so everything needed is downloaded and cached. Then prior to each run I `rm -rf out` and `rm -rf bbcache/sstate-cache/*` to force everything to rebuild. And then `time bitbake -k edison-image`
With patch: real 218m12,686s user 0m24,058s sys 0m4,379s
Without: real 219m36,944s user 0m24,770s sys 0m4,266s
Strange, I expected more. I have a new machine now. It has 16 HT, but only 16 GB RAM. So memory starvation has now become a serious issue, especially when building nodejs in parallel to nodejs-native.
There are 2 issues: 1 - nodejs tries to link 5 executables in parallel, and each requires 4 GB RAM. This I solved by serializing the linker using `flock` for the nodejs recipe. 2 - nodejs starts 16 compile submakes and so does nodejs-native. Each is between 0.5 GB and 1 GB RAM.
For the 2nd problem the "Add shared make jobserver support" patch would likely be effective. I've fixed it up for Kirkstone, but it is not working.
I noted: since make v4.2 we need to use `--jobserver-auth=fifo:" + fifoname`
But the real problem seems to be that `if "BB_MAKEFIFO" in os.environ:` in make-intercept fails, even though `self.cfgData.setVar("BB_MAKEFIFO", fifoname)` succeeds in `def setup_make_fifo(self):`.
So, either BB_MAKEFIFO is not yet created at make time, or no longer exists.
Confused again.
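The `flock` serialization mentioned above can be done with a small linker wrapper; this is an illustrative sketch, not the actual meta-intel-edison change, and the lock path and `ld.real` name are assumptions:

#!/bin/sh
# ld wrapper: hold a global lock so only one link step runs at a time
exec flock /tmp/yocto-link.lock ld.real "$@"

Put ahead of the real linker in PATH, this trades link-time parallelism for a bounded memory peak.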
Hi Ferry,
Thanks for the update.
Trevor and I saw similar (lack of) results.
Trevor even tried getting kea, which uses 'make', through the 'configure' stage for two builds in different dirs, then running the two 'bitbake -c compile kea' invocations with and without the patch, with the expectation that with the jobserver patch and the right number of jobs, the two builds would take longer. I don't know the exact timing but there was no noticeable difference.
We did strace things to confirm that the make wrapper was being called and the actual make was being called by the wrapper. I suspect that the next thing we try will be to patch 'make' to log when the jobserver kicks in or to play with some make jobserver demo such as: https://github.com/olsner/jobclient to get some experience with how things are supposed to work and to be able to strace a successful use of the job server feature.
A little RTFM / UTSL may also be required.
../Randy
This is on a 4-core/8-HT i7-3770 CPU @ 3.40GHz with 16 GB RAM and nodejs restricted to -j 2 (so that alone takes ~60 min to build).
- Trevor
Ferry
Re: bitbake controlling memory use
I have to note that even the most expensive 16 GB RAM module is less than 100 Euro, and can be obtained for half that much. Surely you value your time more than that?
Alex
On Tue, 3 Jan 2023 at 15:15, Ferry Toth <fntoth@...> wrote: On 13-06-2021 at 02:38, Randy MacLeod wrote:
On 2021-06-12 12:31 p.m., Ferry Toth wrote:
Hi
On 10-06-2021 at 22:35, Ferry Toth wrote:
Hi,
On 10-06-2021 at 21:06, Trevor Gamblin wrote:
On 2021-06-10 5:22 a.m., Ferry Toth wrote:
Hi Trevor,
Gmane is really messing things up here, sorry about that. I need to create a new thread I'm afraid.
I'd like to try your reworked patch.
But note, I reworked it too (but maybe wrongly). It builds like 90% of my image, but fails building cmake-native. Or more accurately, it fails do_configure while trying to build a small test program. Hi,
I've pushed the patch onto my fork of the poky repo at https://github.com/threexc/poky
Let me know how your testing turns out - I am still running tests as well, but it would be good to know how others' attempts turn out, and more changes could still end up being necessary.
Your patch didn't apply clean on Gatesgarth, but the fix seemed trivial. With this it builds cmake-native fine, thanks!
You can find it here: https://github.com/htot/meta-intel-edison/commit/8abce2f6f752407c7b2831dabf37cc358ce55bc7
I will check if any other build errors occur, and if not, will try to time the image build with and without the patch to compare performance and see if it is worth the effort. It works fine. To measure time I first built https://github.com/htot/meta-intel-edison (gatesgarth), so everything needed is downloaded and cached. Then prior to each run I `rm -rf out` and `rm -rf bbcache/sstate-cache/*` to force everything to rebuild. And then `time bitbake -k edison-image`
With patch: real 218m12,686s user 0m24,058s sys 0m4,379s
Without: real 219m36,944s user 0m24,770s sys 0m4,266s
Strange, I expected more. I have a new machine now. It has 16 HT, but only 16 GB RAM. So memory starvation has now become a serious issue, especially when building nodejs in parallel to nodejs-native.
There are 2 issues: 1 - nodejs tries to link 5 executables in parallel, and each requires 4 GB RAM. This I solved by serializing the linker using `flock` for the nodejs recipe. 2 - nodejs starts 16 compile submakes and so does nodejs-native. Each is between 0.5 GB and 1 GB RAM.
For the 2nd problem the "Add shared make jobserver support" patch would likely be effective. I've fixed it up for Kirkstone, but it is not working.
I noted: since make v4.2 we need to use `--jobserver-auth=fifo:" + fifoname`
But the real problem seems to be that `if "BB_MAKEFIFO" in os.environ:` in make-intercept fails, even though `self.cfgData.setVar("BB_MAKEFIFO", fifoname)` succeeds in `def setup_make_fifo(self):`.
So, either BB_MAKEFIFO is not yet created at make time, or no longer exists.
Confused again.
Hi Ferry,
Thanks for the update.
Trevor and I saw similar (lack of) results.
Trevor even tried getting kea, which uses 'make', through the 'configure' stage for two builds in different dirs, then running the two 'bitbake -c compile kea' invocations with and without the patch, with the expectation that with the jobserver patch and the right number of jobs, the two builds would take longer. I don't know the exact timing but there was no noticeable difference.
We did strace things to confirm that the make wrapper was being called and the actual make was being called by the wrapper. I suspect that the next thing we try will be to patch 'make' to log when the jobserver kicks in or to play with some make jobserver demo such as: https://github.com/olsner/jobclient to get some experience with how things are supposed to work and to be able to strace a successful use of the job server feature.
A little RTFM / UTSL may also be required.
../Randy
This is on a 4-core/8-HT i7-3770 CPU @ 3.40GHz with 16 GB RAM and nodejs restricted to -j 2 (so that alone takes ~60 min to build).
- Trevor
Ferry
Re: bitbake controlling memory use
On 13-06-2021 at 02:38, Randy MacLeod wrote: On 2021-06-12 12:31 p.m., Ferry Toth wrote:
Hi
On 10-06-2021 at 22:35, Ferry Toth wrote:
Hi,
On 10-06-2021 at 21:06, Trevor Gamblin wrote:
On 2021-06-10 5:22 a.m., Ferry Toth wrote:
Hi Trevor,
Gmane is really messing things up here, sorry about that. I need to create a new thread I'm afraid.
I'd like to try your reworked patch.
But note, I reworked it too (but maybe wrongly). It builds like 90% of my image, but fails building cmake-native. Or more accurately, it fails do_configure while trying to build a small test program. Hi,
I've pushed the patch onto my fork of the poky repo at https://github.com/threexc/poky
Let me know how your testing turns out - I am still running tests as well, but it would be good to know how others' attempts turn out, and more changes could still end up being necessary.
Your patch didn't apply clean on Gatesgarth, but the fix seemed trivial. With this it builds cmake-native fine, thanks!
You can find it here: https://github.com/htot/meta-intel-edison/commit/8abce2f6f752407c7b2831dabf37cc358ce55bc7
I will check if any other build errors occur, and if not, will try to time the image build with and without the patch to compare performance and see if it is worth the effort. It works fine. To measure time I first built https://github.com/htot/meta-intel-edison (gatesgarth), so everything needed is downloaded and cached. Then prior to each run I `rm -rf out` and `rm -rf bbcache/sstate-cache/*` to force everything to rebuild. And then `time bitbake -k edison-image`
With patch: real 218m12,686s user 0m24,058s sys 0m4,379s
Without: real 219m36,944s user 0m24,770s sys 0m4,266s
Strange, I expected more. I have a new machine now. It has 16 HT, but only 16 GB RAM. So memory starvation has now become a serious issue, especially when building nodejs in parallel to nodejs-native. There are 2 issues: 1 - nodejs tries to link 5 executables in parallel, and each requires 4 GB RAM. This I solved by serializing the linker using `flock` for the nodejs recipe. 2 - nodejs starts 16 compile submakes and so does nodejs-native. Each is between 0.5 GB and 1 GB RAM. For the 2nd problem the "Add shared make jobserver support" patch would likely be effective. I've fixed it up for Kirkstone, but it is not working. I noted: since make v4.2 we need to use `--jobserver-auth=fifo:" + fifoname` But the real problem seems to be that `if "BB_MAKEFIFO" in os.environ:` in make-intercept fails, even though `self.cfgData.setVar("BB_MAKEFIFO", fifoname)` succeeds in `def setup_make_fifo(self):`. So, either BB_MAKEFIFO is not yet created at make time, or no longer exists. Confused again. Hi Ferry, Thanks for the update. Trevor and I saw similar (lack of) results. Trevor even tried getting kea, which uses 'make', through the 'configure' stage for two builds in different dirs, then running the two 'bitbake -c compile kea' invocations with and without the patch, with the expectation that with the jobserver patch and the right number of jobs, the two builds would take longer. I don't know the exact timing but there was no noticeable difference. We did strace things to confirm that the make wrapper was being called and the actual make was being called by the wrapper. I suspect that the next thing we try will be to patch 'make' to log when the jobserver kicks in, or to play with some make jobserver demo such as https://github.com/olsner/jobclient to get some experience with how things are supposed to work and to be able to strace a successful use of the jobserver feature. A little RTFM / UTSL may also be required. ../Randy
This is on a 4-core/8-HT i7-3770 CPU @ 3.40GHz with 16 GB RAM and nodejs restricted to -j 2 (so that alone takes ~60 min to build).
- Trevor
Ferry
Re: DISTRO_FEATURES in custom recipe does not override local.conf setting
On Thu, Dec 29, 2022 at 01:32 AM, Mistyron wrote:
On 12/28/22 13:24, Mistyron via lists.yoctoproject.org wrote:
On 2022-12-28 7:55 a.m., Alexander Kanavin wrote:
You can't. '_remove' has priority over everything else, and DISTRO_FEATURES cannot be set from a recipe, only from a global config.
Oh okay, that makes sense then! Thank you!
Although, as soon as I added
DISTRO_FEATURES_append = " x11"
to local.conf, I'm not able to build anymore but see an error like below instead:
| checking for GBM... no
| configure: error: Glamor for Xorg requires gbm >= 10.2.0
| NOTE: The following config.log files may provide further information.
| NOTE: /home/yocto/rzg_vlp_v3.0.0/build/tmp/work/aarch64-poky-linux/xserver-xorg/2_1.20.8-r0/build/config.log
| ERROR: configure failed
| WARNING: exit code 1 from a shell command.
| ERROR: Execution of '/home/yocto/rzg_vlp_v3.0.0/build/tmp/work/aarch64-poky-linux/xserver-xorg/2_1.20.8-r0/temp/run.do_configure.480136' failed with exit code 1
ERROR: Task (/home/yocto/rzg_vlp_v3.0.0/build/../poky/meta/recipes-graphics/xorg-xserver/xserver-xorg_1.20.8.bb:do_configure) failed with exit code '1'
I've Googled around but not been able to find a workaround.
Hello, we once ran into a similar issue with the xserver-xorg v1.20.8 compilation. In our case, the gbm (libgles) from the BSP provider was in version `r8p0` and not `10.2.0`, which is required by xserver-xorg. To solve this, we added a simple patch to `configure.ac` of xserver-xorg to change the dependency from `LIBGBM="gbm >= 10.2.0"` to `LIBGBM="gbm >= r8p0"`, which allowed us to resolve the compilation error and build the package. Later we did not see any significant performance issues, so we went with that solution.
Regards
--
Tomasz Żyjewski
Embedded Systems Engineer
GPG: 5C495EA3EBEECA59
https://3mdeb.com | @3mdeb_com
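Roughly, the change described amounts to a hunk like the following (illustrative; the exact context in xserver-xorg 1.20.8's configure.ac will differ), typically carried via a .bbappend that adds the patch to SRC_URI:

--- a/configure.ac
+++ b/configure.ac
-    LIBGBM="gbm >= 10.2.0"
+    LIBGBM="gbm >= r8p0"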
You should edit local.conf, and complain to your BSP provider about using _remove instead of a direct assignment.
Alex
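Concretely, the fix Alex describes is made in the BSP-provided local.conf itself; a sketch, assuming the file looks like the one quoted below:

# Delete or comment out the BSP's removal; _remove beats any later _append:
#DISTRO_FEATURES_remove = " x11"
# then the addition can take effect:
DISTRO_FEATURES_append = " x11"

With the _remove gone, the IMAGE_INSTALL_append = " xrandr" in the image recipe can resolve.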
On Wed, 28 Dec 2022 at 16:51, Mistyron <ron.eggler@...> wrote:
Hi,
I'm using the provided local.conf from the BSP which contains:
DISTRO_FEATURES_remove = " x11"
but I need to install xrandr in my image which needs x11, i.e. I set
DISTRO_FEATURES_append = " x11"
IMAGE_INSTALL_append = " xrandr"
in my custom recipe. However, upon building, I get the following error:
ERROR: Nothing RPROVIDES 'xrandr' (but /home/yocto/rzg_vlp_v3.0.0/build/../meta-mistysom/recipes-core/images/mistysom-image.bb RDEPENDS on or otherwise requires it)
xrandr was skipped: missing required distro feature 'x11' (not in DISTRO_FEATURES)
NOTE: Runtime target 'xrandr' is unbuildable, removing...
Missing or unbuildable dependency chain was: ['xrandr']
ERROR: Required build target 'mistysom-image' has no buildable providers.
Missing or unbuildable dependency chain was: ['mistysom-image', 'xrandr']
Can I override the setting in local.conf from my custom recipe? How?
Ron
-- RON EGGLER Firmware Engineer (he/him/his) www.mistywest.com
Current high bug count owners for Yocto Project 4.2
All,
Below is the list of the top 31 bug owners as of the end of WW01 who have open medium or higher bugs and enhancements against YP 4.2. There are 79 possible work days left until the final release candidates for YP 4.2 need to be released.

Who | Count
michael.opdenacker@... | 35
ross.burton@... | 30
bruce.ashfield@... | 25
randy.macleod@... | 25
david.reyna@... | 23
richard.purdie@... | 23
JPEWhacker@... | 10
saul.wold@... | 9
sakib.sajal@... | 8
pavel@... | 5
Zheng.Qiu@... | 4
tim.orling@... | 4
alexandre.belloni@... | 4
Naveen.Gowda@... | 2
sgw@... | 2
jon.mason@... | 2
hongxu.jia@... | 2
sundeep.kokkonda@... | 2
akuster808@... | 2
bluelightning@... | 2
sundeep.kokkonda@... | 2
yashinde145@... | 1
Anton.Antonov@... | 1
bst@... | 1
tvgamblin@... | 1
martin.beeger@... | 1
Martin.Jansa@... | 1
mathew.prokos@... | 1
aehs29@... | 1
thomas.perrot@... | 1
mhalstead@... | 1
Grand Total | 231
Thanks,
Stephen K. Jolley, Yocto Project Program Manager
Cell: (208) 244-4460, Email: sjolley.yp.pm@...
Yocto Project Newcomer & Unassigned Bugs - Help Needed
All,

The triage team is starting to try and collect up and classify bugs which a newcomer to the project would be able to work on, in a way which means people can find them. They're being listed on the triage page under the appropriate heading: https://wiki.yoctoproject.org/wiki/Bug_Triage#Newcomer_Bugs

Also please review: https://www.openembedded.org/wiki/How_to_submit_a_patch_to_OpenEmbedded and how to create a bugzilla account at: https://bugzilla.yoctoproject.org/createaccount.cgi

The idea is these bugs should be straightforward for a person to work on who doesn't have deep experience with the project. If anyone can help, please take ownership of the bug and send patches! If anyone needs help/advice, there are people on IRC who can likely provide it, and some of the more experienced contributors will likely be happy to help too.

Also, the triage team meets weekly and does its best to handle the bugs reported into Bugzilla. The number of people attending that meeting has fallen, as has the number of people available to help fix bugs. One of the things we hear users report is that they don't know how to help. We (the triage team) are therefore going to start reporting out the currently 419 unassigned or newcomer bugs. We're hoping people may be able to spare some time now and again to help out with these.

Bugs are split into two types: "true bugs", where things don't work as they should, and "enhancements", which are features we'd want to add to the system. There are also roughly four different "priority" classes right now: “4.2”, “4.3”, "4.99" and "Future", with the more pressing/urgent issues being in "4.2" and then “4.3”.

Please review this link and, if a bug is something you would be able to help with, either take ownership of the bug or send me (sjolley.yp.pm@...) an e-mail with the bug number you would like and I will assign it to you (please make sure you have a Bugzilla account). The list is at: https://wiki.yoctoproject.org/wiki/Bug_Triage_Archive#Unassigned_or_Newcomer_Bugs

Thanks,
Stephen K. Jolley, Yocto Project Program Manager
Cell: (208) 244-4460, Email: sjolley.yp.pm@...
Re: Remove kernel image and modules from rootfs
Hi Konstantin, On 12/21/22 22:13, Konstantin Kletschke wrote: Hi, I am creating a rootfs/bootloader/kernel to run on a beaglebone black usually and it works great. So I have in conf/local.conf MACHINE ?= "beaglebone-yocto" and my own layer meta-insidem2m which defines some image settings in recipes-core/images/insidem2m-s.bb among other recipes for packages and package modification. Now I want to create a rootfs without the kernel image and the kernel modules, to make it as small as possible, to use as a basis to run as a docker image. Now I wonder how to instruct bitbake to not put the kernel image (and modules) into the rootfs. I read this was done by RDEPENDS_${KERNEL_PACKAGE_NAME}-base = "" but this is now deprecated for kirkstone and should be done this way: RRECOMMENDS:${KERNEL_PACKAGE_NAME}-base = ""
This makes sense, I'll send a patch updating the documentation to reflect this change. I thought we had already discussed this and someone sent a patch, but it doesn't seem so :/

But the rootfs is still always equipped with kernel and modules. I tried all permutations of

#RDEPENDS_kernel-base = ""
#MACHINE_ESSENTIAL_EXTRA_RDEPENDS = ""
#RDEPENDS_kernel-base = ""
#PREFERRED_PROVIDER_virtual/kernel = "linux-dummy"
# Don't include kernels in standard images
##RDEPENDS:kernel-base = ""
#RRECOMMENDS:${KERNEL_PACKAGE_NAME}-base = ""
#MACHINE_EXTRA_RRECOMMENDS = ""
#RDEPENDS_${KERNEL_PACKAGE_NAME}-base = ""

in my conf/local.conf but to no avail... How is this done correctly? Are there any variables to check that I might have set which prevent me from doing this? Or is it necessary to split out a new MACHINE, i.e. can this only be done in a newly created machine? I thought setting such things at the bottom of conf/local.conf always "wins".
No. So I believe you need to add:

MACHINE_EXTRA_RRECOMMENDS:beaglebone-yocto = ""
MACHINE_ESSENTIAL_EXTRA_RDEPENDS:remove:beaglebone-yocto = "kernel-image kernel-devicetree"
RRECOMMENDS:${KERNEL_PACKAGE_NAME}-base = ""

to your local.conf. I suggest you create your own machine configuration file which requires beaglebone-yocto.conf, where you'll be able to set:

MACHINE_EXTRA_RRECOMMENDS = ""
MACHINE_ESSENTIAL_EXTRA_RDEPENDS = ""
RRECOMMENDS:${KERNEL_PACKAGE_NAME}-base = ""

since one is not supposed to share their local.conf :) You can check the value of a variable by running bitbake-getvar -r virtual/kernel MACHINE_EXTRA_RRECOMMENDS for example.

Cheers,
Quentin
Re: Strange behaviour with quilt and kernel
On Sat, 2022-12-31 at 16:02 +0100, Mauro Ziliani wrote: Hi all.
I'm working on a board with kirkstone.
I updated every layer to the latest kirkstone branch available, the poky layer too.
I'm patching the 6.0.8 kernel for debugging purposes: 0001-debug.patch
I made a patch with quilt as I did in the past (before the poky update)
The problem is this:
- I do "bitbake -c devshell virtual/kernel" and I get the terminal prompt on kernel-source folder: i check and the patch is applied
- I do "quilt top" and quilt tell me "No series file found"
Where is my patch?
Is the kernel using git to handle patches instead of quilt? Cheers, Richard
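If the kernel recipe does apply patches with git rather than quilt, as linux-yocto-based recipes typically do (whether this particular BSP's kernel does is an assumption), the patch appears as a commit instead of in a quilt series. A quick check from the devshell:

# inside the devshell, in the kernel source tree
git log --oneline -5        # 0001-debug.patch should show up as a commit
git format-patch -1 HEAD    # regenerate the top commit as a patch file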
Strange behaviour with quilt and kernel
Hi all.
I'm working on a board with kirkstone.
I updated every layer to the latest kirkstone branch available, the poky layer too.
I'm patching the 6.0.8 kernel for debugging purposes: 0001-debug.patch
I made a patch with quilt as I did in the past (before the poky update)
The problem is this:
- I do "bitbake -c devshell virtual/kernel" and I get the terminal prompt on kernel-source folder: i check and the patch is applied
- I do "quilt top" and quilt tell me "No series file found"
Where is my patch?
Best regards,
MZ