Patching submodules
Emily
Hi all - I have a recipe that I'd like to patch. The source is in a repo which has a submodule, and the patch applies inside the submodule. Is there a way I can apply this patch without getting an error? I do kind of understand why it's a problem - the patch is changing the pointer of the submodule to a commit which doesn't actually exist. Do I need to build the submodule as a separate recipe and patch it separately, maybe?

I used devtool for the patch, and if I don't run the devtool reset command then everything builds, but I think this is just because the workspace created by devtool was added as a layer, which probably isn't a good long-term solution. The error I get (pasted below) says I can "enforce with -f", but I'm not sure where that option goes exactly. Thanks for the help!

Emily

Error on build:
ERROR: opc-ua-server-gfex-1.0+gitAUTOINC+921c563309-r0 do_patch: Command Error: 'quilt --quiltrc /local/d6/easmith5/rocko_bitbake/poky/build/tmp/work/aarch64-poky-linux/opc-ua-server-gfex/1.0+gitAUTOINC+921c563309-r0/recipe-sysroot-native/etc/quiltrc push' exited with 0
Output:
Applying patch 0001-Update-Poverty-to-point-to-boost-python3.patch
File Poverty is not a regular file -- refusing to patch
1 out of 1 hunk ignored -- rejects in file
Patch 0001-Update-Poverty-to-point-to-boost-python3.patch does not apply (enforce with -f)
ERROR: opc-ua-server-gfex-1.0+gitAUTOINC+921c563309-r0 do_patch: Function failed: patch_do_patch
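One possible direction (an untested sketch only -- the fetch URL and branch are placeholders, "Poverty" is the submodule path taken from the error above, and the patch would need to be regenerated against the submodule's own files rather than against the gitlink): fetch the submodules with bitbake's gitsm fetcher and point do_patch into the submodule directory via the patchdir option, so the patch is applied to the submodule's files instead of to the top-level tree:

SRC_URI = "gitsm://github.com/example/opc-ua-server-gfex.git;protocol=https;branch=master \
           file://0001-Update-Poverty-to-point-to-boost-python3.patch;patchdir=Poverty \
          "

patchdir is resolved relative to ${S}, so this avoids both a separate recipe for the submodule and the devtool workspace layer.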
|
|
Re: QA notification for completed autobuilder build (yocto-3.1_M3.rc1)
Sangeeta Jain
> -----Original Message-----
> Were any tests skipped due to lack of physical access to h/w? A quick look at...

Updated some automated test results for more precise results from a repeated run. Yes, we skipped some manual tests which can't be run remotely.
|
|
Re: yocto on tinkerbaord using meta-rockchip
Trevor Woerner
Hi Karthik,
On Thu, Mar 19, 2020 at 9:36 PM karthik poduval <karthik.poduval@...> wrote:
> Thank you for your work on the meta-rockchip layer. You were listed as the...

According to the log, your board is a "tinker-board-s"; please try using that for your MACHINE instead of "tinker-board".

Thanks and best regards,
Trevor
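For reference, that is a one-line change in conf/local.conf:

MACHINE = "tinker-board-s"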
|
|
Re: What are the key factors for yocto build speed?
On Thu, Mar 19, 2020 at 9:07 AM Mike Looijmans <mike.looijmans@...> wrote:
> In builds involving FPGA's I have tasks that take up about 48GB of RAM...

Is it something like "make -l" that you are looking for here?
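In case it helps, make's -l (load-average limit) flag can be passed through PARALLEL_MAKE (a sketch for conf/local.conf; the numbers are only illustrative):

PARALLEL_MAKE = "-j 16 -l 32"

With this, each compile task may spawn up to 16 jobs, but make holds off starting new ones while the system load average is above 32, which softens the worst case when several compile tasks overlap.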
|
|
Re: What are the key factors for yocto build speed?
Yann Dirson
On Thu, Mar 19, 2020 at 6:04 PM, Richard Purdie <richard.purdie@...> wrote:
> On Thu, 2020-03-19 at 17:29 +0100, Yann Dirson wrote:
> Sadly we never fixed all the issues to let that merge.

Awesome :) The problems listed in the commit message don't amount to much at first glance. Any other issues that got visible only later?

--
|
|
Re: What are the key factors for yocto build speed?
Richard Purdie
On Thu, 2020-03-19 at 19:21 +0200, Adrian Bunk wrote:
> On Thu, Mar 19, 2020 at 05:07:17PM +0100, Mike Looijmans wrote:
> ...
> This is a bug: ...

You can test with:

do_compile[number_threads] = "1"

Cheers,
Richard
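For anyone wanting to try this, the line goes into conf/local.conf as-is: it limits bitbake to running one do_compile task at a time globally, while tasks of other types (configure, package, etc.) still run in parallel alongside it:

do_compile[number_threads] = "1"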
|
|
Re: What are the key factors for yocto build speed?
Adrian Bunk
On Thu, Mar 19, 2020 at 05:07:17PM +0100, Mike Looijmans wrote:
> ...
This is a bug: http://bugzilla.yoctoproject.org/show_bug.cgi?id=13306

I sometimes wonder whether something basic like "no more than one compile task at a time" would be sufficient in practice to avoid overloading all cores. It would also help with RAM usage; there are some combinations of recipes where the build gets aborted by the OOM killer on my laptop (8 cores, 32 GB RAM) when bitbake runs the compile tasks in parallel.

cu
Adrian
|
|
Re: What are the key factors for yocto build speed?
Richard Purdie
On Thu, 2020-03-19 at 17:29 +0100, Yann Dirson wrote:
You mean like http://git.yoctoproject.org/cgit.cgi/poky-contrib/commit/?h=rpurdie/wipqueue4&id=d66a327fb6189db5de8bc489859235dcba306237 ? :)

Sadly we never fixed all the issues to let that merge.

Cheers,
Richard
|
|
Re: QA notification for completed autobuilder build (yocto-3.1_M3.rc1)
On 3/19/20 6:57 AM, Jain, Sangeeta wrote:
> Hello all,

Were any tests skipped due to lack of physical access to h/w? A quick look at some of the manual tests does imply physical access.

- armin
|
|
Re: What are the key factors for yocto build speed?
Yann Dirson
On Thu, Mar 19, 2020 at 5:07 PM, Mike Looijmans <mike.looijmans@...> wrote:
> On 19-03-2020 12:04, Richard Purdie via Lists.Yoctoproject.Org wrote:

This could be neatly handled by using the GNU make job-server mechanism. If bitbake itself provided a job-server, all make-based recipes would automatically get their jobs properly limited. There is a (sadly not yet merged) MR [1] for ninja to gain job-server support as well, with which we should have pretty good coverage of the recipe set (as a backend for cmake, meson, and more).

--
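To illustrate the mechanism with plain GNU make (not bitbake; the Makefile below is a made-up example): a parent make owns a pool of job tokens, and sub-makes invoked via $(MAKE) inherit the job-server through MAKEFLAGS, so the two sub-builds share one 16-job budget instead of each starting 16 jobs of their own:

$ make -j16 all
# Makefile:
# all: libs apps
# libs: ; $(MAKE) -C libs
# apps: ; $(MAKE) -C apps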
|
|
Re: What are the key factors for yocto build speed?
Mike Looijmans
On 19-03-2020 12:04, Richard Purdie via Lists.Yoctoproject.Org wrote:
> ...fetch, configure, package and rootfs tasks.
> Sadly these tasks are much harder.

It would be really great if some sort of "weight" could be attached to a task. This relates to memory usage.

My system has 16 cores but only 8GB RAM. With both parallelization options set to "16", I might end up with 16 compile tasks running 16 compile threads each, i.e. 256 running processes. In practice this doesn't actually happen, but the memory load gets high sometimes, so I reduce the TASKS to 8 at most (see the sketch below). That has kept my system out of swap trouble for the time being.

The idea was that tasks get a "weight" in terms of the cores they'll use, and the scheduler takes that into account. So it would run 16 do_configure tasks (weight=1) in parallel, but it would not start a new task that would push the weight over some number (say 40 for my case). So it would start a third compile, but not a fourth, yet it would still start a do_configure task. Does that make sense?

In builds involving FPGAs I have tasks that take up about 48GB of RAM (my machine cannot run them) but only a single CPU core. Attempting to run multiple of these in parallel (which happened to me when I changed some shared recipe content) will bring most machines to their knees. Currently my only way of handling that is manual interference...

--
Mike Looijmans
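The reduction Mike describes maps onto the two usual knobs in conf/local.conf (a sketch; whether he lowered one or both isn't stated above):

BB_NUMBER_THREADS = "8"
PARALLEL_MAKE = "-j 8"

BB_NUMBER_THREADS caps how many tasks bitbake runs at once, and PARALLEL_MAKE caps the jobs each compile task spawns, so the worst case drops from 16x16 to 8x8 processes.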
|
|
Re: QA notification for completed autobuilder build (yocto-3.1_M3.rc1)
Sangeeta Jain
Hello all,
This is the full report for yocto-3.1_M3.rc1:
https://git.yoctoproject.org/cgit/cgit.cgi/yocto-testresults-contrib/tree/?h=intel-yocto-testresults

======= Summary ========
No high milestone defects.
No new defects were found in this cycle.
Valgrind ptest failed (BUG id:13838).
Note: A few failures were observed. These are setup issues caused by running the tests remotely, not real yocto issues.

======= Bugs ========
https://bugzilla.yoctoproject.org/show_bug.cgi?id=13838

Thanks,
Sangeeta
|
|
Re: Issue while adding the support for TLS1.3 in existing krogoth yocto
#yocto
#apt
#raspberrypi
amaya jindal
Please, can anybody guide me and suggest the reason for this issue?

Sent from my Huawei phone
-------- Original message --------
From: "amaya jindal via Lists.Yoctoproject.Org" <amayajindal786=gmail.com@...>
Date: Wed, 18 Mar 2020, 4:30 pm
To: "Khem Raj via Lists.Yoctoproject.Org" <raj.khem=gmail.com@...>, yocto@...
Cc: yocto@...
Subject: Re: [yocto] Issue while adding the support for TLS1.3 in existing krogoth yocto #yocto #yocto #yocto #yocto #apt #raspberrypi #yocto
|
|
Re: Private: Re: [yocto] Excluding kernel configuration fragment
Bruce Ashfield
On Thu, Mar 19, 2020 at 7:49 AM Fred Baksik <fdk17@...> wrote:
I've fixed the bug now, and am implementing something easier for master, so in the future it will be easy to remove that sort of warning.

Cheers,
Bruce
-- - Thou shalt not follow the NULL pointer, for chaos and madness await thee at its end - "Use the force Harry" - Gandalf, Star Trek II
|
|
Re: Private: Re: [yocto] Excluding kernel configuration fragment
Fred Baksik
Hello, it doesn't look like these last few emails made it to the mailing list. I don't mind creating my own BSP, but I thought it might be easier to tweak an existing one. Thanks for your help.
On Wed, Mar 18, 2020, at 6:06 PM, Bruce Ashfield wrote:
|
|
Re: What are the key factors for yocto build speed?
Richard Purdie
On Thu, 2020-03-19 at 11:43 +0000, Mikko.Rapeli@... wrote:
> On Thu, Mar 19, 2020 at 11:04:26AM +0000, Richard Purdie wrote:
> > Recipe parsing should hit 100% CPU, its one of the few places we...
> I'm not fully aware what bitbake does before starting task execution.

This isn't recipe parsing but runqueue setup and taskgraph calculation, which happens after parsing but before task execution. More recent bitbake is probably a bit better at it, but it is unfortunately a single-threaded process :(

Cheers,
Richard
|
|
Re: What are the key factors for yocto build speed?
Mikko Rapeli
On Thu, Mar 19, 2020 at 11:04:26AM +0000, Richard Purdie wrote:
> On Thu, 2020-03-19 at 08:05 +0000, Mikko Rapeli wrote:
> > Once this is done, IO still happens when anything calls sync() and...
> Doesn't pseudo intercept and stop these sync calls already? Its...

I will double check, but I'm sure I see IO going to disk when plenty of RAM is still available in page cache.

> > The effect is clearly visible during build time using Performance Co-...
> Recipe parsing should hit 100% CPU, its one of the few places we can do...

I'm not fully aware what bitbake does before starting task execution. With sumo, there is an initial spike in CPU use and then a long single-threaded wait where the log shows "Initialising tasks..." and the Cooker process is using a single core. For me this takes at least one minute for every build. The same is visible with zeus too. Example graph from pmchart: https://mcfrisk.kapsi.fi/temp/bitbake_start_to_task_execution.png

> > ...fetch, configure, package and rootfs tasks.
> Sadly these tasks are much harder.

Yep.

> > Memory is not fully utilized either since IO through sync()/fsync()...
> ...non-pseudo tasks?

I'll try to check this case once more.

-Mikko
|
|
Re: What are the key factors for yocto build speed?
Richard Purdie
On Thu, 2020-03-19 at 08:05 +0000, Mikko Rapeli wrote:
> Once this is done, IO still happens when anything calls sync() and...

Doesn't pseudo intercept and stop these sync calls already? It's supposed to, so if it's not, we should fix that.

> The effect is clearly visible during build time using Performance Co-...

Recipe parsing should hit 100% CPU, it's one of the few places we can do that.

> ...fetch, configure, package and rootfs tasks.

Sadly these tasks are much harder.

> Memory is not fully utilized either since IO through sync()/fsync()...

...non-pseudo tasks?

Cheers,
Richard
|
|
Re: What are the key factors for yocto build speed?
Mikko Rapeli
On Wed, Mar 18, 2020 at 10:56:50PM +0000, Ross Burton wrote:
> On 18/03/2020 14:09, Mike Looijmans wrote:
> > Harddisk speed has very little impact on your build time. It helps with...
> My build machine has lots of RAM and I do builds in a 32GB tmpfs with...

An alternative to a tmpfs with a hard size limit is to keep file system caches in memory as long as possible and only start writes to disk when the page cache gets too full. This scales, but still uses all the RAM available. Here's how to do this:

$ cat /etc/sysctl.d/99-build_server_fs_ops_to_memory.conf
# fs cache can use 90% of memory before system starts io to disk,
# keep as much as possible in RAM
vm.dirty_background_bytes = 0
vm.dirty_background_ratio = 90
# keep stuff for 12h in memory before writing to disk,
# allows reusing data as much as possible between builds
vm.dirty_expire_centisecs = 4320000
vm.dirtytime_expire_seconds = 432000
# allow single process to use 60% of system RAM for file caches, e.g. image build
vm.dirty_bytes = 0
vm.dirty_ratio = 60
# disable periodic background writes, only write when running out of RAM
vm.dirty_writeback_centisecs = 0

Once this is done, IO still happens when anything calls sync() and fsync(), and the worst offenders are package management tools. In yocto builds, package manager actions that flush to disk are always useless, since the rootfs images are going to be compressed and the originals wiped by rm_work anyway. I've tried to hook the eatmydata library into the build, which makes sync() and fsync() calls no-ops, but I've still failed to fix all the tools and processes called during the build from python code. For shell based tasks this does it:

$ export LD_LIBRARY_PATH=/usr/lib/libeatmydata
$ export LD_PRELOAD=libeatmydata.so
$ grep -rn LD_PRELOAD conf/local.conf
conf/local.conf:305:BB_HASHBASE_WHITELIST_append = " LD_PRELOAD"
conf/local.conf:306:BB_HASHCONFIG_WHITELIST_append = " LD_PRELOAD"

The effect is clearly visible during build time using Performance Co-Pilot (pcp) or similar tools to monitor CPU, memory, IO and network IO. The usage of RAM as page cache grows until the limits are hit, and only then do writes to disk start, except for the python image classes... Hints to fix this are welcome!

From monitoring our builds, to my knowledge there is a lot of optimization potential for better build times. CPUs are under-utilized during bitbake recipe parsing, fetch, configure, package and rootfs tasks. Memory is not fully utilized either, since IO through sync()/fsync() happens everywhere, and due to background writes being the default on ext4 etc. file systems. Only do_compile() tasks saturate all CPUs, and when linking lots of C++, also all of RAM. Then dependencies between various recipes and tasks leave large gaps in CPU utilization too.

-Mikko
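(To load the sysctl settings above without a reboot, assuming root: run "sysctl --system", which re-reads every file under /etc/sysctl.d/.)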
|
|
Amrun Nisha.R
Hi all,
While trying to build core-image-base, I'm facing the error "could not invoke dnf command". Are there any solutions for this?

Log file error:
ERROR: Could not invoke dnf. Command '/home/titan/Documents/core-image-baseline/build_wayland/tmp/work/imx8mq_var_dart-poky-linux/core-image-base/1.0-r0/recipe-sysroot-native/usr/bin/dnf -y -c /home/titan/Documents/core-image-baseline/build_wayland/tmp/work/imx8mq_var_dart-poky-linux/core-image-base/1.0-r0/rootfs/etc/dnf/dnf.conf
Failed:
hostapd.aarch64 2.6-r0
Error: Transaction failed
DEBUG: Python function do_rootfs finished
ERROR: Function failed: do_rootfs
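A generic first step for this kind of rootfs failure (a suggestion only -- hostapd is simply the package the dnf transaction reports as failing, and the log path is taken from the error above) is to rebuild that package from scratch, re-run the image, and read the full dnf output from the rootfs task log:

$ bitbake -c cleansstate hostapd
$ bitbake core-image-base
$ less tmp/work/imx8mq_var_dart-poky-linux/core-image-base/1.0-r0/temp/log.do_rootfs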
|
|