Patching submodules

Emily
 

Hi all - 

I have a recipe that I'd like to patch - the source is in a repo which has a submodule, and the patch occurs in the submodule. Is there a way I can apply this patch without getting an error? I do kind of understand why it's a problem - the patch is changing the pointer of the submodule to a commit which doesn't actually exist. Do I need to build the submodule as a separate recipe and patch it separately maybe? 

I used devtool for the patch, and if I don't run the devtool reset command then everything builds, but I think this is just because the workspace created by devtool was added as a layer, which probably isn't a good long-term solution.

The error I get (pasted below) says I can "enforce with -f" but I'm not sure where that option goes exactly. Thanks for the help! 

Emily

Error on build: 
ERROR: opc-ua-server-gfex-1.0+gitAUTOINC+921c563309-r0 do_patch: Command Error: 'quilt --quiltrc /local/d6/easmith5/rocko_bitbake/poky/build/tmp/work/aarch64-poky-linux/opc-ua-server-gfex/1.0+gitAUTOINC+921c563309-r0/recipe-sysroot-native/etc/quiltrc push' exited with 0  Output:
Applying patch 0001-Update-Poverty-to-point-to-boost-python3.patch
File Poverty is not a regular file -- refusing to patch
1 out of 1 hunk ignored -- rejects in file
Patch 0001-Update-Poverty-to-point-to-boost-python3.patch does not apply (enforce with -f)
ERROR: opc-ua-server-gfex-1.0+gitAUTOINC+921c563309-r0 do_patch: Function failed: patch_do_patch
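A possible workaround, sketched under assumptions: the fetcher's patchdir option applies a patch inside a subdirectory of the source tree, so if the submodule is checked out under "Poverty" (as the log suggests) and the patch is regenerated against the submodule's own tree rather than against the superproject's gitlink, a SRC_URI entry like this could work:

SRC_URI += "file://0001-Update-Poverty-to-point-to-boost-python3.patch;patchdir=Poverty"

That leaves the submodule pointer untouched, which is the "not a regular file" that quilt refuses to patch.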


Re: QA notification for completed autobuilder build (yocto-3.1_M3.rc1)

Sangeeta Jain
 

-----Original Message-----
From: akuster808 <akuster808@gmail.com>
Sent: Friday, 20 March, 2020 12:31 AM
To: Jain, Sangeeta <sangeeta.jain@intel.com>; pokybuild@ubuntu1804-ty-
2.yocto.io; yocto@lists.yoctoproject.org
Cc: otavio@ossystems.com.br; yi.zhao@windriver.com; Sangal, Apoorv
<apoorv.sangal@intel.com>; Yeoh, Ee Peng <ee.peng.yeoh@intel.com>; Chan,
Aaron Chun Yew <aaron.chun.yew.chan@intel.com>;
richard.purdie@linuxfoundation.org; sjolley.yp.pm@gmail.com
Subject: Re: [yocto] QA notification for completed autobuilder build (yocto-
3.1_M3.rc1)



On 3/19/20 6:57 AM, Jain, Sangeeta wrote:
Hello all,

This is the full report for yocto-3.1_M3.rc1:
https://git.yoctoproject.org/cgit/cgit.cgi/yocto-testresults-contrib/tree/?h=intel-yocto-testresults

======= Summary ========
No high milestone defects.
No new defects are found in this cycle.
Valgrind ptest failed (BUG id:13838).

Note: A few failures were observed. These are setup issues caused by running the tests
remotely, not real Yocto issues.
Some automated test results were updated with more precise results from a repeated run.

Were any tests skipped due to lack of physical access to h/w? A quick look at
some of the manual tests does imply physical access.
Yes, we skipped some manual tests which can't be run remotely.

- armin
======= Bugs ========
https://bugzilla.yoctoproject.org/show_bug.cgi?id=13838

Thanks,
Sangeeta

-----Original Message-----
From: yocto@lists.yoctoproject.org <yocto@lists.yoctoproject.org> On
Behalf Of pokybuild@ubuntu1804-ty-2.yocto.io
Sent: Monday, 16 March, 2020 5:40 PM
To: yocto@lists.yoctoproject.org
Cc: otavio@ossystems.com.br; yi.zhao@windriver.com; Sangal, Apoorv
<apoorv.sangal@intel.com>; Yeoh, Ee Peng <ee.peng.yeoh@intel.com>;
Chan, Aaron Chun Yew <aaron.chun.yew.chan@intel.com>;
richard.purdie@linuxfoundation.org; akuster808@gmail.com;
sjolley.yp.pm@gmail.com; Jain, Sangeeta <sangeeta.jain@intel.com>
Subject: [yocto] QA notification for completed autobuilder build
(yocto-
3.1_M3.rc1)


A build flagged for QA (yocto-3.1_M3.rc1) was completed on the
autobuilder and is available at:


https://autobuilder.yocto.io/pub/releases/yocto-3.1_M3.rc1


Build hash information:

bitbake: e67dfa4a4d0d63e4752655f25367582e5a95f1da
meta-gplv2: 60b251c25ba87e946a0ca4cdc8d17b1cb09292ac
meta-intel: 60773e8496370d821309e00f2c312128a130c22b
meta-mingw: 524de686205b5d6736661d4532f5f98fee8589b7
oecore: 61d80b07bcfa4adf5f1feb2904fec0a8d09c89f6
poky: 6f02caa39985fb89d9ad49e1f788a9a8dd6e12d7



This is an automated message from the Yocto Project Autobuilder
Git: git://git.yoctoproject.org/yocto-autobuilder2
Email: richard.purdie@linuxfoundation.org



Re: yocto on tinkerboard using meta-rockchip

Trevor Woerner
 

Hi Karthik,

On Thu, Mar 19, 2020 at 9:36 PM karthik poduval
<karthik.poduval@gmail.com> wrote:
Thank you for your work on the meta-rockchip layer. You were listed as the
maintainer for meta-rockchip, so I thought I would send you a mail about
an issue I was facing.

I was trying to flash an image on an Asus Tinker Board using
meta-rockchip. Here are the steps I followed.

git clone git://git.yoctoproject.org/poky
git clone git://git.yoctoproject.org/meta-rockchip
source poky/oe-init-build-env
bitbake-layers add-layer ../../meta-rockchip/
MACHINE=tinker-board bitbake core-image-minimal

# flashed it to the sdcard using the following command (my sdcard was /dev/sde)
sudo dd if=tmp/deploy/images/tinker-board/core-image-minimal-tinker-board.wic of=/dev/sde

After inserting the sdcard and booting up, I can see on the serial console
that it attempts to boot, gets past the bootloader and proceeds to the Linux
boot, but then hangs around dwmmc_rockchip loading.

Attached is the complete serial log. Your help is greatly appreciated.
According to the log, your board is a "tinker-board-s"; please try
using that for your MACHINE instead of "tinker-board".
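For example, keeping the rest of your steps the same:

MACHINE=tinker-board-s bitbake core-image-minimal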

Thanks and best regards,
Trevor


Re: What are the key factors for yocto build speed?

Khem Raj
 

On Thu, Mar 19, 2020 at 9:07 AM Mike Looijmans <mike.looijmans@topic.nl> wrote:

On 19-03-2020 12:04, Richard Purdie via Lists.Yoctoproject.Org wrote:
, fetch, configure, package and rootfs tasks.
Sadly these tasks are much harder.
It would be really great if some sort of "weight" could be attached to a
task. This relates to memory usage.

My system has 16 cores but only 8GB RAM. With both parallelization
options to "16", I might end up with 16 compile tasks running 16 compile
threads each, i.e. 256 running processes. In practice this doesn't
actually happen, but the memory load gets high sometimes, so I reduce
the TASKS to 8 at most. That has kept my system out of swap trouble for
the time being.

The idea was that tasks get a "weight" in terms of cores they'll use,
and the scheduler takes that into account. So it would run 16
do_configure tasks (weight=1) in parallel, but it would not start a new
task that would push the weight over some number (say 40 for my case).
So it would start a third compile, but not a fourth, but it would start
a do_configure task.

Does that make sense?
Is it something like 'make -l' that you are looking for here?
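For reference, a sketch of what load-based throttling looks like in conf/local.conf; the numbers are illustrative, not recommendations. make's -l flag stops new jobs being spawned while the system load average is above the given value, and BB_NUMBER_THREADS caps the number of concurrent bitbake tasks:

# conf/local.conf: cap bitbake tasks, and let make self-limit on load average
BB_NUMBER_THREADS = "8"
PARALLEL_MAKE = "-j 16 -l 16"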

In builds involving FPGAs I have tasks that take up about 48GB of RAM
(my machine cannot run them) but only a single CPU core. Attempting to
run multiple of these in parallel (happened to me when I changed some
shared recipe content) will bring most machines to their knees.
Currently my only way of handling that is manual intervention...

--
Mike Looijmans


Re: What are the key factors for yocto build speed?

Yann Dirson
 



On Thu, 19 Mar 2020 at 18:04, Richard Purdie <richard.purdie@...> wrote:
On Thu, 2020-03-19 at 17:29 +0100, Yann Dirson wrote:
>
>
> On Thu, 19 Mar 2020 at 17:07, Mike Looijmans <mike.looijmans@...> wrote:
> > On 19-03-2020 12:04, Richard Purdie via Lists.Yoctoproject.Org wrote:
> > >> , fetch, configure, package and rootfs tasks.
> > >
> > > Sadly these tasks are much harder.
> >
> > It would be really great if some sort of "weight" could be attached to a
> > task. This relates to memory usage.
> >
> > My system has 16 cores but only 8GB RAM. With both parallelization
> > options to "16", I might end up with 16 compile tasks running 16 compile
> > threads each, i.e. 256 running processes. In practice this doesn't
> > actually happen, but the memory load gets high sometimes, so I reduce
> > the TASKS to 8 at most. That has kept my system out of swap trouble for
> > the time being.
>
> This could be neatly handled by using the GNU-make job-server mechanism.
> If bitbake itself provided a job-server, all make-based recipes would
> automatically get their jobs properly limited.  There is a (sadly not yet merged)
> MR [1] for ninja to gain job-server support as well, through which we should have
> pretty good coverage of the recipe set (as a backend for cmake, meson, and more).
>
> [1] https://github.com/ninja-build/ninja/issues/1139

You mean like:

http://git.yoctoproject.org/cgit.cgi/poky-contrib/commit/?h=rpurdie/wipqueue4&id=d66a327fb6189db5de8bc489859235dcba306237

? :)


Awesome :)

Sadly we never fixed all the issues to let that merge.

The problems listed in the commit message don't amount to much at first glance. Any other issues
that became visible only later?

--
Yann Dirson <yann@...>
Blade / Shadow -- http://shadow.tech


Re: What are the key factors for yocto build speed?

Richard Purdie
 

On Thu, 2020-03-19 at 19:21 +0200, Adrian Bunk wrote:
On Thu, Mar 19, 2020 at 05:07:17PM +0100, Mike Looijmans wrote:
...
With both parallelization options
to "16", I might end up with 16 compile tasks running 16 compile
threads
each, i.e. 256 running processes.
...
This is a bug:
http://bugzilla.yoctoproject.org/show_bug.cgi?id=13306

I sometimes wonder whether something basic like "no more than one
compile task at a time" would be sufficient in practice to avoid
overloading all cores.
You can test with:

do_compile[number_threads] = "1"

Cheers,

Richard


Re: What are the key factors for yocto build speed?

Adrian Bunk
 

On Thu, Mar 19, 2020 at 05:07:17PM +0100, Mike Looijmans wrote:
...
With both parallelization options
to "16", I might end up with 16 compile tasks running 16 compile threads
each, i.e. 256 running processes.
...
This is a bug:
http://bugzilla.yoctoproject.org/show_bug.cgi?id=13306

I sometimes wonder whether something basic like "no more than one
compile task at a time" would be sufficient in practice to avoid
overloading all cores.

It would also help with RAM usage, there are some combinations of
recipes where the build gets aborted by the oom killer on my laptop
(8 cores, 32 GB RAM) when bitbake runs the compile tasks in parallel.

cu
Adrian


Re: What are the key factors for yocto build speed?

Richard Purdie
 

On Thu, 2020-03-19 at 17:29 +0100, Yann Dirson wrote:


On Thu, 19 Mar 2020 at 17:07, Mike Looijmans <mike.looijmans@topic.nl> wrote:
On 19-03-2020 12:04, Richard Purdie via Lists.Yoctoproject.Org wrote:
, fetch, configure, package and rootfs tasks.
Sadly these tasks are much harder.
It would be really great if some sort of "weight" could be attached to a
task. This relates to memory usage.

My system has 16 cores but only 8GB RAM. With both parallelization
options to "16", I might end up with 16 compile tasks running 16 compile
threads each, i.e. 256 running processes. In practice this doesn't
actually happen, but the memory load gets high sometimes, so I reduce
the TASKS to 8 at most. That has kept my system out of swap trouble for
the time being.
This could be neatly handled by using the GNU-make job-server mechanism.
If bitbake itself provided a job-server, all make-based recipes would
automatically get their jobs properly limited. There is a (sadly not yet merged)
MR [1] for ninja to gain job-server support as well, through which we should have
pretty good coverage of the recipe set (as a backend for cmake, meson, and more).

[1] https://github.com/ninja-build/ninja/issues/1139
You mean like:

http://git.yoctoproject.org/cgit.cgi/poky-contrib/commit/?h=rpurdie/wipqueue4&id=d66a327fb6189db5de8bc489859235dcba306237

? :)

Sadly we never fixed all the issues to let that merge.

Cheers,

Richard


Re: QA notification for completed autobuilder build (yocto-3.1_M3.rc1)

Armin Kuster
 

On 3/19/20 6:57 AM, Jain, Sangeeta wrote:
Hello all,

This is the full report for yocto-3.1_M3.rc1:
https://git.yoctoproject.org/cgit/cgit.cgi/yocto-testresults-contrib/tree/?h=intel-yocto-testresults

======= Summary ========
No high milestone defects.
No new defects are found in this cycle.
Valgrind ptest failed (BUG id:13838).

Note: A few failures were observed. These are setup issues caused by running the tests remotely, not real Yocto issues.
Were any tests skipped due to lack of physical access to h/w? A quick
look at some of the manual tests does imply physical access.

- armin
======= Bugs ========
https://bugzilla.yoctoproject.org/show_bug.cgi?id=13838

Thanks,
Sangeeta

-----Original Message-----
From: yocto@lists.yoctoproject.org <yocto@lists.yoctoproject.org> On Behalf
Of pokybuild@ubuntu1804-ty-2.yocto.io
Sent: Monday, 16 March, 2020 5:40 PM
To: yocto@lists.yoctoproject.org
Cc: otavio@ossystems.com.br; yi.zhao@windriver.com; Sangal, Apoorv
<apoorv.sangal@intel.com>; Yeoh, Ee Peng <ee.peng.yeoh@intel.com>; Chan,
Aaron Chun Yew <aaron.chun.yew.chan@intel.com>;
richard.purdie@linuxfoundation.org; akuster808@gmail.com;
sjolley.yp.pm@gmail.com; Jain, Sangeeta <sangeeta.jain@intel.com>
Subject: [yocto] QA notification for completed autobuilder build (yocto-
3.1_M3.rc1)


A build flagged for QA (yocto-3.1_M3.rc1) was completed on the autobuilder
and is available at:


https://autobuilder.yocto.io/pub/releases/yocto-3.1_M3.rc1


Build hash information:

bitbake: e67dfa4a4d0d63e4752655f25367582e5a95f1da
meta-gplv2: 60b251c25ba87e946a0ca4cdc8d17b1cb09292ac
meta-intel: 60773e8496370d821309e00f2c312128a130c22b
meta-mingw: 524de686205b5d6736661d4532f5f98fee8589b7
oecore: 61d80b07bcfa4adf5f1feb2904fec0a8d09c89f6
poky: 6f02caa39985fb89d9ad49e1f788a9a8dd6e12d7



This is an automated message from the Yocto Project Autobuilder
Git: git://git.yoctoproject.org/yocto-autobuilder2
Email: richard.purdie@linuxfoundation.org



Re: What are the key factors for yocto build speed?

Yann Dirson
 



On Thu, 19 Mar 2020 at 17:07, Mike Looijmans <mike.looijmans@...> wrote:
On 19-03-2020 12:04, Richard Purdie via Lists.Yoctoproject.Org wrote:
>> , fetch, configure, package and rootfs tasks.
>
> Sadly these tasks are much harder.

It would be really great if some sort of "weight" could be attached to a
task. This relates to memory usage.

My system has 16 cores but only 8GB RAM. With both parallelization
options to "16", I might end up with 16 compile tasks running 16 compile
threads each, i.e. 256 running processes. In practice this doesn't
actually happen, but the memory load gets high sometimes, so I reduce
the TASKS to 8 at most. That has kept my system out of swap trouble for
the time being.

This could be neatly handled by using the GNU-make job-server mechanism.
If bitbake itself provided a job-server, all make-based recipes would
automatically get their jobs properly limited.  There is a (sadly not yet merged)
MR [1] for ninja to gain job-server support as well, through which we should have
pretty good coverage of the recipe set (as a backend for cmake, meson, and more).

[1] https://github.com/ninja-build/ninja/issues/1139
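As a minimal sketch of the jobserver mechanism itself (hypothetical Makefile): a single top-level 'make -j16' creates one token pool, and the '+' prefix passes the jobserver file descriptors down to sub-makes, so total parallelism across the whole tree stays bounded at 16:

# Makefile: sub-makes share the parent's jobserver token pool
all:
	+$(MAKE) -C liba
	+$(MAKE) -C libb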


--
Yann Dirson <yann@...>
Blade / Shadow -- http://shadow.tech


Re: What are the key factors for yocto build speed?

Mike Looijmans
 

On 19-03-2020 12:04, Richard Purdie via Lists.Yoctoproject.Org wrote:
, fetch, configure, package and rootfs tasks.
Sadly these tasks are much harder.
It would be really great if some sort of "weight" could be attached to a task. This relates to memory usage.

My system has 16 cores but only 8GB RAM. With both parallelization options to "16", I might end up with 16 compile tasks running 16 compile threads each, i.e. 256 running processes. In practice this doesn't actually happen, but the memory load gets high sometimes, so I reduce the TASKS to 8 at most. That has kept my system out of swap trouble for the time being.

The idea was that tasks get a "weight" in terms of cores they'll use, and the scheduler takes that into account. So it would run 16 do_configure tasks (weight=1) in parallel, but it would not start a new task that would push the weight over some number (say 40 for my case). So it would start a third compile, but not a fourth, but it would start a do_configure task.

Does that make sense?

In builds involving FPGAs I have tasks that take up about 48GB of RAM (my machine cannot run them) but only a single CPU core. Attempting to run multiple of these in parallel (happened to me when I changed some shared recipe content) will bring most machines to their knees. Currently my only way of handling that is manual intervention...

--
Mike Looijmans


Re: QA notification for completed autobuilder build (yocto-3.1_M3.rc1)

Sangeeta Jain
 

Hello all,

This is the full report for yocto-3.1_M3.rc1:
https://git.yoctoproject.org/cgit/cgit.cgi/yocto-testresults-contrib/tree/?h=intel-yocto-testresults

======= Summary ========
No high milestone defects.
No new defects are found in this cycle.
Valgrind ptest failed (BUG id:13838).

Note: A few failures were observed. These are setup issues caused by running the tests remotely, not real Yocto issues.
======= Bugs ========
https://bugzilla.yoctoproject.org/show_bug.cgi?id=13838

Thanks,
Sangeeta

-----Original Message-----
From: yocto@lists.yoctoproject.org <yocto@lists.yoctoproject.org> On Behalf
Of pokybuild@ubuntu1804-ty-2.yocto.io
Sent: Monday, 16 March, 2020 5:40 PM
To: yocto@lists.yoctoproject.org
Cc: otavio@ossystems.com.br; yi.zhao@windriver.com; Sangal, Apoorv
<apoorv.sangal@intel.com>; Yeoh, Ee Peng <ee.peng.yeoh@intel.com>; Chan,
Aaron Chun Yew <aaron.chun.yew.chan@intel.com>;
richard.purdie@linuxfoundation.org; akuster808@gmail.com;
sjolley.yp.pm@gmail.com; Jain, Sangeeta <sangeeta.jain@intel.com>
Subject: [yocto] QA notification for completed autobuilder build (yocto-
3.1_M3.rc1)


A build flagged for QA (yocto-3.1_M3.rc1) was completed on the autobuilder
and is available at:


https://autobuilder.yocto.io/pub/releases/yocto-3.1_M3.rc1


Build hash information:

bitbake: e67dfa4a4d0d63e4752655f25367582e5a95f1da
meta-gplv2: 60b251c25ba87e946a0ca4cdc8d17b1cb09292ac
meta-intel: 60773e8496370d821309e00f2c312128a130c22b
meta-mingw: 524de686205b5d6736661d4532f5f98fee8589b7
oecore: 61d80b07bcfa4adf5f1feb2904fec0a8d09c89f6
poky: 6f02caa39985fb89d9ad49e1f788a9a8dd6e12d7



This is an automated message from the Yocto Project Autobuilder
Git: git://git.yoctoproject.org/yocto-autobuilder2
Email: richard.purdie@linuxfoundation.org



Re: Issue while adding the support for TLS1.3 in existing krogoth yocto #yocto #apt #raspberrypi

amaya jindal
 

Please, can anybody guide me and suggest the reason for this issue?

Sent from my Huawei phone


-------- Original message --------
From: "amaya jindal via Lists.Yoctoproject.Org" <amayajindal786=gmail.com@...>
Date: Wed, 18 Mar 2020, 4:30 pm
To: "Khem Raj via Lists.Yoctoproject.Org" <raj.khem=gmail.com@...>, yocto@...
Cc: yocto@...
Subject: Re: [yocto] Issue while adding the support for TLS1.3 in existing krogoth yocto #yocto #apt #raspberrypi
Hi All,

While I tried to add an OpenSSH 7.8p1 recipe in krogoth Yocto, to add support for OpenSSL 1.1.1b, everything compiled successfully, but now I am getting an issue when I test it on the board: it gets restarted every time. Please suggest.

Sent from my Huawei phone


-------- Original message --------
From: amaya jindal <amayajindal786@...>
Date: Wed, 19 Feb 2020, 1:09 pm
To: Mikko.Rapeli@...
Cc: alex.kanavin@..., yocto@...
Subject: Re: [yocto] Issue while adding the support for TLS1.3 in existing krogoth yocto #yocto #apt #raspberrypi
Is any kind of patch available to apply directly? I am getting an error in gobject-introspection-native that sha1, sha256, etc. are not found in usr/lib/python2.7/hashlib.py.

Sent from my Huawei phone


-------- Original message --------
From: Mikko.Rapeli@...
Date: Tue, 18 Feb 2020, 2:36 pm
To: amayajindal786@...
Cc: alex.kanavin@..., yocto@...
Subject: Re: [yocto] Issue while adding the support for TLS1.3 in existing krogoth yocto #yocto #apt #raspberrypi
Hi,

On Tue, Feb 18, 2020 at 01:20:25PM +0530, amaya jindal wrote:
>    Thanks for your prompt reply. But isn't there any similar way to add
>    support for TLS1.3 instead of moving to new yocto releases?

openssl is tricky to update and requires backporting fixes for many, many recipes
to get builds passing etc. Depending on project size, it may be possible
to update only those components which you use, e.g. backport commits from
poky master or release branches like warrior. The number of backported changes
will be large. I've ported openssl 1.1.1d patches to yocto 2.5 sumo but it wasn't
pretty. A strategy with regular yocto updates is much better and forces you
to think of your dependencies and patches much harder.

Hope this helps,

-Mikko


Re: Private: Re: [yocto] Excluding kernel configuration fragment

Bruce Ashfield
 

On Thu, Mar 19, 2020 at 7:49 AM Fred Baksik <fdk17@ftml.net> wrote:

Hello,

It doesn't look like these last few emails made it to the mailing list.
I don't mind creating my own BSP but I thought it might be easier to tweak an existing one.
I've fixed the bug now, and am implementing something easier for
master, so in the future .. it will be easy to remove that sort of
warning.

Cheers,

Bruce


Thanks for your help.

On Wed, Mar 18, 2020, at 6:06 PM, Bruce Ashfield wrote:

Hi Fred,

So I dug into this today, and as I suspected, there's currently not a
great way to inhibit the warning easily (it's broken).

I'm going to re-work some things and fix this in master .. it's
interesting that no one else has asked about this until now.

If you don't want to define your own BSP, my suggestion is to just
inhibit the warning with the KCONF_AUDIT_LEVEL and
KCONF_BSP_AUDIT_LEVEL variables. You know what you are doing with
those warnings, so they can be safely masked.
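A minimal sketch of that masking in a kernel bbappend (the file name and the value "0" are assumptions to verify against kernel-yocto.bbclass):

# recipes-kernel/linux/linux-yocto_%.bbappend
KCONF_AUDIT_LEVEL = "0"
KCONF_BSP_AUDIT_LEVEL = "0"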

Bruce

On Tue, Mar 17, 2020 at 10:22 PM Bruce Ashfield
<bruce.ashfield@gmail.com> wrote:

Thanks Fred,

Let me fire up a build with this tomorrow and I'll follow up with the
best thing to do.

Bruce

On Tue, Mar 17, 2020 at 9:07 PM Fred Baksik <fdk17@ftml.net> wrote:

What you are doing is the right way to do things, unless you modify
the source fragment directly.

I used 'bitbake linux-intel -c menuconfig' and 'bitbake linux-intel -c diffconfig'. This generated a fragment that contained the single line:

# CONFIG_SOUND is not set

I added this fragment to my "recipes-kernel/linux/linux-intel_%.bbappend".
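As a sketch, assuming the fragment was saved as disable-sound.cfg next to the bbappend (warrior-era override syntax):

# recipes-kernel/linux/linux-intel_%.bbappend
FILESEXTRAPATHS_prepend := "${THISDIR}/files:"
SRC_URI += "file://disable-sound.cfg"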

Can you send me the exact layers/branches you are using, and I'll
confirm the right thing to do with a local build.

I'm using warrior 2.7.3 and MACHINE=intel-corei7-64
poky
meta-intel
meta-openembedded

It was easy enough to remove items, like alsa, from DISTRO_FEATURES to remove the features I wouldn't need.
I had wanted to remove the items, like audio support, from an existing machine instead of creating one from scratch.

Thanks,
Fred


--
- Thou shalt not follow the NULL pointer, for chaos and madness await
thee at its end
- "Use the force Harry" - Gandalf, Star Trek II


Re: Private: Re: [yocto] Excluding kernel configuration fragment

Fred Baksik
 

Hello,

It doesn't look like these last few emails made it to the mailing list.
I don't mind creating my own BSP but I thought it might be easier to tweak an existing one.

Thanks for your help.

On Wed, Mar 18, 2020, at 6:06 PM, Bruce Ashfield wrote:
Hi Fred,

So I dug into this today, and as I suspected, there's currently not a
great way to inhibit the warning easily (it's broken).

I'm going to re-work some things and fix this in master .. it's
interesting that no one else has asked about this until now.

If you don't want to define your own BSP, my suggestion is to just
inhibit the warning with the KCONF_AUDIT_LEVEL and
KCONF_BSP_AUDIT_LEVEL variables. You know what you are doing with
those warnings, so they can be safely masked.

Bruce

On Tue, Mar 17, 2020 at 10:22 PM Bruce Ashfield
<bruce.ashfield@...> wrote:
>
> Thanks Fred,
>
> Let me fire up a build with this tomorrow and I'll follow up with the
> best thing to do.
>
> Bruce
>
> On Tue, Mar 17, 2020 at 9:07 PM Fred Baksik <fdk17@...> wrote:
> >
> > What you are doing is the right way to do things, unless you modify
> > the source fragment directly.
> >
> > I used 'bitbake linux-intel -c menuconfig' and 'bitbake linux-intel -c diffconfig'.  This generated a fragment that contained the single line:
> >
> > # CONFIG_SOUND is not set
> >
> > I added this fragment to my "recipes-kernel/linux/linux-intel_%.bbappend".
> >
> > Can you send me the exact layers/branches you are using, and I'll
> > confirm the right thing to do with a local build.
> >
> > I'm using warrior 2.7.3 and MACHINE=intel-corei7-64
> > poky
> > meta-intel
> > meta-openembedded
> >
> > It was easy enough to remove items, like alsa, from DISTRO_FEATURES to remove the features I wouldn't need.
> > I had wanted to remove the items, like audio support, from an existing machine instead of creating one from scratch.
> >
> > Thanks,
> > Fred




Re: What are the key factors for yocto build speed?

Richard Purdie
 

On Thu, 2020-03-19 at 11:43 +0000, Mikko.Rapeli@bmw.de wrote:
On Thu, Mar 19, 2020 at 11:04:26AM +0000, Richard Purdie wrote:
Recipe parsing should hit 100% CPU, its one of the few places we
can do
that.
I'm not fully aware what bitbake does before starting task execution.
With sumo, there is an initial spike in CPU use and then a long
single thread wait where log shows "Initialising tasks..." and Cooker
process is using a single core. For me this takes at least one
minute for every build. The same is visible with zeus too.
This isn't recipe parsing but runqueue setup and taskgraph calculation
which happens after parsing but before task execution. More recent
bitbake is probably a bit better at it but it is unfortunately a
single threaded process :(

Cheers,

Richard


Re: What are the key factors for yocto build speed?

Mikko Rapeli
 

On Thu, Mar 19, 2020 at 11:04:26AM +0000, Richard Purdie wrote:
On Thu, 2020-03-19 at 08:05 +0000, Mikko Rapeli wrote:
Once this is done, IO still happens when anything calls sync() and
fsync() and worst offenders are package management tools. In yocto
builds, package manager actions to flush to disk are always useless
since rootfs images are going to be compressed and original ones
wiped by rm_work anyway.
I've tried to hook eatmydata library into the build which makes
sync() and fsync() calls no-ops but I've still failed to fix all the
tools and processes called during build from python code. For shell
based tasks this does it:

$ export LD_LIBRARY_PATH=/usr/lib/libeatmydata
$ export LD_PRELOAD=libeatmydata.so
$ grep -rn LD_PRELOAD conf/local.conf
conf/local.conf:305:BB_HASHBASE_WHITELIST_append = " LD_PRELOAD"
conf/local.conf:306:BB_HASHCONFIG_WHITELIST_append = " LD_PRELOAD"
Doesn't pseudo intercept and stop these sync calls already? It's
supposed to, so if it's not, we should fix that.
I will double check, but I'm sure I see IO going to disk when plenty of RAM
is still available in page cache.

The effect is clearly visible during build time using Performance Co-
Pilot (pcp) or similar tools to monitor CPU, memory, IO and network
IO. The usage of RAM as page cache grows until limits are hit and
only then writes to disk start, except for the python image
classes... Hints to fix this are welcome!

From monitoring our builds, there is a lot of optimization
potential to improve build times. CPUs are underutilized during
bitbake recipe parsing
Recipe parsing should hit 100% CPU, its one of the few places we can do
that.
I'm not fully aware what bitbake does before starting task execution.
With sumo, there is an initial spike in CPU use and then a long single
thread wait where log shows "Initialising tasks..." and Cooker process
is using a single core. For me this takes at least one minute for
every build. The same is visible with zeus too.

Example graph from pmchart:

https://mcfrisk.kapsi.fi/temp/bitbake_start_to_task_execution.png

, fetch, configure, package and rootfs tasks.
Sadly these tasks are much harder.
Yep.

Memory is not fully utilized either since IO through sync()/fsync()
happens everywhere
non-pseudo tasks?
I'll try to check this case once more.

-Mikko


Re: What are the key factors for yocto build speed?

Richard Purdie
 

On Thu, 2020-03-19 at 08:05 +0000, Mikko Rapeli wrote:
Once this is done, IO still happens when anything calls sync() and
fsync() and worst offenders are package management tools. In yocto
builds, package manager actions to flush to disk are always useless
since rootfs images are going to be compressed and original ones
wiped by rm_work anyway.
I've tried to hook eatmydata library into the build which makes
sync() and fsync() calls no-ops but I've still failed to fix all the
tools and processes called during build from python code. For shell
based tasks this does it:

$ export LD_LIBRARY_PATH=/usr/lib/libeatmydata
$ export LD_PRELOAD=libeatmydata.so
$ grep -rn LD_PRELOAD conf/local.conf
conf/local.conf:305:BB_HASHBASE_WHITELIST_append = " LD_PRELOAD"
conf/local.conf:306:BB_HASHCONFIG_WHITELIST_append = " LD_PRELOAD"
Doesn't pseudo intercept and stop these sync calls already? It's
supposed to, so if it's not, we should fix that.

The effect is clearly visible during build time using Performance Co-
Pilot (pcp) or similar tools to monitor CPU, memory, IO and network
IO. The usage of RAM as page cache grows until limits are hit and
only then writes to disk start, except for the python image
classes... Hints to fix this are welcome!

From monitoring our builds, there is a lot of optimization
potential to improve build times. CPUs are underutilized during
bitbake recipe parsing
Recipe parsing should hit 100% CPU, its one of the few places we can do
that.

, fetch, configure, package and rootfs tasks.
Sadly these tasks are much harder.

Memory is not fully utilized either since IO through sync()/fsync()
happens everywhere
non-pseudo tasks?

Cheers,

Richard


Re: What are the key factors for yocto build speed?

Mikko Rapeli
 

On Wed, Mar 18, 2020 at 10:56:50PM +0000, Ross Burton wrote:
On 18/03/2020 14:09, Mike Looijmans wrote:
Harddisk speed has very little impact on your build time. It helps with
the "setscene" parts, but doesn't affect actual compile time at all. I
recall someone did a build from RAM disks only on a rig, and it was only
about 1 minute faster on a one hour build compared to rotating disks.
My build machine has lots of RAM and I do builds in a 32GB tmpfs with
rm_work (and no, I don't build webkit, which would make this impractical).

As you say, with sufficient RAM the build speed is practically the same as
on disks due to the caching (especially if you tune the mount options), so
I'd definitely spend money on more RAM instead of super-fast disks. I just
prefer doing tmpfs builds because it saves my spinning rust. :)
An alternative to a tmpfs with a hard size limit is to keep file system caches in
memory as long as possible and only start writing to disk when the page cache gets
too full. This scales, but still uses all the RAM available. Here's how to do this:

$ cat /etc/sysctl.d/99-build_server_fs_ops_to_memory.conf
# fs cache can use 90% of memory before system starts io to disk,
# keep as much as possible in RAM
vm.dirty_background_bytes = 0
vm.dirty_background_ratio = 90
# keep stuff for 12h in memory before writing to disk,
# allows reusing data as much as possible between builds
vm.dirty_expire_centisecs = 4320000
vm.dirtytime_expire_seconds = 432000
# allow single process to use 60% of system RAM for file caches, e.g. image build
vm.dirty_bytes = 0
vm.dirty_ratio = 60
# disable periodic background writes, only write when running out of RAM
vm.dirty_writeback_centisecs = 0

Once this is done, IO still happens when anything calls sync() and fsync()
and worst offenders are package management tools. In yocto builds, package
manager actions to flush to disk are always useless since rootfs images
are going to be compressed and original ones wiped by rm_work anyway.
I've tried to hook eatmydata library into the build which makes sync() and fsync()
calls no-ops but I've still failed to fix all the tools and processes called
during build from python code. For shell based tasks this does it:

$ export LD_LIBRARY_PATH=/usr/lib/libeatmydata
$ export LD_PRELOAD=libeatmydata.so
$ grep -rn LD_PRELOAD conf/local.conf
conf/local.conf:305:BB_HASHBASE_WHITELIST_append = " LD_PRELOAD"
conf/local.conf:306:BB_HASHCONFIG_WHITELIST_append = " LD_PRELOAD"

The effect is clearly visible during build time using Performance Co-Pilot (pcp)
or similar tools to monitor CPU, memory, IO and network IO. The usage of RAM
as page cache grows until limits are hit and only then writes to disk
start, except for the python image classes... Hints to fix this are welcome!

From monitoring our builds, there is a lot of optimization
potential to improve build times. CPUs are underutilized during bitbake recipe
parsing, fetch, configure, package and rootfs tasks. Memory is not fully utilized
either, since IO through sync()/fsync() happens everywhere, and due to background
writes by default on ext4 etc. file systems. Only do_compile() tasks saturate
all CPUs (and, when linking lots of C++, all of RAM). Then dependencies between
various recipes and tasks leave large gaps in CPU utilization too.

-Mikko


could not invoke dnf. command. Transaction failed #yocto #systemd

Amrun Nisha.R
 

Hi all, 

While trying to build core-image-base, I'm facing the error "Could not invoke dnf. Command ...". Are there any solutions for this?

Log file error:

ERROR: Could not invoke dnf. Command '/home/titan/Documents/core-image-baseline/build_wayland/tmp/work/imx8mq_var_dart-poky-linux/core-image-base/1.0-r0/recipe-sysroot-native/usr/bin/dnf -y -c /home/titan/Documents/core-image-baseline/build_wayland/tmp/work/imx8mq_var_dart-poky-linux/core-image-base/1.0-r0/rootfs/etc/dnf/dnf.conf

Failed:
  hostapd.aarch64 2.6-r0                                                        
 
Error: Transaction failed
 
DEBUG: Python function do_rootfs finished
ERROR: Function failed: do_rootfs
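When this happens, the complete dnf command line and the underlying package failure are captured in the task log, e.g. (path derived from the error above):

$ less tmp/work/imx8mq_var_dart-poky-linux/core-image-base/1.0-r0/temp/log.do_rootfs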

