
Re: Patching submodules

Emily
 

Hi Nicolas -

The recipe is already entirely in a custom layer; I just don’t have full control over the source. So I don’t think I need a .bbappend, since I can put the patch in the main recipe file.

Thanks,
Emily

On Mar 20, 2020, at 10:06 AM, Nicolas Jeker <n.jeker@...> wrote:

On Thu, 2020-03-19 at 23:10 -0500, Emily wrote:
Hi all -

I have a recipe that I'd like to patch - the source is in a repo
which has a submodule, and the patch occurs in the submodule. Is
there a way I can apply this patch without getting an error? I do
kind of understand why it's a problem - the patch is changing the
pointer of the submodule to a commit which doesn't actually exist. Do
I need to build the submodule as a separate recipe and patch it
separately maybe?
Is there a reason why you don't use a bbappend file with your patch in
it in a custom layer?

Something like this:

package_ver.bbappend
--------------------
FILESEXTRAPATHS_prepend := "${THISDIR}/${PN}:"
SRC_URI += "file://abc.patch"


With this directory structure:

meta-custom-layer
├── package_ver.bbappend
└── package
    └── abc.patch

Replace "package" and "ver" with the correct values (if you don't want
to set the version you can use "%" as a wildcard).
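For instance, for the recipe named in the error log below, a version-independent append could look like this (a sketch; the patch file name is hypothetical):

```
opc-ua-server-gfex_%.bbappend
-----------------------------
FILESEXTRAPATHS_prepend := "${THISDIR}/${PN}:"
SRC_URI += "file://abc.patch"
```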

Maybe I missed something about your submodule situation and my advice
is completely wrong, if so, just disregard it.

I used devtool for the patch and if I don't run the devtool reset
command, then everything builds, but I think this is just because the
workspace created by devtool was added as a layer, which probably
isn't a good long term solution.
You should be able to get the above structure by using 'devtool finish
recipe meta-custom-layer'. If that doesn't work you can do it manually
as described above.
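For reference, the full devtool round-trip is roughly the following ("recipe" and the layer path are placeholders, so treat this as a sketch only:

```shell
# Check out the recipe's source into a temporary devtool workspace layer
devtool modify recipe
# ...edit the source and commit the changes with git...
# Convert the commits into patch files plus a bbappend in the target
# layer, then clean up the workspace
devtool finish recipe ../meta-custom-layer
```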

The error I get (pasted below) says I can "enforce with -f" but I'm
not sure where that option goes exactly. Thanks for the help!

Emily

Error on build:
ERROR: opc-ua-server-gfex-1.0+gitAUTOINC+921c563309-r0 do_patch:
Command Error: 'quilt --quiltrc
/local/d6/easmith5/rocko_bitbake/poky/build/tmp/work/aarch64-poky-
linux/opc-ua-server-gfex/1.0+gitAUTOINC+921c563309-r0/recipe-sysroot-
native/etc/quiltrc push' exited with 0 Output:
Applying patch 0001-Update-Poverty-to-point-to-boost-python3.patch
File Poverty is not a regular file -- refusing to patch
1 out of 1 hunk ignored -- rejects in file
Patch 0001-Update-Poverty-to-point-to-boost-python3.patch does not
apply (enforce with -f)
ERROR: opc-ua-server-gfex-1.0+gitAUTOINC+921c563309-r0 do_patch:
Function failed: patch_do_patch
I don't know why this error occurs, maybe someone else knows more.


Re: Patching submodules

Yann Dirson
 

Hi Emily,

I'm not sure how the patch is generated, and (not using devtool myself) I may have understood your problem wrongly
(showing the relevant part of your diff could help), but you could try to generate it yourself with
"git show --submodule=diff"; that could be more palatable to quilt.


Le ven. 20 mars 2020 à 16:09, Paul Barker <pbarker@...> a écrit :
On Fri, 20 Mar 2020 at 04:10, Emily <easmith5555@...> wrote:
>
> Hi all -
>
> I have a recipe that I'd like to patch - the source is in a repo which has a submodule, and the patch occurs in the submodule. Is there a way I can apply this patch without getting an error? I do kind of understand why it's a problem - the patch is changing the pointer of the submodule to a commit which doesn't actually exist. Do I need to build the submodule as a separate recipe and patch it separately maybe?
>
> I used devtool for the patch and if I don't run the devtool reset command, then everything builds, but I think this is just because the workspace created by devtool was added as a layer, which probably isn't a good long term solution.
>
> The error I get (pasted below) says I can "enforce with -f" but I'm not sure where that option goes exactly. Thanks for the help!
>
> Emily
>
> Error on build:
> ERROR: opc-ua-server-gfex-1.0+gitAUTOINC+921c563309-r0 do_patch: Command Error: 'quilt --quiltrc /local/d6/easmith5/rocko_bitbake/poky/build/tmp/work/aarch64-poky-linux/opc-ua-server-gfex/1.0+gitAUTOINC+921c563309-r0/recipe-sysroot-native/etc/quiltrc push' exited with 0  Output:
> Applying patch 0001-Update-Poverty-to-point-to-boost-python3.patch
> File Poverty is not a regular file -- refusing to patch
> 1 out of 1 hunk ignored -- rejects in file
> Patch 0001-Update-Poverty-to-point-to-boost-python3.patch does not apply (enforce with -f)
> ERROR: opc-ua-server-gfex-1.0+gitAUTOINC+921c563309-r0 do_patch: Function failed: patch_do_patch

The issue appears to be that patches are applied using quilt which
doesn't understand a patch like this. I don't know of a good solution
to this other than making a new commit in the top level repository and
updating SRCREV.
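For context, a patch that moves a submodule pointer contains only a "gitlink" entry (mode 160000) rather than regular file hunks; Emily's patch presumably looks something like this (hashes invented for illustration), which is why quilt refuses with "File Poverty is not a regular file":

```diff
diff --git a/Poverty b/Poverty
index 1111111..2222222 160000
--- a/Poverty
+++ b/Poverty
@@ -1 +1 @@
-Subproject commit 1111111111111111111111111111111111111111
+Subproject commit 2222222222222222222222222222222222222222
```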

Perhaps it's better to carry the diff within the submodule as a patch
- so you leave the submodule commit pointer where it is and instead
include all the necessary changes to the submodule in the patch. Would
that work for you?
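One way to do that is the "patchdir" option of SRC_URI patch entries, which applies a patch inside a subdirectory of the unpacked source. A sketch, assuming the submodule is checked out at Poverty/ and the patch was generated against files inside the submodule (the patch name is hypothetical):

```
SRC_URI += "file://0001-use-boost-python3.patch;patchdir=Poverty"
```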



--
Yann Dirson <yann@...>
Blade / Shadow -- http://shadow.tech


Re: yocto on tinkerboard using meta-rockchip

Philip Balister
 

On 3/20/20 3:10 AM, Yann Dirson wrote:
Wow, I'm surprised to discover this meta-rockchip :)

Rockchip engineers also publish a meta-rockchip on Github (
https://github.com/rockchip-linux/meta-rockchip), although
nowadays it's mainly Jeffy Chen publishing on
https://github.com/JeffyCN/meta-rockchip/

Those two repos seem quite complementary, Jeffy being focussed on kernel
4.4, and Trevor on mainline.
Wouldn't it make sense to merge all this work in a single place?
Yes :) But you've observed the key issue: Trevor is correctly focused on the
upstream kernel, and the vendor layer on an old vendor kernel. Vendor
kernels rapidly turn into maintenance nightmares over a product's lifetime.

Philip


Le ven. 20 mars 2020 à 02:59, Trevor Woerner <twoerner@...> a écrit :

Hi Karthik,

On Thu, Mar 19, 2020 at 9:36 PM karthik poduval
<karthik.poduval@...> wrote:
Thank you for your work on the meta-rockchip layer. You were listed as the
maintainer for meta-rockchip, so I thought I would send you a mail about
an issue I was facing.

I was trying to flash an image on an Asus Tinker Board using
meta-rockchip. Here are the steps I followed.

git clone git://git.yoctoproject.org/poky
git clone git://git.yoctoproject.org/meta-rockchip
source poky/oe-init-build-env
bitbake-layers add-layer ../../meta-rockchip/
MACHINE=tinker-board bitbake core-image-minimal

#flashed it to the sdcard using the following command (my sdcard was /dev/sde)
sudo dd
if=tmp/deploy/images/tinker-board/core-image-minimal-tinker-board.wic
of=/dev/sde

after inserting sdcard and booting up I can see on the serial console
it attempts to boot, crosses bootloader and proceeds to linux boot but
then gets hung up around dwmmc_rockchip loading.

Attached the complete serial log. Your help is greatly appreciated.
According to the log, your board is a "tinker-board-s", please try
using that for your MACHINE instead of "tinker-board".

Thanks and best regards,
Trevor





Re: What are the key factors for yocto build speed?

Mike Looijmans
 

On 19-03-2020 18:21, Adrian Bunk wrote:
On Thu, Mar 19, 2020 at 05:07:17PM +0100, Mike Looijmans wrote:
...
With both parallelization options
to "16", I might end up with 16 compile tasks running 16 compile threads
each, i.e. 256 running processes.
...
This is a bug:
http://bugzilla.yoctoproject.org/show_bug.cgi?id=13306
I sometimes wonder whether something basic like "no more than one
compile task at a time" would be sufficient in practice to avoid
overloading all cores.
It would also help with RAM usage, there are some combinations of
recipes where the build gets aborted by the oom killer on my laptop
(8 cores, 32 GB RAM) when bitbake runs the compile tasks in parallel.

I tried compiling octave on an ARM board with 1 GB of RAM (because octave is virtually impossible to cross-compile). There was one C++ file in there that triggered a memory load of about 1 GB, so compilation couldn't run fully in RAM. I had to create an extra 1 GB of swap on the SD card to get the build to succeed. The actual swap usage wasn't much, and I could even finish the build with two threads (a dual-core ARM, yay).

So memory usage depends on what's in the C++ file. Heavy template programming in particular can put a huge load on memory. There's no way to predict that beforehand.


--
Mike Looijmans


Re: Patching submodules

Emily
 

Hi Paul -

I’m not sure what you mean by “include all the necessary changes to the submodule in the patch”, because any time I change something in the submodule, the git diff for the main repo just shows a change to the submodule as a whole, not to a specific file inside the submodule.

I don’t have complete control over the source, but maybe I’ll see if I can make a change to the submodule itself; that seems to be the easiest option.

Thanks,
Emily

On Mar 20, 2020, at 6:18 AM, Paul Barker <pbarker@...> wrote:

On Fri, 20 Mar 2020 at 04:10, Emily <easmith5555@...> wrote:

Hi all -

I have a recipe that I'd like to patch - the source is in a repo which has a submodule, and the patch occurs in the submodule. Is there a way I can apply this patch without getting an error? I do kind of understand why it's a problem - the patch is changing the pointer of the submodule to a commit which doesn't actually exist. Do I need to build the submodule as a separate recipe and patch it separately maybe?

I used devtool for the patch and if I don't run the devtool reset command, then everything builds, but I think this is just because the workspace created by devtool was added as a layer, which probably isn't a good long term solution.

The error I get (pasted below) says I can "enforce with -f" but I'm not sure where that option goes exactly. Thanks for the help!

Emily

Error on build:
ERROR: opc-ua-server-gfex-1.0+gitAUTOINC+921c563309-r0 do_patch: Command Error: 'quilt --quiltrc /local/d6/easmith5/rocko_bitbake/poky/build/tmp/work/aarch64-poky-linux/opc-ua-server-gfex/1.0+gitAUTOINC+921c563309-r0/recipe-sysroot-native/etc/quiltrc push' exited with 0 Output:
Applying patch 0001-Update-Poverty-to-point-to-boost-python3.patch
File Poverty is not a regular file -- refusing to patch
1 out of 1 hunk ignored -- rejects in file
Patch 0001-Update-Poverty-to-point-to-boost-python3.patch does not apply (enforce with -f)
ERROR: opc-ua-server-gfex-1.0+gitAUTOINC+921c563309-r0 do_patch: Function failed: patch_do_patch
The issue appears to be that patches are applied using quilt which
doesn't understand a patch like this. I don't know of a good solution
to this other than making a new commit in the top level repository and
updating SRCREV.

Perhaps it's better to carry the diff within the submodule as a patch
- so you leave the submodule commit pointer where it is and instead
include all the necessary changes to the submodule in the patch. Would
that work for you?


Re: Patching submodules

Paul Barker
 

On Fri, 20 Mar 2020 at 04:10, Emily <easmith5555@...> wrote:

Hi all -

I have a recipe that I'd like to patch - the source is in a repo which has a submodule, and the patch occurs in the submodule. Is there a way I can apply this patch without getting an error? I do kind of understand why it's a problem - the patch is changing the pointer of the submodule to a commit which doesn't actually exist. Do I need to build the submodule as a separate recipe and patch it separately maybe?

I used devtool for the patch and if I don't run the devtool reset command, then everything builds, but I think this is just because the workspace created by devtool was added as a layer, which probably isn't a good long term solution.

The error I get (pasted below) says I can "enforce with -f" but I'm not sure where that option goes exactly. Thanks for the help!

Emily

Error on build:
ERROR: opc-ua-server-gfex-1.0+gitAUTOINC+921c563309-r0 do_patch: Command Error: 'quilt --quiltrc /local/d6/easmith5/rocko_bitbake/poky/build/tmp/work/aarch64-poky-linux/opc-ua-server-gfex/1.0+gitAUTOINC+921c563309-r0/recipe-sysroot-native/etc/quiltrc push' exited with 0 Output:
Applying patch 0001-Update-Poverty-to-point-to-boost-python3.patch
File Poverty is not a regular file -- refusing to patch
1 out of 1 hunk ignored -- rejects in file
Patch 0001-Update-Poverty-to-point-to-boost-python3.patch does not apply (enforce with -f)
ERROR: opc-ua-server-gfex-1.0+gitAUTOINC+921c563309-r0 do_patch: Function failed: patch_do_patch
The issue appears to be that patches are applied using quilt which
doesn't understand a patch like this. I don't know of a good solution
to this other than making a new commit in the top level repository and
updating SRCREV.

Perhaps it's better to carry the diff within the submodule as a patch
- so you leave the submodule commit pointer where it is and instead
include all the necessary changes to the submodule in the patch. Would
that work for you?


Re: Patching submodules

Nicolas Jeker
 

On Thu, 2020-03-19 at 23:10 -0500, Emily wrote:
Hi all -

I have a recipe that I'd like to patch - the source is in a repo
which has a submodule, and the patch occurs in the submodule. Is
there a way I can apply this patch without getting an error? I do
kind of understand why it's a problem - the patch is changing the
pointer of the submodule to a commit which doesn't actually exist. Do
I need to build the submodule as a separate recipe and patch it
separately maybe?
Is there a reason why you don't use a bbappend file with your patch in
it in a custom layer?

Something like this:

package_ver.bbappend
--------------------
FILESEXTRAPATHS_prepend := "${THISDIR}/${PN}:"
SRC_URI += "file://abc.patch"


With this directory structure:

meta-custom-layer
├── package_ver.bbappend
└── package
    └── abc.patch

Replace "package" and "ver" with the correct values (if you don't want
to set the version you can use "%" as a wildcard).

Maybe I missed something about your submodule situation and my advice
is completely wrong, if so, just disregard it.

I used devtool for the patch and if I don't run the devtool reset
command, then everything builds, but I think this is just because the
workspace created by devtool was added as a layer, which probably
isn't a good long term solution.
You should be able to get the above structure by using 'devtool finish
recipe meta-custom-layer'. If that doesn't work you can do it manually
as described above.

The error I get (pasted below) says I can "enforce with -f" but I'm
not sure where that option goes exactly. Thanks for the help!

Emily

Error on build:
ERROR: opc-ua-server-gfex-1.0+gitAUTOINC+921c563309-r0 do_patch:
Command Error: 'quilt --quiltrc
/local/d6/easmith5/rocko_bitbake/poky/build/tmp/work/aarch64-poky-
linux/opc-ua-server-gfex/1.0+gitAUTOINC+921c563309-r0/recipe-sysroot-
native/etc/quiltrc push' exited with 0 Output:
Applying patch 0001-Update-Poverty-to-point-to-boost-python3.patch
File Poverty is not a regular file -- refusing to patch
1 out of 1 hunk ignored -- rejects in file
Patch 0001-Update-Poverty-to-point-to-boost-python3.patch does not
apply (enforce with -f)
ERROR: opc-ua-server-gfex-1.0+gitAUTOINC+921c563309-r0 do_patch:
Function failed: patch_do_patch
I don't know why this error occurs, maybe someone else knows more.


Re: yocto on tinkerboard using meta-rockchip

Yann Dirson
 

Wow, I'm surprised to discover this meta-rockchip :)

Rockchip engineers also publish a meta-rockchip on Github (https://github.com/rockchip-linux/meta-rockchip), although
nowadays it's mainly Jeffy Chen publishing on https://github.com/JeffyCN/meta-rockchip/

Those two repos seem quite complementary, Jeffy being focussed on kernel 4.4, and Trevor on mainline.
Wouldn't it make sense to merge all this work in a single place?


Le ven. 20 mars 2020 à 02:59, Trevor Woerner <twoerner@...> a écrit :
Hi Karthik,

On Thu, Mar 19, 2020 at 9:36 PM karthik poduval
<karthik.poduval@...> wrote:
> Thank you for your work on the meta-rockchip layer. You were listed as the
> maintainer for meta-rockchip, so I thought I would send you a mail about
> an issue I was facing.
>
> I was trying to flash an image on an Asus Tinker Board using
> meta-rockchip. Here are the steps I followed.
>
> git clone git://git.yoctoproject.org/poky
> git clone  git://git.yoctoproject.org/meta-rockchip
> source poky/oe-init-build-env
> bitbake-layers add-layer ../../meta-rockchip/
> MACHINE=tinker-board bitbake core-image-minimal
>
> #flashed it to the sdcard using the following command (my sdcard was /dev/sde)
> sudo dd if=tmp/deploy/images/tinker-board/core-image-minimal-tinker-board.wic
> of=/dev/sde
>
> after inserting sdcard and booting up I can see on the serial console
> it attempts to boot, crosses bootloader and proceeds to linux boot but
> then gets hung up around dwmmc_rockchip loading.
>
> Attached the complete serial log. Your help is greatly appreciated.

According to the log, your board is a "tinker-board-s", please try
using that for your MACHINE instead of "tinker-board".

Thanks and best regards,
    Trevor



--
Yann Dirson <yann@...>
Blade / Shadow -- http://shadow.tech


Patching submodules

Emily
 

Hi all - 

I have a recipe that I'd like to patch - the source is in a repo which has a submodule, and the patch occurs in the submodule. Is there a way I can apply this patch without getting an error? I do kind of understand why it's a problem - the patch is changing the pointer of the submodule to a commit which doesn't actually exist. Do I need to build the submodule as a separate recipe and patch it separately maybe? 

I used devtool for the patch and if I don't run the devtool reset command, then everything builds, but I think this is just because the workspace created by devtool was added as a layer, which probably isn't a good long term solution. 

The error I get (pasted below) says I can "enforce with -f" but I'm not sure where that option goes exactly. Thanks for the help! 

Emily

Error on build: 
ERROR: opc-ua-server-gfex-1.0+gitAUTOINC+921c563309-r0 do_patch: Command Error: 'quilt --quiltrc /local/d6/easmith5/rocko_bitbake/poky/build/tmp/work/aarch64-poky-linux/opc-ua-server-gfex/1.0+gitAUTOINC+921c563309-r0/recipe-sysroot-native/etc/quiltrc push' exited with 0  Output:
Applying patch 0001-Update-Poverty-to-point-to-boost-python3.patch
File Poverty is not a regular file -- refusing to patch
1 out of 1 hunk ignored -- rejects in file
Patch 0001-Update-Poverty-to-point-to-boost-python3.patch does not apply (enforce with -f)
ERROR: opc-ua-server-gfex-1.0+gitAUTOINC+921c563309-r0 do_patch: Function failed: patch_do_patch


Re: QA notification for completed autobuilder build (yocto-3.1_M3.rc1)

Sangeeta Jain
 

-----Original Message-----
From: akuster808 <akuster808@...>
Sent: Friday, 20 March, 2020 12:31 AM
To: Jain, Sangeeta <sangeeta.jain@...>; pokybuild@ubuntu1804-ty-
2.yocto.io; yocto@...
Cc: otavio@...; yi.zhao@...; Sangal, Apoorv
<apoorv.sangal@...>; Yeoh, Ee Peng <ee.peng.yeoh@...>; Chan,
Aaron Chun Yew <aaron.chun.yew.chan@...>;
richard.purdie@...; sjolley.yp.pm@...
Subject: Re: [yocto] QA notification for completed autobuilder build (yocto-
3.1_M3.rc1)



On 3/19/20 6:57 AM, Jain, Sangeeta wrote:
Hello all,

This is the full report for yocto-3.1_M3.rc1:
https://git.yoctoproject.org/cgit/cgit.cgi/yocto-testresults-contrib/t
ree/?h=intel-yocto-testresults

======= Summary ========
No high milestone defects.
No new defects are found in this cycle.
Valgrind ptest failed (BUG id:13838).

Note: A few failures were observed. These are setup issues from running the tests
remotely, not real Yocto issues.
Updated some automated test results for more precise results from a repeated run.

Were any tests skipped due to lack of physical access to h/w? A quick look at
some of the manual tests does imply physical access.
Yes, we skipped some manual tests which can't be run remotely.

- armin
======= Bugs ========
https://bugzilla.yoctoproject.org/show_bug.cgi?id=13838

Thanks,
Sangeeta

-----Original Message-----
From: yocto@... <yocto@...> On
Behalf Of pokybuild@...
Sent: Monday, 16 March, 2020 5:40 PM
To: yocto@...
Cc: otavio@...; yi.zhao@...; Sangal, Apoorv
<apoorv.sangal@...>; Yeoh, Ee Peng <ee.peng.yeoh@...>;
Chan, Aaron Chun Yew <aaron.chun.yew.chan@...>;
richard.purdie@...; akuster808@...;
sjolley.yp.pm@...; Jain, Sangeeta <sangeeta.jain@...>
Subject: [yocto] QA notification for completed autobuilder build
(yocto-
3.1_M3.rc1)


A build flagged for QA (yocto-3.1_M3.rc1) was completed on the
autobuilder and is available at:


https://autobuilder.yocto.io/pub/releases/yocto-3.1_M3.rc1


Build hash information:

bitbake: e67dfa4a4d0d63e4752655f25367582e5a95f1da
meta-gplv2: 60b251c25ba87e946a0ca4cdc8d17b1cb09292ac
meta-intel: 60773e8496370d821309e00f2c312128a130c22b
meta-mingw: 524de686205b5d6736661d4532f5f98fee8589b7
oecore: 61d80b07bcfa4adf5f1feb2904fec0a8d09c89f6
poky: 6f02caa39985fb89d9ad49e1f788a9a8dd6e12d7



This is an automated message from the Yocto Project Autobuilder
Git: git://git.yoctoproject.org/yocto-autobuilder2
Email: richard.purdie@...



Re: yocto on tinkerboard using meta-rockchip

Trevor Woerner
 

Hi Karthik,

On Thu, Mar 19, 2020 at 9:36 PM karthik poduval
<karthik.poduval@...> wrote:
Thank you for your work on the meta-rockchip layer. You were listed as the
maintainer for meta-rockchip, so I thought I would send you a mail about
an issue I was facing.

I was trying to flash an image on an Asus Tinker Board using
meta-rockchip. Here are the steps I followed.

git clone git://git.yoctoproject.org/poky
git clone git://git.yoctoproject.org/meta-rockchip
source poky/oe-init-build-env
bitbake-layers add-layer ../../meta-rockchip/
MACHINE=tinker-board bitbake core-image-minimal

#flashed it to the sdcard using the following command (my sdcard was /dev/sde)
sudo dd if=tmp/deploy/images/tinker-board/core-image-minimal-tinker-board.wic
of=/dev/sde

after inserting sdcard and booting up I can see on the serial console
it attempts to boot, crosses bootloader and proceeds to linux boot but
then gets hung up around dwmmc_rockchip loading.

Attached the complete serial log. Your help is greatly appreciated.
According to the log, your board is a "tinker-board-s", please try
using that for your MACHINE instead of "tinker-board".

Thanks and best regards,
Trevor


Re: What are the key factors for yocto build speed?

Khem Raj
 

On Thu, Mar 19, 2020 at 9:07 AM Mike Looijmans <mike.looijmans@...> wrote:

On 19-03-2020 12:04, Richard Purdie via Lists.Yoctoproject.Org wrote:
, fetch, configure, package and rootfs tasks.
Sadly these tasks are much harder.
It would be really great if some sort of "weight" could be attached to a
task. This relates to memory usage.

My system has 16 cores but only 8GB RAM. With both parallelization
options to "16", I might end up with 16 compile tasks running 16 compile
threads each, i.e. 256 running processes. In practice this doesn't
actually happen, but the memory load gets high sometimes, so I reduce
the TASKS to 8 at most. That has kept my system out of swap trouble for
the time being.

The idea was that tasks get a "weight" in terms of cores they'll use,
and the scheduler takes that into account. So it would run 16
do_configure tasks (weight=1) in parallel, but it would not start a new
task that would push the weight over some number (say 40 for my case).
So it would start a third compile, but not a fourth, but it would start
a do_configure task.

Does that make sense?
Is it something like "make -l" that you are looking for here?

In builds involving FPGA's I have tasks that take up about 48GB of RAM
(my machine cannot run them) but only a single CPU core. Attempting to
run multiple of these in parallel (happened to me when I changed some
shared recipe content) will bring most machines to their knees.
Currently my only way of handling that is manual interference...
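Something in the spirit of "make -l" can already be approximated, since PARALLEL_MAKE is passed to make verbatim; a local.conf sketch (the numbers are arbitrary choices for a 16-core box, not recommendations):

```
# Allow up to 16 jobs per compile, but stop spawning new jobs while
# the system load average is above 12
PARALLEL_MAKE = "-j 16 -l 12"
# Independently cap how many bitbake tasks run at once
BB_NUMBER_THREADS = "8"
```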

--
Mike Looijmans


Re: What are the key factors for yocto build speed?

Yann Dirson
 



Le jeu. 19 mars 2020 à 18:04, Richard Purdie <richard.purdie@...> a écrit :
On Thu, 2020-03-19 at 17:29 +0100, Yann Dirson wrote:
>
>
> Le jeu. 19 mars 2020 à 17:07, Mike Looijmans <mike.looijmans@...> a écrit :
> > On 19-03-2020 12:04, Richard Purdie via Lists.Yoctoproject.Org wrote:
> > >> , fetch, configure, package and rootfs tasks.
> > >
> > > Sadly these tasks are much harder.
> >
> > It would be really great if some sort of "weight" could be attached to a
> > task. This relates to memory usage.
> >
> > My system has 16 cores but only 8GB RAM. With both parallelization
> > options to "16", I might end up with 16 compile tasks running 16 compile
> > threads each, i.e. 256 running processes. In practice this doesn't
> > actually happen, but the memory load gets high sometimes, so I reduce
> > the TASKS to 8 at most. That has kept my system out of swap trouble for
> > the time being.
>
> This could be neatly handled by using the GNU-make job-server mechanism.
> If bitbake itself provided a job-server, all make-based recipes would
> automatically get their jobs properly limited. There is a (sadly not yet merged)
> MR [1] for ninja to gain job-server support as well, through which we should get
> pretty good coverage of the recipe set (as a backend for cmake, meson, and more).
>
> [1] https://github.com/ninja-build/ninja/issues/1139

You mean like:

http://git.yoctoproject.org/cgit.cgi/poky-contrib/commit/?h=rpurdie/wipqueue4&id=d66a327fb6189db5de8bc489859235dcba306237

? :)


Awesome :)

Sadly we never fixed all the issues to let that merge.

The problems listed in the commit message don't amount to much at first glance. Any other issues
that became visible only later?

--
Yann Dirson <yann@...>
Blade / Shadow -- http://shadow.tech


Re: What are the key factors for yocto build speed?

Richard Purdie
 

On Thu, 2020-03-19 at 19:21 +0200, Adrian Bunk wrote:
On Thu, Mar 19, 2020 at 05:07:17PM +0100, Mike Looijmans wrote:
...
With both parallelization options
to "16", I might end up with 16 compile tasks running 16 compile
threads
each, i.e. 256 running processes.
...
This is a bug:
http://bugzilla.yoctoproject.org/show_bug.cgi?id=13306

I sometimes wonder whether something basic like "no more than one
compile task at a time" would be sufficient in practice to avoid
overloading all cores.
You can test with:

do_compile[number_threads] = "1"

Cheers,

Richard


Re: What are the key factors for yocto build speed?

Adrian Bunk
 

On Thu, Mar 19, 2020 at 05:07:17PM +0100, Mike Looijmans wrote:
...
With both parallelization options
to "16", I might end up with 16 compile tasks running 16 compile threads
each, i.e. 256 running processes.
...
This is a bug:
http://bugzilla.yoctoproject.org/show_bug.cgi?id=13306

I sometimes wonder whether something basic like "no more than one
compile task at a time" would be sufficient in practice to avoid
overloading all cores.

It would also help with RAM usage, there are some combinations of
recipes where the build gets aborted by the oom killer on my laptop
(8 cores, 32 GB RAM) when bitbake runs the compile tasks in parallel.

cu
Adrian


Re: What are the key factors for yocto build speed?

Richard Purdie
 

On Thu, 2020-03-19 at 17:29 +0100, Yann Dirson wrote:


Le jeu. 19 mars 2020 à 17:07, Mike Looijmans <mike.looijmans@...> a écrit :
On 19-03-2020 12:04, Richard Purdie via Lists.Yoctoproject.Org wrote:
, fetch, configure, package and rootfs tasks.
Sadly these tasks are much harder.
It would be really great if some sort of "weight" could be attached to a
task. This relates to memory usage.

My system has 16 cores but only 8GB RAM. With both parallelization
options to "16", I might end up with 16 compile tasks running 16 compile
threads each, i.e. 256 running processes. In practice this doesn't
actually happen, but the memory load gets high sometimes, so I reduce
the TASKS to 8 at most. That has kept my system out of swap trouble for
the time being.
This could be neatly handled by using the GNU-make job-server mechanism.
If bitbake itself provided a job-server, all make-based recipes would
automatically get their jobs properly limited. There is a (sadly not yet merged)
MR [1] for ninja to gain job-server support as well, through which we should get
pretty good coverage of the recipe set (as a backend for cmake, meson, and more).

[1] https://github.com/ninja-build/ninja/issues/1139
You mean like:

http://git.yoctoproject.org/cgit.cgi/poky-contrib/commit/?h=rpurdie/wipqueue4&id=d66a327fb6189db5de8bc489859235dcba306237

? :)

Sadly we never fixed all the issues to let that merge.

Cheers,

Richard


Re: QA notification for completed autobuilder build (yocto-3.1_M3.rc1)

Armin Kuster
 

On 3/19/20 6:57 AM, Jain, Sangeeta wrote:
Hello all,

This is the full report for yocto-3.1_M3.rc1:
https://git.yoctoproject.org/cgit/cgit.cgi/yocto-testresults-contrib/tree/?h=intel-yocto-testresults

======= Summary ========
No high milestone defects.
No new defects are found in this cycle.
Valgrind ptest failed (BUG id:13838).

Note: A few failures were observed. These are setup issues from running the tests remotely, not real Yocto issues.
Were any tests skipped due to lack of physical access to h/w? A quick
look at some of the manual tests does imply physical access.

- armin
======= Bugs ========
https://bugzilla.yoctoproject.org/show_bug.cgi?id=13838

Thanks,
Sangeeta

-----Original Message-----
From: yocto@... <yocto@...> On Behalf
Of pokybuild@...
Sent: Monday, 16 March, 2020 5:40 PM
To: yocto@...
Cc: otavio@...; yi.zhao@...; Sangal, Apoorv
<apoorv.sangal@...>; Yeoh, Ee Peng <ee.peng.yeoh@...>; Chan,
Aaron Chun Yew <aaron.chun.yew.chan@...>;
richard.purdie@...; akuster808@...;
sjolley.yp.pm@...; Jain, Sangeeta <sangeeta.jain@...>
Subject: [yocto] QA notification for completed autobuilder build (yocto-
3.1_M3.rc1)


A build flagged for QA (yocto-3.1_M3.rc1) was completed on the autobuilder
and is available at:


https://autobuilder.yocto.io/pub/releases/yocto-3.1_M3.rc1


Build hash information:

bitbake: e67dfa4a4d0d63e4752655f25367582e5a95f1da
meta-gplv2: 60b251c25ba87e946a0ca4cdc8d17b1cb09292ac
meta-intel: 60773e8496370d821309e00f2c312128a130c22b
meta-mingw: 524de686205b5d6736661d4532f5f98fee8589b7
oecore: 61d80b07bcfa4adf5f1feb2904fec0a8d09c89f6
poky: 6f02caa39985fb89d9ad49e1f788a9a8dd6e12d7



This is an automated message from the Yocto Project Autobuilder
Git: git://git.yoctoproject.org/yocto-autobuilder2
Email: richard.purdie@...



Re: What are the key factors for yocto build speed?

Yann Dirson
 



Le jeu. 19 mars 2020 à 17:07, Mike Looijmans <mike.looijmans@...> a écrit :
On 19-03-2020 12:04, Richard Purdie via Lists.Yoctoproject.Org wrote:
>> , fetch, configure, package and rootfs tasks.
>
> Sadly these tasks are much harder.

It would be really great if some sort of "weight" could be attached to a
task. This relates to memory usage.

My system has 16 cores but only 8GB RAM. With both parallelization
options to "16", I might end up with 16 compile tasks running 16 compile
threads each, i.e. 256 running processes. In practice this doesn't
actually happen, but the memory load gets high sometimes, so I reduce
the TASKS to 8 at most. That has kept my system out of swap trouble for
the time being.

This could be neatly handled by using the GNU-make job-server mechanism.
If bitbake itself provided a job-server, all make-based recipes would
automatically get their jobs properly limited. There is a (sadly not yet merged)
MR [1] for ninja to gain job-server support as well, through which we should get
pretty good coverage of the recipe set (as a backend for cmake, meson, and more).

[1] https://github.com/ninja-build/ninja/issues/1139


--
Yann Dirson <yann@...>
Blade / Shadow -- http://shadow.tech


Re: What are the key factors for yocto build speed?

Mike Looijmans
 

On 19-03-2020 12:04, Richard Purdie via Lists.Yoctoproject.Org wrote:
, fetch, configure, package and rootfs tasks.
Sadly these tasks are much harder.
It would be really great if some sort of "weight" could be attached to a task. This relates to memory usage.

My system has 16 cores but only 8GB RAM. With both parallelization options to "16", I might end up with 16 compile tasks running 16 compile threads each, i.e. 256 running processes. In practice this doesn't actually happen, but the memory load gets high sometimes, so I reduce the TASKS to 8 at most. That has kept my system out of swap trouble for the time being.

The idea was that tasks get a "weight" in terms of cores they'll use, and the scheduler takes that into account. So it would run 16 do_configure tasks (weight=1) in parallel, but it would not start a new task that would push the weight over some number (say 40 for my case). So it would start a third compile, but not a fourth, but it would start a do_configure task.

Does that make sense?

In builds involving FPGA's I have tasks that take up about 48GB of RAM (my machine cannot run them) but only a single CPU core. Attempting to run multiple of these in parallel (happened to me when I changed some shared recipe content) will bring most machines to their knees. Currently my only way of handling that is manual interference...

--
Mike Looijmans


Re: QA notification for completed autobuilder build (yocto-3.1_M3.rc1)

Sangeeta Jain
 

Hello all,

This is the full report for yocto-3.1_M3.rc1:
https://git.yoctoproject.org/cgit/cgit.cgi/yocto-testresults-contrib/tree/?h=intel-yocto-testresults

======= Summary ========
No high milestone defects.
No new defects are found in this cycle.
Valgrind ptest failed (BUG id:13838).

Note: A few failures were observed. These are setup issues from running the tests remotely, not real Yocto issues.
======= Bugs ========
https://bugzilla.yoctoproject.org/show_bug.cgi?id=13838

Thanks,
Sangeeta

-----Original Message-----
From: yocto@... <yocto@...> On Behalf
Of pokybuild@...
Sent: Monday, 16 March, 2020 5:40 PM
To: yocto@...
Cc: otavio@...; yi.zhao@...; Sangal, Apoorv
<apoorv.sangal@...>; Yeoh, Ee Peng <ee.peng.yeoh@...>; Chan,
Aaron Chun Yew <aaron.chun.yew.chan@...>;
richard.purdie@...; akuster808@...;
sjolley.yp.pm@...; Jain, Sangeeta <sangeeta.jain@...>
Subject: [yocto] QA notification for completed autobuilder build (yocto-
3.1_M3.rc1)


A build flagged for QA (yocto-3.1_M3.rc1) was completed on the autobuilder
and is available at:


https://autobuilder.yocto.io/pub/releases/yocto-3.1_M3.rc1


Build hash information:

bitbake: e67dfa4a4d0d63e4752655f25367582e5a95f1da
meta-gplv2: 60b251c25ba87e946a0ca4cdc8d17b1cb09292ac
meta-intel: 60773e8496370d821309e00f2c312128a130c22b
meta-mingw: 524de686205b5d6736661d4532f5f98fee8589b7
oecore: 61d80b07bcfa4adf5f1feb2904fec0a8d09c89f6
poky: 6f02caa39985fb89d9ad49e1f788a9a8dd6e12d7



This is an automated message from the Yocto Project Autobuilder
Git: git://git.yoctoproject.org/yocto-autobuilder2
Email: richard.purdie@...

