
Re: #apt #linux #yocto #raspberrypi., apt, gcc, sudo not present #apt #linux #yocto

Josef Holzmayr <holzmayr@...>
 

Howdy!

On 27.05.2020 at 12:06, Siddhartha V wrote:
Hello,
  I built the image for the Raspberrypi3B+ board. But when I boot the board, gcc, apt and sudo are not there. Actually only poweron and poweroff and a few basic commands like ls, mkdir and whoami were working. May I know what I did wrong while building, please?
This sounds like you actually want an Ubuntu. Why not just use it, then? Beating a Yocto build into shape will always just leave you wanting and complaining, unless you are willing to get rid of your assumptions.

Hint:
The first thing you have to get out of your head is expecting apt, sudo and gcc on the board.

Reasoning:
- no apt, because the images are usually constructed at build time including all expected software. Runtime package management is actually rarely needed.
- no sudo, because you can just log in as whatever user you want. And in the product case it has no use, because usually people do not log in, hence no need for its security measures.
- no gcc, because all compilation is meant to happen at image building time, or at least on the development host.

Of course this is only a very generic, top-level explanation and use cases differ. But if the mindset does not fit right from the beginning, then there is only trouble ahead.
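If you really do want some of that on a development image, the usual route is to ask for it at build time instead. A minimal sketch for conf/local.conf (these are standard OE-Core image features and packages; whether they belong in your image is your call):

  EXTRA_IMAGE_FEATURES += "package-management tools-sdk"
  IMAGE_INSTALL_append = " sudo"

Here package-management puts a runtime package manager into the image (dnf/rpm when PACKAGE_CLASSES is package_rpm), tools-sdk adds an on-target toolchain including gcc, and sudo is simply installed as another package.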

Greetz

--
_____________________________________________________________
R-S-I Elektrotechnik GmbH & Co. KG
Woelkestrasse 11
D-85301 Schweitenkirchen
Fon: +49 8444 9204-0
Fax: +49 8444 9204-50
www.rsi-elektrotechnik.de

_____________________________________________________________
Amtsgericht Ingolstadt - GmbH: HRB 191328 - KG: HRA 170363
Geschäftsführer: Dr.-Ing. Michael Sorg, Dipl.-Ing. Franz Sorg
USt-IdNr.: DE 128592548


Re: #apt #linux #yocto #raspberrypi., apt, gcc, sudo not present #apt #linux #yocto

Mark Van De Vyver <mark@...>
 

Hi Siddhartha,
Caveat: This is generally what I've done, and not specific to any rpi or rhel setup....
 
If you run `bitbake -e core-image-base | grep "^IMAGE_FEATURES"` you'll be able to see what was configured.
You can also filter *_FEATURES to get a better idea of what went on in terms of machine and compatible features.
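For example, a slightly wider filter over the feature variables might look like this (variable names are the stock OE-Core ones; swap in your own image target):

  bitbake -e core-image-base | grep -E "^(IMAGE|EXTRA_IMAGE|MACHINE|DISTRO|COMBINED)_FEATURES="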
 
Next, with that data in mind you can read the ./<build-folder>/bitbake-cookerdaemon.log
When reading this you will see various files are loaded - opening each of them in turn you might see some code that turns off what you expected to have installed.
Or you might notice that the packages you expected to be installed were never put forward for installation.
 
Not sure if that helps.  I've run into some wrinkles where a package wasn't available, and became so only after adding a 'seemingly' (to me that is) unrelated layer - but in those cases I got a warning the package could not be found.  Sounds like that **is not** happening here.  Sorry I can't be of better assistance.

HTH?

 


Re: QA notification for completed autobuilder build (yocto-2.7.4.rc2)

William Mills <wmills@...>
 

On 5/27/20 6:42 PM, Michael Halstead wrote:
I've rebuilt the fancyindex module and enabled it. The full filenames
are displayed now. The module is disabled automatically when new
versions of EPEL are released. Hopefully the change in html output
doesn't break any script you've started writing.
Thanks Michael. It's funny because I had switched machines in the
meantime and was scratching my head: "How does Firefox know how to do
that when Chrome did not?" But I figured out you had made a change :)

I also remembered "wget -r -np <url>". I have not used that in so long
I forgot about it. It downloads too much, but it is a quick start. (It
downloads each symlink as a unique file.)

So for the real solution I think I am only going to download the files
that have a <file>.md5sum. I suppose I could recreate the symlinks
based on heuristics of the filename but I probably won't bother.
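A rough sketch of that approach, assuming each artefact <file> sits next to a <file>.md5sum containing a standard "checksum  filename" line (the URL is the one from this thread; the rest is an untested sketch):

  base=https://autobuilder.yocto.io/pub/releases/yocto-2.7.4.rc2/machines/beaglebone-yocto/
  # grab only the checksum files first
  wget -r -np -nd -A '*.md5sum' "$base"
  # then fetch each corresponding artefact and verify it
  for sum in *.md5sum; do
      wget -nc "$base$(basename "$sum" .md5sum)"
      md5sum -c "$sum"
  done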

It would still be nice to have an intentionally machine-readable
directory of the full release content in one file in the root.
md5sum and crew won't show symlinks vs real files. I'll keep thinking
about any existing format that might work.

I found and closed a few old bugs about this
(https://bugzilla.yoctoproject.org/show_bug.cgi?id=12416,
https://bugzilla.yoctoproject.org/show_bug.cgi?id=11249). I
think the issue has been fixed and returned a few times since these bugs
were filed.
Nice.

Thanks,
Bill


Re: QA notification for completed autobuilder build (yocto-2.7.4.rc2)

Sangeeta Jain
 

Hello All,

Intel and WR YP QA is planning for QA execution for YP build yocto-2.7.4.rc2. We are planning to execute the following tests for this cycle:

OEQA manual tests for the following modules:
1. OE-Core
2. BSP-hw

Runtime auto tests for the following platforms:
1. MinnowTurbot 32-bit
2. Coffee Lake
3. NUC 7
4. NUC 6
5. Edgerouter
6. MPC8315e-rdb
7. Beaglebone

ETA for completion is Monday, June 01.

Thanks,
Sangeeta

-----Original Message-----
From: yocto@lists.yoctoproject.org <yocto@lists.yoctoproject.org> On Behalf
Of pokybuild@ubuntu1804-ty-1.yocto.io
Sent: Wednesday, 27 May, 2020 8:09 PM
To: yocto@lists.yoctoproject.org
Cc: otavio@ossystems.com.br; yi.zhao@windriver.com; Sangal, Apoorv
<apoorv.sangal@intel.com>; Yeoh, Ee Peng <ee.peng.yeoh@intel.com>; Chan,
Aaron Chun Yew <aaron.chun.yew.chan@intel.com>;
richard.purdie@linuxfoundation.org; akuster808@gmail.com;
sjolley.yp.pm@gmail.com; Jain, Sangeeta <sangeeta.jain@intel.com>
Subject: [yocto] QA notification for completed autobuilder build (yocto-
2.7.4.rc2)


A build flagged for QA (yocto-2.7.4.rc2) was completed on the autobuilder and is
available at:


https://autobuilder.yocto.io/pub/releases/yocto-2.7.4.rc2


Build hash information:

bitbake: 7f7126211170439ac1d7d72e980786ce0edb7bb7
meta-gplv2: d5d9fc9a4bbd365d6cd6fe4d6a8558f7115c17da
meta-intel: 29ee4852a05931dcf856670d9d8a3c3077a40fe8
meta-mingw: 10695afe8cd406844e0d0dd868c11677e07557d4
oecore: db3ce703d03b18e8a4120969d32ff7f344f34fe9
poky: f65b24e9ca0918a4ede70ea48ed8b7cc4620f07f



This is an automated message from the Yocto Project Autobuilder
Git: git://git.yoctoproject.org/yocto-autobuilder2
Email: richard.purdie@linuxfoundation.org



Re: overwrite LAYERSERIES_COMPAT_ for different layer

Denys Dmytriyenko
 

On Mon, May 25, 2020 at 03:28:52PM +0200, Quentin Schulz wrote:
On Mon, May 25, 2020 at 03:10:39PM +0200, Martin Jansa wrote:
You can add a layer which will set LAYERSERIES_COMPAT for another layer,
but it needs to be parsed before the layer you want to change, e.g.:
https://github.com/webosose/meta-webosose/blob/thud/meta-qt5-compat/conf/layer.conf
it's useful to use this layer also to implement whatever modifications are
needed to make the layer actually compatible with the release you're
using, like:
https://github.com/webOS-ports/meta-webos-ports/tree/zeus/meta-qt5-compat
FWIW, you could make the parsing order not matter (at least in thud,
from a quick look, master as well). LAYERSERIES_COMPAT is resolved after
all layer.conf files have been parsed[1]. I do not know if it's on purpose or
not, meaning it could well disappear in the near future.

So you could override it from anywhere by using __append but it has to
be done before or during the conf/layer.conf parsing. This also makes it
future proof wrt layer priorities and how LAYERSERIES_COMPAT is set (+=,
=, ?= ?).

I personally have it in conf/bblayers.conf (for some reason we "ship"
this one, though it's not best practice IIRC).
Also, though not recommended, you can put this in your conf/bblayers.conf:

OVERRIDES = "iamgroot"
LAYERSERIES_COMPAT_browser-layer_iamgroot = "zeus"
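For comparison, the append-based variant Quentin mentioned might look like this, also in conf/bblayers.conf (layer name reused from the example above, release name is whatever you target; I'm spelling it with the usual override-style _append, which ends up as an __append on the variable, so double-check it against your BitBake version):

LAYERSERIES_COMPAT_browser-layer_append = " zeus"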

--
Denys


Re: QA notification for completed autobuilder build (yocto-2.7.4.rc2)

Michael Halstead
 

I've rebuilt the fancyindex module and enabled it. The full filenames are displayed now. The module is disabled automatically when new versions of EPEL are released. Hopefully the change in html output doesn't break any script you've started writing.

I found and closed a few old bugs about this. I think the issue has been fixed and returned a few times since these bugs were filed.




On Wed, May 27, 2020 at 2:53 PM William Mills via lists.yoctoproject.org <wmills=ti.com@...> wrote:


On 5/27/20 4:48 PM, Richard Purdie wrote:
> Hi Bill,
>
> On Wed, 2020-05-27 at 12:31 +0000, Mills, William wrote:
>> In a script, how would I download all the files in this dir:
>>
>> https://autobuilder.yocto.io/pub/releases/yocto-2.7.4.rc2/machines/beaglebone-yocto/
>>
>> I see no global md5sum (or sha256sum etc) file in the tree anywhere.
>>
>> The names are cut off in the default html list format so screen
>> scraping won't work.  Is there an automated way to request a better
>> format?  WebDav? Am I missing something obvious?
>
> If you look at the raw html it looks like:
>
> <a href="am335x-bone--5.0.13%2Bgit0%2B7f6e97c357_f990fd0ce1-r0.2-beaglebone-yocto-20200526225701.dtb">am335x-bone--5.0.13+git0+7f6e97c357_f990fd0ce1-..&gt;</a> 27-May-2020 07:00               56237
>
> so the href is correct and its only the displayed url that is
> shortened. That is why you can click on them to get the artefacts and
> it is 'just' a display issue. It also means any script can pull the
> correct urls the same way.
>

Of course.  That _was_ obvious.  Trying to do too many things at the
same time.

Still it would be nice to drop a sha256sums file in the base of the
tree.  I would feel much happier coding to that than the html scraping.
But I will start with the html.


Thanks.
Bill

> I think Michael has tweaked the line lengths in the past but it keeps
> getting reset to distro defaults as the machine is upgraded. He knows
> about it and will look at sorting that again.
>
> Cheers,
>
> Richard
>
>
>



--
Michael Halstead
Linux Foundation / Yocto Project
Systems Operations Engineer


Re: QA notification for completed autobuilder build (yocto-2.7.4.rc2)

William Mills <wmills@...>
 

On 5/27/20 4:48 PM, Richard Purdie wrote:
Hi Bill,

On Wed, 2020-05-27 at 12:31 +0000, Mills, William wrote:
In a script, how would I download all the files in this dir:

https://autobuilder.yocto.io/pub/releases/yocto-2.7.4.rc2/machines/beaglebone-yocto/

I see no global md5sum (or sha256sum etc) file in the tree anywhere.

The names are cut off in the default html list format so screen
scraping won't work. Is there an automated way to request a better
format? WebDav? Am I missing something obvious?
If you look at the raw html it looks like:

<a href="am335x-bone--5.0.13%2Bgit0%2B7f6e97c357_f990fd0ce1-r0.2-beaglebone-yocto-20200526225701.dtb">am335x-bone--5.0.13+git0+7f6e97c357_f990fd0ce1-..&gt;</a> 27-May-2020 07:00 56237

so the href is correct and its only the displayed url that is
shortened. That is why you can click on them to get the artefacts and
it is 'just' a display issue. It also means any script can pull the
correct urls the same way.
Of course. That _was_ obvious. Trying to do too many things at the
same time.

Still it would be nice to drop a sha256sums file in the base of the
tree. I would feel much happier coding to that than the html scraping.
But I will start with the html.


Thanks.
Bill

I think Michael has tweaked the line lengths in the past but it keeps
getting reset to distro defaults as the machine is upgraded. He knows
about it and will look at sorting that again.

Cheers,

Richard



Re: QA notification for completed autobuilder build (yocto-2.7.4.rc2)

Richard Purdie
 

Hi Bill,

On Wed, 2020-05-27 at 12:31 +0000, Mills, William wrote:
In a script, how would I download all the files in this dir:

https://autobuilder.yocto.io/pub/releases/yocto-2.7.4.rc2/machines/beaglebone-yocto/

I see no global md5sum (or sha256sum etc) file in the tree anywhere.

The names are cut off in the default html list format so screen
scraping won't work. Is there an automated way to request a better
format? WebDav? Am I missing something obvious?
If you look at the raw html it looks like:

<a href="am335x-bone--5.0.13%2Bgit0%2B7f6e97c357_f990fd0ce1-r0.2-beaglebone-yocto-20200526225701.dtb">am335x-bone--5.0.13+git0+7f6e97c357_f990fd0ce1-..&gt;</a> 27-May-2020 07:00 56237

so the href is correct and its only the displayed url that is
shortened. That is why you can click on them to get the artefacts and
it is 'just' a display issue. It also means any script can pull the
correct urls the same way.
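To illustrate, a throwaway scraper along those lines (directory URL taken from Bill's mail; the href filtering is only a sketch and may need tuning for the sort/parent links the index page adds):

base=https://autobuilder.yocto.io/pub/releases/yocto-2.7.4.rc2/machines/beaglebone-yocto/
wget -qO- "$base" \
  | grep -o 'href="[^"]*"' \
  | sed -e 's/^href="//' -e 's/"$//' \
  | grep -v -e '^\.\.' -e '^?' \
  | while read -r f; do wget -nc "$base$f"; done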

I think Michael has tweaked the line lengths in the past but it keeps
getting reset to distro defaults as the machine is upgraded. He knows
about it and will look at sorting that again.

Cheers,

Richard


Re: QA notification for completed autobuilder build (yocto-2.7.4.rc2)

Armin Kuster
 

On 5/27/20 5:31 AM, Mills, William wrote:
In a script, how would I download all the files in this dir:

https://autobuilder.yocto.io/pub/releases/yocto-2.7.4.rc2/machines/beaglebone-yocto/

I see no global md5sum (or sha256sum etc) file in the tree anywhere.

The names are cut off in the default html list format so screen scraping won't work. Is there an automated way to request a better format? WebDav? Am I missing something obvious?
I believe it's a known issue on the web site. Maybe have a defect
opened so we can find someone to work on it.

- armin

Thanks,
Bill

-----Original Message-----
From: yocto@lists.yoctoproject.org [mailto:yocto@lists.yoctoproject.org] On Behalf Of pokybuild@ubuntu1804-ty-1.yocto.io
Sent: Wednesday, May 27, 2020 8:09 AM
To: yocto@lists.yoctoproject.org
Cc: otavio@ossystems.com.br; yi.zhao@windriver.com; apoorv.sangal@intel.com; ee.peng.yeoh@intel.com; aaron.chun.yew.chan@intel.com; richard.purdie@linuxfoundation.org; akuster808@gmail.com; sjolley.yp.pm@gmail.com; sangeeta.jain@intel.com
Subject: [yocto] QA notification for completed autobuilder build (yocto-2.7.4.rc2)


A build flagged for QA (yocto-2.7.4.rc2) was completed on the autobuilder and is available at:


https://autobuilder.yocto.io/pub/releases/yocto-2.7.4.rc2


Build hash information:

bitbake: 7f7126211170439ac1d7d72e980786ce0edb7bb7
meta-gplv2: d5d9fc9a4bbd365d6cd6fe4d6a8558f7115c17da
meta-intel: 29ee4852a05931dcf856670d9d8a3c3077a40fe8
meta-mingw: 10695afe8cd406844e0d0dd868c11677e07557d4
oecore: db3ce703d03b18e8a4120969d32ff7f344f34fe9
poky: f65b24e9ca0918a4ede70ea48ed8b7cc4620f07f



This is an automated message from the Yocto Project Autobuilder
Git: git://git.yoctoproject.org/yocto-autobuilder2
Email: richard.purdie@linuxfoundation.org



Re: what to expect from distributed sstate cache?

Mike Looijmans
 

We're sharing the sstate-cache on plain Ubuntu LTS machines, all installed from wherever, and have no issues with low hit rates.

Even if the build server was running something like Ubuntu 14 and "my" machine was running 16, the hit rate would still be pretty good; it would only rebuild the "native" stuff, but not the target binaries.

We just share the sstate-cache over HTTP (Apache), so it's a one-way thing - you can get sstate objects from the build server, but not the other way around.
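For reference, the consuming side of such an HTTP share is usually just an SSTATE_MIRRORS entry in local.conf or site.conf; a sketch with a made-up server name:

SSTATE_MIRRORS ?= "file://.* http://buildserver.example.com/sstate-cache/PATH;downloadfilename=PATH"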



Met vriendelijke groet / kind regards,

Mike Looijmans
System Expert


TOPIC Embedded Products B.V.
Materiaalweg 4, 5681 RJ Best
The Netherlands

T: +31 (0) 499 33 69 69
E: mike.looijmans@topicproducts.com
W: www.topicproducts.com

Please consider the environment before printing this e-mail

On 27-05-2020 10:58, Mans Zigher via lists.yoctoproject.org wrote:
Hi,

This is maybe more related to bitbake but I start by posting it here.
I am for the first time trying to make use of a distributed sstate
cache but I am getting some unexpected results and wanted to hear if
my expectations are wrong. Everything works as expected when a build
node is using a sstate cache from itself, so I do a clean build and
upload the sstate cache from that build to our mirror. If I then do a
complete build using the mirror I get a 99% hit rate, which is what I
would expect. If I then start a build on a different node using the
same cache I am only getting a 16% hit rate. I am running the builds
inside docker so the environment should be identical. We have several
build nodes in our CI and they were actually cloned, and all of them
have the same HW. They are all running the builds in docker, but it
looks like they can share the sstate cache and still get a 99% hit
rate. This to me suggests that the hit rate for the sstate cache is
node dependent, so a cache cannot actually be shared between different
nodes, which is not what I expected. I have not been able to find any
information about this limitation. Any clarification regarding what to
expect from the sstate cache would be appreciated.

Thanks

--
Mike Looijmans


Re: [OE-core] OpenEmbedded Happy Hour

Philip Balister
 

Just a reminder this happens later "today". As always consult:

https://www.timeanddate.com/worldclock/fixedtime.html?msg=OpenEmbedded+Happy+Hour+May+27&iso=20200527T21&p1=%3A&ah=1

for the local day and time.

Philip

On 5/15/20 11:56 AM, Philip Balister wrote:
I've made a wiki page to track these:

https://www.openembedded.org/wiki/Happy_Hours

The next one is scheduled for 2100 UTC on May 27. This is late evening
for Europe and morning for New Zealand, so hopefully we see some
different faces. The meeting info is on the wiki page.

There is no set agenda, so bring some projects to talk about with the
community.


Philip




Re: QA notification for completed autobuilder build (yocto-2.7.4.rc2)

William Mills <wmills@...>
 

In a script, how would I download all the files in this dir:

https://autobuilder.yocto.io/pub/releases/yocto-2.7.4.rc2/machines/beaglebone-yocto/

I see no global md5sum (or sha256sum etc) file in the tree anywhere.

The names are cut off in the default html list format so screen scraping won't work. Is there an automated way to request a better format? WebDav? Am I missing something obvious?

Thanks,
Bill

-----Original Message-----
From: yocto@lists.yoctoproject.org [mailto:yocto@lists.yoctoproject.org] On Behalf Of pokybuild@ubuntu1804-ty-1.yocto.io
Sent: Wednesday, May 27, 2020 8:09 AM
To: yocto@lists.yoctoproject.org
Cc: otavio@ossystems.com.br; yi.zhao@windriver.com; apoorv.sangal@intel.com; ee.peng.yeoh@intel.com; aaron.chun.yew.chan@intel.com; richard.purdie@linuxfoundation.org; akuster808@gmail.com; sjolley.yp.pm@gmail.com; sangeeta.jain@intel.com
Subject: [yocto] QA notification for completed autobuilder build (yocto-2.7.4.rc2)


A build flagged for QA (yocto-2.7.4.rc2) was completed on the autobuilder and is available at:


https://autobuilder.yocto.io/pub/releases/yocto-2.7.4.rc2


Build hash information:

bitbake: 7f7126211170439ac1d7d72e980786ce0edb7bb7
meta-gplv2: d5d9fc9a4bbd365d6cd6fe4d6a8558f7115c17da
meta-intel: 29ee4852a05931dcf856670d9d8a3c3077a40fe8
meta-mingw: 10695afe8cd406844e0d0dd868c11677e07557d4
oecore: db3ce703d03b18e8a4120969d32ff7f344f34fe9
poky: f65b24e9ca0918a4ede70ea48ed8b7cc4620f07f



This is an automated message from the Yocto Project Autobuilder
Git: git://git.yoctoproject.org/yocto-autobuilder2
Email: richard.purdie@linuxfoundation.org


QA notification for completed autobuilder build (yocto-2.7.4.rc2)

pokybuild@...
 

A build flagged for QA (yocto-2.7.4.rc2) was completed on the autobuilder and is available at:


https://autobuilder.yocto.io/pub/releases/yocto-2.7.4.rc2


Build hash information:

bitbake: 7f7126211170439ac1d7d72e980786ce0edb7bb7
meta-gplv2: d5d9fc9a4bbd365d6cd6fe4d6a8558f7115c17da
meta-intel: 29ee4852a05931dcf856670d9d8a3c3077a40fe8
meta-mingw: 10695afe8cd406844e0d0dd868c11677e07557d4
oecore: db3ce703d03b18e8a4120969d32ff7f344f34fe9
poky: f65b24e9ca0918a4ede70ea48ed8b7cc4620f07f



This is an automated message from the Yocto Project Autobuilder
Git: git://git.yoctoproject.org/yocto-autobuilder2
Email: richard.purdie@linuxfoundation.org


Re: what to expect from distributed sstate cache?

Mans Zigher <mans.zigher@...>
 

Thanks, I will have a look at that script.

On Wed, 27 May 2020 at 13:41, Martin Jansa <martin.jansa@gmail.com> wrote:


There is no limitation like that, but it's quite easy to break. You mentioned some ugly BSP before; I wouldn't be surprised if it's broken there.

What worked well for me over the years is using this script:
openembedded-core/scripts/sstate-diff-machines.sh
on the jenkins which produces the sstate-cache. It not only checks for signature issues between MACHINEs (it can be used for a single MACHINE as well) but also creates a directory with all sstate signatures in our builds; this is then published as a tarball together with the built images.

Then when I'm seeing unexpected low reuse of sstate on some builder, I just use the same sstate-diff-machines.sh locally, fetch the tarball from the build I was trying to reproduce and compare the content with bitbake-diffsigs.

Cheers,

On Wed, May 27, 2020 at 10:59 AM Mans Zigher <mans.zigher@gmail.com> wrote:

Hi,

This is maybe more related to bitbake but I start by posting it here.
I am for the first time trying to make use of a distributed sstate
cache but I am getting some unexpected results and wanted to hear if
my expectations are wrong. Everything works as expected when a build
node is using a sstate cache from itself, so I do a clean build and
upload the sstate cache from that build to our mirror. If I then do a
complete build using the mirror I get a 99% hit rate, which is what I
would expect. If I then start a build on a different node using the
same cache I am only getting a 16% hit rate. I am running the builds
inside docker so the environment should be identical. We have several
build nodes in our CI and they were actually cloned, and all of them
have the same HW. They are all running the builds in docker, but it
looks like they can share the sstate cache and still get a 99% hit
rate. This to me suggests that the hit rate for the sstate cache is
node dependent, so a cache cannot actually be shared between different
nodes, which is not what I expected. I have not been able to find any
information about this limitation. Any clarification regarding what to
expect from the sstate cache would be appreciated.

Thanks


Re: what to expect from distributed sstate cache?

Martin Jansa
 

There is no limitation like that, but it's quite easy to break. You mentioned some ugly BSP before; I wouldn't be surprised if it's broken there.

What worked well for me over the years is using this script:
openembedded-core/scripts/sstate-diff-machines.sh
on the jenkins which produces the sstate-cache. It not only checks for signature issues between MACHINEs (it can be used for a single MACHINE as well) but also creates a directory with all sstate signatures in our builds; this is then published as a tarball together with the built images.

Then when I'm seeing unexpected low reuse of sstate on some builder, I just use the same sstate-diff-machines.sh locally, fetch the tarball from the build I was trying to reproduce and compare the content with bitbake-diffsigs.
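In case it helps, an invocation of that script might look roughly like this (run from the build directory; the machine list, target and relative path are only examples, see the script's --help for the exact options):

openembedded-core/scripts/sstate-diff-machines.sh \
  --tmpdir=tmp --machines="raspberrypi3 qemux86-64" \
  --targets=core-image-base --analyze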

Cheers,

On Wed, May 27, 2020 at 10:59 AM Mans Zigher <mans.zigher@...> wrote:
Hi,

This is maybe more related to bitbake but I start by posting it here.
I am for the first time trying to make use of a distributed sstate
cache but I am getting some unexpected results and wanted to hear if
my expectations are wrong. Everything works as expected when a build
node is using a sstate cache from itself, so I do a clean build and
upload the sstate cache from that build to our mirror. If I then do a
complete build using the mirror I get a 99% hit rate, which is what I
would expect. If I then start a build on a different node using the
same cache I am only getting a 16% hit rate. I am running the builds
inside docker so the environment should be identical. We have several
build nodes in our CI and they were actually cloned, and all of them
have the same HW. They are all running the builds in docker, but it
looks like they can share the sstate cache and still get a 99% hit
rate. This to me suggests that the hit rate for the sstate cache is
node dependent, so a cache cannot actually be shared between different
nodes, which is not what I expected. I have not been able to find any
information about this limitation. Any clarification regarding what to
expect from the sstate cache would be appreciated.

Thanks


#yocto #raspberrypi #linux . Failure to build xen-image-minimal. Yocto bitbake failed @99% while building xen-minimal-image. #yocto #raspberrypi #linux

Siddhartha V
 

Hello,

I am building the xen minimal image using yocto warrior ("bitbake xen-image-minimal") by giving the target machine as "raspberrypi4". It reached 99%, but at the end I got the below error and the whole process terminated with an error.

ERROR: xen-image-minimal-1.0-r0 do_rootfs: Could not invoke dnf. Command '/home/siddhu/Documents/yocto/build/tmp/work/raspberrypi4-poky-linux-gnueabi/xen-image-minimal/1.0-r0/recipe-sysroot-native/usr/bin/dnf -v --rpmverbosity=info -y -c /home/siddhu/Documents/yocto/build/tmp/work/raspberrypi4-poky-linux-gnueabi/xen-image-minimal/1.0-r0/rootfs/etc/dnf/dnf.conf --setopt=reposdir=/home/siddhu/Documents/yocto/build/tmp/work/raspberrypi4-poky-linux-gnueabi/xen-image-minimal/1.0-r0/rootfs/etc/yum.repos.d --installroot=/home/siddhu/Documents/yocto/build/tmp/work/raspberrypi4-poky-linux-gnueabi/xen-image-minimal/1.0-r0/rootfs --setopt=logdir=/home/siddhu/Documents/yocto/build/tmp/work/raspberrypi4-poky-linux-gnueabi/xen-image-minimal/1.0-r0/temp --repofrompath=oe-repo,/home/siddhu/Documents/yocto/build/tmp/work/raspberrypi4-poky-linux-gnueabi/xen-image-minimal/1.0-r0/oe-rootfs-repo --nogpgcheck install kernel-module-xen-blkback kernel-module-xen-gntalloc kernel-module-xen-gntdev kernel-module-xen-netback kernel-module-xen-wdt packagegroup-core-boot packagegroup-core-ssh-dropbear packagegroup-core-ssh-openssh qemu run-postinsts xen-base locale-base-en-us locale-base-en-gb' returned 1:

DNF version: 4.1.0

cachedir: /home/siddhu/Documents/yocto/build/tmp/work/raspberrypi4-poky-linux-gnueabi/xen-image-minimal/1.0-r0/rootfs/var/cache/dnf

Added oe-repo repo from /home/siddhu/Documents/yocto/build/tmp/work/raspberrypi4-poky-linux-gnueabi/xen-image-minimal/1.0-r0/oe-rootfs-repo

repo: using cache for: oe-repo

not found other for: 

not found modules for: 

not found deltainfo for: 

not found updateinfo for: 

oe-repo: using metadata from Wed 27 May 2020 06:15:39 AM UTC.

Last metadata expiration check: 0:00:01 ago on Wed 27 May 2020 06:15:45 AM UTC.

No module defaults found

No match for argument: kernel-module-xen-blkback

No match for argument: kernel-module-xen-gntalloc

No match for argument: kernel-module-xen-gntdev

No match for argument: kernel-module-xen-netback

No match for argument: kernel-module-xen-wdt

Error: Unable to find a match

 

ERROR: xen-image-minimal-1.0-r0 do_rootfs: 

ERROR: xen-image-minimal-1.0-r0 do_rootfs: Function failed: do_rootfs

ERROR: Logfile of failure stored in: /home/siddhu/Documents/yocto/build/tmp/work/raspberrypi4-poky-linux-gnueabi/xen-image-minimal/1.0-r0/temp/log.do_rootfs.14256

ERROR: Task (/home/siddhu/Documents/yocto/sources/meta-virtualization/recipes-extended/images/xen-image-minimal.bb:do_rootfs) failed with exit code '1'

NOTE: Tasks Summary: Attempted 2605 tasks of which 2152 didn't need to be rerun and 1 failed.

Summary: 1 task failed:

  /home/siddhu/Documents/yocto/sources/meta-virtualization/recipes-extended/images/xen-image-minimal.bb:do_rootfs

Summary: There were 3 ERROR messages shown, returning a non-zero exit code.

May I know what mistake I am making here, please? I have also attached a screenshot.


My bblayers.conf file is as below:

# POKY_BBLAYERS_CONF_VERSION is increased each time build/conf/bblayers.conf
# changes incompatibly
POKY_BBLAYERS_CONF_VERSION = "2"
BBPATH = "${TOPDIR}"

BBFILES ?= ""
BBLAYERS ?= " \

  /home/siddhu/Documents/yocto/sources/poky/meta \
  /home/siddhu/Documents/yocto/sources/poky/meta-poky \
  /home/siddhu/Documents/yocto/sources/poky/meta-yocto-bsp \
  /home/siddhu/Documents/yocto/sources/meta-openembedded/meta-oe \
  /home/siddhu/Documents/yocto/sources/meta-openembedded/meta-multimedia \
  /home/siddhu/Documents/yocto/sources/meta-openembedded/meta-python \
  /home/siddhu/Documents/yocto/sources/meta-openembedded/meta-networking \
  /home/siddhu/Documents/yocto/sources/meta-openembedded/meta-filesystems \
  /home/siddhu/Documents/yocto/sources/meta-cloud-services \
  /home/siddhu/Documents/yocto/sources/meta-selinux \
  /home/siddhu/Documents/yocto/sources/meta-virtualization\
  /home/siddhu/Documents/yocto/sources/meta-raspberrypi \
  "

In my local.conf file I have added the below details:

MACHINE ??= "raspberrypi4"
DISTRO_FEATURES += "virtualization xen"
PACKAGE_CLASSES ?= "package_rpm"
CONF_VERSION = "1"
IMAGE_FEATURES += "ssh-server-dropbear"


I have tried the same for "cubieboard2" also; even there I faced the same issue. Eagerly waiting for the response.

Regards,
Siddhartha V


#apt #linux #yocto #raspberrypi., apt, gcc, sudo not present #apt #linux #yocto

Siddhartha V
 

Hello,

  I built the image for the Raspberrypi3B+ board. But when I boot the board, gcc, apt and sudo are not there. Actually only poweron and poweroff and a few basic commands like ls, mkdir and whoami were working. May I know what I did wrong while building, please?

I am using yocto warrior. I used the "bitbake core-image-base" command for building.

My bblayers.conf is as below:

# POKY_BBLAYERS_CONF_VERSION is increased each time build/conf/bblayers.conf
# changes incompatibly
POKY_BBLAYERS_CONF_VERSION = "2"
 
BBPATH = "${TOPDIR}"
BBFILES ?= ""
 
BBLAYERS ?= " \
  /home/siddhu/Documents/yocto/sources/poky/meta \
  /home/siddhu/Documents/yocto/sources/poky/meta-poky \
  /home/siddhu/Documents/yocto/sources/poky/meta-yocto-bsp \
  /home/siddhu/Documents/yocto/sources/meta-openembedded/meta-oe \
  /home/siddhu/Documents/yocto/sources/meta-openembedded/meta-multimedia \
  /home/siddhu/Documents/yocto/sources/meta-openembedded/meta-python \
  /home/siddhu/Documents/yocto/sources/meta-openembedded/meta-networking \
  /home/siddhu/Documents/yocto/sources/meta-openembedded/meta-filesystems \
  /home/siddhu/Documents/yocto/sources/meta-raspberrypi \
  "

In my local.conf file I have added the below details:

MACHINE ??= "raspberrypi3"
PACKAGE_CLASSES ?= "package_rpm"
CONF_VERSION = "1"
IMAGE_FEATURES += "ssh-server-dropbear"


Re: what to expect from distributed sstate cache?

Mans Zigher <mans.zigher@...>
 

Hi,

Thanks for the input. Regarding docker, we are building the docker
image and using the same image for all nodes, so shouldn't they
be identical when the nodes start the containers?

Thanks,

On Wed, 27 May 2020 at 11:16, <Mikko.Rapeli@bmw.de> wrote:


Hi,

On Wed, May 27, 2020 at 10:58:55AM +0200, Mans Zigher wrote:
This is maybe more related to bitbake but I start by posting it here.
I am for the first time trying to make use of a distributed sstate
cache but I am getting some unexpected results and wanted to hear if
my expectations are wrong. Everything works as expected when a build
node is using a sstate cache from itself, so I do a clean build and
upload the sstate cache from that build to our mirror. If I then do a
complete build using the mirror I get a 99% hit rate, which is what I
would expect. If I then start a build on a different node using the
same cache I am only getting a 16% hit rate. I am running the builds
inside docker so the environment should be identical. We have several
build nodes in our CI and they were actually cloned, and all of them
have the same HW. They are all running the builds in docker, but it
looks like they can share the sstate cache and still get a 99% hit
rate. This to me suggests that the hit rate for the sstate cache is
node dependent, so a cache cannot actually be shared between different
nodes, which is not what I expected. I have not been able to find any
information about this limitation. Any clarification regarding what to
expect from the sstate cache would be appreciated.
We do something similar except we rsync a sstate mirror to build
nodes from the latest release before a build (and topics from gerrit
are merged to the latest release too, to avoid sstate and build tree getting
too out of sync).

bitbake-diffsigs can tell you why things get rebuilt. The answers
should be there.

Also note that docker images are not reproducible by default
and might end up having different patch versions of openssl etc
depending on who built them and when. One way to work around this
is to use e.g. snapshots.debian.org repos for Debian containers
with a timestamped state of the full package repo used to generate
the container. I've done something similar but manual on top of
debootstrap to create a build rootfs tarball for lxc.

Hope this helps,

-Mikko


Re: what to expect from distributed sstate cache?

Mikko Rapeli
 

Hi,

On Wed, May 27, 2020 at 10:58:55AM +0200, Mans Zigher wrote:
This is maybe more related to bitbake but I start by posting it here.
I am for the first time trying to make use of a distributed sstate
cache but I am getting some unexpected results and wanted to hear if
my expectations are wrong. Everything works as expected when a build
node is using a sstate cache from itself, so I do a clean build and
upload the sstate cache from that build to our mirror. If I then do a
complete build using the mirror I get a 99% hit rate, which is what I
would expect. If I then start a build on a different node using the
same cache I am only getting a 16% hit rate. I am running the builds
inside docker so the environment should be identical. We have several
build nodes in our CI and they were actually cloned, and all of them
have the same HW. They are all running the builds in docker, but it
looks like they can share the sstate cache and still get a 99% hit
rate. This to me suggests that the hit rate for the sstate cache is
node dependent, so a cache cannot actually be shared between different
nodes, which is not what I expected. I have not been able to find any
information about this limitation. Any clarification regarding what to
expect from the sstate cache would be appreciated.
We do something similar except we rsync a sstate mirror to build
nodes from the latest release before a build (and topics from gerrit
are merged to the latest release too, to avoid sstate and build tree getting
too out of sync).
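As a sketch, that pre-build sync step can be as small as one rsync plus a file:// mirror entry (host name and paths are made up):

rsync -a --delete build-server:/srv/sstate-cache/ /path/to/build/sstate-mirror/
# then in local.conf:
# SSTATE_MIRRORS ?= "file://.* file:///path/to/build/sstate-mirror/PATH"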

bitbake-diffsigs can tell you why things get rebuilt. The answers
should be there.

Also note that docker images are not reproducible by default
and might end up having different patch versions of openssl etc
depending on who built them and when. One way to work around this
is to use e.g. snapshots.debian.org repos for Debian containers
with a timestamped state of the full package repo used to generate
the container. I've done something similar, but manually, on top of
debootstrap to create a build rootfs tarball for lxc.

Hope this helps,

-Mikko


Re: what to expect from distributed sstate cache?

Alexander Kanavin
 

The recommended setup is to use r/w NFS between build machines, so they all contribute to the cache directly. And yes, if the inputs to the task are identical, then there should be a cache hit.

If you are getting cache misses where you are expecting a cache hit, then bitbake-diffsigs/bitbake-dumpsig may help to debug.
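A couple of example invocations, with placeholder recipe, task and file names:

bitbake-diffsigs -t openssl do_configure        # compare the two most recent signatures of a task
bitbake-diffsigs old.do_configure.sigdata new.do_configure.sigdata   # diff two specific signature files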

Alex


On Wed, 27 May 2020 at 10:59, Mans Zigher <mans.zigher@...> wrote:
Hi,

This is maybe more related to bitbake but I start by posting it here.
I am for the first time trying to make use of a distributed sstate
cache but I am getting some unexpected results and wanted to hear if
my expectations are wrong. Everything works as expected when a build
node is using a sstate cache from itself, so I do a clean build and
upload the sstate cache from that build to our mirror. If I then do a
complete build using the mirror I get a 99% hit rate, which is what I
would expect. If I then start a build on a different node using the
same cache I am only getting a 16% hit rate. I am running the builds
inside docker so the environment should be identical. We have several
build nodes in our CI and they were actually cloned, and all of them
have the same HW. They are all running the builds in docker, but it
looks like they can share the sstate cache and still get a 99% hit
rate. This to me suggests that the hit rate for the sstate cache is
node dependent, so a cache cannot actually be shared between different
nodes, which is not what I expected. I have not been able to find any
information about this limitation. Any clarification regarding what to
expect from the sstate cache would be appreciated.

Thanks
