
Re: cmake version #cmake

Rakesh H S
 

Hi Kush,


Please check the recipe you created and add the following line to it:

inherit cmake

That will resolve the cmake issue.
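In other words, a recipe for a CMake-based project should look roughly like this (a sketch only; the names, URL, revision and checksum are placeholders to be filled in for the real package):

```
SUMMARY = "Example CMake-based package"
LICENSE = "MIT"
LIC_FILES_CHKSUM = "file://LICENSE;md5=<fill-in>"

SRC_URI = "git://github.com/example/example.git;branch=master"
SRCREV = "<fill-in>"
S = "${WORKDIR}/git"

# Pulls in cmake-native and provides do_configure/do_compile,
# which resolves "no cmake or cmake3 ... found" errors at build time.
inherit cmake
```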



Rgs,
Rakesh H S



From: yocto@... <yocto@...> on behalf of lavkhush2208 via lists.yoctoproject.org <lavkhush2208=gmail.com@...>
Sent: 18 June 2021 08:29
To: yocto@... <yocto@...>
Subject: [yocto] cmake version #cmake
 
Hello Guys

I am building pytorch 1.9 from the GitHub source, but I am facing an issue:

ERROR: pytorch-v1.9.0+gitAUTOINC+ecc37184a5-r0 do_compile: 'python3 setup.py build ' execution failed.
ERROR: pytorch-v1.9.0+gitAUTOINC+ecc37184a5-r0 do_compile: Execution of '/home/kush/package-create/kush/sources/fu540-build/tmp-glibc/work/riscv64-oe-linux/pytorch/v1.9.0+gitAUTOINC+ecc37184a5-r0/temp/run.do_compile.14902' failed with exit code 1:
Building wheel torch-1.10.0a0+gitecc3718

raise RuntimeError('no cmake or cmake3 with version >= 3.5.0 found')
RuntimeError: no cmake or cmake3 with version >= 3.5.0 found
ERROR: 'python3 setup.py build ' execution failed.

If something is missing, please let me know so that I can modify the recipe.

T&R
lavkhush
 


[PATCH] smack: add 3 cves to allowlist

Sekine Shigeki
 

CVE-2014-0363, CVE-2014-0364 and CVE-2016-10027 do not apply to the smack-team smack project (https://github.com/smack-team/smack) but to a different project.

Signed-off-by: Sekine Shigeki <sekine.shigeki@...>
---
recipes-mac/smack/smack_1.3.1.bb | 5 +++++
1 file changed, 5 insertions(+)

diff --git a/recipes-mac/smack/smack_1.3.1.bb b/recipes-mac/smack/smack_1.3.1.bb
index b1ea4e9..6ae715e 100644
--- a/recipes-mac/smack/smack_1.3.1.bb
+++ b/recipes-mac/smack/smack_1.3.1.bb
@@ -13,6 +13,11 @@ SRC_URI = " \

PV = "1.3.1"

+# CVE-2014-0363, CVE-2014-0364 and CVE-2016-10027 apply to a different product.
+CVE_CHECK_WHITELIST += "CVE-2014-0363"
+CVE_CHECK_WHITELIST += "CVE-2014-0364"
+CVE_CHECK_WHITELIST += "CVE-2016-10027"
+
inherit autotools update-rc.d pkgconfig ptest
inherit ${@bb.utils.contains('VIRTUAL-RUNTIME_init_manager','systemd','systemd','', d)}
inherit features_check
--
2.25.1


Re: Yocto Autobuilder: Latency Monitor and AB-INT - Meeting notes: June 17, 2021

Alexandre Belloni
 

On 17/06/2021 10:05:48-0400, Randy MacLeod wrote:
5. On the ubuntu-18.04 builders, we seem to see issues there,
we don't know why, maybe only that we have more of those workers...
Alex, could you possibly get failures per worker statistics?
Specifically for bug 14273 (the rcu stall is seen on the logs):

ubuntu1804-ty-1 6 23.1%
ubuntu1804-ty-2 5 19.2%
ubuntu1804-ty-3 12 38.5%
fedora31-ty-1 2 7.7%
debian8-ty-1 3 11.5%


6. discussed Sakib's summary script. It's coming along.
TO DO:
- special activities: rm (of trash), tar, qemu*
- report all zombies
(The current horde is due to Paul Barker's patch)
7.
make: job server
- the fifo was being re-created by the wrapper on each call
so Trevor will fix that.



8. From last week, I don't think we've increased the timeouts:

- qemu-runner? timeout increase 120 -> 240
- ptest timeouts 300 -> 450?




9.
Plans for the week:

Richard: RCU stall
Alex:
Sakib: task summary
Trevor: make job server
Tony: ptests and work with upstream valgrind on fixing bugs.
Saul: (1 week) have QMP deal with sigusr1 to close the QMP socket
Randy: coffee, herd cats!!


../Randy
--
Alexandre Belloni, co-owner and COO, Bootlin
Embedded Linux and Kernel engineering
https://bootlin.com


Re: Empty source package when using devtool

Frederic Martinsons <frederic.martinsons@...>
 

If it can help to understand: I rechecked the full content of the source packages, and glib-2.0 / networkmanager are not OK, contrary to what I said earlier.
They are not empty, but they seem to contain only build-generated files (for example glib/glibconfig.h which, in the glib source repository, is glib/glibconfig.h.in).


Empty source package when using devtool

Frederic Martinsons <frederic.martinsons@...>
 

Hello,

I currently use Yocto warrior and I am encountering an issue with the packaging of recipes.
I would like to embed some source packages (to be able to run coverage on target). This works without issue and places the sources of the package in ${ROOTFS}/usr/src/debug, but when I 'devtool modify' a recipe to work on it, the source package is totally empty.

First, I suspected my various recipe customizations, so I tried the same method (devtool modify, then packaging and looking at the content of the -src package) on more standard packages; here are my results:
     - glib-2.0 -> ok
     - networkmanager -> ok
     - dnsmasq -> empty source
     - zlib -> empty source

I tried to look at log.do_package but I didn't see any warning or error there.

Since warrior is EOL, I tried a fresh poky on dunfell and hardknott, and I got exactly the same results. Has anybody here experienced this? Is this a bug, or did I miss some configuration?

Thank you very much in advance for any help you can bring.


Yocto Technical Team Minutes, Engineering Sync, for June 15 2021

Trevor Woerner
 

Yocto Technical Team Minutes, Engineering Sync, for June 15 2021
archive: https://docs.google.com/document/d/1ly8nyhO14kDNnFcW2QskANXW3ZT7QwKC5wWVDg9dDH4/edit

== disclaimer ==
Best efforts are made to ensure the below is accurate and valid. However,
errors sometimes happen. If any errors or omissions are found, please feel
free to reply to this email with any corrections.

== attendees ==
Trevor Woerner, Stephen Jolley, Jan-Simon Möller, Steve Sakoman, Randy
MacLeod, Joshua Watt (JPEW), Michael Opdenacker, Peter Kjellerstedt,
Richard Purdie, Tim Orling, Tony Tascioglu, Trevor Gamblin, Denys
Dmytriyenko, Bruce Ashfield, Michael Halstead, Ross Burton, Saul Wold

== notes ==
- 3.4 m1 built and in QA (honister)
- m1 unblocked, we thought it was caused by kernel stuff, but it’s going to
take longer
- still working on AB-INT centos8 issue
- multiconfig issues continue, need simpler test cases
- ongoing discussion of potential new bitbake assignment operator (see
architecture mailing list)
- pr-serv updates from PaulB are revealing issues with shutdown, hanging
threads, etc
- uptick in CVEs (up to 17, was down to 4 at one point) thanks to RossB for
help (please join in)

== general ==
TimO: i was looking at one of the CVEs for dunfell, there was a collision in the patch that we’re working on


Randy: Tony, did you send the email regarding ffmpeg
Tony: not yet, but i sent the 2nd part of valgrind fixes
Randy: there are many CVEs out for ffmpeg that Tony is looking at


TrevorG: we’re using kia as the test package, looking at how the job server
works, collecting data. if you run with X jobs but add the debug flag it
spits out messages about needing a token
Randy: 2 builds, both building kia, we should see a difference in the amount
of time taken but we’re not seeing that
TrevorG: working on gathering more data
Randy: do you want the patch today
RP: seems to be confusion over the results so let’s sort that out. we’ve
been looking at how make does its job server stuff and we’re looking at
integrating it into our builds (it sets up pipes and writes tokens to the
pipe and the sub builders read a token, work on stuff, then put the token
back)
RP: hopefully this helps with the AB-INT issues, and if we can solve the
kernel stuff wrt AB-INT then we’ll be in a good position
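The token protocol RP describes can be sketched in a few lines (a simplified single-process illustration of the idea, not the actual make or bitbake implementation):

```python
import os

# A jobserver pre-loads a pipe with one token per job slot; every
# worker must read a token before doing work and write it back when
# done, which caps overall parallelism at the number of tokens.
def make_jobserver(slots):
    read_fd, write_fd = os.pipe()
    os.write(write_fd, b"+" * slots)  # pre-load the tokens
    return read_fd, write_fd

def run_job(read_fd, write_fd, work):
    token = os.read(read_fd, 1)  # blocks until a token is free
    try:
        return work()
    finally:
        os.write(write_fd, token)  # put the token back for others

read_fd, write_fd = make_jobserver(2)
results = [run_job(read_fd, write_fd, lambda i=i: i * i) for i in range(4)]
print(results)
```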


Randy: can we disable sstate generation?
RP: not that simple, sstate does a bunch of stuff, recipe-specific sysroots,
for example, so some parts of the sstate generation have to be used
regardless, the only bit you can disable is the final creation of the
sstate object, or we could zero out the final function call. what would
blow up if we did? don’t know, won’t look into it. i won’t be accepting
patches for that. the gains you think you might see won’t be worth it
Randy: okay, i’ll look into it. if it’s less than 1 or 2% then i won’t
care. my use-case is that we clean up everything after each build, so why
create it in the first place
RP: you’ll see problems with multiconfig and in other places, and i’m not
convinced it’s worth the pain
JPEW: doesn’t he also have to disable siginfo data generation?
RP: probably, this is why i don’t want to go there


JPEW: been working on SPDX stuff. we have our own view of the world and
that’s what we need to include in the SPDX.
RP: i think we’re going to see more announcements soon (external tools etc)
JPEW: we are going to the plugfest in 2 weeks
RP: but you’ve missed the deadline
JPEW: i submitted one
RP: (needs to follow up)
JPEW: maybe i didn’t put it in the right place. i’ll double-check.
RP: i’ll give you Kate’s email and coordinate with her. getting involved
would be good
JPEW: looking at making meta-doubleopen better integrated into oe-core
RP: it feels like a lot of patches and hacks, which worries me
JPEW: a lot of it was making it do sstate properly. the big thing is the
archiving of the sources which i think will make use of your sources
changes. the changes to oe-core were minimal (debug sources)
RP: if we could pipe that through lz4 and compress it
JPEW: we could just add it as a host package (dependency)
RP: that would be simpler than doing it through python APIs
JPEW: agreed
RP: maybe we could save off base hash
MichaelO: why lz4?
JPEW: it’s the fastest of the ones we’ve looked at for this type of data.
zstd is more configurable, but lz4 is just pinned to be fast
MichaelO: if speed matters then i agree lz4 is the fastest
RP: we’ve looked at a couple and had better compression with some others,
but lz4 always ended up being the fastest. we even looked at having
bitbake start the next task before finishing the compression, but that
didn’t work out
JPEW: i think zstd has parallel compressing so we could use multiple threads.
and with configurability we can change between size and speed
RP: with the testing i did i also found gz worked well in some cases. i’m
open to any ideas here if there’s data to support it. i’m not sure
what’s in centos7, but we use buildtools tarball anyway
MichaelO: i can do some testing, where is it set?
RP: it’s not parameterized, but look in the sstate bbclass and do a
search+replace of tgz, then do a test and compare size and time
JPEW: we use pigz if it’s available
RP: right. it’s smart enough to start with gz and then use pigz if available
JPEW: i think zstd has the same, there’s pzstd for the parallel version
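The speed/size trade-off being discussed can be illustrated with the compressors in the Python standard library (lz4 and zstd need third-party modules, so gzip/bz2/lzma stand in here; the numbers will vary by machine and by data):

```python
import bz2, gzip, lzma, time

data = b"yocto sstate object payload " * 40000  # ~1 MB, compressible

# Time each compressor on the same input and report the output size.
for name, compress in (("gzip", gzip.compress),
                       ("bz2", bz2.compress),
                       ("lzma", lzma.compress)):
    start = time.perf_counter()
    out = compress(data)
    elapsed = time.perf_counter() - start
    print("%-5s %8d bytes  %.3fs" % (name, len(out), elapsed))
```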


JPEW: compressed package data, debuginfo, do you want it debuginfo or do
you want to be able to put other things in there, or replicate what’s in
the package data
RP: i assume you’re going to create a new file
JPEW: yes
RP: then let’s have an api that says “this file has this data in it”
Ross: is it time to consider a package data v2 (e.g. with compressed json)
because i was looking at adding more info into package data too and it was
bad
Saul: it could be configurable. it could be packagedata but all in one file
RP: the “configurable” thing is problematic. a 2nd build with a different
config won’t be able to reuse sstate info. but adding debuginfo is large
(hence compression). i can see what Ross would like because internally
using that information to regenerate its own internal state is quite fast
Saul: sorry, i meant what’s output could be configurable
RP: but what JPEW wants is adding to the packagedata. there’s the top-level
file but there’s also data in the runtime directory and the two are
linked. i’m fine with moving it to json (or some other format) but we
need to make sure we can access the subsets so it doesn’t become 1 huge
json file, but a file with sets. i’m thinking out loud (these might not
work). it goes into sstate as individual entries but then gets splattered
like a sysroot. the nice thing is it is lockless, but with individual
files maybe locking becomes an issue? although with package data being
recipe-specific it might avoid any issues.
JPEW: then there’s the hard links too
RP: it only hardlinks to things it depends on
JPEW: knowing what we need from the packagedata is the first step
RP: some of that is obsolete because of recipe-specific sysroots
RP: what do people think of making a hardlink of the sources? the downside is
having hardlinked directory trees that take a long time to delete.
JPEW: how are we deleting them?
RP: we’ve tried a couple things, but they’re all slow because fs people
don’t care about that usecase
JPEW: are you thinking of moving the place where we checkout to in order to
make cleanup easier?
RP: there are a number of reasons, currently there is no simple place to point
to to find the sources for a build (think devtool)
JPEW: the archiver is very configurable, would you save off the copy before or
after do_configure?
RP: good question. i think we might have to ask people what they want. it
would depend on what their legal departments might think. are there any
preferences?
Randy: we’ve gone back and forth, but i think we save off the patched ones
now
RP: i do have patches for playing with saving off the sources, and they do
produce a build, but as soon as you touch it with devtool (for example) it
explodes, so if anyone wants to play with it they are available


Re: Integration of mpg321 in Yocto Zeus #zeus #yocto

Ross Burton <ross@...>
 

oe-core has mpg123 already, so just use that recipe.
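For instance, pulling it into an image could be as simple as the following configuration fragment (a sketch, using the pre-3.4 override syntax that was current at the time):

```
IMAGE_INSTALL_append = " mpg123"
```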

Ross

On Thu, 17 Jun 2021 at 15:42, Poornesh <poornesh.g@...> wrote:

Greetings !

If anyone has achieved the task of integrating "mpg321" in Yocto, I kindly request that you share the procedure/steps to be followed.

Thanks


Re: Integration of mpg321 in Yocto Zeus #zeus #yocto

Poornesh <poornesh.g@...>
 

Greetings !

If anyone has achieved the task of integrating "mpg321" in Yocto, I kindly request that you share the procedure/steps to be followed.

Thanks


Re: [PATCH yocto-autobuilder-helper] summarize_top_output.py: add script, use it and publish summary

sakib.sajal@...
 

On 2021-06-16 1:33 p.m., Richard Purdie wrote:
[Please note: This e-mail is from an EXTERNAL e-mail address]

On Wed, 2021-06-16 at 04:43 -0400, sakib.sajal@... wrote:
summarize_top_output.py is used to summarize the top
output that is captured during autobuilder intermittent
failures.

Use the script to summarize the host top output and
publish the summary that is created instead of
the raw logfile.


[...]
if jcfg:
diff --git a/scripts/summarize_top_output.py b/scripts/summarize_top_output.py
new file mode 100755
index 0000000..0606a34
--- /dev/null
+++ b/scripts/summarize_top_output.py
@@ -0,0 +1,132 @@
+#!/usr/bin/env python3
+
+import os, sys, glob
+
+# constants
+top_header = 7
+top_summary = 5
+max_cols = 11
+
+# string substitution to make things easier to read
+subs = {
+ "/home/pokybuild/yocto-worker/" : "~/",
+ "/build/build/tmp/work/core2-32-poky-linux/" : "/.../POKY_32/.../",
+ "/build/build/tmp/work/core2-64-poky-linux/" : "/.../POKY_64/.../",
+ "/recipe-sysroot-native/usr/bin/x86_64-poky-linux/../../libexec/x86_64-poky-linux/gcc/x86_64-poky-linux/" : "/...GCC.../"
+}
One quick question - the above assumes an x86 target machine using those two tunes.
Should that be wildcarded?

Cheers,

Richard
Yes it should be. We've looked at a number of logs and I will send a V2 with wild-carding for the cross-compiler paths.

After the V2, I plan on submitting patches for the following in roughly the presented order:

1) any process that is listed once (singleton) is not reported in the summary, but we are planning to make exceptions for "special" commands such as "rm" and "tar"

2) i noticed some zombie processes and it might be worthwhile to report them. Right now they are not indicated in the summary. Should we report all zombies or ignore the singletons?

For example I'd collapse [Parser-31], [Parser-32] into [Parser-N]
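That collapsing could look like this (a sketch, not the actual script):

```python
import re
from collections import Counter

def collapse_numbered(names):
    # Fold a trailing "-<digits>" into "-N" so numbered workers
    # such as [Parser-31] and [Parser-32] count as one entry.
    return Counter(re.sub(r"-\d+(?=\]?$)", "-N", n) for n in names)

counts = collapse_numbered(["[Parser-31]", "[Parser-32]", "[kworker/3:1]"])
print(dict(counts))
```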


3) It would be nice to specify other builds going on and the top level directory

4) currently top does not show PPID, should we include the information in the output? This is useful for more context in the raw logfile.

5) show top 5 or 10 virtual memory users


From the yp-ab index page, there are no logs for the arm hosts. I will run a build and make sure the script works well there as well.

Any ideas on why the arm host builds are not being shown on the index page?

Sakib



cross compiling perl modules with c/c++ code

Marco <marco@...>
 

Hello All,

I'm trying to get a Perl module (specifically
Google-ProtocolBuffers-Dynamic @
https://metacpan.org/pod/Google::ProtocolBuffers::Dynamic) cross
compiled for an imx8 target; and having a bit of difficulty doing so.

I was hoping someone could provide some clues/examples/direction on cross
compiling Perl modules under Yocto (specifically those that interface
with underlining c/c++ code like this one).

As a note, I manually futzed around with it and I was able to cross
compile the Perl module manually with things like `perl Build.PL
--config cc=.. --config ld=..` and got a library that I placed on the
target. Attempting to use it, though, gives an error of "loadable library
and Perl binaries are mismatched".
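For reference, that "mismatched" error typically suggests the module was linked against the build host's Perl configuration rather than the target's. In Yocto, recipes for CPAN modules usually inherit the cpan class (ExtUtils::MakeMaker) or, for Build.PL-based modules like this one, cpan_build, which set up the cross Perl configuration. A minimal, untested recipe sketch, where the URL path, version and checksums are placeholders:

```
SUMMARY = "Perl bindings for Google Protocol Buffers"
HOMEPAGE = "https://metacpan.org/pod/Google::ProtocolBuffers::Dynamic"
LICENSE = "MIT"
LIC_FILES_CHKSUM = "file://LICENSE;md5=<fill-in>"

SRC_URI = "https://cpan.metacpan.org/authors/id/<...>/Google-ProtocolBuffers-Dynamic-<version>.tar.gz"
SRC_URI[sha256sum] = "<fill-in>"

# native protobuf tooling plus the target library
DEPENDS = "protobuf-native protobuf"

# Build.PL / Module::Build based module
inherit cpan_build
```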

Thank you in advance for any help!

Marco


Yocto Autobuilder: Latency Monitor and AB-INT - Meeting notes: June 17, 2021

Randy MacLeod
 

Join Zoom Meeting - 9 AM ET
https://windriver.zoom.us/j/3696693975

Attendees: Alex, Richard, Saul, Randy, Tony, Trevor, Sakib


Summary: Things are improving somewhat on the autobuilder,
RCU stalls are the top problem now.


1.
LTP kernel BUG:
Many thanks to Paul Gortmaker for his work on this!


2. The most common problem now is the qemu RCU hang.

For example these builds:

https://autobuilder.yoctoproject.org/typhoon/#/builders/73/builds/3541/steps/13/logs/stdio


Richard's links on RCU stall detection, and tuning parameters:
https://www.kernel.org/doc/Documentation/RCU/stallwarn.txt
https://lwn.net/Articles/777214/

Next:
- Ask around for advice on qemu debugging.
- RP thinks that the underlying system has a problem:
CPU or other overload.
We do see that there are two qemus that are using lots of CPU in
the links above.
Richard says that the likely activity is:
- core-image-sato-sdk, compiler tests
- core-image-sato lighter general tests
Alex thinks that the particular workload is not significant.
- run two qemu in a controlled env, with stress-ng.
- iostat will help - Sakib.
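That controlled experiment might be sketched as follows (flags as documented in the stress-ng and sysstat man pages; the load levels and duration here are arbitrary):

```shell
# Load the host roughly the way a busy builder would be loaded while
# qemu guests run, then watch per-device utilization and I/O latency.
stress-ng --cpu 16 --io 4 --vm 2 --vm-bytes 1G --timeout 10m &
iostat -x 5
```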

3. Valgrind ptest results are getting better.

4. ptest issues are coming along, with util-linux being the next
thing to be merged today likely.

5. On the ubuntu-18.04 builders, we seem to see issues there,
we don't know why, maybe only that we have more of those workers...
Alex, could you possibly get failures per worker statistics?


6. discussed Sakib's summary script. It's coming along.
TO DO:
- special activities: rm (of trash), tar, qemu*
- report all zombies
(The current horde is due to Paul Barker's patch)
7.
make: job server
- the fifo was being re-created by the wrapper on each call
so Trevor will fix that.



8. From last week, I don't think we've increased the timeouts:

- qemu-runner? timeout increase 120 -> 240
- ptest timeouts 300 -> 450?




9.
Plans for the week:

Richard: RCU stall
Alex:
Sakib: task summary
Trevor: make job server
Tony: ptests and work with upstream valgrind on fixing bugs.
Saul: (1 week) have QMP deal with sigusr1 to close the QMP socket
Randy: coffee, herd cats!!


../Randy


Re: Use of SDK for building images?

Josef Holzmayr
 



Leon Woestenberg <leon@...> schrieb am Do., 17. Juni 2021, 14:17:
Hello all,

In some other build systems the generated SDK can be used to also
generate an image. Thus the SDK allows development against the target
sysroot using the prebuilt host and target tools, as well as
(re)generate the target images in quick iteration cycles.

What approaches are recommended with Yocto to achieve the same benefits?

The esdk is meant for exactly that use case.
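Concretely, the flow looks roughly like this (a sketch; the exact image name, installer filename and environment script depend on your configuration):

```shell
# On the build host: produce the extensible SDK installer
bitbake core-image-minimal -c populate_sdk_ext

# On the development machine: install it and source its environment
./tmp/deploy/sdk/poky-*-core-image-minimal-*.sh -d ~/esdk
. ~/esdk/environment-setup-*

# Inside the eSDK: rebuild the image, including WIC output if the
# image recipe enables it via IMAGE_FSTYPES
devtool build-image core-image-minimal
```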


My need is to regenerate an initramfs as well as compose this into an
image using WIC.

I am aware of shared state, and using prebuilt toolchains, but my
question is whether the prebuilt SDK allows to generate images?

Regards,

Leon.




Use of SDK for building images?

Leon Woestenberg
 

Hello all,

In some other build systems the generated SDK can be used to also
generate an image. Thus the SDK allows development against the target
sysroot using the prebuilt host and target tools, as well as
(re)generate the target images in quick iteration cycles.

What approaches are recommended with Yocto to achieve the same benefits?

My need is to regenerate an initramfs as well as compose this into an
image using WIC.

I am aware of shared state, and using prebuilt toolchains, but my
question is whether the prebuilt SDK allows to generate images?

Regards,

Leon.


[yocto-autobuilder2][PATCH] reporters/swatbot: sanitize urls

Alexandre Belloni
 

When the log name contains a space, the generated URL is not correct. This
later also breaks parsing in swatbot. This was triggered by "property
changes" and the correct URL is indeed logs/property_changes.

Signed-off-by: Alexandre Belloni <alexandre.belloni@...>
---

This is untested because I don't have the setup handy but, I hope, it is
trivial enough to work properly


reporters/swatbot.py | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/reporters/swatbot.py b/reporters/swatbot.py
index 4a5a04eb1809..41ae762359a0 100644
--- a/reporters/swatbot.py
+++ b/reporters/swatbot.py
@@ -202,7 +202,7 @@ class SwatBotURI(object):
logs = list(logs)
urls = []
for l in logs:
-    urls.append('%s/steps/%s/logs/%s' % (build['url'], step_number, l['name']))
+    urls.append('%s/steps/%s/logs/%s' % (build['url'], step_number, l['name'].replace(" ", "_")))
if urls:
    urls = " ".join(urls)
else:
--
2.31.1
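As an aside (not part of the patch, which deliberately produces the underscore form swatbot expects): if the consumer accepted percent-encoded URLs, the more general fix would be to encode the log name, e.g.:

```python
from urllib.parse import quote

def log_url(build_url, step_number, log_name):
    # Percent-encode the log name so spaces and any other unsafe
    # characters survive URL parsing.
    return "%s/steps/%s/logs/%s" % (build_url, step_number,
                                    quote(log_name, safe=""))

print(log_url("https://autobuilder.example/builders/73/builds/3541",
              13, "property changes"))
```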


Re: [qa-build-notification] QA notification for completed autobuilder build (yocto-3.4_M1.rc1)

Sangeeta Jain
 

Hi all,

This is the full report for yocto-3.4_M1.rc1:
https://git.yoctoproject.org/cgit/cgit.cgi/yocto-testresults-contrib/tree/?h=intel-yocto-testresults

======= Summary ========
No high milestone defects.

New issues found:

BUG id:14434 - [3.4 M1] dmesg: proc: Bad value for 'hidepid' with poky-altcfg distro

BUG id:14435 - [3.4 M1 beaglebone] Some drm error messages in dmesg


======= Bugs ========
https://bugzilla.yoctoproject.org/show_bug.cgi?id=14434
https://bugzilla.yoctoproject.org/show_bug.cgi?id=14435

Thanks,
Sangeeta

-----Original Message-----
From: qa-build-notification@... <qa-build-notification@...> On Behalf Of Pokybuild User
Sent: Saturday, 12 June, 2021 7:49 PM
To: yocto@...
Cc: qa-build-notification@...
Subject: [qa-build-notification] QA notification for completed autobuilder build
(yocto-3.4_M1.rc1)


A build flagged for QA (yocto-3.4_M1.rc1) was completed on the autobuilder
and is available at:


https://autobuilder.yocto.io/pub/releases/yocto-3.4_M1.rc1


Build hash information:

bitbake: 398a1686176c695d103591089a36e25173f9fd6e
meta-arm: 6c3d62c776fc45b4bae47d178899e84b17248b31
meta-gplv2: 1ee1a73070d91e0c727f9d0db11943a61765c8d9
meta-intel: 0937728bcd98dd13d2c6829e1cd604ea2e53e5cd
meta-mingw: bfd22a248c0db4c57d5a72d675979d8341a7e9c1
oecore: 3b2903ccc791d5dedd84c75227f38ae4c8d29251
poky: 59d93693bf24e02ca0f05fe06d96a46f4f0f1bf8



This is an automated message from the Yocto Project Autobuilder
Git: git://git.yoctoproject.org/yocto-autobuilder2
Email: richard.purdie@...







[PATCH] bsps/5.10: update to v5.10.43

Bruce Ashfield
 

From: Bruce Ashfield <bruce.ashfield@...>

Updating linux-yocto/5.10 to the latest korg -stable release, and to
match the qemu BSPs in oe-core

Signed-off-by: Bruce Ashfield <bruce.ashfield@...>
---
.../linux/linux-yocto_5.10.bbappend | 16 ++++++++--------
1 file changed, 8 insertions(+), 8 deletions(-)

diff --git a/meta-yocto-bsp/recipes-kernel/linux/linux-yocto_5.10.bbappend b/meta-yocto-bsp/recipes-kernel/linux/linux-yocto_5.10.bbappend
index bc2b3bf576..f8362b6635 100644
--- a/meta-yocto-bsp/recipes-kernel/linux/linux-yocto_5.10.bbappend
+++ b/meta-yocto-bsp/recipes-kernel/linux/linux-yocto_5.10.bbappend
@@ -7,17 +7,17 @@ KMACHINE_genericx86 ?= "common-pc"
KMACHINE_genericx86-64 ?= "common-pc-64"
KMACHINE_beaglebone-yocto ?= "beaglebone"

-SRCREV_machine_genericx86 ?= "8c516ced69f41563404ada0bea315a55bcf1df6f"
-SRCREV_machine_genericx86-64 ?= "8c516ced69f41563404ada0bea315a55bcf1df6f"
-SRCREV_machine_edgerouter ?= "965ab3ab746ae8a1158617b6302d9c218ffbbb66"
-SRCREV_machine_beaglebone-yocto ?= "8c516ced69f41563404ada0bea315a55bcf1df6f"
+SRCREV_machine_genericx86 ?= "ab49d2db98bdee2c8c6e17fb59ded9e5292b0f41"
+SRCREV_machine_genericx86-64 ?= "ab49d2db98bdee2c8c6e17fb59ded9e5292b0f41"
+SRCREV_machine_edgerouter ?= "274d63799465eebfd201b3e8251f16d29e93a978"
+SRCREV_machine_beaglebone-yocto ?= "ab49d2db98bdee2c8c6e17fb59ded9e5292b0f41"

COMPATIBLE_MACHINE_genericx86 = "genericx86"
COMPATIBLE_MACHINE_genericx86-64 = "genericx86-64"
COMPATIBLE_MACHINE_edgerouter = "edgerouter"
COMPATIBLE_MACHINE_beaglebone-yocto = "beaglebone-yocto"

-LINUX_VERSION_genericx86 = "5.10.21"
-LINUX_VERSION_genericx86-64 = "5.10.21"
-LINUX_VERSION_edgerouter = "5.10.21"
-LINUX_VERSION_beaglebone-yocto = "5.10.21"
+LINUX_VERSION_genericx86 = "5.10.43"
+LINUX_VERSION_genericx86-64 = "5.10.43"
+LINUX_VERSION_edgerouter = "5.10.43"
+LINUX_VERSION_beaglebone-yocto = "5.10.43"
--
2.19.1


Re: [PATCH yocto-autobuilder-helper] summarize_top_output.py: add script, use it and publish summary

sakib.sajal@...
 

On 2021-06-16 11:41 a.m., Randy MacLeod wrote:
On 2021-06-16 4:43 a.m., sakib.sajal@... wrote:
summarize_top_output.py is used to summarize the top
output that is captured during autobuilder intermittent
failures.

Use the script to summarize the host top output and
publish the summary that is created instead of
the raw logfile.

Looks good Sakib,

Can you show people what the typical output looks like?
Is the raw top output still published?

../Randy
The script goes over the raw logfile (for example, foo.txt), which consists of multiple top outputs, summarizes each top output, and writes the summaries to foo_summary.txt.

Some improvements that can be made:

1) create a separate file for each top summary, i.e., if foo.txt has n top outputs, create foo_summary_[1...n].txt

2) since python3 is used to run many different kinds of jobs, it is difficult to generalize python3 threads. The script can be modified to log all the different jobs run under python3; right now it only counts the processes that are running python3.

Any feedback is welcome!

Sample output:

NOTE: program names have been shortened for better readability.
Substitutions are as follows:
~/ = /home/pokybuild/yocto-worker/
/.../POKY_32/.../ = /build/build/tmp/work/core2-32-poky-linux/
/.../POKY_64/.../ = /build/build/tmp/work/core2-64-poky-linux/
/...GCC.../ = /recipe-sysroot-native/usr/bin/x86_64-poky-linux/../../libexec/x86_64-poky-linux/gcc/x86_64-poky-linux/

top was invoked 12 times.

top - 16:51:30 up 4 days, 19:26,  1 user,  load average: 93.84, 57.26, 48.58
Tasks: 1084 total,  80 running, 664 sleeping,   0 stopped,   0 zombie
%Cpu(s):  2.1 us,  2.0 sy,  9.8 ni, 83.1 id,  2.9 wa,  0.0 hi, 0.0 si,  0.0 st
KiB Mem : 13192559+total,  5571436 free, 14307780 used, 11204637+buff/cache
KiB Swap:  8388604 total,  8239384 free,   149220 used. 11455193+avail Mem

Summary:
85  /bin/sh
41  python3
39  /bin/bash
30  x86_64-poky-linux-g++
29  make
16 ~/meta-oe/.../POKY_64/.../cpprest/2.10.18-r0/...GCC.../11.1.0/cc1plus
16 ~/meta-oe/.../POKY_64/.../libvpx/1.8.2-r0/...GCC.../11.1.0/cc1plus
16  ~/meta-oe/.../POKY_64/.../libvpx/1.8.2-r0/...GCC.../11.1.0/as
16 ~/meta-oe/.../POKY_64/.../cpprest/2.10.18-r0/recipe-sysroot-native/usr/bin/x86_64-poky-linux/x86_64-poky-linux-g++
16 ~/meta-oe/.../POKY_64/.../cpprest/2.10.18-r0/...GCC.../11.1.0/as
14 ~/meta-oe/.../POKY_64/.../rtorrent/0.9.8-r0/...GCC.../11.1.0/cc1plus
14 ~/meta-oe/.../POKY_64/.../rtorrent/0.9.8-r0/...GCC.../11.1.0/as
13 ~/meta-oe/.../POKY_64/.../vulkan-cts/1.2.6.0-r0/...GCC.../11.1.0/cc1plus
13 ~/meta-oe/.../POKY_64/.../vulkan-cts/1.2.6.0-r0/...GCC.../11.1.0/as
13 ~/meta-oe/.../POKY_64/.../vulkan-cts/1.2.6.0-r0/recipe-sysroot-native/usr/bin/x86_64-poky-linux/x86_64-poky-linux-g++
11  x86_64-poky-linux-gcc
10 ~/meta-oe/.../POKY_64/.../dovecot/2.3.14-r0/...GCC.../11.1.0/cc1
10 ~/meta-oe/.../POKY_64/.../fluentbit/1.3.5-r0/recipe-sysroot-native/usr/bin/x86_64-poky-linux/x86_64-poky-linux-gcc
10 ~/meta-oe/.../POKY_64/.../dovecot/2.3.14-r0/...GCC.../11.1.0/as
7 ~/meta-oe/.../POKY_64/.../fluentbit/1.3.5-r0/...GCC.../11.1.0/cc1
7 ~/meta-oe/.../POKY_64/.../fluentbit/1.3.5-r0/...GCC.../11.1.0/as
4  /usr/bin/python3
3  cmake
2 ~/meta-oe/.../POKY_64/.../vulkan-cts/1.2.6.0-r0/...GCC.../11.1.0/ar
2  sh
2  /lib/systemd/systemd
2  (sd-pam)
2  ninja
2  bitbake-server
2 ~/meta-oe/.../POKY_64/.../vulkan-cts/1.2.6.0-r0/recipe-sysroot-native/usr/bin/x86_64-poky-linux/x86_64-poky-linux-gcc-ar

Kernel Summary:
293  kworker
56  ksoftirqd
56  migration
56  watchdog
56  cpuhp
3  jbd2
3  ext4-rsv-conver
2  kdmflush
2  bioset
....

Sakib



Signed-off-by: Sakib Sajal <sakib.sajal@...>
---
  scripts/collect-results              |   2 +-
  scripts/generate-testresult-index.py |   2 +-
  scripts/run-config                   |   1 +
  scripts/summarize_top_output.py      | 132 +++++++++++++++++++++++++++
  4 files changed, 135 insertions(+), 2 deletions(-)
  create mode 100755 scripts/summarize_top_output.py

diff --git a/scripts/collect-results b/scripts/collect-results
index 7474e36..7178380 100755
--- a/scripts/collect-results
+++ b/scripts/collect-results
@@ -19,7 +19,7 @@ if [ -e $WORKDIR/buildhistory ]; then
  fi
    HSFILE=$WORKDIR/tmp/buildstats/*/host_stats
-d=`date +%Y-%m-%d--%H-%M`
+d="intermittent_failure_host_data"
    mkdir -p $DEST/$target/$d
  diff --git a/scripts/generate-testresult-index.py b/scripts/generate-testresult-index.py
index 7fdc17c..d85d606 100755
--- a/scripts/generate-testresult-index.py
+++ b/scripts/generate-testresult-index.py
@@ -154,7 +154,7 @@ for build in sorted(os.listdir(path), key=keygen, reverse=True):
      hd = []
      counter = 0
      # do we really need the loop?
-    for p in glob.glob(buildpath + "/*/*/host_stats*top.txt"):
+    for p in glob.glob(buildpath + "/*/*/host_stats*top_summary.txt"):
          n_split = p.split(build)
          res = reldir[0:-1] + n_split[1]
          hd.append((res, str(counter)))
diff --git a/scripts/run-config b/scripts/run-config
index 8ed88cf..82de91f 100755
--- a/scripts/run-config
+++ b/scripts/run-config
@@ -327,6 +327,7 @@ elif args.phase == "finish" and args.stepname == "collect-results":
      if args.results_dir:
          hp.printheader("Running results collection")
          runcmd([scriptsdir + "/collect-results", args.builddir, args.results_dir, args.target])
+        runcmd([scriptsdir + "/summarize_top_output.py", args.results_dir, args.target])
      sys.exit(0)
    if jcfg:
diff --git a/scripts/summarize_top_output.py b/scripts/summarize_top_output.py
new file mode 100755
index 0000000..0606a34
--- /dev/null
+++ b/scripts/summarize_top_output.py
@@ -0,0 +1,132 @@
+#!/usr/bin/env python3
+
+import os, sys, glob
+
+# constants
+top_header = 7
+top_summary = 5
+max_cols = 11
+
+# string substitution to make things easier to read
+subs = {
+    "/home/pokybuild/yocto-worker/" : "~/",
+    "/build/build/tmp/work/core2-32-poky-linux/" : "/.../POKY_32/.../",
+    "/build/build/tmp/work/core2-64-poky-linux/" : "/.../POKY_64/.../",
+    "/recipe-sysroot-native/usr/bin/x86_64-poky-linux/../../libexec/x86_64-poky-linux/gcc/x86_64-poky-linux/" : "/...GCC.../"
+}
+
+def usage():
+    print("Usage: " + sys.argv[0] + " <dest> <target>")
+
+def list_top_outputs(logfile):
+    # top delimiter
+    top_start = "start: top output"
+    top_end = "end: top output"
+
+    # list of top outputs
+    top_outputs = []
+
+    # flag
+    collect = False
+    with open(logfile) as log:
+        top_output = []
+        for line in log:
+            lstrip = line.strip()
+            if collect:
+                if lstrip.startswith(top_end):
+                    collect = False
+                    top_outputs.append(top_output)
+                    top_output = []
+                else:
+                    top_output.append(lstrip)
+            if lstrip.startswith(top_start):
+                collect = True
+
+    return top_outputs
+
+def summarize_top(top_outs):
+    summaries = []
+    kernel_summaries = []
+    short_summaries = []
+    for top_out in top_outs:
+        summary = {}
+        kernel_summary = {}
+        short_summary = top_out[:top_summary]
+        for line in top_out[top_header:]:
+            cmd = line.split(maxsplit=max_cols)[-1]
+            # kernel processes
+            if cmd[0] == "[" and cmd[-1] == "]":
+                kproc = cmd[1:-1].split("/")[0]
+                if kproc not in kernel_summary:
+                    kernel_summary[kproc] = 1
+                else:
+                    kernel_summary[kproc] += 1
+                continue
+            cmd_split = cmd.split()
+            prog = cmd_split[0]
+            if prog not in summary:
+                summary[prog] = 1
+            else:
+                summary[prog] += 1
+        summary = dict(sorted(summary.items(), key=lambda item: item[1], reverse=True))
+        kernel_summary = dict(sorted(kernel_summary.items(), key=lambda item: item[1], reverse=True))
+
+        summaries.append(summary)
+        kernel_summaries.append(kernel_summary)
+        short_summaries.append(short_summary)
+
+    return (short_summaries, summaries, kernel_summaries)
+
+def summarize_path(path):
+    p = path
+    for k, v in subs.items():
+        p = p.replace(k, v)
+    return p
+
+def write_summary(short_summary, summary, kernel_summary, logfile):
+    dirname = os.path.dirname(logfile)
+    fname = os.path.basename(logfile)
+    report_name = fname.split(".")[0] + "_summary.txt"
+    outfile = os.path.join(dirname, report_name)
+    out = "NOTE: program names have been shortened for better readability.\nSubstitutions are as follows:\n"
+    for k, v in subs.items():
+        out += (v + " = " + k + "\n")
+    out += "\n"
+
+    out += "top was invoked " + str(len(short_summary)) + " times.\n\n"
+
+    for i in range(len(short_summary)):
+        for l in short_summary[i]:
+            out += (l + "\n")
+
+        out += ("\nSummary: " + "\n")
+        for k, v in summary[i].items():
+            if v > 1:
+                r = summarize_path(k)
+                out += (str(v) + "  " + r + "\n")
+
+        out += ("\nKernel Summary: " + "\n")
+        for k, v in kernel_summary[i].items():
+            if v > 1:
+                r = summarize_path(k)
+                out += (str(v) + "  " + r + "\n")
+        out += ("\n")
+
+    with open(outfile, "w") as of:
+        of.write(out)
+
+def main():
+    if len(sys.argv) != 3:
+        usage()
+        sys.exit(1)
+
+    dest = sys.argv[1]
+    target = sys.argv[2]
+    host_data_dir = "intermittent_failure_host_data"
+    directory = os.path.join(dest, target, host_data_dir)
+    for f in glob.glob(directory + "/*_top.txt"):
+        outputs = list_top_outputs(f)
+        short_summary, summary, kernel_summary = summarize_top(outputs)
+        write_summary(short_summary, summary, kernel_summary, f)
+
+main()
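The delimiter-based extraction done by list_top_outputs() above can be illustrated with a small self-contained sketch; the marker strings match the ones in the patch, and the log data is synthetic:

```python
# Minimal sketch of the delimiter-based extraction used by
# list_top_outputs() above, run against an in-memory log.
TOP_START = "start: top output"
TOP_END = "end: top output"

def extract_top_blocks(lines):
    blocks, current, collecting = [], [], False
    for line in lines:
        stripped = line.strip()
        if collecting:
            if stripped.startswith(TOP_END):
                # close the current block and reset
                collecting = False
                blocks.append(current)
                current = []
            else:
                current.append(stripped)
        if stripped.startswith(TOP_START):
            collecting = True
    return blocks

log = [
    "noise before",
    "start: top output",
    "top - 12:00:01 up 1 day",
    "Tasks: 200 total",
    "end: top output",
    "noise after",
]
print(extract_top_blocks(log))
# → [['top - 12:00:01 up 1 day', 'Tasks: 200 total']]
```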




Re: [meta-intel]: support for NUC11

simon
 

On 2021-06-16 1:24 pm, Alexander Kanavin wrote:

On Wed, 16 Jun 2021 at 16:58, simon <simon@...> wrote:
- There's a warning about not finding the IRIS driver:
MESA_LOADER: failed to open iris: /usr/lib/dri/iris_dri.so: cannot open
shared object file: No such file or directory
failed to load driver: iris

From what I understood, I was expecting it to be
available with mesa 21.0.3, since I've seen some commits about fixing
issues with it in the changelogs.

I've tried to force the driver to both i965 and i915, but I either got a
warning that this GEN cannot use it, or still got the same fps.
Am I right to assume that it falls back to a default driver that uses
the CPU instead of the GPU?

Hello Simon,
 
I honestly don't know how the Intel folks have overlooked this, but not only does mesa in Yocto not enable iris, the mesa recipe does not even have an option to enable it.
 
However, it should be pretty easy to add. Please take a look at poky/meta/recipes-graphics/mesa/mesa.inc, specifically any lines that mention GALLIUMDRIVERS and the matching PACKAGECONFIG,
and you should be able to add iris there and rebuild your image. Then you can submit your first patch to yocto via the oe-core mailing list ;)
 
Alex


Hello Alex,

I was able to fix the iris driver issue with the following added to mesa.inc

GALLIUMDRIVERS_append = "${@bb.utils.contains('PACKAGECONFIG', 'iris', ',iris', '', d)}"
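For readers unfamiliar with the helper used here: bb.utils.contains() returns its true-value when every item in the check list appears in the (space-separated) variable, otherwise the false-value. A minimal Python sketch of those semantics follows; it is not the real BitBake implementation, which takes a variable name plus a datastore and reads the value via d.getVar():

```python
# Sketch of bb.utils.contains() semantics: return truevalue when
# every checkvalue is present in the space-separated variable value,
# else falsevalue. The real helper looks the variable up in the
# BitBake datastore; here we pass the value in directly.
def contains(variable_value, checkvalues, truevalue, falsevalue):
    vals = set(variable_value.split())
    checks = set(checkvalues.split())
    return truevalue if checks.issubset(vals) else falsevalue

print(contains("opengl gallium iris", "iris", ",iris", ""))  # → ,iris
print(contains("opengl gallium", "iris", ",iris", ""))       # → (empty string)
```

So with 'iris' added to PACKAGECONFIG, the _append above extends GALLIUMDRIVERS with ",iris"; without it, nothing is appended.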

I will look into sending the patch

thanks a lot for your help

Simon



Re: [PATCH yocto-autobuilder-helper] summarize_top_output.py: add script, use it and publish summary

Richard Purdie
 

On Wed, 2021-06-16 at 04:43 -0400, sakib.sajal@... wrote:
summarize_top_output.py is used to summarize the top
output that is captured during autobuilder intermittent
failures.

Use the script to summarize the host top output and
publish the summary that is created instead of
the raw logfile.


[...]
 if jcfg:
diff --git a/scripts/summarize_top_output.py b/scripts/summarize_top_output.py
new file mode 100755
index 0000000..0606a34
--- /dev/null
+++ b/scripts/summarize_top_output.py
@@ -0,0 +1,132 @@
+#!/usr/bin/env python3
+
+import os, sys, glob
+
+# constants
+top_header = 7
+top_summary = 5
+max_cols = 11
+
+# string substitution to make things easier to read
+subs = {
+    "/home/pokybuild/yocto-worker/" : "~/",
+    "/build/build/tmp/work/core2-32-poky-linux/" : "/.../POKY_32/.../",
+    "/build/build/tmp/work/core2-64-poky-linux/" : "/.../POKY_64/.../",
+    "/recipe-sysroot-native/usr/bin/x86_64-poky-linux/../../libexec/x86_64-poky-linux/gcc/x86_64-poky-linux/" : "/...GCC.../"
+}
One quick question - the above assumes an x86 target machine using those two tunes. 
Should that be wildcarded?

Cheers,

Richard
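One way the tune-specific work directories could be wildcarded, as Richard suggests, is to switch the literal substitutions to regex-based ones. A minimal sketch (the pattern and the placeholder name are assumptions, not part of the patch):

```python
import re

# Hypothetical wildcarded substitution: any <tune>-poky-linux work
# directory collapses to a single placeholder, regardless of tune.
re_subs = [
    (re.compile(r"/build/build/tmp/work/[^/]+-poky-linux/"),
     "/.../POKY_WORK/.../"),
]

def summarize_path(path):
    for pattern, repl in re_subs:
        path = pattern.sub(repl, path)
    return path

print(summarize_path("/build/build/tmp/work/core2-32-poky-linux/foo"))
# → /.../POKY_WORK/.../foo
print(summarize_path("/build/build/tmp/work/aarch64-poky-linux/foo"))
# → /.../POKY_WORK/.../foo
```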


Re: [meta-intel]: support for NUC11

Alexander Kanavin
 

On Wed, 16 Jun 2021 at 16:58, simon <simon@...> wrote:
- There's a warning about not finding the IRIS driver:
MESA_LOADER: failed to open iris: /usr/lib/dri/iris_dri.so: cannot open
shared object file: No such file or directory
failed to load driver: iris

From what I understood, I was expecting it to be
available with mesa 21.0.3, since I've seen some commits about fixing
issues with it in the changelogs.

I've tried to force the driver to both i965 and i915, but I either got a
warning that this GEN cannot use it, or still got the same fps.
Am I right to assume that it falls back to a default driver that uses
the CPU instead of the GPU?

Hello Simon,

I honestly don't know how the Intel folks have overlooked this, but not only does mesa in Yocto not enable iris, the mesa recipe does not even have an option to enable it.

However, it should be pretty easy to add. Please take a look at poky/meta/recipes-graphics/mesa/mesa.inc, specifically any lines that mention GALLIUMDRIVERS and the matching PACKAGECONFIG,
and you should be able to add iris there and rebuild your image. Then you can submit your first patch to yocto via the oe-core mailing list ;)

Alex