
[meta-security] [dunfell] [PATCH 0/3] Backport several IMA fixes to LTS dunfell

Ming Liu <liu.ming50@...>
 

From: Ming Liu <ming.liu@toradex.com>

Ming Liu (3):
ima-evm-keys: add file-checksums to IMA_EVM_X509
meta: drop IMA_POLICY from policy recipes
initramfs-framework-ima: introduce IMA_FORCE

.../initrdscripts/initramfs-framework-ima.bb | 5 +++++
.../initrdscripts/initramfs-framework-ima/ima | 9 +++++++--
.../recipes-security/ima-evm-keys/ima-evm-keys_1.0.bb | 1 +
.../ima-policy-appraise-all_1.0.bb | 9 ++-------
.../ima_policy_hashed/ima-policy-hashed_1.0.bb | 9 ++-------
.../ima_policy_simple/ima-policy-simple_1.0.bb | 9 ++-------
6 files changed, 19 insertions(+), 23 deletions(-)

--
2.29.0


#yocto #llvm

Monsees, Steven C (US)
 

 

I attempted to add llvm to my zeus image, and I am seeing the following Yocto build error…

 

What is the actual problem here, and how best to resolve it?

 

Build Configuration:

BB_VERSION           = "1.44.0"

BUILD_SYS            = "x86_64-linux"

NATIVELSBSTRING      = "rhel-7.9"

TARGET_SYS           = "x86_64-poky-linux"

MACHINE              = "sbcb-default"

DISTRO               = "limws"

DISTRO_VERSION       = "3.0.4"

TUNE_FEATURES        = "m64 corei7"

TARGET_FPU           = ""

meta

meta-poky            = "my_yocto_3.0.4:f2eb22a8783f1eecf99bd4042695bab920eed00e"

meta-perl

meta-python

meta-filesystems

meta-networking

meta-initramfs

meta-oe              = "zeus:2b5dd1eb81cd08bc065bc76125f2856e9383e98b"

meta-clang           = "zeus:f5355ca9b86fb5de5930132ffd95a9b352d694f9"

meta                 = "master:a32ddd2b2a51b26c011fa50e441df39304651503"

meta-intel           = "zeus:d9942d4c3a710406b051852de7232db03c297f4e"

meta-intel           = "v2019.02:f635a364c55f1fb12519aff54924a0a5b947091e"

 

Initialising tasks: 100% |#######################################################| Time: 0:00:04

Checking sstate mirror object availability: 100% |###############################| Time: 0:00:00

Sstate summary: Wanted 2129 Found 2090 Missed 39 Current 0 (98% match, 0% complete)

NOTE: Executing Tasks

NOTE: Setscene tasks completed

ERROR: llvm-8.0.1-r0 do_compile: Execution of '/disk0/scratch/smonsees/yocto/workspace_3/builds2/sbcb-default/tmp/work/corei7-64-poky-linux/llvm/8.0.1-r0/temp/run.do_compile.18964' failed with exit code 1:

ninja: error: '/disk0/scratch/smonsees/yocto/workspace_3/builds2/sbcb-default/tmp/work/corei7-64-poky-linux/llvm/8.0.1-r0/recipe-sysroot-native/usr/bin/llvm-tblgen8.0.1', needed by 'include/llvm/IR/Attributes.inc', missing and no known rule to make it

WARNING: /disk0/scratch/smonsees/yocto/workspace_3/builds2/sbcb-default/tmp/work/corei7-64-poky-linux/llvm/8.0.1-r0/temp/run.do_compile.18964:1 exit 1 from 'ninja -v -j 4'

 

ERROR: Logfile of failure stored in: /disk0/scratch/smonsees/yocto/workspace_3/builds2/sbcb-default/tmp/work/corei7-64-poky-linux/llvm/8.0.1-r0/temp/log.do_compile.18964

Log data follows:

| DEBUG: Executing shell function do_compile

| ninja: error: '/disk0/scratch/smonsees/yocto/workspace_3/builds2/sbcb-default/tmp/work/corei7-64-poky-linux/llvm/8.0.1-r0/recipe-sysroot-native/usr/bin/llvm-tblgen8.0.1', needed by 'include/llvm/IR/Attributes.inc', missing and no known rule to make it

| WARNING: /disk0/scratch/smonsees/yocto/workspace_3/builds2/sbcb-default/tmp/work/corei7-64-poky-linux/llvm/8.0.1-r0/temp/run.do_compile.18964:1 exit 1 from 'ninja -v -j 4'

| ERROR: Execution of '/disk0/scratch/smonsees/yocto/workspace_3/builds2/sbcb-default/tmp/work/corei7-64-poky-linux/llvm/8.0.1-r0/temp/run.do_compile.18964' failed with exit code 1:

| ninja: error: '/disk0/scratch/smonsees/yocto/workspace_3/builds2/sbcb-default/tmp/work/corei7-64-poky-linux/llvm/8.0.1-r0/recipe-sysroot-native/usr/bin/llvm-tblgen8.0.1', needed by 'include/llvm/IR/Attributes.inc', missing and no known rule to make it

| WARNING: /disk0/scratch/smonsees/yocto/workspace_3/builds2/sbcb-default/tmp/work/corei7-64-poky-linux/llvm/8.0.1-r0/temp/run.do_compile.18964:1 exit 1 from 'ninja -v -j 4'

|

ERROR: Task (/disk0/scratch/smonsees/yocto/workspace_3/poky/meta/recipes-devtools/llvm/llvm_git.bb:do_compile) failed with exit code '1'

NOTE: Tasks Summary: Attempted 5949 tasks of which 5385 didn't need to be rerun and 1 failed.

 

Summary: 1 task failed:

  /disk0/scratch/smonsees/yocto/workspace_3/poky/meta/recipes-devtools/llvm/llvm_git.bb:do_compile

Summary: There was 1 ERROR message shown, returning a non-zero exit code.

15:16 smonsees@yix490038 /disk0/scratch/smonsees/yocto/workspace_3/builds2/sbcb-default>
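
One thing worth noting in the build configuration above: the layer list mixes a meta layer on a master revision with zeus-branch layers, and meta-intel appears twice at different revisions, so the llvm-tblgen staged by the native recipe may not match the llvm 8.0.1 the target recipe expects. A hedged first step, assuming stale native/sstate artifacts (standard bitbake commands; recipe names taken from the log above):

# Check which layer(s) provide llvm, and at which versions:
bitbake-layers show-recipes llvm
# Force the target recipe and its native tablegen provider to rebuild:
bitbake -c cleansstate llvm llvm-native
bitbake llvm

If two layers provide conflicting llvm versions, cleaning up bblayers.conf (e.g. the duplicate meta-intel entry) is likely the real fix.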


Re: bitbake controlling memory use

Gmane Admin
 

Hi,
Op 18-04-2021 om 11:59 schreef Richard Purdie:
On Sun, 2021-04-18 at 00:17 +0200, Gmane Admin wrote:
Hi,
Op 14-04-2021 om 06:59 schreef Richard Purdie:
On Tue, 2021-04-13 at 21:14 -0400, Randy MacLeod wrote:
On 2021-04-11 12:19 p.m., Alexander Kanavin wrote:
make already has the -l option for limiting new instances if the load average is
too high, so it's only natural to add a RAM limiter too.

    -l [N], --load-average[=N], --max-load[=N]
                                Don't start multiple jobs unless load is
below N.

In any case, patches welcome :)
During today's Yocto technical call (1),
we talked about approaches to limiting the system load and avoiding
swap and/or OOM events. Here's what (little!) I recall from the
discussion, 9 busy hours later.

In the short run, instead of independently maintaining changes to
configurations to limit parallelism or xz memory usage, etc, we
could develop an optional common include file where such limits
are shared across the community.
I tried PARALLEL_MAKE_nodejs = "-j 1" from local.conf but that didn't work.
It would need to be:
PARALLEL_MAKE_pn-nodejs = "-j 1"

So I watched it run for a while. It compiles with g++ and as at about
0.5GB per thread, which is OK. In the end it does ld taking 4GB and it
tries to do 4 in parallel. And then swapping becomes so heavy the
desktop becomes unresponsive. Like I mentioned before ssh from another
machine allows me to STOP one of them, allowing the remaining to
complete. And then CONT the last one.

I worked around it now, by creating a bbappend for nodejs with only
PARALLEL_MAKE = "-j 2"
If that works, the override above should also work. You do need the "pn-"
prefix to the recipe name though.
And indeed it does, thanks so much for the tip.
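
For reference, the working configuration boils down to one line in local.conf (or a bbappend); the pn- prefix scopes the override to a single recipe:

# local.conf -- limit make parallelism for the nodejs recipe only
PARALLEL_MAKE_pn-nodejs = "-j 2"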

Ferry

Cheers,
Richard


[PATCH yocto-autobuilder-helper] config.json: measure every 60 seconds

Randy MacLeod
 

With the previous interval of 10 seconds, there would be
several times when the system was very busy and the
script would not return before the next run was scheduled,
resulting in no measurement. In addition, build:
https://autobuilder.yocto.io/pub/non-release/20210417-13/
produced 17 files of top output, with top running 454 times,
and that's a bit too much data to analyze for each run. By
decreasing the measurement frequency, we'll find the worst
problems first, fix them, and then we can increase the
frequency of measurement if needed.

Signed-off-by: Randy MacLeod <Randy.MacLeod@windriver.com>
---
config.json | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/config.json b/config.json
index aad5257..962d8ae 100644
--- a/config.json
+++ b/config.json
@@ -56,7 +56,7 @@
"BB_DISKMON_DIRS = 'STOPTASKS,${TMPDIR},1G,100K STOPTASKS,${DL_DIR},1G STOPTASKS,${SSTATE_DIR},1G STOPTASKS,/tmp,100M,100K ABORT,${TMPDIR},100M,1K ABORT,${DL_DIR},100M ABORT,${SSTATE_DIR},100M ABORT,/tmp,10M,1K'",
"BB_HASHSERVE = 'typhoon.yocto.io:8686'",
"RUNQEMU_TMPFS_DIR = '/home/pokybuild/tmp'",
- "BB_HEARTBEAT_EVENT = '10'",
+ "BB_HEARTBEAT_EVENT = '60'",
"BB_LOG_HOST_STAT_ON_INTERVAL = '1'",
"BB_LOG_HOST_STAT_CMDS = 'oe-time-dd-test.sh 100'"
]
--
2.27.0


Re: bitbake controlling memory use

Richard Purdie
 

On Sun, 2021-04-18 at 00:17 +0200, Gmane Admin wrote:
Hi,
Op 14-04-2021 om 06:59 schreef Richard Purdie:
On Tue, 2021-04-13 at 21:14 -0400, Randy MacLeod wrote:
On 2021-04-11 12:19 p.m., Alexander Kanavin wrote:
make already has the -l option for limiting new instances if the load average is
too high, so it's only natural to add a RAM limiter too.

    -l [N], --load-average[=N], --max-load[=N]
                                Don't start multiple jobs unless load is
below N.

In any case, patches welcome :)
During today's Yocto technical call (1),
we talked about approaches to limiting the system load and avoiding
swap and/or OOM events. Here's what (little!) I recall from the
discussion, 9 busy hours later.

In the short run, instead of independently maintaining changes to
configurations to limit parallelism or xz memory usage, etc, we
could develop an optional common include file where such limits
are shared across the community.
I tried PARALLEL_MAKE_nodejs = "-j 1" from local.conf but that didn't work.
It would need to be:

PARALLEL_MAKE_pn-nodejs = "-j 1"

So I watched it run for a while. It compiles with g++ and as at about
0.5GB per thread, which is OK. In the end it does ld taking 4GB and it
tries to do 4 in parallel. And then swapping becomes so heavy the
desktop becomes unresponsive. Like I mentioned before ssh from another
machine allows me to STOP one of them, allowing the remaining to
complete. And then CONT the last one.

I worked around it now, by creating a bbappend for nodejs with only
PARALLEL_MAKE = "-j 2"
If that works, the override above should also work. You do need the "pn-" 
prefix to the recipe name though.

Cheers,

Richard


Re: bitbake controlling memory use

Gmane Admin
 

Hi,
Op 14-04-2021 om 06:59 schreef Richard Purdie:
On Tue, 2021-04-13 at 21:14 -0400, Randy MacLeod wrote:
On 2021-04-11 12:19 p.m., Alexander Kanavin wrote:
make already has the -l option for limiting new instances if the load average is
too high, so it's only natural to add a RAM limiter too.

   -l [N], --load-average[=N], --max-load[=N]
                               Don't start multiple jobs unless load is
below N.

In any case, patches welcome :)
During today's Yocto technical call (1),
we talked about approaches to limiting the system load and avoiding
swap and/or OOM events. Here's what (little!) I recall from the
discussion, 9 busy hours later.

In the short run, instead of independently maintaining changes to
configurations to limit parallelism or xz memory usage, etc, we
could develop an optional common include file where such limits
are shared across the community.
I tried PARALLEL_MAKE_nodejs = "-j 1" from local.conf but that didn't work.

So I watched it run for a while. It compiles with g++ and as at about 0.5GB per thread, which is OK. In the end it does ld taking 4GB and it tries to do 4 in parallel. And then swapping becomes so heavy the desktop becomes unresponsive. Like I mentioned before ssh from another machine allows me to STOP one of them, allowing the remaining to complete. And then CONT the last one.

I worked around it now, by creating a bbappend for nodejs with only
PARALLEL_MAKE = "-j 2"

In the longer run, changes to how bitbake schedules work may be needed.

Richard says that there was a make/build server idea and maybe even a
patch from a while ago. It may be in one of his poky-contrib branches.
I took a few minutes to look but nothing popped up. A set of keywords to
search for might help me find it.
http://git.yoctoproject.org/cgit.cgi/poky-contrib/commit/?h=rpurdie/wipqueue4&id=d66a327fb6189db5de8bc489859235dcba306237
Cheers,
Richard


Re: [PATCH yocto-autobuilder-helper 1/4] config.json: add "collect-data" template

Randy MacLeod
 

On 2021-04-15 4:48 p.m., Randy MacLeod wrote:
On 2021-04-15 1:55 p.m., Randy MacLeod wrote:
On 2021-04-15 11:55 a.m., Richard Purdie wrote:
On Thu, 2021-04-15 at 11:31 -0400, Sakib Sajal wrote:
On 2021-04-15 9:52 a.m., Richard Purdie wrote:
[Please note: This e-mail is from an EXTERNAL e-mail address]

On Tue, 2021-04-13 at 13:02 -0400, sakib.sajal@windriver.com wrote:
collect-data template can run arbitrary commands/scripts
on a regular basis and logs the output in a file.

See oe-core for more details:
      edb7098e9e buildstats.bbclass: add functionality to collect build system stats

Signed-off-by: Sakib Sajal <sakib.sajal@windriver.com>
Signed-off-by: Randy MacLeod <Randy.MacLeod@windriver.com>
---
   config.json | 7 +++++++
   1 file changed, 7 insertions(+)

diff --git a/config.json b/config.json
index 5bfa240..c43d231 100644
--- a/config.json
+++ b/config.json
@@ -87,6 +87,13 @@
                   "SANITYTARGETS" : "core-image-full-cmdline:do_testimage core-image-sato:do_testimage core-image-sato-sdk:do_testimage"
               }
           },
+     "collect-data" : {
+            "extravars" : [
+                "BB_HEARTBEAT_EVENT = '10'",
+                "BB_LOG_HOST_STAT_ON_INTERVAL = '1'",
+                "BB_LOG_HOST_STAT_CMDS = 'oe-time-dd-test.sh 100'"
+            ]
+        },
Is the template used anywhere? I can't remember if we support nesting templates in which
case this is useful, or not?
We were using it for testing on the YP AB and thought it would be
useful if at some point the monitoring was dropped from the
default config.

I think we can just add it later if needed.
Richard,

I think that the web server for:
  https://autobuilder.yocto.io/pub/non-release/
runs every 30 seconds via cron so if you are happy with
this crude dd trigger once things have soaked in master-next
and we want to gather some data overnight, could you merge to master?


I ran a simpler test with fewer io stressors from:
$ stress -hdd N
and have attached a graph with up to 3000! stressors that
we looked at this morning and another with up to 35 stressors.

It's a crude indicator, but once we get beyond 18-20 io stressors
on the system I tested (48 cores, 128 GB RAM, 12 TB magnetic disk),
dd times become erratic.

Running qemu from tmpfs has clearly helped.
Let's gather some data and decide if we want to spend more time
learning how to monitor the system to tune how we are using it.

../Randy
Thanks for fixing the fall-out due to assumptions in other tests.
Is the system back to normal and operational now?


What was the impact of running the heartbeat and the dd test every
10 seconds on the system build performance?

Should we increase the interval to 30, 60, or more seconds?


I spent some time looking at the first bit of data along with
Sakib and Saul from time to time.

General conclusions:

1. It seems like ALL triggers involve oe-selftest being active.

2. xz might be a problem but we're not sure yet.

3. We need more data and tools and time to think about it.


To Do:

1. increase top cmdline length from 512 to  16K

2. sometimes we see:

     Command '['oe-time-dd-test.sh', '100']' timed out after 10.0 seconds

That should not happen so we should understand why and either increase
the time between runs or fix the tooling. This seems to happen under load
so it's hiding the interesting data that we are looking for!

3. tail the cooker console in addition to top. Present that before top.

    It would be nice to have a top equivalent for bitbake.




We did collect some triggered host data last night as seen in:

https://autobuilder.yocto.io/pub/non-release/

https://autobuilder.yocto.io/pub/non-release/20210415-16/

Only one a-full build was run. There were 10 log files produced.

There were 21 times that the dd time exceeded the 5 second limit
out of a total of 21581 (or so!) invocations, and those triggers were
captured by 10 log files:

testresults/beaglebone-alt/2021-04-16--00-19/host_stats_0_top.txt
testresults/qa-extras2/2021-04-15--22-43/host_stats_2_top.txt
testresults/qa-extras2/2021-04-15--22-43/host_stats_4_top.txt
testresults/qa-extras2/2021-04-15--22-43/host_stats_6_top.txt
testresults/qa-extras2/2021-04-15--22-43/host_stats_8_top.txt
testresults/qemuarm/2021-04-16--00-02/host_stats_0_top.txt
testresults/qemuarm/2021-04-16--00-02/host_stats_1_top.txt
testresults/qemumips-alt/2021-04-15--23-36/host_stats_1_top.txt
testresults/qemumips64/2021-04-16--02-46/host_stats_0_top.txt
testresults/qemux86-world/2021-04-16--00-00/host_stats_0_top.txt


We knew that our naming convention needed work, in that the files
are generically named and differ only by the directory datestamp and,
where the logs contain 'top' output, the _top suffix. We'd like to help
whoever is looking at the data understand what the context of
the build was. That's not clear to Sakib and me, given that we are still YP AB newbies.
Do you have any suggestions about what the directory structure or file naming convention should be?
The other thing we need to do is correlate these higher-latency times
with the intermittent problems we've encountered. We can do that manually,
I suppose, via the SWAT team, but ideally there would be an automated process.


More quick analysis...

The number of times that top ran per log file:

$ grep "^top - " `fd _top autobuilder.yocto.io/` | cut -d":" -f1 | uniq -c | \
    sed -e 's|autobuilder.yocto.io/pub/non-release/20210415-16/||'
      2 testresults/beaglebone-alt/2021-04-16--00-19/host_stats_0_top.txt
      3 testresults/qa-extras2/2021-04-15--22-43/host_stats_2_top.txt
      1 testresults/qa-extras2/2021-04-15--22-43/host_stats_4_top.txt
      2 testresults/qa-extras2/2021-04-15--22-43/host_stats_6_top.txt
      1 testresults/qa-extras2/2021-04-15--22-43/host_stats_8_top.txt
      5 testresults/qemuarm/2021-04-16--00-02/host_stats_0_top.txt
      2 testresults/qemuarm/2021-04-16--00-02/host_stats_1_top.txt
      1 testresults/qemumips-alt/2021-04-15--23-36/host_stats_1_top.txt
      2 testresults/qemumips64/2021-04-16--02-46/host_stats_0_top.txt
      2 testresults/qemux86-world/2021-04-16--00-00/host_stats_0_top.txt
Some of these are duplicates, in that the different steps (_2, _4, _6, _8 above) come from the same build.

A little shell hacking can produce one file per top output with
ample access to stackoverflow!

COUNTER=1
for i in `fd _top`; do
    for j in `grep "^top - " $i | cut -c 7-15`; do
        sed -n "/top - ${j}/,/Event Time:/p" $i >> host-stats-$j--$COUNTER.log
        ((COUNTER++))
    done
done

This works because the first line of top output is similar to:
   top - 15:40:53 up 2 days, 22:17,  1 user,  load average: 0.36, 0.58, 0.85
so cutting out chars 7-15 gives a fairly unique timestamp string for the filename, and
adding the counter makes it unique.

Now we have 21 log files:

$ ls host-stats-2* | wc -l
21

How big are these files, ie how many process/kernel threads were running
when top ran?

$ wc -l host-stats-2* | sort -n
    757 host-stats-22:12:32--17.log
    778 host-stats-22:18:21--8.log
    784 host-stats-21:59:42--5.log
    785 host-stats-21:59:42--12.log
    792 host-stats-22:18:01--7.log
    800 host-stats-22:18:21--14.log
    811 host-stats-22:07:40--13.log
    812 host-stats-22:07:59--6.log
    821 host-stats-21:56:21--3.log
    850 host-stats-21:59:33--11.log
    856 host-stats-21:59:33--4.log
    869 host-stats-22:29:49--16.log
    884 host-stats-22:29:14--9.log
    886 host-stats-22:29:36--15.log
    981 host-stats-21:55:40--10.log
    985 host-stats-22:47:27--2.log
    987 host-stats-22:47:27--21.log
   1124 host-stats-22:37:33--1.log
   1193 host-stats-22:37:26--20.log
   1304 host-stats-23:19:14--19.log
   1321 host-stats-23:18:57--18.log
  19380 total

I noticed that several but not all log files were running xz with args like:

    xz -a --memlimit=50% --threads=56

$ for i in `ls host-stats-2*`; do echo -n $i ": "; grep "xz " $i | wc -l; done
host-stats-21:55:40--10.log : 28
host-stats-21:56:21--3.log : 4
host-stats-21:59:33--11.log : 1
host-stats-21:59:33--4.log : 1
host-stats-21:59:42--12.log : 1
host-stats-21:59:42--5.log : 1
host-stats-22:07:40--13.log : 2
host-stats-22:07:59--6.log : 2
host-stats-22:12:32--17.log : 0
host-stats-22:18:01--7.log : 6
host-stats-22:18:21--14.log : 3
host-stats-22:18:21--8.log : 3
host-stats-22:29:14--9.log : 1
host-stats-22:29:36--15.log : 0
host-stats-22:29:49--16.log : 0
host-stats-22:37:26--20.log : 56
host-stats-22:37:33--1.log : 16
host-stats-22:47:27--21.log : 0
host-stats-22:47:27--2.log : 0
host-stats-23:18:57--18.log : 0
host-stats-23:19:14--19.log : 18

In this case, I don't think it's a problem, but if we had several packages
running xz like that at once with a limit of 50% of memory each,
that could be a problem. Has anyone looked at the time impact of,
say, reducing the number of threads to 32 and the memory limit to
15%?
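
For anyone wanting to experiment: oe-core routes those xz arguments through variables in bitbake.conf (XZ_MEMLIMIT and XZ_THREADS feed XZ_DEFAULTS; names assumed from oe-core around this time, so verify against your branch), which makes the experiment a two-line local.conf sketch:

# local.conf sketch -- tame parallel xz invocations (assumed oe-core knobs)
XZ_MEMLIMIT = "15%"
XZ_THREADS = "32"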


All of the top output logs seems to be running oe-selftest:

$ for i in host-stats-2*; do grep -H -c "DISPLAY.*oe-selftest " $i ; done
host-stats-21:55:40--10.log:1
host-stats-21:56:21--3.log:1
host-stats-21:59:33--11.log:1
host-stats-21:59:33--4.log:1
host-stats-21:59:42--12.log:1
host-stats-21:59:42--5.log:1
host-stats-22:07:40--13.log:1
host-stats-22:07:59--6.log:1
host-stats-22:12:32--17.log:1
host-stats-22:18:01--7.log:1
host-stats-22:18:21--14.log:1
host-stats-22:18:21--8.log:1
host-stats-22:29:14--9.log:1
host-stats-22:29:36--15.log:1
host-stats-22:29:49--16.log:1
host-stats-22:37:26--20.log:1
host-stats-22:37:33--1.log:1
host-stats-22:47:27--21.log:1
host-stats-22:47:27--2.log:1
host-stats-23:18:57--18.log:2
host-stats-23:19:14--19.log:2
$ for i in host-stats-2*; do grep -H -c "DISPLAY.*oe-selftest " $i ; done   | wc -l
21

Yikes, that seems like more than just random chance.


The logs do not seem to be duplicates, in that there isn't a single cluster
of identical or similar timestamps, although some are close and are
likely from the same file. That said, they certainly don't seem to
be spread out uniformly over time, which is what we would all expect:
the system response time is okay for much of the time and
is poor for quite a while every now and then.

$ for i in host-stats-2*; do echo -n $i ": "; head -1 $i | cut -c -15; done
host-stats-21:55:40--10.log : top - 21:55:40
host-stats-21:56:21--3.log   : top - 21:56:21
host-stats-21:59:33--11.log : top - 21:59:33
host-stats-21:59:33--4.log   : top - 21:59:33
host-stats-21:59:42--12.log : top - 21:59:42
host-stats-21:59:42--5.log   : top - 21:59:42
host-stats-22:07:40--13.log : top - 22:07:40
host-stats-22:07:59--6.log   : top - 22:07:59
host-stats-22:12:32--17.log : top - 22:12:32
host-stats-22:18:01--7.log   : top - 22:18:01
host-stats-22:18:21--14.log : top - 22:18:21
host-stats-22:18:21--8.log   : top - 22:18:21
host-stats-22:29:14--9.log   : top - 22:29:14
host-stats-22:29:36--15.log : top - 22:29:36
host-stats-22:29:49--16.log : top - 22:29:49
host-stats-22:37:26--20.log : top - 22:37:26
host-stats-22:37:33--1.log   : top - 22:37:33
host-stats-22:47:27--21.log : top - 22:47:27
host-stats-22:47:27--2.log   : top - 22:47:27
host-stats-23:18:57--18.log : top - 23:18:57
host-stats-23:19:14--19.log : top - 23:19:14


All for now.

../Randy




../Randy

Cheers,

Richard
The template is not used anywhere yet; the initial patchset enables the
data collection by default.

I have left the template in case the data collection is removed from
the defaults and needs to be used on a case-by-case basis.

I am not entirely sure if nesting templates works. I have not seen any
examples of it, nor did I try it myself. If nesting does work, the
template should be useful.
I had a quick look at the code and sadly, it doesn't appear I implemented
nesting so this wouldn't be that useful as things stand.

Cheers,

Richard




--
# Randy MacLeod
# Wind River Linux


#yocto #bitbake #gatesgarth #qca9377

jovanalukovic0@...
 

Hi,
I am trying to include the qca9377 module in my image (yocto-gatesgarth). I am using the command bitbake imx-image-full, and I added two lines to my local.conf file: MACHINE_FEATURES += " qca9377"
IMAGE_INSTALL_append += " kernel-module-qca9377", but I constantly get the same error:
ERROR: kernel-module-qca9377-3.1-r0 do_compile: oe_runmake failed
ERROR: kernel-module-qca9377-3.1-r0 do_compile: Execution of '/home/jovana/Projects/imx-yocto-bsp-gates/build-xwayland/tmp/work/imx7ulpevk-poky-linux-gnueabi/kernel-module-qca9377/3.1-r0/temp/run.do_compile.25256' failed with exit code 1:
make -C /home/jovana/Projects/imx-yocto-bsp-gates/build-xwayland/tmp/work-shared/imx7ulpevk/kernel-source M=/home/jovana/Projects/imx-yocto-bsp-gates/build-xwayland/tmp/work/imx7ulpevk-poky-linux-gnueabi/kernel-module-qca9377/3.1-r0/git modules WLAN_ROOT=/home/jovana/Projects/imx-yocto-bsp-gates/build-xwayland/tmp/work/imx7ulpevk-poky-linux-gnueabi/kernel-module-qca9377/3.1-r0/git MODNAME?=wlan CONFIG_QCA_WIFI_ISOC=0 CONFIG_QCA_WIFI_2_0=1 CONFIG_QCA_CLD_WLAN=m WLAN_OPEN_SOURCE=1  
make[1]: Entering directory '/home/jovana/Projects/imx-yocto-bsp-gates/build-xwayland/tmp/work-shared/imx7ulpevk/kernel-source'
make[2]: Entering directory '/home/jovana/Projects/imx-yocto-bsp-gates/build-xwayland/tmp/work-shared/imx7ulpevk/kernel-build-artifacts'
and this is just part of a whole group of errors. There are a lot of errors, for example:
cc1: some warnings being treated as errors
| /home/jovana/Projects/imx-yocto-bsp-gates/build-xwayland/tmp/work-shared/imx7ulpevk/kernel-source/scripts/Makefile.build:279: recipe for target '/home/jovana/Projects/imx-yocto-bsp-gates/build-xwayland/tmp/work/imx7ulpevk-poky-linux-gnueabi/kernel-module-qca9377/3.1-r0/git/CORE/HDD/src/wlan_hdd_oemdata.o' failed
| make[3]: *** [/home/jovana/Projects/imx-yocto-bsp-gates/build-xwayland/tmp/work/imx7ulpevk-poky-linux-gnueabi/kernel-module-qca9377/3.1-r0/git/CORE/HDD/src/wlan_hdd_oemdata.o] Error 1
| /home/jovana/Projects/imx-yocto-bsp-gates/build-xwayland/tmp/work-shared/imx7ulpevk/kernel-source/scripts/Makefile.build:279: recipe for target '/home/jovana/Projects/imx-yocto-bsp-gates/build-xwayland/tmp/work/imx7ulpevk-poky-linux-gnueabi/kernel-module-qca9377/3.1-r0/git/CORE/HDD/src/wlan_hdd_early_suspend.o' failed.
Do you have any ideas what I can do? Am I missing something in my build?

Best regards and thanks a lot!
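
Since the visible failure is "cc1: some warnings being treated as errors", one hedged workaround while debugging is to stop kbuild from promoting those warnings to errors. A hypothetical bbappend sketch, assuming the recipe inherits module.bbclass and its Makefile forwards EXTRA_CFLAGS to kbuild:

# kernel-module-qca9377_%.bbappend -- hypothetical debugging aid,
# not a fix for the underlying warnings
EXTRA_OEMAKE += "EXTRA_CFLAGS='-Wno-error'"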


#bitbake Can't use 'bitbake -g <image-name> -u taskdep #bitbake

keydi
 

Hi,
I had to ask a web search engine about the usage of the Task Dependency Explorer (taskexp), as I didn't succeed searching the Yocto materials.
Hence, the way I try to use taskexp might not be right.

When I try to start the Task Dependency Explorer, it does not work.
The invocation completes with an error in __init__.py -> require_version, which reports that the Gtk namespace is not available.
In which build area is Gtk missing: the image, the distribution, or yet another?
Which way of fixing should I aim for to resolve the error?

  File "/mnt/..../meta/poky/bitbake/lib/bb/ui/taskexp.py", line 22, in <module>
    gi.require_version('Gtk', '3.0')
  File "/usr/lib/python3/dist-packages/gi/__init__.py", line 130, in require_version
    raise ValueError('Namespace %s not available' % namespace)
ValueError: Namespace Gtk not available

Best Regards
keydi
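
For what it's worth, the missing Gtk namespace is a build-host issue rather than an image or distro one: taskexp imports Gtk through PyGObject from the host's Python. A minimal sketch of the usual fix, assuming a Debian/Ubuntu host (package names vary by distro):

# install GTK 3 introspection data for the host Python (Debian/Ubuntu names):
sudo apt install python3-gi gir1.2-gtk-3.0
# then retry:
bitbake -g <image-name> -u taskexp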
 


Re: [PATCH yocto-autobuilder-helper 2/4] config.json: collect data by default

Richard Purdie
 

On Fri, 2021-04-16 at 09:28 +0100, Richard Purdie via lists.yoctoproject.org wrote:
On Tue, 2021-04-13 at 13:02 -0400, sakib.sajal@windriver.com wrote:
add the variables required to collect data to "defaults"
so that data is collected on all builds.

Signed-off-by: Sakib Sajal <sakib.sajal@windriver.com>
Signed-off-by: Randy MacLeod <Randy.MacLeod@windriver.com>
---
 config.json | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/config.json b/config.json
index c43d231..cd82047 100644
--- a/config.json
+++ b/config.json
@@ -55,7 +55,10 @@
             "SDK_INCLUDE_TOOLCHAIN = '1'",
             "BB_DISKMON_DIRS = 'STOPTASKS,${TMPDIR},1G,100K STOPTASKS,${DL_DIR},1G STOPTASKS,${SSTATE_DIR},1G STOPTASKS,/tmp,100M,100K ABORT,${TMPDIR},100M,1K ABORT,${DL_DIR},100M ABORT,${SSTATE_DIR},100M ABORT,/tmp,10M,1K'",
             "BB_HASHSERVE = 'typhoon.yocto.io:8686'",
- "RUNQEMU_TMPFS_DIR = '/home/pokybuild/tmp'"
+ "RUNQEMU_TMPFS_DIR = '/home/pokybuild/tmp'",
+ "BB_HEARTBEAT_EVENT = '10'",
+ "BB_LOG_HOST_STAT_ON_INTERVAL = '1'",
+ "BB_LOG_HOST_STAT_CMDS = 'oe-time-dd-test.sh 100'"
         ]
     },
     "templates" : {
I merged 2-4 of this series, unfortunately this resulted in a few issues overnight:

https://autobuilder.yoctoproject.org/typhoon/#/builders/85/builds/1393

which is due to the non-executable script which there is a patch for, it just
wasn't in master due to the release. I've fixed that by merging the patches.

The bigger issue is the performance metrics which this broke:

https://autobuilder.yoctoproject.org/typhoon/#/builders/91/builds/4427
https://autobuilder.yoctoproject.org/typhoon/#/builders/92/builds/4453

We're going to need to disable these events on the performance metrics
targets...
There is also another issue: as BB_HEARTBEAT_EVENT defaults to 1, the change to 10
changes the default timings for buildstats and other pieces of code. In particular
I suspect this is breaking:

https://autobuilder.yoctoproject.org/typhoon/#/builders/80/builds/1993

and again in:

https://autobuilder.yoctoproject.org/typhoon/#/builders/79/builds/2014

in the disk monitoring selftest...

Cheers,

Richard


Re: [PATCH yocto-autobuilder-helper 2/4] config.json: collect data by default

Richard Purdie
 

On Tue, 2021-04-13 at 13:02 -0400, sakib.sajal@windriver.com wrote:
add the variables required to collect data to "defaults"
so that data is collected on all builds.

Signed-off-by: Sakib Sajal <sakib.sajal@windriver.com>
Signed-off-by: Randy MacLeod <Randy.MacLeod@windriver.com>
---
 config.json | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/config.json b/config.json
index c43d231..cd82047 100644
--- a/config.json
+++ b/config.json
@@ -55,7 +55,10 @@
             "SDK_INCLUDE_TOOLCHAIN = '1'",
             "BB_DISKMON_DIRS = 'STOPTASKS,${TMPDIR},1G,100K STOPTASKS,${DL_DIR},1G STOPTASKS,${SSTATE_DIR},1G STOPTASKS,/tmp,100M,100K ABORT,${TMPDIR},100M,1K ABORT,${DL_DIR},100M ABORT,${SSTATE_DIR},100M ABORT,/tmp,10M,1K'",
             "BB_HASHSERVE = 'typhoon.yocto.io:8686'",
- "RUNQEMU_TMPFS_DIR = '/home/pokybuild/tmp'"
+ "RUNQEMU_TMPFS_DIR = '/home/pokybuild/tmp'",
+ "BB_HEARTBEAT_EVENT = '10'",
+ "BB_LOG_HOST_STAT_ON_INTERVAL = '1'",
+ "BB_LOG_HOST_STAT_CMDS = 'oe-time-dd-test.sh 100'"
         ]
     },
     "templates" : {
I merged 2-4 of this series, unfortunately this resulted in a few issues overnight:

https://autobuilder.yoctoproject.org/typhoon/#/builders/85/builds/1393

which is due to the non-executable script which there is a patch for, it just
wasn't in master due to the release. I've fixed that by merging the patches.

The bigger issue is the performance metrics which this broke:

https://autobuilder.yoctoproject.org/typhoon/#/builders/91/builds/4427
https://autobuilder.yoctoproject.org/typhoon/#/builders/92/builds/4453

We're going to need to disable these events on the performance metrics
targets...

Cheers,

Richard


Re: Building image from Root

Mike Looijmans
 

You can use both by changing the directory for downloads and sstate-cache. Put these in your local.conf:

SSTATE_DIR = "/opt/sstate-cache"
DL_DIR = "/opt/downloads"

(Change "/opt" to some other location, and make sure you have write access to those directories.)

Move the contents of the build/sstate-cache and downloads to the new directories.
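
Roughly like this (a sketch; the /opt paths are the examples above and the build-directory layout is assumed):

# one-time migration -- adjust paths and ownership to your setup
sudo mkdir -p /opt/sstate-cache /opt/downloads
sudo chown "$USER" /opt/sstate-cache /opt/downloads
mv build/sstate-cache/* /opt/sstate-cache/
mv build/downloads/* /opt/downloads/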

You really should consider adding extra storage. It's the easiest way out. OpenEmbedded/Yocto is insensitive to disk speed, so if you have some old rotating disk in the attic, put that in your PC.



Met vriendelijke groet / kind regards,

Mike Looijmans
System Expert


TOPIC Embedded Products B.V.
Materiaalweg 4, 5681 RJ Best
The Netherlands

T: +31 (0) 499 33 69 69
E: mike.looijmans@topicproducts.com
W: www.topic.nl

Please consider the environment before printing this e-mail

On 15-04-2021 17:34, Murugesh M via lists.yoctoproject.org wrote:
Hi

I am new to the Yocto Project and have little experience with Linux.

On my computer, the root partition has 65 GB of free space and home has 45 GB free.

Shall I get poky in the root folder and do the complete Yocto image build process from the root directory itself?

Please suggest.

Thanks.

--
Mike Looijmans


Re: Building image from Root

Khem Raj
 

try adding

INHERIT += "rm_work"

to local.conf and see if that helps
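
rm_work deletes each recipe's work directory once that recipe has finished building, which reclaims most of the space under tmp/. A small sketch, including the exclude knob for recipes you still want to inspect (my-recipe is a placeholder):

# local.conf
INHERIT += "rm_work"
# keep the work directory for recipes you are actively debugging:
RM_WORK_EXCLUDE += "my-recipe"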

On Thu, Apr 15, 2021 at 11:11 PM Murugesh M <murugesh.pappu@gmail.com> wrote:

I had proceeded with the image build in my home directory and got stuck with low disk space.
Now the build is stopped almost at the last stage.

Please give me any suggestion to get out of this problem.


Re: Building image from Root

Murugesh M
 

I had proceeded with the image build in my home directory and got stuck with low disk space.
Now the build is stopped almost at the last stage.

Please give me any suggestion to get out of this problem.


Re: [PATCH yocto-autobuilder-helper 1/4] config.json: add "collect-data" template

Randy MacLeod
 

On 2021-04-15 1:55 p.m., Randy MacLeod wrote:
On 2021-04-15 11:55 a.m., Richard Purdie wrote:
On Thu, 2021-04-15 at 11:31 -0400, Sakib Sajal wrote:
On 2021-04-15 9:52 a.m., Richard Purdie wrote:
[Please note: This e-mail is from an EXTERNAL e-mail address]

On Tue, 2021-04-13 at 13:02 -0400, sakib.sajal@windriver.com wrote:
collect-data template can run arbitrary commands/scripts
on a regular basis and logs the output in a file.

See oe-core for more details:
      edb7098e9e buildstats.bbclass: add functionality to collect build system stats

Signed-off-by: Sakib Sajal <sakib.sajal@windriver.com>
Signed-off-by: Randy MacLeod <Randy.MacLeod@windriver.com>
---
   config.json | 7 +++++++
   1 file changed, 7 insertions(+)

diff --git a/config.json b/config.json
index 5bfa240..c43d231 100644
--- a/config.json
+++ b/config.json
@@ -87,6 +87,13 @@
                   "SANITYTARGETS" : "core-image-full-cmdline:do_testimage core-image-sato:do_testimage core-image-sato-sdk:do_testimage"
               }
           },
+     "collect-data" : {
+            "extravars" : [
+                "BB_HEARTBEAT_EVENT = '10'",
+                "BB_LOG_HOST_STAT_ON_INTERVAL = '1'",
+                "BB_LOG_HOST_STAT_CMDS = 'oe-time-dd-test.sh 100'"
+            ]
+        },
Is the template used anywhere? I can't remember if we support nesting templates in which
case this is useful, or not?
We were using it for testing on the YP AB and thought it would be
useful if at some point the monitoring was dropped from the
default config.
I think we can just add it later if needed.
Richard,

I think that the web server for:
https://autobuilder.yocto.io/pub/non-release/
runs every 30 seconds via cron so if you are happy with
this crude dd trigger once things have soaked in master-next
and we want to gather some data overnight, could you merge to master?


I ran a simpler test with fewer io stressors from:
$ stress -hdd N
and have attached a graph with up to 3000! stressors that
we looked at this morning and another with up to 35 stressors.

It's a crude indicator, but once we get beyond 18-20 io stressors
on the system I tested (48 cores, 128 GB RAM, 12 TB magnetic disk),
dd times become erratic.

Running qemu from tmpfs has clearly helped.
Let's gather some data and decide if we want to spend more time
learning how to monitor the system to tune how we are using it.

../Randy

../Randy

Cheers,

Richard
The template is not used anywhere yet; the initial patchset enables the
data collection by default.

I have left the template in case the data collection is removed from
the defaults and needs to be used on a case-by-case basis.

I am not entirely sure if nesting templates works. I have not seen any
examples of it, nor did I try it myself. If nesting does work, the
template should be useful.
I had a quick look at the code and sadly, it doesn't appear I implemented
nesting so this wouldn't be that useful as things stand.

Cheers,

Richard





--
# Randy MacLeod
# Wind River Linux


Re: [PATCH yocto-autobuilder-helper 1/4] config.json: add "collect-data" template

Randy MacLeod
 

On 2021-04-15 11:55 a.m., Richard Purdie wrote:
On Thu, 2021-04-15 at 11:31 -0400, Sakib Sajal wrote:
On 2021-04-15 9:52 a.m., Richard Purdie wrote:
[Please note: This e-mail is from an EXTERNAL e-mail address]

On Tue, 2021-04-13 at 13:02 -0400, sakib.sajal@windriver.com wrote:
collect-data template can run arbitrary commands/scripts
on a regular basis and logs the output in a file.

See oe-core for more details:
     edb7098e9e buildstats.bbclass: add functionality to collect build system stats

Signed-off-by: Sakib Sajal <sakib.sajal@windriver.com>
Signed-off-by: Randy MacLeod <Randy.MacLeod@windriver.com>
---
  config.json | 7 +++++++
  1 file changed, 7 insertions(+)

diff --git a/config.json b/config.json
index 5bfa240..c43d231 100644
--- a/config.json
+++ b/config.json
@@ -87,6 +87,13 @@
                  "SANITYTARGETS" : "core-image-full-cmdline:do_testimage core-image-sato:do_testimage core-image-sato-sdk:do_testimage"
              }
          },
+ "collect-data" : {
+ "extravars" : [
+ "BB_HEARTBEAT_EVENT = '10'",
+ "BB_LOG_HOST_STAT_ON_INTERVAL = '1'",
+ "BB_LOG_HOST_STAT_CMDS = 'oe-time-dd-test.sh 100'"
+ ]
+ },
Is the template used anywhere? I can't remember if we support nesting templates in which
case this is useful, or not?
We were using it for testing on the YP AB and thought it would be
useful if at some point the monitoring was dropped from the
default config.

I think we can just add it later if needed.

../Randy

Cheers,

Richard
The template is not used anywhere yet; the initial patchset enables the
data collection by default.

I have left the template in case the data collection is removed from
the defaults and needs to be used on a case-by-case basis.

I am not entirely sure if nesting templates works. I have not seen any
examples of it, nor did I try it myself. If nesting does work, the
template should be useful.
I had a quick look at the code and sadly, it doesn't appear I implemented
nesting so this wouldn't be that useful as things stand.
Cheers,
Richard

--
# Randy MacLeod
# Wind River Linux


Re: [PATCH yocto-autobuilder-helper 1/4] config.json: add "collect-data" template

Richard Purdie
 

On Thu, 2021-04-15 at 11:31 -0400, Sakib Sajal wrote:
On 2021-04-15 9:52 a.m., Richard Purdie wrote:
[Please note: This e-mail is from an EXTERNAL e-mail address]

On Tue, 2021-04-13 at 13:02 -0400, sakib.sajal@windriver.com wrote:
collect-data template can run arbitrary commands/scripts
on a regular basis and logs the output in a file.

See oe-core for more details:
     edb7098e9e buildstats.bbclass: add functionality to collect build system stats

Signed-off-by: Sakib Sajal <sakib.sajal@windriver.com>
Signed-off-by: Randy MacLeod <Randy.MacLeod@windriver.com>
---
  config.json | 7 +++++++
  1 file changed, 7 insertions(+)

diff --git a/config.json b/config.json
index 5bfa240..c43d231 100644
--- a/config.json
+++ b/config.json
@@ -87,6 +87,13 @@
                  "SANITYTARGETS" : "core-image-full-cmdline:do_testimage core-image-sato:do_testimage core-image-sato-sdk:do_testimage"
              }
          },
+ "collect-data" : {
+ "extravars" : [
+ "BB_HEARTBEAT_EVENT = '10'",
+ "BB_LOG_HOST_STAT_ON_INTERVAL = '1'",
+ "BB_LOG_HOST_STAT_CMDS = 'oe-time-dd-test.sh 100'"
+ ]
+ },
Is the template used anywhere? I can't remember if we support nesting templates in which
case this is useful, or not?

Cheers,

Richard
The template is not used anywhere yet; the initial patchset enables the
data collection by default.

I have left the template in case the data collection is removed from
the defaults and needs to be used on a case-by-case basis.

I am not entirely sure if nesting templates works. I have not seen any
examples of it, nor did I try it myself. If nesting does work, the
template should be useful.
I had a quick look at the code and sadly, it doesn't appear I implemented 
nesting so this wouldn't be that useful as things stand.

Cheers,

Richard


Building image from Root

Murugesh M
 

Hi

I am new to the Yocto Project and have little experience with Linux.

On my computer, the root partition has 65 GB of free space and home has 45 GB free.

Shall I get poky in the root folder and do the complete Yocto image build process from the root directory itself?

Please suggest.

Thanks.


Re: [PATCH yocto-autobuilder-helper 1/4] config.json: add "collect-data" template

Richard Purdie
 

On Tue, 2021-04-13 at 13:02 -0400, sakib.sajal@windriver.com wrote:
collect-data template can run arbitrary commands/scripts
on a regular basis and logs the output in a file.

See oe-core for more details:
    edb7098e9e buildstats.bbclass: add functionality to collect build system stats

Signed-off-by: Sakib Sajal <sakib.sajal@windriver.com>
Signed-off-by: Randy MacLeod <Randy.MacLeod@windriver.com>
---
 config.json | 7 +++++++
 1 file changed, 7 insertions(+)

diff --git a/config.json b/config.json
index 5bfa240..c43d231 100644
--- a/config.json
+++ b/config.json
@@ -87,6 +87,13 @@
                 "SANITYTARGETS" : "core-image-full-cmdline:do_testimage core-image-sato:do_testimage core-image-sato-sdk:do_testimage"
             }
         },
+ "collect-data" : {
+ "extravars" : [
+ "BB_HEARTBEAT_EVENT = '10'",
+ "BB_LOG_HOST_STAT_ON_INTERVAL = '1'",
+ "BB_LOG_HOST_STAT_CMDS = 'oe-time-dd-test.sh 100'"
+ ]
+ },
Is the template used anywhere? I can't remember if we support nesting templates in which 
case this is useful, or not?

Cheers,

Richard


what to include in a "hardware bringup image"?

Robert P. J. Day
 

for a current project (and subsequent projects), i want to define a
hardware bringup image; that is, a really basic image chock-full of
low-level utilities for debugging initial board bringup. this means
precious little unnecessary userspace crud that has no value in that
context. (at the moment, the target is aarch64 but, naturally, the
ideal image would be maximally applicable no matter the target.)

i'm thinking of recipes that allow probing/configuration of memory
and busses and that sort of thing. off the top of my head:

* pciutils
* usbutils
* libgpiod
* phytool (or other MDIO probe/debug tools)
* devmem2
* spidev-test/spitools
* i2c-tools
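
pulled together, that might look something like the following sketch (recipe
names assumed from oe-core/meta-oe; phytool left out since its recipe
location varies):

# hw-bringup-image.bb -- hypothetical sketch
SUMMARY = "hardware bringup image with low-level debug tools"
LICENSE = "MIT"
inherit core-image
IMAGE_INSTALL += " \
    pciutils usbutils libgpiod i2c-tools \
    devmem2 spidev-test \
"
# handy during bringup: passwordless root and ssh access
IMAGE_FEATURES += "debug-tweaks ssh-server-dropbear"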

the list can go on and on, and i just ran across this which i had
never seen before:

http://cgit.openembedded.org/meta-openembedded/tree/meta-oe/recipes-support/c-periphery/c-periphery_2.3.1.bb?h=master

suggestions? i suspect numerous folks on this list have already done
something like this.

rday
