
[meta-networking][gatesgarth] keepalived exec_python_func() error

morgan.hill@...
 

Hello,

We are experiencing build failures only on our build servers, but not
when using the same containerised build environment locally, which is
confusing to say the least. We have already tried the nuclear option
of deleting all the caches, to no avail.

Below is the error message we are getting; if anyone has an idea for
further debugging, it would be much appreciated:

ERROR: keepalived-2.1.5-r0 do_package: Error executing a python function in exec_python_func() autogenerated:

The stack trace of python calls that resulted in this exception/failure was:

File: 'exec_python_func() autogenerated', lineno: 2, function: <module>

     0001:
 *** 0002:perform_packagecopy(d)
     0003:

File: '/var/sstate/build/intel-corei7-64/poky/build/conf/../../../poky/meta/classes/package.bbclass', lineno: 826, function: perform_packagecopy

     0822:    rpath_replace (dvar, d)
     0823:}
     0824:perform_packagecopy[cleandirs] = "${PKGD}"
     0825:perform_packagecopy[dirs] = "${PKGD}"
 *** 0826:
     0827:# We generate a master list of directories to process, we start by
     0828:# seeding this list with reasonable defaults, then load from
     0829:# the fs-perms.txt files
     0830:python fixup_perms () {

File: '/usr/lib/python3.6/subprocess.py', lineno: 356, function: check_output

     0352:        # empty string. That is maintained here for backwards compatibility.
     0353:        kwargs['input'] = '' if kwargs.get('universal_newlines', False) else b''
     0354:
     0355:    return run(*popenargs, stdout=PIPE, timeout=timeout, check=True,
 *** 0356:               **kwargs).stdout
     0357:
     0358:
     0359:class CompletedProcess(object):
     0360:    """A process that has finished running.

File: '/usr/lib/python3.6/subprocess.py', lineno: 438, function: run

     0434:            raise
     0435:        retcode = process.poll()
     0436:        if check and retcode:
     0437:            raise CalledProcessError(retcode, process.args,
 *** 0438:                                     output=stdout, stderr=stderr)
     0439:    return CompletedProcess(process.args, retcode, stdout, stderr)
     0440:
     0441:
     0442:def list2cmdline(seq):

Exception: subprocess.CalledProcessError: Command 'tar -cf - -C /var/sstate/build/intel-corei7-64/poky/build/tmp-glibc/work/corei7-64-holoplot-linux/keepalived/2.1.5-r0/image -p -S . | tar -xf - -C /var/sstate/build/intel-corei7-64/poky/build/tmp-glibc/work/corei7-64-holoplot-linux/keepalived/2.1.5-r0/package' returned non-zero exit status 2.

Subprocess output:

abort()ing pseudo client by server request. See https://wiki.yoctoproject.org/wiki/Pseudo_Abort for more details on this.

Check logfile: /var/sstate/build/intel-corei7-64/poky/build/tmp-glibc/work/corei7-64-holoplot-linux/keepalived/2.1.5-r0/pseudo//pseudo.log

Aborted (core dumped)
tar: Unexpected EOF in archive
tar: Unexpected EOF in archive
tar: Error is not recoverable: exiting now
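The failing command is just package.bbclass's packagecopy tar pipe, run under pseudo. One way to separate a tar problem from a pseudo problem is to run the same pipe outside pseudo; the sketch below uses throwaway stand-in directories, not the real WORKDIR paths:

```shell
# Stand-in reproduction of the perform_packagecopy tar pipe, outside
# pseudo, on throwaway directories (the real paths live under WORKDIR).
image=$(mktemp -d)
package=$(mktemp -d)
echo "hello" > "$image/etc-file"
tar -cf - -C "$image" -p -S . | tar -xf - -C "$package"
ls "$package"
```

If the pipe works cleanly outside pseudo, the "abort()ing pseudo client by server request" message points at pseudo itself, and pseudo.log is the place to look, per the wiki link above.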


Best regards,

Morgan



Verifying Yocto image on BeagleBone

Murugesh M
 

Hello

I have built the Yocto gatesgarth image for the BeagleBone Black, loaded it onto an SD card, and booted the board.

Is there any way to check whether the image was built correctly and is working on the BeagleBone?

Please suggest.

Note: I don't have a UART-to-USB converter with me.

Thanks.


Re: [yocto-rocko] : fsl-community-bsp X11 build touch screen calibration issue on iMX6UL

rohit jadhav
 

Hello,
I am experiencing the same issue with a Rocko-built rootfs running a Qt application on X11.

I have tried the solution provided by Max, but it did not work for me.

From my log:
Using calibration data stored in /etc/pointercal.xinput
Invalid format 42060
unable to find device EETI eGalax Touch Screen
INFO: width=1024, height=600
imx6ull14x14evk login: RandR extension missing
matchbox: Cant find a keycode for keysym 269025056
matchbox: ignoring key shortcut XF86Calendar=!$contacts

matchbox: Cant find a keycode for keysym 2809
matchbox: ignoring key shortcut telephone=!$dates

matchbox: Cant find a keycode for keysym 269025050
matchbox: ignoring key shortcut XF86Start=!matchbox-remote -desktop

Activating service name='org.a11y.atspi.Registry'
Successfully activated service 'org.a11y.atspi.Registry'
SpiRegistry daemon is running with well-known name - org.a11y.atspi.Registry
random: nonblocking pool is initialized

I am using a TFT TOUCH MODULE touch screen, module no. SF 70175.

Please provide some guidance.
Thank you.

Regards
Rohit J.


[meta-security] [dunfell] [PATCH 3/3] initramfs-framework-ima: introduce IMA_FORCE

Ming Liu <liu.ming50@...>
 

From: Ming Liu <liu.ming50@gmail.com>

Introduce IMA_FORCE to allow the IMA policy to be applied forcibly even
when the 'no_ima' boot parameter is present.

This ensures end users have a way to disable 'no_ima' support if they
want to, because it may expose a security risk: if an attacker can find
a way to change the kernel arguments, rootfs authenticity checks are
easily bypassed.

Signed-off-by: Sergio Prado <sergio.prado@toradex.com>
Signed-off-by: Ming Liu <liu.ming50@gmail.com>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
---
.../initrdscripts/initramfs-framework-ima.bb | 5 +++++
.../initrdscripts/initramfs-framework-ima/ima | 9 +++++++--
2 files changed, 12 insertions(+), 2 deletions(-)

diff --git a/meta-integrity/recipes-core/initrdscripts/initramfs-framework-ima.bb b/meta-integrity/recipes-core/initrdscripts/initramfs-framework-ima.bb
index 77f6f7c..6471c53 100644
--- a/meta-integrity/recipes-core/initrdscripts/initramfs-framework-ima.bb
+++ b/meta-integrity/recipes-core/initrdscripts/initramfs-framework-ima.bb
@@ -14,6 +14,9 @@ LIC_FILES_CHKSUM = "file://${COREBASE}/meta/COPYING.MIT;md5=3da9cfbcb788c80a0384
 # to this recipe can just point towards one of its own files.
 IMA_POLICY ?= "ima-policy-hashed"
 
+# Force proceed IMA procedure even 'no_ima' boot parameter is available.
+IMA_FORCE ?= "false"
+
 SRC_URI = " file://ima"
 
 inherit features_check
@@ -23,6 +26,8 @@ do_install () {
     install -d ${D}/${sysconfdir}/ima
     install -d ${D}/init.d
     install ${WORKDIR}/ima ${D}/init.d/20-ima
+
+    sed -i "s/@@FORCE_IMA@@/${IMA_FORCE}/g" ${D}/init.d/20-ima
 }
 
 FILES_${PN} = "/init.d ${sysconfdir}"
diff --git a/meta-integrity/recipes-core/initrdscripts/initramfs-framework-ima/ima b/meta-integrity/recipes-core/initrdscripts/initramfs-framework-ima/ima
index cff26a3..8971494 100644
--- a/meta-integrity/recipes-core/initrdscripts/initramfs-framework-ima/ima
+++ b/meta-integrity/recipes-core/initrdscripts/initramfs-framework-ima/ima
@@ -2,11 +2,16 @@
 #
 # Loads IMA policy into the kernel.
 
+force_ima=@@FORCE_IMA@@
+
 ima_enabled() {
-    if [ "$bootparam_no_ima" = "true" ]; then
+    if [ "$force_ima" = "true" ]; then
+        return 0
+    elif [ "$bootparam_no_ima" = "true" ]; then
         return 1
+    else
+        return 0
     fi
-    return 0
 }
 
 ima_run() {
--
2.29.0
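Decoded, the resulting init script logic behaves like this sketch; the variable values here are examples only (in the real script, force_ima is substituted at build time from IMA_FORCE, and bootparam_no_ima is set by the initramfs framework's cmdline parsing):

```shell
# Sketch of the patched ima_enabled(): IMA_FORCE=true wins over no_ima.
force_ima=true        # example stand-in for the @@FORCE_IMA@@ substitution
bootparam_no_ima=true # example: 'no_ima' present on the kernel cmdline

ima_enabled() {
    if [ "$force_ima" = "true" ]; then
        return 0
    elif [ "$bootparam_no_ima" = "true" ]; then
        return 1
    else
        return 0
    fi
}

ima_enabled && echo "IMA policy will be loaded"  # prints "IMA policy will be loaded"
```

With force_ima=false the function falls back to the old behaviour: no_ima disables the policy load, otherwise it proceeds.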


[meta-security] [dunfell] [PATCH 2/3] meta: drop IMA_POLICY from policy recipes

Ming Liu <liu.ming50@...>
 

From: Ming Liu <liu.ming50@gmail.com>

IMA_POLICY is referred to as the policy recipe name in some places and
as the policy file name in others; the two usages conflict, which makes
it impossible to set an IMA_POLICY global variable in a config file.

Fix it by dropping the IMA_POLICY definitions from the policy recipes.

Signed-off-by: Ming Liu <liu.ming50@gmail.com>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
---
.../ima-policy-appraise-all_1.0.bb | 9 ++-------
.../ima_policy_hashed/ima-policy-hashed_1.0.bb | 9 ++-------
.../ima_policy_simple/ima-policy-simple_1.0.bb | 9 ++-------
3 files changed, 6 insertions(+), 21 deletions(-)

diff --git a/meta-integrity/recipes-security/ima_policy_appraise_all/ima-policy-appraise-all_1.0.bb b/meta-integrity/recipes-security/ima_policy_appraise_all/ima-policy-appraise-all_1.0.bb
index da62a4c..84ea161 100644
--- a/meta-integrity/recipes-security/ima_policy_appraise_all/ima-policy-appraise-all_1.0.bb
+++ b/meta-integrity/recipes-security/ima_policy_appraise_all/ima-policy-appraise-all_1.0.bb
@@ -2,19 +2,14 @@ SUMMARY = "IMA sample simple appraise policy "
 LICENSE = "MIT"
 LIC_FILES_CHKSUM = "file://${COREBASE}/meta/COPYING.MIT;md5=3da9cfbcb788c80a0384361b4de20420"
 
-# This policy file will get installed as /etc/ima/ima-policy.
-# It is located via the normal file search path, so a .bbappend
-# to this recipe can just point towards one of its own files.
-IMA_POLICY ?= "ima_policy_appraise_all"
-
-SRC_URI = " file://${IMA_POLICY}"
+SRC_URI = " file://ima_policy_appraise_all"
 
 inherit features_check
 REQUIRED_DISTRO_FEATURES = "ima"
 
 do_install () {
     install -d ${D}/${sysconfdir}/ima
-    install ${WORKDIR}/${IMA_POLICY} ${D}/${sysconfdir}/ima/ima-policy
+    install ${WORKDIR}/ima_policy_appraise_all ${D}/${sysconfdir}/ima/ima-policy
 }
 
 FILES_${PN} = "${sysconfdir}/ima"
diff --git a/meta-integrity/recipes-security/ima_policy_hashed/ima-policy-hashed_1.0.bb b/meta-integrity/recipes-security/ima_policy_hashed/ima-policy-hashed_1.0.bb
index ebb0426..ff7169e 100644
--- a/meta-integrity/recipes-security/ima_policy_hashed/ima-policy-hashed_1.0.bb
+++ b/meta-integrity/recipes-security/ima_policy_hashed/ima-policy-hashed_1.0.bb
@@ -2,13 +2,8 @@ SUMMARY = "IMA sample hash policy"
 LICENSE = "MIT"
 LIC_FILES_CHKSUM = "file://${COREBASE}/meta/COPYING.MIT;md5=3da9cfbcb788c80a0384361b4de20420"
 
-# This policy file will get installed as /etc/ima/ima-policy.
-# It is located via the normal file search path, so a .bbappend
-# to this recipe can just point towards one of its own files.
-IMA_POLICY ?= "ima_policy_hashed"
-
 SRC_URI = " \
-    file://${IMA_POLICY} \
+    file://ima_policy_hashed \
 "
 
 inherit features_check
@@ -16,7 +11,7 @@ REQUIRED_DISTRO_FEATURES = "ima"
 
 do_install () {
     install -d ${D}/${sysconfdir}/ima
-    install ${WORKDIR}/${IMA_POLICY} ${D}/${sysconfdir}/ima/ima-policy
+    install ${WORKDIR}/ima_policy_hashed ${D}/${sysconfdir}/ima/ima-policy
 }
 
 FILES_${PN} = "${sysconfdir}/ima"
diff --git a/meta-integrity/recipes-security/ima_policy_simple/ima-policy-simple_1.0.bb b/meta-integrity/recipes-security/ima_policy_simple/ima-policy-simple_1.0.bb
index cb4b6b8..0e56aec 100644
--- a/meta-integrity/recipes-security/ima_policy_simple/ima-policy-simple_1.0.bb
+++ b/meta-integrity/recipes-security/ima_policy_simple/ima-policy-simple_1.0.bb
@@ -2,19 +2,14 @@ SUMMARY = "IMA sample simple policy"
 LICENSE = "MIT"
 LIC_FILES_CHKSUM = "file://${COREBASE}/meta/COPYING.MIT;md5=3da9cfbcb788c80a0384361b4de20420"
 
-# This policy file will get installed as /etc/ima/ima-policy.
-# It is located via the normal file search path, so a .bbappend
-# to this recipe can just point towards one of its own files.
-IMA_POLICY ?= "ima_policy_simple"
-
-SRC_URI = " file://${IMA_POLICY}"
+SRC_URI = " file://ima_policy_simple"
 
 inherit features_check
 REQUIRED_DISTRO_FEATURES = "ima"
 
 do_install () {
     install -d ${D}/${sysconfdir}/ima
-    install ${WORKDIR}/${IMA_POLICY} ${D}/${sysconfdir}/ima/ima-policy
+    install ${WORKDIR}/ima_policy_simple ${D}/${sysconfdir}/ima/ima-policy
 }
 
 FILES_${PN} = "${sysconfdir}/ima"
--
2.29.0


[meta-security] [dunfell] [PATCH 1/3] ima-evm-keys: add file-checksums to IMA_EVM_X509

Ming Liu <liu.ming50@...>
 

From: Ming Liu <liu.ming50@gmail.com>

This ensures that when an end user changes the IMA_EVM_X509 key file,
the ima-evm-keys recipe will be rebuilt.

Signed-off-by: Ming Liu <liu.ming50@gmail.com>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
---
meta-integrity/recipes-security/ima-evm-keys/ima-evm-keys_1.0.bb | 1 +
1 file changed, 1 insertion(+)

diff --git a/meta-integrity/recipes-security/ima-evm-keys/ima-evm-keys_1.0.bb b/meta-integrity/recipes-security/ima-evm-keys/ima-evm-keys_1.0.bb
index 62685bb..7708aef 100644
--- a/meta-integrity/recipes-security/ima-evm-keys/ima-evm-keys_1.0.bb
+++ b/meta-integrity/recipes-security/ima-evm-keys/ima-evm-keys_1.0.bb
@@ -14,3 +14,4 @@ do_install () {
         lnr ${D}${sysconfdir}/keys/x509_evm.der ${D}${sysconfdir}/keys/x509_ima.der
     fi
 }
+do_install[file-checksums] += "${@'${IMA_EVM_X509}:%s' % os.path.exists('${IMA_EVM_X509}')}"
--
2.29.0
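The inline ${@...} Python there expands, at parse time, to a "<path>:<True|False>" string, so the do_install task checksum changes whenever the key file appears or disappears (and the file-checksums machinery tracks its content). A stand-in evaluation, using a hypothetical path rather than a real IMA_EVM_X509 value:

```shell
# What the ${@...} expression evaluates to at parse time
# (stand-in path; the real one comes from IMA_EVM_X509):
IMA_EVM_X509="/no/such/x509_evm.der"
python3 -c 'import os, sys; p = sys.argv[1]; print("%s:%s" % (p, os.path.exists(p)))' "$IMA_EVM_X509"
```

For a nonexistent path this prints "/no/such/x509_evm.der:False"; once the key file is created, the string changes and the task reruns.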


[meta-security] [dunfell] [PATCH 0/3] Backport several IMA fixes to LTS dunfell

Ming Liu <liu.ming50@...>
 

From: Ming Liu <ming.liu@toradex.com>

Ming Liu (3):
ima-evm-keys: add file-checksums to IMA_EVM_X509
meta: drop IMA_POLICY from policy recipes
initramfs-framework-ima: introduce IMA_FORCE

.../initrdscripts/initramfs-framework-ima.bb | 5 +++++
.../initrdscripts/initramfs-framework-ima/ima | 9 +++++++--
.../recipes-security/ima-evm-keys/ima-evm-keys_1.0.bb | 1 +
.../ima-policy-appraise-all_1.0.bb | 9 ++-------
.../ima_policy_hashed/ima-policy-hashed_1.0.bb | 9 ++-------
.../ima_policy_simple/ima-policy-simple_1.0.bb | 9 ++-------
6 files changed, 19 insertions(+), 23 deletions(-)

--=20
2.29.0


#yocto #llvm

Monsees, Steven C (US)
 

 

I attempted to add llvm to my zeus image, and I am seeing the following Yocto build error…

 

What is the actual problem here, and how is it best resolved?

 

Build Configuration:

BB_VERSION           = "1.44.0"

BUILD_SYS            = "x86_64-linux"

NATIVELSBSTRING      = "rhel-7.9"

TARGET_SYS           = "x86_64-poky-linux"

MACHINE              = "sbcb-default"

DISTRO               = "limws"

DISTRO_VERSION       = "3.0.4"

TUNE_FEATURES        = "m64 corei7"

TARGET_FPU           = ""

meta

meta-poky            = "my_yocto_3.0.4:f2eb22a8783f1eecf99bd4042695bab920eed00e"

meta-perl

meta-python

meta-filesystems

meta-networking

meta-initramfs

meta-oe              = "zeus:2b5dd1eb81cd08bc065bc76125f2856e9383e98b"

meta-clang           = "zeus:f5355ca9b86fb5de5930132ffd95a9b352d694f9"

meta                 = "master:a32ddd2b2a51b26c011fa50e441df39304651503"

meta-intel           = "zeus:d9942d4c3a710406b051852de7232db03c297f4e"

meta-intel           = "v2019.02:f635a364c55f1fb12519aff54924a0a5b947091e"

 

Initialising tasks: 100% |#######################################################| Time: 0:00:04

Checking sstate mirror object availability: 100% |###############################| Time: 0:00:00

Sstate summary: Wanted 2129 Found 2090 Missed 39 Current 0 (98% match, 0% complete)

NOTE: Executing Tasks

NOTE: Setscene tasks completed

ERROR: llvm-8.0.1-r0 do_compile: Execution of '/disk0/scratch/smonsees/yocto/workspace_3/builds2/sbcb-default/tmp/work/corei7-64-poky-linux/llvm/8.0.1-r0/temp/run.do_compile.18964' failed with exit code 1:

ninja: error: '/disk0/scratch/smonsees/yocto/workspace_3/builds2/sbcb-default/tmp/work/corei7-64-poky-linux/llvm/8.0.1-r0/recipe-sysroot-native/usr/bin/llvm-tblgen8.0.1', needed by 'include/llvm/IR/Attributes.inc', missing and no known rule to make it

WARNING: /disk0/scratch/smonsees/yocto/workspace_3/builds2/sbcb-default/tmp/work/corei7-64-poky-linux/llvm/8.0.1-r0/temp/run.do_compile.18964:1 exit 1 from 'ninja -v -j 4'

 

ERROR: Logfile of failure stored in: /disk0/scratch/smonsees/yocto/workspace_3/builds2/sbcb-default/tmp/work/corei7-64-poky-linux/llvm/8.0.1-r0/temp/log.do_compile.18964

Log data follows:

| DEBUG: Executing shell function do_compile

| ninja: error: '/disk0/scratch/smonsees/yocto/workspace_3/builds2/sbcb-default/tmp/work/corei7-64-poky-linux/llvm/8.0.1-r0/recipe-sysroot-native/usr/bin/llvm-tblgen8.0.1', needed by 'include/llvm/IR/Attributes.inc', missing and no known rule to make it

| WARNING: /disk0/scratch/smonsees/yocto/workspace_3/builds2/sbcb-default/tmp/work/corei7-64-poky-linux/llvm/8.0.1-r0/temp/run.do_compile.18964:1 exit 1 from 'ninja -v -j 4'

| ERROR: Execution of '/disk0/scratch/smonsees/yocto/workspace_3/builds2/sbcb-default/tmp/work/corei7-64-poky-linux/llvm/8.0.1-r0/temp/run.do_compile.18964' failed with exit code 1:

| ninja: error: '/disk0/scratch/smonsees/yocto/workspace_3/builds2/sbcb-default/tmp/work/corei7-64-poky-linux/llvm/8.0.1-r0/recipe-sysroot-native/usr/bin/llvm-tblgen8.0.1', needed by 'include/llvm/IR/Attributes.inc', missing and no known rule to make it

| WARNING: /disk0/scratch/smonsees/yocto/workspace_3/builds2/sbcb-default/tmp/work/corei7-64-poky-linux/llvm/8.0.1-r0/temp/run.do_compile.18964:1 exit 1 from 'ninja -v -j 4'

|

ERROR: Task (/disk0/scratch/smonsees/yocto/workspace_3/poky/meta/recipes-devtools/llvm/llvm_git.bb:do_compile) failed with exit code '1'

NOTE: Tasks Summary: Attempted 5949 tasks of which 5385 didn't need to be rerun and 1 failed.

 

Summary: 1 task failed:

  /disk0/scratch/smonsees/yocto/workspace_3/poky/meta/recipes-devtools/llvm/llvm_git.bb:do_compile

Summary: There was 1 ERROR message shown, returning a non-zero exit code.

15:16 smonsees@yix490038 /disk0/scratch/smonsees/yocto/workspace_3/builds2/sbcb-default>


Re: bitbake controlling memory use

Gmane Admin
 

Hi,
Op 18-04-2021 om 11:59 schreef Richard Purdie:
On Sun, 2021-04-18 at 00:17 +0200, Gmane Admin wrote:
Hi,
Op 14-04-2021 om 06:59 schreef Richard Purdie:
On Tue, 2021-04-13 at 21:14 -0400, Randy MacLeod wrote:
On 2021-04-11 12:19 p.m., Alexander Kanavin wrote:
make already has -l option for limiting new instances if load average is
too high, so it's only natural to add a RAM limiter too.

    -l [N], --load-average[=N], --max-load[=N]
                                Don't start multiple jobs unless load is
below N.

In any case, patches welcome :)
During today's Yocto technical call (1),
we talked about approaches to limiting the system load and avoiding
swap and/or OOM events. Here's what (little!) I recall from the
discussion, 9 busy hours later.

In the short run, instead of independently maintaining changes to
configurations to limit parallelism or xz memory usage, etc, we
could develop an optional common include file where such limits
are shared across the community.
I tried PARALLEL_MAKE_nodejs = "-j 1" from local.conf but that didn't work.
It would need to be:
PARALLEL_MAKE_pn-nodejs = "-j 1"

So I watched it run for a while. It compiles with g++ and as at about
0.5GB per thread, which is OK. In the end it does ld taking 4GB and it
tries to do 4 in parallel. And then swapping becomes so heavy the
desktop becomes unresponsive. Like I mentioned before ssh from another
machine allows me to STOP one of them, allowing the remaining to
complete. And then CONT the last one.

I worked around it now, by creating a bbappend for nodejs with only
PARALLEL_MAKE = "-j 2"
If that works, the override above should also work. You do need the "pn-"
prefix to the recipe name though.
And indeed it does, thanks so much for the tip.

Ferry

Cheers,
Richard
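The working local.conf fragment from this thread, for anyone hitting the same nodejs link-step swapping:

```
# Limit make parallelism for the nodejs recipe only
PARALLEL_MAKE_pn-nodejs = "-j 2"
```

The "pn-" prefix scopes the override to a single recipe; without it, the assignment is silently ignored.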


[PATCH yocto-autobuilder-helper] config.json: measure every 60 seconds

Randy MacLeod
 

With the previous interval of 10 seconds, there were
several times when the system was very busy and the
script would not return before the next run was scheduled,
resulting in no measurement. In addition, build:
https://autobuilder.yocto.io/pub/non-release/20210417-13/
produced 17 files of top output, with top running 454 times,
and that's a bit too much data to analyze for each run. By
decreasing the measurement frequency, we'll find the worst
problems first, fix them, and then we can increase the
frequency of measurement if needed.

Signed-off-by: Randy MacLeod <Randy.MacLeod@windriver.com>
---
config.json | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/config.json b/config.json
index aad5257..962d8ae 100644
--- a/config.json
+++ b/config.json
@@ -56,7 +56,7 @@
"BB_DISKMON_DIRS = 'STOPTASKS,${TMPDIR},1G,100K STOPTASKS,${DL_DIR},1G STOPTASKS,${SSTATE_DIR},1G STOPTASKS,/tmp,100M,100K ABORT,${TMPDIR},100M,1K ABORT,${DL_DIR},100M ABORT,${SSTATE_DIR},100M ABORT,/tmp,10M,1K'",
"BB_HASHSERVE = 'typhoon.yocto.io:8686'",
"RUNQEMU_TMPFS_DIR = '/home/pokybuild/tmp'",
- "BB_HEARTBEAT_EVENT = '10'",
+ "BB_HEARTBEAT_EVENT = '60'",
"BB_LOG_HOST_STAT_ON_INTERVAL = '1'",
"BB_LOG_HOST_STAT_CMDS = 'oe-time-dd-test.sh 100'"
]
--
2.27.0


Re: bitbake controlling memory use

Richard Purdie
 

On Sun, 2021-04-18 at 00:17 +0200, Gmane Admin wrote:
Hi,
Op 14-04-2021 om 06:59 schreef Richard Purdie:
On Tue, 2021-04-13 at 21:14 -0400, Randy MacLeod wrote:
On 2021-04-11 12:19 p.m., Alexander Kanavin wrote:
make already has -l option for limiting new instances if load average is
too high, so it's only natural to add a RAM limiter too.

    -l [N], --load-average[=N], --max-load[=N]
                                Don't start multiple jobs unless load is
below N.

In any case, patches welcome :)
During today's Yocto technical call (1),
we talked about approaches to limiting the system load and avoiding
swap and/or OOM events. Here's what (little!) I recall from the
discussion, 9 busy hours later.

In the short run, instead of independently maintaining changes to
configurations to limit parallelism or xz memory usage, etc, we
could develop an optional common include file where such limits
are shared across the community.
I tried PARALLEL_MAKE_nodejs = "-j 1" from local.conf but that didn't work.
It would need to be:

PARALLEL_MAKE_pn-nodejs = "-j 1"

So I watched it run for a while. It compiles with g++ and as at about
0.5GB per thread, which is OK. In the end it does ld taking 4GB and it
tries to do 4 in parallel. And then swapping becomes so heavy the
desktop becomes unresponsive. Like I mentioned before ssh from another
machine allows me to STOP one of them, allowing the remaining to
complete. And then CONT the last one.

I worked around it now, by creating a bbappend for nodejs with only
PARALLEL_MAKE = "-j 2"
If that works, the override above should also work. You do need the "pn-" 
prefix to the recipe name though.

Cheers,

Richard


Re: bitbake controlling memory use

Gmane Admin
 

Hi,
Op 14-04-2021 om 06:59 schreef Richard Purdie:
On Tue, 2021-04-13 at 21:14 -0400, Randy MacLeod wrote:
On 2021-04-11 12:19 p.m., Alexander Kanavin wrote:
make already has -l option for limiting new instances if load average is
too high, so it's only natural to add a RAM limiter too.

   -l [N], --load-average[=N], --max-load[=N]
                               Don't start multiple jobs unless load is
below N.

In any case, patches welcome :)
During today's Yocto technical call (1),
we talked about approaches to limiting the system load and avoiding
swap and/or OOM events. Here's what (little!) I recall from the
discussion, 9 busy hours later.

In the short run, instead of independently maintaining changes to
configurations to limit parallelism or xz memory usage, etc, we
could develop an optional common include file where such limits
are shared across the community.
I tried PARALLEL_MAKE_nodejs = "-j 1" from local.conf but that didn't work.

So I watched it run for a while. It compiles with g++ and as at about 0.5GB per thread, which is OK. In the end it does ld taking 4GB and it tries to do 4 in parallel. And then swapping becomes so heavy the desktop becomes unresponsive. Like I mentioned before ssh from another machine allows me to STOP one of them, allowing the remaining to complete. And then CONT the last one.

I worked around it now, by creating a bbappend for nodejs with only
PARALLEL_MAKE = "-j 2"

In the longer run, changes to how bitbake schedules work may be needed.

Richard says that there was a make/build server idea and maybe even a
patch from a while ago. It may be in one of his poky-contrib branches.
I took a few minutes to look but nothing popped up. A set of keywords to
search for might help me find it.
http://git.yoctoproject.org/cgit.cgi/poky-contrib/commit/?h=rpurdie/wipqueue4&id=d66a327fb6189db5de8bc489859235dcba306237
Cheers,
Richard


Re: [PATCH yocto-autobuilder-helper 1/4] config.json: add "collect-data" template

Randy MacLeod
 

On 2021-04-15 4:48 p.m., Randy MacLeod wrote:
On 2021-04-15 1:55 p.m., Randy MacLeod wrote:
On 2021-04-15 11:55 a.m., Richard Purdie wrote:
On Thu, 2021-04-15 at 11:31 -0400, Sakib Sajal wrote:
On 2021-04-15 9:52 a.m., Richard Purdie wrote:
On Tue, 2021-04-13 at 13:02 -0400, sakib.sajal@windriver.com wrote:
collect-data template can run arbitrary commands/scripts
on a regular basis and logs the output in a file.

See oe-core for more details:
      edb7098e9e buildstats.bbclass: add functionality to collect build system stats

Signed-off-by: Sakib Sajal <sakib.sajal@windriver.com>
Signed-off-by: Randy MacLeod <Randy.MacLeod@windriver.com>
---
   config.json | 7 +++++++
   1 file changed, 7 insertions(+)

diff --git a/config.json b/config.json
index 5bfa240..c43d231 100644
--- a/config.json
+++ b/config.json
@@ -87,6 +87,13 @@
                   "SANITYTARGETS" : "core-image-full-cmdline:do_testimage core-image-sato:do_testimage core-image-sato-sdk:do_testimage"
               }
           },
+     "collect-data" : {
+            "extravars" : [
+                "BB_HEARTBEAT_EVENT = '10'",
+                "BB_LOG_HOST_STAT_ON_INTERVAL = '1'",
+                "BB_LOG_HOST_STAT_CMDS = 'oe-time-dd-test.sh 100'"
+            ]
+        },
Is the template used anywhere? I can't remember if we support nesting templates in which
case this is useful, or not?
We were using it for testing on the YP AB and thought it would be
useful if at some point the monitoring was dropped from the
default config.

I think we can just add it later if needed.
Richard,

I think that the web server for:
  https://autobuilder.yocto.io/pub/non-release/
runs every 30 seconds via cron so if you are happy with
this crude dd trigger once things have soaked in master-next
and we want to gather some data overnight, could you merge to master?


I ran a simpler test with fewer I/O stressors from:
$ stress --hdd N
and have attached a graph with up to 3000 (!) stressors that
we looked at this morning, and another with up to 35 stressors.

It's a crude indicator, but once we get beyond 18-20 I/O stressors
on the system I tested (48 cores, 128 GB RAM, 12 TB magnetic disk)
the dd time becomes erratic.
Running qemu from tmpfs has clearly helped.
Let's gather some data and decide if we want to spend more time
learning how to monitor the system to tune how we are using it.

../Randy
Thanks for fixing the fall-out due to assumptions in other tests.
Is the system back to normal and operational now?


What was the impact of running the heartbeat and the dd test every
10 seconds on the system build performance?

Should we increase the interval to 30, 60, or more seconds?


I spent some time looking at the first bit of data along with
Sakib and Saul from time to time.

General conclusions:

1. It seems like ALL triggers involve oe-selftest being active.

2. xz might be a problem but we're not sure yet.

3. We need more data and tools and time to think about it.


To Do:

1. increase top cmdline length from 512 to  16K

2. sometimes we see:

     Command '['oe-time-dd-test.sh', '100']' timed out after 10.0 seconds

That should not happen so we should understand why and either increase
the time between runs or fix the tooling. This seems to happen under load
so it's hiding the interesting data that we are looking for!

3. tail the cooker console in addition to top. Present that before top.

    It would be nice to have a top equivalent for bitbake.




We did collect some triggered host data last night as seen in:

https://autobuilder.yocto.io/pub/non-release/

https://autobuilder.yocto.io/pub/non-release/20210415-16/

Only one a-full build was run. There were 10 log files produced.

There were 21 times that the dd time exceeded the 5 second limit
out of a total of 21581 (or so!) invocations, and those triggers were
captured by 10 log files:

testresults/beaglebone-alt/2021-04-16--00-19/host_stats_0_top.txt
testresults/qa-extras2/2021-04-15--22-43/host_stats_2_top.txt
testresults/qa-extras2/2021-04-15--22-43/host_stats_4_top.txt
testresults/qa-extras2/2021-04-15--22-43/host_stats_6_top.txt
testresults/qa-extras2/2021-04-15--22-43/host_stats_8_top.txt
testresults/qemuarm/2021-04-16--00-02/host_stats_0_top.txt
testresults/qemuarm/2021-04-16--00-02/host_stats_1_top.txt
testresults/qemumips-alt/2021-04-15--23-36/host_stats_1_top.txt
testresults/qemumips64/2021-04-16--02-46/host_stats_0_top.txt
testresults/qemux86-world/2021-04-16--00-00/host_stats_0_top.txt


We knew that our naming convention needed work: the files are
generically named and differ only by the directory datestamp and,
where the logs contain 'top' output, the _top suffix. We'd like to help
whoever is looking at the data understand the context of the build;
that's not clear to Sakib and me, given that we are still YP AB newbies.
Do you have any suggestions about what the directory structure or file
naming convention should be?

The other thing we need to do is correlate these higher-latency times
with the intermittent problems we've encountered. We can do that
manually, I suppose, via the SWAT team, but ideally there would be an
automated process.


More quick analysis...

The number of times that top ran per log file:

$ grep "^top - " `fd _top autobuilder.yocto.io/` | cut -d":" -f1 | uniq -c | \
    sed -e 's|autobuilder.yocto.io/pub/non-release/20210415-16/||'
      2 testresults/beaglebone-alt/2021-04-16--00-19/host_stats_0_top.txt
      3 testresults/qa-extras2/2021-04-15--22-43/host_stats_2_top.txt
      1 testresults/qa-extras2/2021-04-15--22-43/host_stats_4_top.txt
      2 testresults/qa-extras2/2021-04-15--22-43/host_stats_6_top.txt
      1 testresults/qa-extras2/2021-04-15--22-43/host_stats_8_top.txt
      5 testresults/qemuarm/2021-04-16--00-02/host_stats_0_top.txt
      2 testresults/qemuarm/2021-04-16--00-02/host_stats_1_top.txt
      1 testresults/qemumips-alt/2021-04-15--23-36/host_stats_1_top.txt
      2 testresults/qemumips64/2021-04-16--02-46/host_stats_0_top.txt
      2 testresults/qemux86-world/2021-04-16--00-00/host_stats_0_top.txt
Some of these are duplicates, in that the different steps (_2, _4, _6, _8 above) overlap.

A little shell hacking (with ample help from Stack Overflow!) can
produce one file per top output:

COUNTER=1
for i in `fd _top`; do
    for j in `grep "^top - " $i | cut -c 7-15`; do
        sed -n "/top - ${j}/,/Event Time:/p" $i >> host-stats-$j--$COUNTER.log
        ((COUNTER++))
    done
done

This works because the first line of each top report is similar to:
   top - 15:40:53 up 2 days, 22:17,  1 user,  load average: 0.36, 0.58, 0.85
so cutting out chars 7-15 gives a fairly unique timestamp string for the
filename, and adding the counter makes it unique.

Now we have 21 log files:

$ ls host-stats-2* | wc -l
21

How big are these files, i.e. how many processes/kernel threads were
running when top ran?

$ wc -l host-stats-2* | sort -n
    757 host-stats-22:12:32--17.log
    778 host-stats-22:18:21--8.log
    784 host-stats-21:59:42--5.log
    785 host-stats-21:59:42--12.log
    792 host-stats-22:18:01--7.log
    800 host-stats-22:18:21--14.log
    811 host-stats-22:07:40--13.log
    812 host-stats-22:07:59--6.log
    821 host-stats-21:56:21--3.log
    850 host-stats-21:59:33--11.log
    856 host-stats-21:59:33--4.log
    869 host-stats-22:29:49--16.log
    884 host-stats-22:29:14--9.log
    886 host-stats-22:29:36--15.log
    981 host-stats-21:55:40--10.log
    985 host-stats-22:47:27--2.log
    987 host-stats-22:47:27--21.log
   1124 host-stats-22:37:33--1.log
   1193 host-stats-22:37:26--20.log
   1304 host-stats-23:19:14--19.log
   1321 host-stats-23:18:57--18.log
  19380 total

I noticed that several but not all log files were running xz with args like:

    xz -a --memlimit=50% --threads=56

$ for i in `ls host-stats-2*`; do echo -n $i ": "; grep "xz " $i | wc -l; done
host-stats-21:55:40--10.log : 28
host-stats-21:56:21--3.log : 4
host-stats-21:59:33--11.log : 1
host-stats-21:59:33--4.log : 1
host-stats-21:59:42--12.log : 1
host-stats-21:59:42--5.log : 1
host-stats-22:07:40--13.log : 2
host-stats-22:07:59--6.log : 2
host-stats-22:12:32--17.log : 0
host-stats-22:18:01--7.log : 6
host-stats-22:18:21--14.log : 3
host-stats-22:18:21--8.log : 3
host-stats-22:29:14--9.log : 1
host-stats-22:29:36--15.log : 0
host-stats-22:29:49--16.log : 0
host-stats-22:37:26--20.log : 56
host-stats-22:37:33--1.log : 16
host-stats-22:47:27--21.log : 0
host-stats-22:47:27--2.log : 0
host-stats-23:18:57--18.log : 0
host-stats-23:19:14--19.log : 18

In this case, I don't think it's a problem, but if we had several packages
running xz like that at once, each with a limit of 50% of memory,
that could be a problem. Has anyone looked at the time impact of,
say, reducing the number of threads to 32 and the memory limit to
15% ?
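A rough way to measure that trade-off might look like the sketch below; the
10 MB random sample and the exact option strings are my assumptions, not what
the autobuilder actually compresses (-Q keeps xz's thread-reduction warnings
from turning into a nonzero exit status):

```shell
# Time xz at the current settings vs. a reduced setting on a sample file.
SAMPLE=$(mktemp)
head -c 10000000 /dev/urandom > "$SAMPLE"     # 10 MB stand-in payload
RESULTS=""
for opts in "--threads=56 --memlimit=50%" "--threads=32 --memlimit=15%"; do
    start=$(date +%s%N)
    xz -z -c -Q $opts "$SAMPLE" > /dev/null   # $opts unquoted so the options split
    end=$(date +%s%N)
    RESULTS="$RESULTS$opts: $(( (end - start) / 1000000 )) ms
"
done
rm -f "$SAMPLE"
printf '%s' "$RESULTS"
```

Random data is roughly a worst case for xz, so the absolute numbers only
indicate relative cost of the two settings.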


All of the top output logs seem to show oe-selftest running:

$ for i in host-stats-2*; do grep -H -c "DISPLAY.*oe-selftest " $i ; done
host-stats-21:55:40--10.log:1
host-stats-21:56:21--3.log:1
host-stats-21:59:33--11.log:1
host-stats-21:59:33--4.log:1
host-stats-21:59:42--12.log:1
host-stats-21:59:42--5.log:1
host-stats-22:07:40--13.log:1
host-stats-22:07:59--6.log:1
host-stats-22:12:32--17.log:1
host-stats-22:18:01--7.log:1
host-stats-22:18:21--14.log:1
host-stats-22:18:21--8.log:1
host-stats-22:29:14--9.log:1
host-stats-22:29:36--15.log:1
host-stats-22:29:49--16.log:1
host-stats-22:37:26--20.log:1
host-stats-22:37:33--1.log:1
host-stats-22:47:27--21.log:1
host-stats-22:47:27--2.log:1
host-stats-23:18:57--18.log:2
host-stats-23:19:14--19.log:2
$ for i in host-stats-2*; do grep -H -c "DISPLAY.*oe-selftest " $i ; done   | wc -l
21

Yikes, that seems like more than just random chance.


The logs do not seem to be duplicates: there isn't a single cluster
of identical timestamps, although some are close together and likely
come from the same file. That said, they certainly don't seem to be
spread out uniformly over time, which matches what we all experience:
system response time is fine for much of the time, and poor for quite
a while every now and then.

$ for i in host-stats-2*; do echo -n $i ": "; head -1 $i | cut -c -15; done
host-stats-21:55:40--10.log : top - 21:55:40
host-stats-21:56:21--3.log   : top - 21:56:21
host-stats-21:59:33--11.log : top - 21:59:33
host-stats-21:59:33--4.log   : top - 21:59:33
host-stats-21:59:42--12.log : top - 21:59:42
host-stats-21:59:42--5.log   : top - 21:59:42
host-stats-22:07:40--13.log : top - 22:07:40
host-stats-22:07:59--6.log   : top - 22:07:59
host-stats-22:12:32--17.log : top - 22:12:32
host-stats-22:18:01--7.log   : top - 22:18:01
host-stats-22:18:21--14.log : top - 22:18:21
host-stats-22:18:21--8.log   : top - 22:18:21
host-stats-22:29:14--9.log   : top - 22:29:14
host-stats-22:29:36--15.log : top - 22:29:36
host-stats-22:29:49--16.log : top - 22:29:49
host-stats-22:37:26--20.log : top - 22:37:26
host-stats-22:37:33--1.log   : top - 22:37:33
host-stats-22:47:27--21.log : top - 22:47:27
host-stats-22:47:27--2.log   : top - 22:47:27
host-stats-23:18:57--18.log : top - 23:18:57
host-stats-23:19:14--19.log : top - 23:19:14
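A little more awk turns those filenames into the gaps (in seconds) between
consecutive samples; this sketch assumes the host-stats-* filenames produced
above:

```shell
# Pull the HH:MM:SS field out of each filename, convert to seconds
# since midnight, and print the difference between consecutive samples.
ls host-stats-2* 2>/dev/null | cut -d- -f3 | sort -u |
    awk -F: '{ t = $1*3600 + $2*60 + $3; if (NR > 1) print t - prev; prev = t }'
```

A run of small gaps would confirm the clustering suspicion.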


All for now.

../Randy





Cheers,

Richard
The template is not used anywhere yet; the initial patchset enables the
data collection by default.

I have left the template in case the data collection is removed from the
defaults and needs to be used on a case-by-case basis.

I am not entirely sure whether nesting templates works. I have not seen
any examples of it, nor did I try it myself. If nesting does work, the
template should be useful.
I had a quick look at the code and sadly it doesn't appear I implemented
nesting, so this wouldn't be that useful as things stand.

Cheers,

Richard




--
# Randy MacLeod
# Wind River Linux


#yocto #bitbake #gatesgarth #qca9377

jovanalukovic0@...
 

Hi,
I am trying to include the qca9377 module in my image (yocto-gatesgarth). I am building with the command bitbake imx-image-full, and I added two lines to my local.conf file: MACHINE_FEATURES += " qca9377"
IMAGE_INSTALL_append += " kernel-module-qca9377", but I constantly get the same error:
ERROR: kernel-module-qca9377-3.1-r0 do_compile: oe_runmake failed
ERROR: kernel-module-qca9377-3.1-r0 do_compile: Execution of '/home/jovana/Projects/imx-yocto-bsp-gates/build-xwayland/tmp/work/imx7ulpevk-poky-linux-gnueabi/kernel-module-qca9377/3.1-r0/temp/run.do_compile.25256' failed with exit code 1:
make -C /home/jovana/Projects/imx-yocto-bsp-gates/build-xwayland/tmp/work-shared/imx7ulpevk/kernel-source M=/home/jovana/Projects/imx-yocto-bsp-gates/build-xwayland/tmp/work/imx7ulpevk-poky-linux-gnueabi/kernel-module-qca9377/3.1-r0/git modules WLAN_ROOT=/home/jovana/Projects/imx-yocto-bsp-gates/build-xwayland/tmp/work/imx7ulpevk-poky-linux-gnueabi/kernel-module-qca9377/3.1-r0/git MODNAME?=wlan CONFIG_QCA_WIFI_ISOC=0 CONFIG_QCA_WIFI_2_0=1 CONFIG_QCA_CLD_WLAN=m WLAN_OPEN_SOURCE=1  
make[1]: Entering directory '/home/jovana/Projects/imx-yocto-bsp-gates/build-xwayland/tmp/work-shared/imx7ulpevk/kernel-source'
make[2]: Entering directory '/home/jovana/Projects/imx-yocto-bsp-gates/build-xwayland/tmp/work-shared/imx7ulpevk/kernel-build-artifacts'
and this is just part of the whole group of errors. There are a lot of errors, for example:
cc1: some warnings being treated as errors
| /home/jovana/Projects/imx-yocto-bsp-gates/build-xwayland/tmp/work-shared/imx7ulpevk/kernel-source/scripts/Makefile.build:279: recipe for target '/home/jovana/Projects/imx-yocto-bsp-gates/build-xwayland/tmp/work/imx7ulpevk-poky-linux-gnueabi/kernel-module-qca9377/3.1-r0/git/CORE/HDD/src/wlan_hdd_oemdata.o' failed
| make[3]: *** [/home/jovana/Projects/imx-yocto-bsp-gates/build-xwayland/tmp/work/imx7ulpevk-poky-linux-gnueabi/kernel-module-qca9377/3.1-r0/git/CORE/HDD/src/wlan_hdd_oemdata.o] Error 1
| /home/jovana/Projects/imx-yocto-bsp-gates/build-xwayland/tmp/work-shared/imx7ulpevk/kernel-source/scripts/Makefile.build:279: recipe for target '/home/jovana/Projects/imx-yocto-bsp-gates/build-xwayland/tmp/work/imx7ulpevk-poky-linux-gnueabi/kernel-module-qca9377/3.1-r0/git/CORE/HDD/src/wlan_hdd_early_suspend.o' failed.
Do you have any ideas what I can do? Am I missing something in my build configuration?

Best regards and thanks a lot!


#bitbake Can't use 'bitbake -g <image-name> -u taskexp'

keydi
 

Hi,
I had to ask a web search engine about the usage of the Task Dependency Explorer (taskexp), as I didn't find anything in the Yocto documentation, so the way I am trying to use taskexp might not be right.

When I start the Task Dependency Explorer, it does not work: the invocation fails in __init__.py, where require_version reports that the Gtk namespace is not available.
Which build area is Gtk missing from: the image, the distribution, or somewhere else?
What should I do to resolve this error?

  File "/mnt/..../meta/poky/bitbake/lib/bb/ui/taskexp.py", line 22, in <module>
    gi.require_version('Gtk', '3.0')
  File "/usr/lib/python3/dist-packages/gi/__init__.py", line 130, in require_version
    raise ValueError('Namespace %s not available' % namespace)
ValueError: Namespace Gtk not available

Best Regards
keydi
 


Re: [PATCH yocto-autobuilder-helper 2/4] config.json: collect data by default

Richard Purdie
 

On Fri, 2021-04-16 at 09:28 +0100, Richard Purdie via lists.yoctoproject.org wrote:
On Tue, 2021-04-13 at 13:02 -0400, sakib.sajal@windriver.com wrote:
add the variables required to collect data to "defaults"
so that data is collected on all builds.

Signed-off-by: Sakib Sajal <sakib.sajal@windriver.com>
Signed-off-by: Randy MacLeod <Randy.MacLeod@windriver.com>
---
 config.json | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/config.json b/config.json
index c43d231..cd82047 100644
--- a/config.json
+++ b/config.json
@@ -55,7 +55,10 @@
             "SDK_INCLUDE_TOOLCHAIN = '1'",
             "BB_DISKMON_DIRS = 'STOPTASKS,${TMPDIR},1G,100K STOPTASKS,${DL_DIR},1G STOPTASKS,${SSTATE_DIR},1G STOPTASKS,/tmp,100M,100K ABORT,${TMPDIR},100M,1K ABORT,${DL_DIR},100M ABORT,${SSTATE_DIR},100M ABORT,/tmp,10M,1K'",
             "BB_HASHSERVE = 'typhoon.yocto.io:8686'",
- "RUNQEMU_TMPFS_DIR = '/home/pokybuild/tmp'"
+ "RUNQEMU_TMPFS_DIR = '/home/pokybuild/tmp'",
+ "BB_HEARTBEAT_EVENT = '10'",
+ "BB_LOG_HOST_STAT_ON_INTERVAL = '1'",
+ "BB_LOG_HOST_STAT_CMDS = 'oe-time-dd-test.sh 100'"
         ]
     },
     "templates" : {
I merged 2-4 of this series; unfortunately, this resulted in a few issues overnight:

https://autobuilder.yoctoproject.org/typhoon/#/builders/85/builds/1393

which is due to the non-executable script, for which there is a patch; it just
wasn't in master due to the release. I've fixed that by merging the patches.

The bigger issue is the performance metrics which this broke:

https://autobuilder.yoctoproject.org/typhoon/#/builders/91/builds/4427
https://autobuilder.yoctoproject.org/typhoon/#/builders/92/builds/4453

We're going to need to disable these events on the performance metrics
targets...
There is also another issue: BB_HEARTBEAT_EVENT defaults to 1, and the change to 10
changes the default timings for buildstats and other pieces of code. In particular,
I suspect this is breaking:

https://autobuilder.yoctoproject.org/typhoon/#/builders/80/builds/1993

and again in:

https://autobuilder.yoctoproject.org/typhoon/#/builders/79/builds/2014

in the disk monitoring selftest...

Cheers,

Richard


Re: [PATCH yocto-autobuilder-helper 2/4] config.json: collect data by default

Richard Purdie
 

On Tue, 2021-04-13 at 13:02 -0400, sakib.sajal@windriver.com wrote:
add the variables required to collect data to "defaults"
so that data is collected on all builds.

Signed-off-by: Sakib Sajal <sakib.sajal@windriver.com>
Signed-off-by: Randy MacLeod <Randy.MacLeod@windriver.com>
---
 config.json | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/config.json b/config.json
index c43d231..cd82047 100644
--- a/config.json
+++ b/config.json
@@ -55,7 +55,10 @@
             "SDK_INCLUDE_TOOLCHAIN = '1'",
             "BB_DISKMON_DIRS = 'STOPTASKS,${TMPDIR},1G,100K STOPTASKS,${DL_DIR},1G STOPTASKS,${SSTATE_DIR},1G STOPTASKS,/tmp,100M,100K ABORT,${TMPDIR},100M,1K ABORT,${DL_DIR},100M ABORT,${SSTATE_DIR},100M ABORT,/tmp,10M,1K'",
             "BB_HASHSERVE = 'typhoon.yocto.io:8686'",
- "RUNQEMU_TMPFS_DIR = '/home/pokybuild/tmp'"
+ "RUNQEMU_TMPFS_DIR = '/home/pokybuild/tmp'",
+ "BB_HEARTBEAT_EVENT = '10'",
+ "BB_LOG_HOST_STAT_ON_INTERVAL = '1'",
+ "BB_LOG_HOST_STAT_CMDS = 'oe-time-dd-test.sh 100'"
         ]
     },
     "templates" : {
I merged 2-4 of this series; unfortunately, this resulted in a few issues overnight:

https://autobuilder.yoctoproject.org/typhoon/#/builders/85/builds/1393

which is due to the non-executable script, for which there is a patch; it just
wasn't in master due to the release. I've fixed that by merging the patches.

The bigger issue is the performance metrics which this broke:

https://autobuilder.yoctoproject.org/typhoon/#/builders/91/builds/4427
https://autobuilder.yoctoproject.org/typhoon/#/builders/92/builds/4453

We're going to need to disable these events on the performance metrics
targets...

Cheers,

Richard


Re: Building image from Root

Mike Looijmans
 

You can use both partitions by changing the directories for downloads and the sstate-cache. Put these in your local.conf:

SSTATE_DIR = "/opt/sstate-cache"
DL_DIR = "/opt/downloads"

(Change "/opt" to some other location; make sure you have write access to those directories.)

Move the contents of the build/sstate-cache and downloads to the new directories.
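The move itself is just a couple of mv commands; the sketch below uses temp
directories and stand-in files so it is safe to run anywhere, with your real
build directory and the new location substituted in practice:

```shell
# Toy version of the relocation; substitute the real paths for real use.
DEST=$(mktemp -d)     # stands in for e.g. /opt
BUILD=$(mktemp -d)    # stands in for your existing build directory
mkdir -p "$BUILD/sstate-cache" "$BUILD/downloads"
touch "$BUILD/sstate-cache/ab.siginfo" "$BUILD/downloads/pkg.tar.gz"  # stand-in files
mkdir -p "$DEST/sstate-cache" "$DEST/downloads"
mv "$BUILD/sstate-cache/"* "$DEST/sstate-cache/"
mv "$BUILD/downloads/"* "$DEST/downloads/"
ls "$DEST/downloads"
```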

You really should consider adding extra storage; it's the easiest way out. OpenEmbedded/Yocto is insensitive to disk speed, so if you have some old rotating disk in the attic, put it in your PC.



Met vriendelijke groet / kind regards,

Mike Looijmans
System Expert


TOPIC Embedded Products B.V.
Materiaalweg 4, 5681 RJ Best
The Netherlands

T: +31 (0) 499 33 69 69
E: mike.looijmans@topicproducts.com
W: www.topic.nl

Please consider the environment before printing this e-mail

On 15-04-2021 17:34, Murugesh M via lists.yoctoproject.org wrote:
Hi

I am new to Yocto project and have little experience in Linux.

On my computer, the root partition has 65 GB of free space and home has 45 GB free.

Shall I put poky in the root folder and do the complete Yocto image build process from the root directory itself?

Please suggest.

Thanks.

--
Mike Looijmans


Re: Building image from Root

Khem Raj
 

try adding

INHERIT += "rm_work"

to local.conf and see if that helps

On Thu, Apr 15, 2021 at 11:11 PM Murugesh M <murugesh.pappu@gmail.com> wrote:

I had proceeded with the image build process in my home directory and got stuck with low disk space.
Now the build is stopped almost at the last stage.

Please give me any suggestions to get out of this problem.


Re: Building image from Root

Murugesh M
 

I had proceeded with the image build process in my home directory and got stuck with low disk space.
Now the build is stopped almost at the last stage.

Please give me any suggestions to get out of this problem.
