
Re: [meta-rockchip] dunfell: u-boot build issue when added patch to u-boot

Trevor Woerner
 

On Thu 2021-11-11 @ 08:44:48 AM, Marek Belisko wrote:
Hello,

I'm trying to integrate mender for tinker-board-s using the meta-rockchip
dunfell branch. When I add meta-mender, which adds a few patches to
u-boot, I see u-boot compilation issues like:

Error: SPL image is too large (size 0x11000 than 0x8000)
| Error: Bad parameters for image type

The error itself is clear to me, but the patches mender adds mostly
touch the environment, so I'm not sure how they can increase the SPL
size. Any ideas on how to resolve this issue?
Does the following help?
https://github.com/mendersoftware/meta-mender-community/tree/dunfell/meta-mender-rockchip


Re: [meta-security][hardknott][PATCH v2] sssd: re-package to fix QA issues

Armin Kuster
 

merged.

On 11/16/21 10:28 AM, Jeremy A. Puhlman wrote:
All files in ${libdir} are packaged into the sssd package, including the
.so symlink files. This causes QA issues:

| ERROR: QA Issue: sssd rdepends on dbus-dev [dev-deps]
| ERROR: QA Issue: sssd rdepends on ding-libs-dev [dev-deps]

So re-package sssd so that the .so symlink files and .pc files go into
sssd-dev, where they belong.

File ${libdir}/libsss_sudo.so is not a symlink but was packaged into
sssd-dev too, which causes another QA issue:

| ERROR: sssd-2.5.2-r0 do_package_qa: QA Issue: -dev package sssd-dev contains non-symlink .so '/usr/lib/libsss_sudo.so' [dev-elf]

So create a new sub-package, libsss-sudo, to hold libsss_sudo.so, and
make sssd rdepend on it.

JP: Updated for version differences.

Signed-off-by: Kai Kang <kai.kang@...>
Signed-off-by: Armin Kuster <akuster808@...>
(cherry picked from commit e81c15f851ca5396c78c8737967ee38db0ebe0cd)
Signed-off-by: Jeremy A. Puhlman <jpuhlman@...>
---
recipes-security/sssd/sssd_1.16.5.bb | 21 ++++++++++++++-------
1 file changed, 14 insertions(+), 7 deletions(-)

diff --git a/recipes-security/sssd/sssd_1.16.5.bb b/recipes-security/sssd/sssd_1.16.5.bb
index 02d0837..f13fc49 100644
--- a/recipes-security/sssd/sssd_1.16.5.bb
+++ b/recipes-security/sssd/sssd_1.16.5.bb
@@ -120,10 +120,17 @@ SYSTEMD_SERVICE_${PN} = " \
"
SYSTEMD_AUTO_ENABLE = "disable"

-FILES_${PN} += "${libdir} ${datadir} ${base_libdir}/security/pam_sss.so"
-FILES_${PN}-dev = " ${includedir}/* ${libdir}/*la ${libdir}/*/*la"
-
-# The package contains symlinks that trip up insane
-INSANE_SKIP_${PN} = "dev-so"
-
-RDEPENDS_${PN} = "bind dbus libldb libpam"
+PACKAGES =+ "libsss-sudo libsss-autofs"
+ALLOW_EMPTY_libsss-sudo = "1"
+ALLOW_EMPTY_libsss-autofs = "1"
+
+FILES_${PN}-dev += "${libdir}/sssd/modules/lib*.so"
+FILES_${PN} += "${base_libdir}/security/pam_sss*.so \
+ ${datadir}/dbus-1/system-services/*.service \
+ ${libdir}/krb5/* \
+ ${libdir}/ldb/* \
+ "
+FILES_libsss-autofs = "${libdir}/sssd/modules/libsss_autofs.so"
+FILES_libsss-sudo = "${libdir}/libsss_sudo.so"
+
+RDEPENDS_${PN} = "bind dbus libldb libpam libsss-sudo libsss-autofs"


How to create connman_1.40.bbappend to enable and to build connman with iwd?

JH
 

Hi,

Given the high profile of iwd and the advocacy for connman with iwd, I
believe someone has already built connman and iwd with Yocto.
Surprisingly, I could not even find an iwd recipe in
https://github.com/openembedded/openembedded/tree/master/recipes, nor
could I find connman documentation or instructions for replacing
wpa_supplicant with iwd. What could I be missing here?
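
From what I can tell, something like the following bbappend might be
the way to do it. This is an untested sketch: it assumes the connman
recipe in your release exposes "wifi" and "iwd" PACKAGECONFIG options
(recent OE-core releases do), and it uses the new Honister override
syntax:

# connman_%.bbappend (hypothetical, untested)
# Enable connman's iwd plugin and drop the wpa_supplicant-backed "wifi"
# option so wpa_supplicant is no longer pulled in at runtime.
PACKAGECONFIG:append = " iwd"
PACKAGECONFIG:remove = "wifi"

You would also need an iwd recipe in one of your layers (I believe
meta-openembedded's meta-networking carries one) and to install iwd in
the image.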

Appreciate your kind advice.

Thank you.

Kind regards,

- jh


On 11/26/21, JH via lists.yoctoproject.org
<jupiter.hce=gmail.com@...> wrote:
Hi,

Please correct me if I'm wrong, but it seems to me that connman is
moving toward dropping wpa_supplicant in favour of iwd, yet Honister
still configures connman with wpa_supplicant by default. I'd appreciate
your advice:

- Is connman with iwd stable enough?

- How can I create a connman_1.40.bbappend to replace wpa_supplicant
with iwd in the configuration?

- Where is the documentation for configuring and building connman with
iwd, and the operational guidance for running connman with iwd? Can I
use the same connman dbus APIs, or are there any dbus API changes when
running connman with iwd?

My apologies for the FAQs.

Thank you very much.

Kind regards,

- JH

--
"A man can fail many times, but he isn't a failure until he begins to
blame somebody else."
-- John Burroughs


Task is not re-triggered even if variables in vardeps change #bitbake #yocto

Mohannad Oraby
 

Hi guys, 

I have an issue related to vardeps where I am making a task dependent on some variables. 

I have created some new variables, e.g. NEW_VARIABLE, and added them to BB_ENV_EXTRAWHITE. In some recipes I wrote my own implementation of tasks that depend on these new variables, and for this dependency to work I added e.g. do_install[vardeps] = "NEW_VARIABLE". I now expect that every time I change NEW_VARIABLE and run "bitbake recipename", the do_install task should run again. I checked the task signature and I see NEW_VARIABLE there.

Let's assume the variable has two possible values. When I set it for the first time to "value1", i.e. on the first build, everything works and there is no problem. When I change it to the other value, "value2", not used before, and build the recipe again, do_install also runs and no problem occurs. However, if I set the variable back to the old value "value1" and execute "bitbake recipename" again, do_install is not re-triggered, which leaves wrong/old data in the work directory and in the produced image.

I tried setting BB_DONT_CACHE, as I understood from an old topic that the problem might be that the recipe needs to be re-parsed, but this did not work at all.

I do not want the tasks to run on every build, i.e. do_install[nostamp] = "1"; I just want do_install to run again every time I change NEW_VARIABLE.

Is what I am expecting normal behavior, or does Yocto not work this way?
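
For reference, here is a minimal sketch of my setup (the variable and
file names are simplified):

# environment, before running bitbake:
#   export NEW_VARIABLE="value1"
#   export BB_ENV_EXTRAWHITE="$BB_ENV_EXTRAWHITE NEW_VARIABLE"

# in the recipe:
do_install[vardeps] = "NEW_VARIABLE"
do_install() {
    install -d ${D}${sysconfdir}
    echo "${NEW_VARIABLE}" > ${D}${sysconfdir}/new-variable.conf
}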

Regards
Mohannad


Re: [yocto-infrastructure] push.yoctoproject.org downtime Wednesday November 24th

Alexander Kanavin
 

Thanks!
A few repositories became uncategorized and clutter the top of the page.

Alex


On Wed, 24 Nov 2021 at 23:18, Michael Halstead <mhalstead@...> wrote:
The migration is complete. git.yoctoproject.org now serves from a pair of load-balanced mirrors. push.yoctoproject.org is on a new dedicated secure host.

If git push hangs, please double-check that the git remote is set to push.yoctoproject.org. We switched to this domain years ago but still allowed pushing to git.yoctoproject.org for convenience. That no longer works. When updating repository remotes, make sure to push once from the command line so the new hostname is added to your known_hosts file. Scripts will hang at the ssh prompt until the new hostname is added.
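
For example (the repository path here is just an illustration):

git remote set-url --push origin ssh://git@push.yoctoproject.org/poky
git push    # run once interactively to accept the new host key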

The mirrors pull in new changes rapidly but there is a delay between pushing and those changes appearing on the mirrors. Any scripts that push and then reference the commits will need a delay added to allow for the sync. 

Please email or reach out in IRC if you encounter any unexpected issues.



On Thu, Nov 18, 2021 at 2:38 PM Michael Halstead via lists.yoctoproject.org <mhalstead=linuxfoundation.org@...> wrote:
push.yoctoproject.org is going offline for a short downtime between 1700 UTC and 2200 UTC while we move it to a new server. During the downtime pushes may timeout or be rejected. If your push is rejected try again in 15 minutes. I will announce the start and end of the downtime in #yocto on Libera.Chat.

git.yoctoproject.org will be moved to a new pair of high availability servers during the same window. No downtime is planned for this move. 




--
Michael Halstead
Linux Foundation / Yocto Project
Systems Operations Engineer




Minutes: Yocto Project Weekly Triage Meeting 11/25/2021

Trevor Gamblin
 

Wiki: https://wiki.yoctoproject.org/wiki/Bug_Triage

Attendees: Alexandre, Bruce, Michael, Stephen, Randy, Richard, Tim, Trevor

ARs:

N/A

Notes:

No meeting next week - YP Summit

Medium+ 3.5 Unassigned Enhancements/Bugs: 73 (No change)

Medium+ 3.99 Unassigned Enhancements/Bugs: 38 (Last week 39)

AB Bugs: 61 (No change)


[PATCH yocto-autobuilder-helper] publish-artefacts: publish meta-arm/generic-arm64 binaries

Ross Burton <ross@...>
 

Publish the generic-arm64 binaries in a dedicated meta-arm/ directory so
it is clear this isn't from the core layers.

Signed-off-by: Ross Burton <ross.burton@...>
---
scripts/publish-artefacts | 5 +++++
1 file changed, 5 insertions(+)

diff --git a/scripts/publish-artefacts b/scripts/publish-artefacts
index 6ed922a..9795381 100755
--- a/scripts/publish-artefacts
+++ b/scripts/publish-artefacts
@@ -187,6 +187,11 @@ case "$target" in
sha256sums $TMPDIR/deploy/images/genericx86
cp -R --no-dereference --preserve=links $TMPDIR/deploy/images/genericx86/*genericx86* $DEST/machines/genericx86-alt
;;
+ "meta-arm")
+ mkdir -p $DEST/machines/meta-arm/generic-arm64
+ sha256sums $TMPDIR/deploy/images/generic-arm64
+ cp -R --no-dereference --preserve=links $TMPDIR/deploy/images/generic-arm64/*generic-arm64* $DEST/machines/meta-arm/generic-arm64
+ ;;
"poky-tiny")
mkdir -p $DEST/machines/qemu/qemu-tiny
sha256sums $TMPDIR/deploy/images/qemux86
--
2.25.1


Re: Problem installing python package from a wheel #bitbake #python

Nicolas Jeker
 

On Wed, 2021-11-24 at 09:55 -0800, Tim Orling wrote:


On Mon, Nov 22, 2021 at 2:54 PM David Babich <ddbabich@...>
wrote:
I made it a little further by adding --no-cache-dir to the pip3
install command. That got rid of the warning about not being able
to access .cache/pip. However, I still have the error:
| ERROR: torch-1.10.0-cp36-cp36m-linux_aarch64.whl is not a
supported wheel on this platform.

Installing third-party wheels is not something we are likely to ever
support in Yocto Project/OpenEmbedded recipes.

Are you trying to install using pip3 on target?
Note that many factors make it tricky for Python wheels with
binary content (C or Rust extensions). The Python 3 version must
match, as must the libraries the wheel requires.

The wheel you listed was built for Python 3.6 (cp36) and ARM v8
(aarch64).  The error is what you would see if you were trying to
install an aarch64 wheel on an x86-64 target, but other reasons could
lead to that error. We don't know what version of glibc, gcc, etc.
was used and whether those are going to be compatible.
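
One way to check, assuming a reasonably recent pip on the target (note
that "pip debug" is officially marked as unstable):

python3 -m pip debug --verbose | grep -i linux_aarch64
# prints the platform tags pip accepts on this interpreter; the wheel
# above can only install if cp36-cp36m-linux_aarch64 is among them
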
There's a section about building from source with a patch in the
article he linked in his first message. I don't know much about
Python in Yocto, but maybe doing that in a recipe could work?

Also, when asking questions, please tell us which release of Yocto
Project you are using, what the MACHINE you are building for is,
which layers you are using (and at what release) and other
information to help us help you.

Cheers,
--Tim




Re: cross-localedef file not found in do_rootfs #yocto #zeus

Bel Hadj Salem Talel <bhstalel@...>
 

Thanks for the reply,

I tried adding IMAGE_LINGUAS = " "; it passes, but do_image_cpio produces nothing (no image.cpio.gz is generated).
I don't know why.


Re: Selectively disable uninative for a recipe

Richard Purdie
 

On Wed, 2021-11-24 at 19:21 -0500, Mohammed Billoo wrote:
Hi,

I need to add TI MCU firmware as part of an OE/Yocto build for a
project that I am working on. Basically, the firmware needs to be
built during the overall image build and be stored in the RFS. My
strategy to build the firmware was to set up the TI toolchain as part
of the image build. Since I believe it to be unnecessary (if not
impossible) to build the toolchain components from source, I have the
necessary pre-built components (provided by TI) uploaded somewhere as
tarballs.

In determining the best way to use a pre-built toolchain, I just
copied the gcc-linaro-baremetal-arm-native recipe. But, when I go to
bake my recipe for the toolchain, I get the following error:

ERROR: ti-bios-1.0-r0 do_populate_sysroot: Error executing a python
function in exec_python_func() autogenerated:

The stack trace of python calls that resulted in this exception/failure was:
File: 'exec_python_func() autogenerated', lineno: 2, function: <module>
0001:
*** 0002:uninative_changeinterp(d)
0003:
File: '/home/mbilloo/yocto-builds/norbert/yocto/build-doris/../layers/poky/meta/classes/uninative.bbclass',
lineno: 170, function: uninative_changeinterp
0166: continue
0167: if not elf.isDynamic():
0168: continue
0169:
*** 0170: subprocess.check_output(("patchelf-uninative",
"--set-interpreter", d.getVar("UNINATIVE_LOADER"), f),
stderr=subprocess.STDOUT)
0171:}
File: '/usr/lib64/python3.8/subprocess.py', lineno: 411, function: check_output
0407: # Explicitly passing input=None was previously
equivalent to passing an
0408: # empty string. That is maintained here for
backwards compatibility.
0409: kwargs['input'] = '' if
kwargs.get('universal_newlines', False) else b''
0410:
*** 0411: return run(*popenargs, stdout=PIPE, timeout=timeout, check=True,
0412: **kwargs).stdout
0413:
0414:
0415:class CompletedProcess(object):
File: '/usr/lib64/python3.8/subprocess.py', lineno: 512, function: run
0508: # We don't call process.wait() as .__exit__ does
that for us.
0509: raise
0510: retcode = process.poll()
0511: if check and retcode:
*** 0512: raise CalledProcessError(retcode, process.args,
0513: output=stdout, stderr=stderr)
0514: return CompletedProcess(process.args, retcode, stdout, stderr)
0515:
0516:
Exception: subprocess.CalledProcessError: Command
'('patchelf-uninative', '--set-interpreter',
'/home/mbilloo/yocto-builds/norbert/yocto/build-doris/tmp/sysroots-uninative/x86_64-linux/lib/ld-linux-x86-64.so.2',
'/home/mbilloo/yocto-builds/norbert/yocto/build-doris/tmp/work/x86_64-linux/ti-bios/1.0-r0/sstate-build-populate_sysroot/recipe-sysroot-native/usr/share/bios_6_73_01_01/packages/ti/platforms/sim6xxx/Solaris/kelvin')'
returned non-zero exit status 1.

Subprocess output:
patchelf: cannot find section '.gnu.version_r'

It looks like uninative is ultimately being inherited, and some steps
are taken to eliminate host dependencies in the toolchain binaries. Is
there a way to disable this inheritance only for this recipe? (If I
globally disable uninative in my conf file, the recipe bakes just fine,
but obviously I can't do that.) Or is there a better way of
accomplishing my goal?
You could define a new uninative_changeinterp() function in your recipe to
override the core one and make it do nothing. Not a particularly elegant
solution, but it should work...
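
Something like this in the recipe might do it (an untested sketch):

# Override the python function inherited from uninative.bbclass with a
# no-op so the prebuilt binaries are left untouched:
python uninative_changeinterp() {
    pass
}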

Cheers,

Richard


Re: [docs] [PATCH yocto-autobuilder-helper] scripts/run-docs-build: remove gatesgarth

Nicolas Dechesne
 



On Wed, Nov 24, 2021 at 7:50 PM Michael Opdenacker <michael.opdenacker@...> wrote:
Hi Quentin,

On 11/24/21 7:47 PM, Quentin Schulz wrote:
> Hi Michael, Richard,
>
> On Wed, Nov 24, 2021 at 06:10:56PM +0000, Richard Purdie wrote:
>> On Wed, 2021-11-24 at 18:16 +0100, Michael Opdenacker wrote:
>>> Together with the corresponding Bitbake version, which are no
>>> longer supported.
>>>
>>> Signed-off-by: Michael Opdenacker <michael.opdenacker@...>
>>> ---
>>>  scripts/run-docs-build | 4 ++--
>>>  1 file changed, 2 insertions(+), 2 deletions(-)
>>>
>>> diff --git a/scripts/run-docs-build b/scripts/run-docs-build
>>> index 3db4a97..5e1d649 100755
>>> --- a/scripts/run-docs-build
>>> +++ b/scripts/run-docs-build
>>> @@ -35,7 +35,7 @@ mkdir $outputdir/bitbake/next
>>>  cp -r ./_build/final/* $outputdir/bitbake/next
>>> 
>>>  # stable branches
>>> -for branch in 1.46 1.48 1.50 1.52; do
>>> +for branch in 1.46 1.50 1.52; do
>>>      git checkout $branch
>>>      make clean
>>>      make publish
>>> @@ -68,7 +68,7 @@ mkdir $outputdir/next
>>>  cp -r ./_build/final/* $outputdir/next
>>> 
>>>  # stable branches
>>> -for branch in dunfell gatesgarth hardknott honister; do
>>> +for branch in dunfell hardknott honister; do
>>>      cd $ypdocs
>>>      git checkout $branch
>>>      make clean
>> I'm a bit torn on this. They are no longer officially supported releases now but
>> it may make sense to rebuild all the sphinx docs in this script rather than some
>> subset?
>>
> I think we want to make sure we have all docs up-to-date, even for the
> branches that aren't maintained anymore. Especially since it's not
> taking a lot of CPU time to build them, it's fine IMO. We could always
> make minor changes to old docs. E.g. the releases.rst might get updates
> until we figure something out.

Thanks for casting your vote. It makes sense. I'll send another patch
with this decision in the comments.

I agree with Quentin here. Until we have a better mechanism (to rebuild only modified branches, not all of them each time, ...), I think we should continue to build them all.
 
Cheers
Michael.

--
Michael Opdenacker, Bootlin
Embedded Linux and Kernel engineering
https://bootlin.com





Selectively disable uninative for a recipe

Mohammed Billoo
 

Hi,

I need to add TI MCU firmware as part of an OE/Yocto build for a
project that I am working on. Basically, the firmware needs to be
built during the overall image build and be stored in the RFS. My
strategy to build the firmware was to set up the TI toolchain as part
of the image build. Since I believe it to be unnecessary (if not
impossible) to build the toolchain components from source, I have the
necessary pre-built components (provided by TI) uploaded somewhere as
tarballs.

In determining the best way to use a pre-built toolchain, I just
copied the gcc-linaro-baremetal-arm-native recipe. But, when I go to
bake my recipe for the toolchain, I get the following error:

ERROR: ti-bios-1.0-r0 do_populate_sysroot: Error executing a python
function in exec_python_func() autogenerated:

The stack trace of python calls that resulted in this exception/failure was:
File: 'exec_python_func() autogenerated', lineno: 2, function: <module>
0001:
*** 0002:uninative_changeinterp(d)
0003:
File: '/home/mbilloo/yocto-builds/norbert/yocto/build-doris/../layers/poky/meta/classes/uninative.bbclass',
lineno: 170, function: uninative_changeinterp
0166: continue
0167: if not elf.isDynamic():
0168: continue
0169:
*** 0170: subprocess.check_output(("patchelf-uninative",
"--set-interpreter", d.getVar("UNINATIVE_LOADER"), f),
stderr=subprocess.STDOUT)
0171:}
File: '/usr/lib64/python3.8/subprocess.py', lineno: 411, function: check_output
0407: # Explicitly passing input=None was previously
equivalent to passing an
0408: # empty string. That is maintained here for
backwards compatibility.
0409: kwargs['input'] = '' if
kwargs.get('universal_newlines', False) else b''
0410:
*** 0411: return run(*popenargs, stdout=PIPE, timeout=timeout, check=True,
0412: **kwargs).stdout
0413:
0414:
0415:class CompletedProcess(object):
File: '/usr/lib64/python3.8/subprocess.py', lineno: 512, function: run
0508: # We don't call process.wait() as .__exit__ does
that for us.
0509: raise
0510: retcode = process.poll()
0511: if check and retcode:
*** 0512: raise CalledProcessError(retcode, process.args,
0513: output=stdout, stderr=stderr)
0514: return CompletedProcess(process.args, retcode, stdout, stderr)
0515:
0516:
Exception: subprocess.CalledProcessError: Command
'('patchelf-uninative', '--set-interpreter',
'/home/mbilloo/yocto-builds/norbert/yocto/build-doris/tmp/sysroots-uninative/x86_64-linux/lib/ld-linux-x86-64.so.2',
'/home/mbilloo/yocto-builds/norbert/yocto/build-doris/tmp/work/x86_64-linux/ti-bios/1.0-r0/sstate-build-populate_sysroot/recipe-sysroot-native/usr/share/bios_6_73_01_01/packages/ti/platforms/sim6xxx/Solaris/kelvin')'
returned non-zero exit status 1.

Subprocess output:
patchelf: cannot find section '.gnu.version_r'

It looks like uninative is ultimately being inherited, and some steps
are taken to eliminate host dependencies in the toolchain binaries. Is
there a way to disable this inheritance only for this recipe? (If I
globally disable uninative in my conf file, the recipe bakes just fine,
but obviously I can't do that.) Or is there a better way of
accomplishing my goal?

Thanks
--
Mohammed A Billoo
Founder
MAB Labs, LLC
www.mab-labs.com
www.linkedin.com/company/mab-labs
201-338-2022
22 East Quackenbush Ave Suite LL5
Dumont, NJ 07628


Re: [yocto-infrastructure] push.yoctoproject.org downtime Wednesday November 24th

Michael Halstead
 

The migration is complete. git.yoctoproject.org now serves from a pair of load-balanced mirrors. push.yoctoproject.org is on a new dedicated secure host.

If git push hangs, please double-check that the git remote is set to push.yoctoproject.org. We switched to this domain years ago but still allowed pushing to git.yoctoproject.org for convenience. That no longer works. When updating repository remotes, make sure to push once from the command line so the new hostname is added to your known_hosts file. Scripts will hang at the ssh prompt until the new hostname is added.

The mirrors pull in new changes rapidly but there is a delay between pushing and those changes appearing on the mirrors. Any scripts that push and then reference the commits will need a delay added to allow for the sync. 

Please email or reach out in IRC if you encounter any unexpected issues.



On Thu, Nov 18, 2021 at 2:38 PM Michael Halstead via lists.yoctoproject.org <mhalstead=linuxfoundation.org@...> wrote:
push.yoctoproject.org is going offline for a short downtime between 1700 UTC and 2200 UTC while we move it to a new server. During the downtime pushes may timeout or be rejected. If your push is rejected try again in 15 minutes. I will announce the start and end of the downtime in #yocto on Libera.Chat.

git.yoctoproject.org will be moved to a new pair of high availability servers during the same window. No downtime is planned for this move. 




--
Michael Halstead
Linux Foundation / Yocto Project
Systems Operations Engineer


[PATCH yocto-autobuilder2] schedulers: add deploy_artefacts to all builders

Ross Burton <ross@...>
 

Instead of having a limited set of builders which can deploy artefacts,
let every builder have the ability to deploy. This makes it easier to
experiment with deploy steps.

Signed-off-by: Ross Burton <ross.burton@...>
---
schedulers.py | 11 +++++------
1 file changed, 5 insertions(+), 6 deletions(-)

diff --git a/schedulers.py b/schedulers.py
index 86fc6e4..bfd0e60 100644
--- a/schedulers.py
+++ b/schedulers.py
@@ -241,12 +241,11 @@ def props_for_builder(builder):
default=swat_default))
if builder == 'build-appliance':
props.append(buildappsrcrev_param())
- if builder in ['build-appliance', 'buildtools', 'eclipse-plugin-neon', 'eclipse-plugin-oxygen']:
- props.append(util.BooleanParameter(
- name="deploy_artefacts",
- label="Do we want to deploy artefacts? ",
- default=False
- ))
+ props.append(util.BooleanParameter(
+ name="deploy_artefacts",
+ label="Do we want to deploy artefacts?",
+ default=False
+ ))
props = props + repos_for_builder(builder)
worker_list = config.builder_to_workers.get(builder, config.builder_to_workers['default'])
props.append(util.ChoiceStringParameter(name="worker",
--
2.25.1


[PATCH yocto-autobuilder-helper] scripts/run-docs-build: add comments

Michael Opdenacker
 

Signed-off-by: Michael Opdenacker <michael.opdenacker@...>
---
scripts/run-docs-build | 5 +++++
1 file changed, 5 insertions(+)

diff --git a/scripts/run-docs-build b/scripts/run-docs-build
index 3db4a97..4451018 100755
--- a/scripts/run-docs-build
+++ b/scripts/run-docs-build
@@ -15,6 +15,7 @@ mkdir buildtools
$docs_buildtools -y -d $builddir/buildtools
. $builddir/buildtools/environment-setup*

+# Getting the old docbook built docs from an archive. Not rebuilding them.
#wget https://downloads.yoctoproject.org/mirror/docbook-mirror/docbook-archives-20201105.tar.xz
docbookarchive=/srv/autobuilder/autobuilder.yoctoproject.org/pub/docbook-mirror/docbook-archives-20201105.tar.xz
mkdir $outputdir
@@ -35,6 +36,9 @@ mkdir $outputdir/bitbake/next
cp -r ./_build/final/* $outputdir/bitbake/next

# stable branches
+# A decision was made to keep updating all the Sphinx generated docs for the moment,
+# even the ones corresponding to no longer supported releases
+# https://lists.yoctoproject.org/g/docs/message/2193
for branch in 1.46 1.48 1.50 1.52; do
git checkout $branch
make clean
@@ -68,6 +72,7 @@ mkdir $outputdir/next
cp -r ./_build/final/* $outputdir/next

# stable branches
+# Again, keeping even the no longer supported releases (see above comment)
for branch in dunfell gatesgarth hardknott honister; do
cd $ypdocs
git checkout $branch
--
2.25.1


Re: [docs] [PATCH yocto-autobuilder-helper] scripts/run-docs-build: remove gatesgarth

Michael Opdenacker
 

Hi Quentin,

On 11/24/21 7:47 PM, Quentin Schulz wrote:
Hi Michael, Richard,

On Wed, Nov 24, 2021 at 06:10:56PM +0000, Richard Purdie wrote:
On Wed, 2021-11-24 at 18:16 +0100, Michael Opdenacker wrote:
Together with the corresponding Bitbake version, which are no
longer supported.

Signed-off-by: Michael Opdenacker <michael.opdenacker@...>
---
scripts/run-docs-build | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/scripts/run-docs-build b/scripts/run-docs-build
index 3db4a97..5e1d649 100755
--- a/scripts/run-docs-build
+++ b/scripts/run-docs-build
@@ -35,7 +35,7 @@ mkdir $outputdir/bitbake/next
cp -r ./_build/final/* $outputdir/bitbake/next

# stable branches
-for branch in 1.46 1.48 1.50 1.52; do
+for branch in 1.46 1.50 1.52; do
git checkout $branch
make clean
make publish
@@ -68,7 +68,7 @@ mkdir $outputdir/next
cp -r ./_build/final/* $outputdir/next

# stable branches
-for branch in dunfell gatesgarth hardknott honister; do
+for branch in dunfell hardknott honister; do
cd $ypdocs
git checkout $branch
make clean
I'm a bit torn on this. They are no longer officially supported releases now but
it may make sense to rebuild all the sphinx docs in this script rather than some
subset?
I think we want to make sure we have all docs up-to-date, even for the
branches that aren't maintained anymore. Especially since it's not
taking a lot of CPU time to build them, it's fine IMO. We could always
make minor changes to old docs. E.g. the releases.rst might get updates
until we figure something out.
Thanks for casting your vote. It makes sense. I'll send another patch
with this decision in the comments.
Cheers
Michael.

--
Michael Opdenacker, Bootlin
Embedded Linux and Kernel engineering
https://bootlin.com


Re: [docs] [PATCH yocto-autobuilder-helper] scripts/run-docs-build: remove gatesgarth

Michael Opdenacker
 

Hi Richard,

Thanks for the review!

On 11/24/21 7:10 PM, Richard Purdie wrote:
On Wed, 2021-11-24 at 18:16 +0100, Michael Opdenacker wrote:
Together with the corresponding Bitbake version, which are no
longer supported.

Signed-off-by: Michael Opdenacker <michael.opdenacker@...>
---
scripts/run-docs-build | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/scripts/run-docs-build b/scripts/run-docs-build
index 3db4a97..5e1d649 100755
--- a/scripts/run-docs-build
+++ b/scripts/run-docs-build
@@ -35,7 +35,7 @@ mkdir $outputdir/bitbake/next
cp -r ./_build/final/* $outputdir/bitbake/next

# stable branches
-for branch in 1.46 1.48 1.50 1.52; do
+for branch in 1.46 1.50 1.52; do
git checkout $branch
make clean
make publish
@@ -68,7 +68,7 @@ mkdir $outputdir/next
cp -r ./_build/final/* $outputdir/next

# stable branches
-for branch in dunfell gatesgarth hardknott honister; do
+for branch in dunfell hardknott honister; do
cd $ypdocs
git checkout $branch
make clean
I'm a bit torn on this. They are no longer officially supported releases now but
it may make sense to rebuild all the sphinx docs in this script rather than some
subset?

I understand. Your decision to make.
I just proposed this change for consistency with the current implementation.

Any other opinions?

Cheers
Michael.

--
Michael Opdenacker, Bootlin
Embedded Linux and Kernel engineering
https://bootlin.com


Re: [PATCH yocto-autobuilder-helper] config.json: set ZSTD_THREADS like XZ_THREADS

Khem Raj
 

On Wed, Nov 24, 2021 at 2:18 AM Richard Purdie
<richard.purdie@...> wrote:

On Wed, 2021-11-24 at 09:00 +0100, Alexander Kanavin wrote:
But the AB has not been exhibiting any problems with zstd, and this will
degrade performance. Let's only fix what is broken.
I'm not sure I agree with that.

We have 60+ "intermittent" bugs and some of us are in weekly meetings trying to
do something about working out why these are failing. It feels like we're not
really getting too far with some subset of them and it is using up a lot of the
SWAT and bug triage time.

We've made a few changes to try and reduce the load spikes on the systems and
this fits with the other changes we've made.
From a different data point: we have clipped the parallelism for XZ
and ZSTD internally to very low values (2 and 4), and it has in fact
reduced unexpected failures with no impact on build performance. Since
the parallelism settings of these tools are myopic, each greedily takes
every CPU, which is not best for an overall build that is doing many
things in parallel, so it's best to curtail them to a conservative value.
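
For example, in local.conf or the autobuilder's site configuration
(these are the values we clipped to internally):

XZ_THREADS = "2"
ZSTD_THREADS = "4"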

Cheers,

Richard






Re: cross-localedef file not found in do_rootfs #yocto #zeus

Khem Raj
 

On Wed, Nov 24, 2021 at 1:02 AM Bel Hadj Salem Talel <bhstalel@...> wrote:

Hello All,

I created a simple image recipe for an initramfs-type image with no IMAGE_FEATURES and simply:

IMAGE_INSTALL = "packagegroup-core-boot busybox"
To get more info, can you try adding

IMAGE_LINGUAS = " "

and see if this changes anything?

When I bitbake the image I get the following error:

---------------------------
ERROR: menzu-image-initramfs-1.0-r0 do_rootfs: Error executing a python function in exec_python_func() autogenerated:

The stack trace of python calls that resulted in this exception/failure was:
File: 'exec_python_func() autogenerated', lineno: 2, function: <module>
0001:
*** 0002:do_rootfs(d)
0003:
File: '/home/talel/Desktop/YoctoWork/sources/poky/meta/classes/image.bbclass', lineno: 245, function: do_rootfs
0241: progress_reporter.next_stage()
0242:
0243: # generate rootfs
0244: d.setVarFlag('REPRODUCIBLE_TIMESTAMP_ROOTFS', 'export', '1')
*** 0245: create_rootfs(d, progress_reporter=progress_reporter, logcatcher=logcatcher)
0246:
0247: progress_reporter.finish()
0248:}
0249:do_rootfs[dirs] = "${TOPDIR}"
File: '/home/talel/Desktop/YoctoWork/sources/poky/meta/lib/oe/rootfs.py', lineno: 978, function: create_rootfs
0974: img_type = d.getVar('IMAGE_PKGTYPE')
0975: if img_type == "rpm":
0976: RpmRootfs(d, manifest_dir, progress_reporter, logcatcher).create()
0977: elif img_type == "ipk":
*** 0978: OpkgRootfs(d, manifest_dir, progress_reporter, logcatcher).create()
0979: elif img_type == "deb":
0980: DpkgRootfs(d, manifest_dir, progress_reporter, logcatcher).create()
0981:
0982: os.environ.clear()
File: '/home/talel/Desktop/YoctoWork/sources/poky/meta/lib/oe/rootfs.py', lineno: 204, function: create
0200: if self.progress_reporter:
0201: self.progress_reporter.next_stage()
0202:
0203: # call the package manager dependent create method
*** 0204: self._create()
0205:
0206: sysconfdir = self.image_rootfs + self.d.getVar('sysconfdir')
0207: bb.utils.mkdirhier(sysconfdir)
0208: with open(sysconfdir + "/version", "w+") as ver:
File: '/home/talel/Desktop/YoctoWork/sources/poky/meta/lib/oe/rootfs.py', lineno: 922, function: _create
0918:
0919: if self.progress_reporter:
0920: self.progress_reporter.next_stage()
0921:
*** 0922: self.pm.install_complementary()
0923:
0924: if self.progress_reporter:
0925: self.progress_reporter.next_stage()
0926:
File: '/home/talel/Desktop/YoctoWork/sources/poky/meta/lib/oe/package_manager.py', lineno: 614, function: install_complementary
0610:
0611: target_arch = self.d.getVar('TARGET_ARCH')
0612: localedir = oe.path.join(self.target_rootfs, self.d.getVar("libdir"), "locale")
0613: if os.path.exists(localedir) and os.listdir(localedir):
*** 0614: generate_locale_archive(self.d, self.target_rootfs, target_arch, localedir)
0615: # And now delete the binary locales
0616: self.remove(fnmatch.filter(self.list_installed(), "glibc-binary-localedata-*"), False)
0617:
0618: def deploy_dir_lock(self):
File: '/home/talel/Desktop/YoctoWork/sources/poky/meta/lib/oe/package_manager.py', lineno: 140, function: generate_locale_archive
0136: if os.path.isdir(path):
0137: cmd = ["cross-localedef", "--verbose"]
0138: cmd += arch_options
0139: cmd += ["--add-to-archive", path]
*** 0140: subprocess.check_output(cmd, env=env, stderr=subprocess.STDOUT)
0141:
0142:class Indexer(object, metaclass=ABCMeta):
0143: def __init__(self, d, deploy_dir):
0144: self.d = d
File: '/usr/lib/python3.8/subprocess.py', lineno: 415, function: check_output
0411: else:
0412: empty = b''
0413: kwargs['input'] = empty
0414:
*** 0415: return run(*popenargs, stdout=PIPE, timeout=timeout, check=True,
0416: **kwargs).stdout
0417:
0418:
0419:class CompletedProcess(object):
File: '/usr/lib/python3.8/subprocess.py', lineno: 493, function: run
0489: 'with capture_output.')
0490: kwargs['stdout'] = PIPE
0491: kwargs['stderr'] = PIPE
0492:
*** 0493: with Popen(*popenargs, **kwargs) as process:
0494: try:
0495: stdout, stderr = process.communicate(input, timeout=timeout)
0496: except TimeoutExpired as exc:
0497: process.kill()
File: '/usr/lib/python3.8/subprocess.py', lineno: 858, function: __init__
0854: if self.text_mode:
0855: self.stderr = io.TextIOWrapper(self.stderr,
0856: encoding=encoding, errors=errors)
0857:
*** 0858: self._execute_child(args, executable, preexec_fn, close_fds,
0859: pass_fds, cwd, env,
0860: startupinfo, creationflags, shell,
0861: p2cread, p2cwrite,
0862: c2pread, c2pwrite,
File: '/usr/lib/python3.8/subprocess.py', lineno: 1704, function: _execute_child
1700: else:
1701: err_filename = orig_executable
1702: if errno_num != 0:
1703: err_msg = os.strerror(errno_num)
*** 1704: raise child_exception_type(errno_num, err_msg, err_filename)
1705: raise child_exception_type(err_msg)
1706:
1707:
1708: def _handle_exitstatus(self, sts, _WIFSIGNALED=os.WIFSIGNALED,
Exception: FileNotFoundError: [Errno 2] No such file or directory: 'cross-localedef'

ERROR: Logfile of failure stored in: /home/talel/Desktop/YoctoWork/arken/tmp/work/menzu-poky-linux/menzu-image-initramfs/1.0-r0/temp/log.do_rootfs.143822
ERROR: Task (/home/talel/Documents/FinalGit/SelfArkenWork/arken/meta-menzu/recipes-core/images/menzu-image-initramfs.bb:do_rootfs) failed with exit code '1'
---------------------------

I was building the image successfully before, but now it fails and I don't know why.
The other, normal images build successfully.

Thanks,
Talel

