
Re: Debating best tool to use

Khem Raj
 

On Mon, Sep 6, 2021 at 9:10 AM KL <deco33000@...> wrote:


Hi,

I plan to have a minimal embedded system with BT, wifi and no GUI, to host my application with my own webserver.

My webserver+app are responsible for managing the OS (so I don't want anything else to do it). They receive new programs and load them according to my rules (that is, our manager at work).

I would like to avoid the plague of configuration files scattered everywhere (as is the case now with the different profile directories...), and instead have my custom manager handle ALL the different settings for the OS (a centralized point).

We, at the office, compile and maintain the different software needed and send it over wifi.

I want that OS to be built on top of Linux.

Would Yocto enable me to do that? Is it the best tool for that?
Yes, it lets you build a custom Linux distribution and makes it easy to
add whatever customization you might require.
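For example, a trimmed-down image like that is usually described with a small custom image recipe along these lines (a sketch only; my-webserver and my-manager are placeholder recipe names):

# recipes-core/images/my-minimal-image.bb
SUMMARY = "Minimal image with bluetooth and wifi, no GUI, hosting the in-house webserver"
LICENSE = "MIT"
# custom application recipes are pulled in just like any other package
IMAGE_INSTALL = "packagegroup-core-boot wpa-supplicant bluez5 my-webserver my-manager"
IMAGE_FEATURES = ""
inherit core-image

Your centralized manager would then simply be another recipe installed into that image.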


Thanks,


--
MKL




Re: Building Yocto on AWS container getting ERROR: The postinstall intercept hook 'update_icon_cache' failed, #yocto

Khem Raj
 

On Mon, Sep 6, 2021 at 1:46 AM <mail2uvijay@...> wrote:

Hi All,

I am trying to build a Yocto image on AWS CodeBuild for the SAMA5D27 from Microchip. I created a container.
I was able to set up the environment successfully with the dependencies installed in the container. At the final step, while creating the rootfs image, I am getting the error below.

"ERROR: The postinstall intercept hook 'update_icon_cache' failed,...
ERROR:
DEBUG: Python function do_rootfs finished
ERROR: Function failed: do_rootfs
"
Usually these errors pop up when qemu user mode fails to run the needed
programs during a cross build.
Check the logs thoroughly and see whether executing update_icon_cache
leaves any traces of such issues with running qemu.
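For reference, the underlying failure is usually visible in the rootfs log; something like the following can help locate it (paths assume a default poky tmp/ layout and are illustrative, adjust the image name):

# full rootfs log, including the intercept hook output
$ less tmp/work/*/my-image/*/temp/log.do_rootfs
# show the context around the failing hook
$ grep -i -B2 -A10 update_icon_cache tmp/work/*/my-image/*/temp/log.do_rootfs
# qemu-related error lines near the hook output usually show why qemu user
# mode could not run inside the container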

I tried a couple of options, such as installing libgdk-pixbuf2.0-0 in the container, and adding DEPENDS += "qemuwrapper-cross" to the image recipe.

However, I was able to build Yocto in a VirtualBox Ubuntu 16.04 VM.

Please let me know what I have missed.

Regards,
Vijay


Debating best tool to use

KL <deco33000@...>
 


Hi,
 
I plan to have a minimal embedded system with BT, wifi and no GUI, to host my application with my own webserver.
 
My webserver+app are responsible for managing the OS (so I don't want anything else to do it). They receive new programs and load them according to my rules (that is, our manager at work).
 
I would like to avoid the plague of configuration files scattered everywhere (as is the case now with the different profile directories...), and instead have my custom manager handle ALL the different settings for the OS (a centralized point).
 
We, at the office, compile and maintain the different software needed and send it over wifi.
 
I want that OS to be built on top of Linux.
 
Would Yocto enable me to do that? Is it the best tool for that?
 
Thanks,
 
 
-- 
MKL


Re: [PATCH v2] bitbake/fetch2: Add a new variable 'BB_FETCH_ENV' to export Fetcher env

Richard Purdie
 

Hi,

Thanks for the patch. This isn't ready to be merged yet though as there are some
issues.

On Mon, 2021-08-23 at 15:18 +0800, jiladahe1997@... wrote:
From 1b0d7b4bb4a5b39f7ae0ce7d7ae5897a33637972 Mon Sep 17 00:00:00 2001
From: Mingrui Ren <jiladahe1997@...>
Date: Mon, 23 Aug 2021 14:49:03 +0800
Subject: [PATCH v2] bitbake/fetch2: Add a new variable 'BB_FETCH_ENV' to export Fetcher env

The environment variables used by the Fetcher are hard-coded, and are obtained
from the HOST env instead of the bitbake datastore.
This isn't true: they are looked up first in bitbake's datastore, and then taken
from the original host environment if not found there.

This patch adds a new variable 'BB_FETCH_ENV', and modifies the default
BB_ENV_EXTRAWHITE_OE for backwards compatibility, trying to fix the
problems above.
Why is this a problem? You need to state what the problem is, not what the code
currently does (or in the above case doesn't do).

Signed-off-by: Mingrui Ren <jiladahe1997@...>
---
changes in v2:
a. changed the variable name from 'FETCH_ENV_WHITELIST' to 'BB_FETCH_ENV'.
b. added 'BB_FETCH_ENV' in local.conf, rather than exporting it in the host
environment.
c. modified the existing BB_ENV_EXTRAWHITE_OE for backwards compatibility.
d. Two commits recently modified this variable, commit IDs
348384135272ae7c62a11eeabcc43eddc957811f and 5dce2f3da20a14c0eb5229696561b0c5f6fce54c,
so I adjusted the new variables in the patch.

bitbake/lib/bb/fetch2/__init__.py | 34 ++++++++-----------------------
bitbake/lib/bb/fetch2/wget.py | 2 +-
meta-poky/conf/local.conf.sample | 12 +++++++++++
scripts/oe-buildenv-internal | 3 ++-
4 files changed, 24 insertions(+), 27 deletions(-)

diff --git a/bitbake/lib/bb/fetch2/__init__.py b/bitbake/lib/bb/fetch2/__init__.py
index 914fa5c024..cbbe32d1df 100644
--- a/bitbake/lib/bb/fetch2/__init__.py
+++ b/bitbake/lib/bb/fetch2/__init__.py
@@ -808,28 +808,13 @@ def localpath(url, d):
fetcher = bb.fetch2.Fetch([url], d)
return fetcher.localpath(url)

-# Need to export PATH as binary could be in metadata paths
-# rather than host provided
-# Also include some other variables.
-FETCH_EXPORT_VARS = ['HOME', 'PATH',
- 'HTTP_PROXY', 'http_proxy',
- 'HTTPS_PROXY', 'https_proxy',
- 'FTP_PROXY', 'ftp_proxy',
- 'FTPS_PROXY', 'ftps_proxy',
- 'NO_PROXY', 'no_proxy',
- 'ALL_PROXY', 'all_proxy',
- 'GIT_PROXY_COMMAND',
- 'GIT_SSH',
- 'GIT_SSL_CAINFO',
- 'GIT_SMART_HTTP',
- 'SSH_AUTH_SOCK', 'SSH_AGENT_PID',
- 'SOCKS5_USER', 'SOCKS5_PASSWD',
- 'DBUS_SESSION_BUS_ADDRESS',
- 'P4CONFIG',
- 'SSL_CERT_FILE',
- 'AWS_ACCESS_KEY_ID',
- 'AWS_SECRET_ACCESS_KEY',
- 'AWS_DEFAULT_REGION']
Firstly, I'd prefer not to move this list out of the fetcher code. Bitbake can
in theory be used standalone without the OE-Core metadata, and this data makes
more sense to be maintained in the fetcher.


+def getfetchenv(d):
+ # Need to export PATH as binary could be in metadata paths
+ # rather than host provided
+ # Also include some other variables.
+ vars = ['HOME', 'PATH']
+ vars.extend((d.getVar("BB_FETCH_ENV") or "").split())
+ return vars

def runfetchcmd(cmd, d, quiet=False, cleanup=None, log=None, workdir=None):
"""
@@ -839,7 +824,7 @@ def runfetchcmd(cmd, d, quiet=False, cleanup=None, log=None, workdir=None):
Optionally remove the files/directories listed in cleanup upon failure
"""

- exportvars = FETCH_EXPORT_VARS
+ exportvars = getfetchenv(d)

if not cleanup:
cleanup = []
@@ -855,9 +840,8 @@ def runfetchcmd(cmd, d, quiet=False, cleanup=None, log=None, workdir=None):
d.setVar("PV", "fetcheravoidrecurse")
d.setVar("PR", "fetcheravoidrecurse")

- origenv = d.getVar("BB_ORIGENV", False)
for var in exportvars:
- val = d.getVar(var) or (origenv and origenv.getVar(var))
+ val = d.getVar(var)
if val:
cmd = 'export ' + var + '=\"%s\"; %s' % (val, cmd)
Please don't drop the BB_ORIGENV handling. Why is that being removed?


diff --git a/bitbake/lib/bb/fetch2/wget.py b/bitbake/lib/bb/fetch2/wget.py
index 29fcfbb3d1..0ce06ddb4f 100644
--- a/bitbake/lib/bb/fetch2/wget.py
+++ b/bitbake/lib/bb/fetch2/wget.py
@@ -306,7 +306,7 @@ class Wget(FetchMethod):
# to scope the changes to the build_opener request, which is when the
# environment lookups happen.
newenv = {}
- for name in bb.fetch2.FETCH_EXPORT_VARS:
+ for name in bb.fetch2.getfetchenv(d):
value = d.getVar(name)
if not value:
origenv = d.getVar("BB_ORIGENV")
diff --git a/meta-poky/conf/local.conf.sample b/meta-poky/conf/local.conf.sample
index f1f6d690fb..4e8a6f0c77 100644
--- a/meta-poky/conf/local.conf.sample
+++ b/meta-poky/conf/local.conf.sample
@@ -267,6 +267,18 @@ PACKAGECONFIG:append:pn-qemu-system-native = " sdl"
#
#BB_SERVER_TIMEOUT = "60"

+# Bitbake Fetcher Environment Variables
+#
+# Specific which environment variables in bitbake datastore used by fetcher when
+# executing fetch task.
+# NOTE: You may need to modify BB_ENV_EXTRAWHITE, in order to add environment
+# variable into bitbake datastore first.
+BB_FETCH_ENV ?= "HTTP_PROXY http_proxy HTTPS_PROXY https_proxy \
+FTP_PROXY ftp_proxy FTPS_PROXY ftps_proxy NO_PROXY no_proxy ALL_PROXY all_proxy \
+GIT_PROXY_COMMAND GIT_SSH GIT_SSL_CAINFO GIT_SMART_HTTP SSH_AUTH_SOCK SSH_AGENT_PID \
+SOCKS5_USER SOCKS5_PASSWD DBUS_SESSION_BUS_ADDRESS P4CONFIG SSL_CERT_FILE AWS_ACCESS_KEY_ID\
+AWS_SECRET_ACCESS_KEY AWS_DEFAULT_REGION"
+
I'd like to see this preserved in bitbake, not in bitbake.conf in OE-Core.

# CONF_VERSION is increased each time build/conf/ changes incompatibly and is used to
# track the version of this file when it was generated. This can safely be ignored if
# this doesn't mean anything to you.
diff --git a/scripts/oe-buildenv-internal b/scripts/oe-buildenv-internal
index e0d920f2fc..29cb694790 100755
--- a/scripts/oe-buildenv-internal
+++ b/scripts/oe-buildenv-internal
@@ -111,7 +111,8 @@ HTTPS_PROXY https_proxy FTP_PROXY ftp_proxy FTPS_PROXY ftps_proxy ALL_PROXY \
all_proxy NO_PROXY no_proxy SSH_AGENT_PID SSH_AUTH_SOCK BB_SRCREV_POLICY \
SDKMACHINE BB_NUMBER_THREADS BB_NO_NETWORK PARALLEL_MAKE GIT_PROXY_COMMAND \
SOCKS5_PASSWD SOCKS5_USER SCREENDIR STAMPS_DIR BBPATH_EXTRA BB_SETSCENE_ENFORCE \
-BB_LOGCONFIG"
+BB_LOGCONFIG HOME PATH GIT_SSH GIT_SSL_CAINFO GIT_SMART_HTTP DBUS_SESSION_BUS_ADDRESS \
+P4CONFIG SSL_CERT_FILE AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY AWS_DEFAULT_REGION"

BB_ENV_EXTRAWHITE="$(echo $BB_ENV_EXTRAWHITE $BB_ENV_EXTRAWHITE_OE | tr ' ' '\n' | LC_ALL=C sort --unique | tr '\n' ' ')"
I think if you leave BB_ORIGENV handling in place, these latter changes
shouldn't be needed?

If you need to be able to pass extra variables into the fetcher, I think we
could/should add API for additions rather than allowing the whole list to be
customised. Without stating which problem you're solving, guessing what you
really need is hard though. A better description of the issue you're seeing
would help a lot.
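Purely to illustrate that additive idea (the variable name below is hypothetical, not existing bitbake API), a user-facing sketch could be as small as:

# local.conf (hypothetical variable, shown only to illustrate "API for additions")
BB_FETCH_EXTRA_ENV = "MY_COMPANY_PROXY_HELPER MY_CA_BUNDLE"

with the fetcher appending whatever is listed there to its built-in FETCH_EXPORT_VARS list, rather than allowing the whole list to be replaced.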

Cheers,

Richard


Building Yocto on AWS container getting ERROR: The postinstall intercept hook 'update_icon_cache' failed, #yocto

mail2uvijay@...
 

Hi All,

I am trying to build a Yocto image on AWS CodeBuild for the SAMA5D27 from Microchip. I created a container.
I was able to set up the environment successfully with the dependencies installed in the container. At the final step, while creating the rootfs image, I am getting the error below.

"ERROR: The postinstall intercept hook 'update_icon_cache' failed,...
ERROR: 
DEBUG: Python function do_rootfs finished
ERROR: Function failed: do_rootfs
"

I tried a couple of options, such as installing libgdk-pixbuf2.0-0 in the container, and adding DEPENDS += "qemuwrapper-cross" to the image recipe.

However, I was able to build Yocto in a VirtualBox Ubuntu 16.04 VM.

Please let me know what I have missed.

Regards,
Vijay


Re: Create a service on the raspberry pi #yocto

Marco Cavallini
 

Hi,
Because you are using the Yocto Project to generate your distribution, you have to create a recipe that does what you need.

In this case you need a systemd service.
See here how to add a systemd service file into a Yocto image
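A minimal recipe along those lines might look like the sketch below (recipe and unit names are placeholders, systemd must be in DISTRO_FEATURES, and the ':' override syntax shown is for honister and newer; older releases use '_'):

# recipes-example/myservice/myservice_1.0.bb
SUMMARY = "Example systemd service"
LICENSE = "MIT"
LIC_FILES_CHKSUM = "file://${COMMON_LICENSE_DIR}/MIT;md5=0835ade698e0bcf8506ecda2f7b4f302"
SRC_URI = "file://myservice.service"
S = "${WORKDIR}"
inherit systemd
SYSTEMD_SERVICE:${PN} = "myservice.service"
do_install() {
    # install the unit file so the systemd class can enable it
    install -d ${D}${systemd_system_unitdir}
    install -m 0644 ${WORKDIR}/myservice.service ${D}${systemd_system_unitdir}/myservice.service
}

Then add the resulting package to IMAGE_INSTALL of your image.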

--
Marco Cavallini | KOAN sas
Bergamo - Italia
embedded software engineering
https://KoanSoftware.com


Re: [qa-build-notification] QA notification for completed autobuilder build (yocto-3.4_M3.rc1)

Sangeeta Jain
 

Hi all,

Intel and WR YP QA is planning for QA execution for YP build yocto-3.4_M3.rc1. We are planning to execute the following tests for this cycle:

OEQA manual tests for the following modules:
1. OE-Core
2. BSP-hw

Runtime auto tests for the following platforms:
1. MinnowTurbot 32-bit
2. Coffee Lake
3. NUC 7
4. NUC 6
5. Edgerouter
6. Beaglebone

ETA for completion is next Thursday, September 9.

Thanks,
Sangeeta

-----Original Message-----
From: qa-build-notification@... <qa-build-
notification@...> On Behalf Of Richard Purdie
Sent: Saturday, 4 September, 2021 9:27 PM
To: <yocto@...> <yocto@...>
Cc: qa-build-notification <qa-build-notification@...>
Subject: [qa-build-notification] QA notification for completed autobuilder build
(yocto-3.4_M3.rc1)

A build flagged for QA (yocto-3.4_M3.rc1) was completed on the autobuilder
and is available at:


https://autobuilder.yocto.io/pub/releases/yocto-3.4_M3.rc1


Build hash information:

bitbake: 0a11696e0898c3c5108e6d7c5ad28da50e00ea66
meta-agl: 60344efa7a50dc2548fc4b5d68b5ad4d60c4023a
meta-arm: 46e8fc6a67efbcc357cf507b903a3e3e133c78f7
meta-aws: 32ae20566a39454ab0ba4c80c23a32ed7c14dcaf
meta-gplv2: f04e4369bf9dd3385165281b9fa2ed1043b0e400
meta-intel: cb1bf2bdc1b20f76fde8b291a12b361a4bc2511e
meta-mingw: f5d761cbd5c957e4405c5d40b0c236d263c916a8
meta-openembedded: 1511e25cea69b98bf2778984d7a649dad5597878
oecore: ffb886497390d4de2631bda671f2f631bc0bc7be
poky: f2728d3ec8c0589e02e9a3ce7cf8aca902cae0a3



This is an automated message from the Yocto Project Autobuilder
Git: git://git.yoctoproject.org/yocto-autobuilder2
Email: richard.purdie@...






Create a service on the raspberry pi #yocto

yasminebenghozzi6@...
 

Hello,

To create a service on the rpi, do I need to create a new recipe for it, or can I just create it directly on the rpi? I am trying the latter method now, but everything I found while searching created a new recipe, so does creating it directly on the rpi really work?
Thank you


QA notification for completed autobuilder build (yocto-3.3.3.rc1)

Richard Purdie
 

A build flagged for QA (yocto-3.3.3.rc1) was completed on the autobuilder and is
available at:


https://autobuilder.yocto.io/pub/releases/yocto-3.3.3.rc1


Build hash information:

bitbake: 9b2d96b27f550da0fa68ba9ea96be98eb3a832a6
meta-agl: 60344efa7a50dc2548fc4b5d68b5ad4d60c4023a
meta-arm: ba82ea920a3a43244a0a72bd74817e2f00f4a1af
meta-aws: 171aa2cf4d12ff4877e9104b6ec46be54128e3d8
meta-gplv2: 9e119f333cc8f53bd3cf64326f826dbc6ce3db0f
meta-intel: 5c4a6b02f650a99a5ec55561443fcf880a863d19
meta-mingw: 422b96cb2b6116442be1f40dfb5bd77447d1219e
meta-openembedded: 5741b949a875b07335d4920aefa6defd13ed45c6
oecore: e3a7eaf9fe1420b2525e14f0c0f2936e7818b8a3
poky: 4624b855ed47c5da08953191bfbb39e764ecb343



This is an automated message from the Yocto Project Autobuilder
Git: git://git.yoctoproject.org/yocto-autobuilder2
Email: richard.purdie@...


QA notification for completed autobuilder build (yocto-3.4_M3.rc1)

Richard Purdie
 

A build flagged for QA (yocto-3.4_M3.rc1) was completed on the autobuilder and
is available at:


https://autobuilder.yocto.io/pub/releases/yocto-3.4_M3.rc1


Build hash information:

bitbake: 0a11696e0898c3c5108e6d7c5ad28da50e00ea66
meta-agl: 60344efa7a50dc2548fc4b5d68b5ad4d60c4023a
meta-arm: 46e8fc6a67efbcc357cf507b903a3e3e133c78f7
meta-aws: 32ae20566a39454ab0ba4c80c23a32ed7c14dcaf
meta-gplv2: f04e4369bf9dd3385165281b9fa2ed1043b0e400
meta-intel: cb1bf2bdc1b20f76fde8b291a12b361a4bc2511e
meta-mingw: f5d761cbd5c957e4405c5d40b0c236d263c916a8
meta-openembedded: 1511e25cea69b98bf2778984d7a649dad5597878
oecore: ffb886497390d4de2631bda671f2f631bc0bc7be
poky: f2728d3ec8c0589e02e9a3ce7cf8aca902cae0a3



This is an automated message from the Yocto Project Autobuilder
Git: git://git.yoctoproject.org/yocto-autobuilder2
Email: richard.purdie@...


Yocto Technical Team Minutes, Engineering Sync, for August 31, 2021

Trevor Woerner
 

Yocto Technical Team Minutes, Engineering Sync, for August 31, 2021
archive: https://docs.google.com/document/d/1ly8nyhO14kDNnFcW2QskANXW3ZT7QwKC5wWVDg9dDH4/edit

== disclaimer ==
Best efforts are made to ensure the below is accurate and valid. However,
errors sometimes happen. If any errors or omissions are found, please feel
free to reply to this email with any corrections.

== attendees ==
Trevor Woerner, Stephen Jolley, Richard Purdie, Saul Wold, Trevor Gamblin,
Scott Murray, Richard Elberger, Peter Kjellerstedt, Michael Opdenacker,
Joshua Watt, Randy MacLeod, Jon Mason, Armin Kuster, Michael Halstead, Ross
Burton, Jan-Simon Möller, Tim Orling, Alejandro Hernandez, Bruce Ashfield

== project status ==
- feature freeze for 3.4 (honister)
- there are a number of issues holding up an -m3 build:
- rust was merged into oe-core but then a problem was found with
cargo-native on centos7
- a kernel issue regarding a change to kernel module versioning
- a pseudo fix went in for the glibc 2.34 problem, but isn’t the most
ideal way (binary shim) to solve the problem, investigation into other
approaches would be good
- the glibc 2.34 upgrade introduced a bug with docker and the clone3 syscall
which now returns EPERM instead of ENOSYS. this is an upstream problem,
but we’ll probably need a local patch until this is resolved

== discussion ==
RP: -m3 not built yet, waiting on stuff, e.g. there’s a glibc issue with
docker (probably an issue with docker, but we’ll probably need to
carry a patch) which probably means a new uninative release as well
unfortunately, there’s a kernel issue with newly added full version
strings but there is a patch so it’ll need testing, and there’s a rust
issue on centos7
Randy: i’ll take a look at the rust thing, i didn’t realize it was
blocking -m3, i was looking at upgrading librsvg and the things that
depend on it
RP: librsvg is important, but thanks
Randy: i was hoping for a smooth upgrade and that i'd have something for
you today, but there were some problems, so i think it’ll have to wait
until the next release.
RP: were the problems with rust, or rsvg?
Randy: rsvg builds, but gstreamer-bad (or something) has a configuration
issue, so that’s where i left it

RP: there’s pressure to get a hardknott build in, but we’ll wait until we
get the openssl fix in. so whether there’ll be a new hardknott release
or it’ll be -m3 i’m not sure. i’m hoping it’ll be -m3 but we’ll
see how things go with builds
SJ: i’ll update the schedule
RP: there’s a little bit of pressure there because of the overrides changes
and i think people are anxious for a hardknott release so we’ll try to
fit one in. i’m told there is QA time available.

Ross: what’s the state of the sbom work?
JPEW: Saul found a bug with packaging native recipes, i need a way to figure
out how to skip reading the subpackage metadata when there’s no packages
actually created. i haven’t figured out a good way to do that other
than: if it inherits class native? seems a bit kludgy, but i can do that 
Ross: that’s probably the easiest way
JPEW: we can skip the step where we read in the package variables and try to
create all the packages if it inherits from native. i already had the
check in to say “if there’s nothing actually packaged, skip creating
the package spdx files and i thought that was sufficient, but apparently
it’s not as you have to disregard all things related to packaging in
native recipes, not just whether it created packages or not. so we can do
that fairly simply, i think, after that, as far as i’m aware, it’s
ready to go. it generates a bunch of warnings because of licensing things.
we’re validating the licenses against the spdx license list so that we
generate a valid spdx license and most of the warnings are if the user
specifies “BSD” instead of BSD-2/3/4 clause. so i’ve been slowly
fixing those
Ross: i thought that was fixed. i thought we had code to rationalize the
license terms to spdx names?
JPEW: we do, but just “BSD”, by itself, is not a valid spdx license
string. so i’ve gone in and changed a bunch of recipes where it was
obvious that it was a BSD 2/3/4 clause. there is support in spdx for
including your own custom licenses, but i don’t have that in yet. we
could pull the generic license text from the paths that we have, so for
any license string that we don’t recognize as spdx we could go and put
our own. but i don’t have the working yet; the code to find a generic
licence file is… “intense” so i wasn’t up for dissecting it
right now. so for now it just adds a placeholder license and issues a
warning. but the vast majority of them are the “BSD” thing needing
to specify the 2/3/4 clause. other than that i think it’s ready to go.
unfortunately, until they’re all fixed we’ll get warnings on the AB,
but we could put it in and maybe not turn it on
RP: or we could just test a smaller subset
JPEW: i’m just only doing core-image-minimal and core-image-base now
RP: oh
JPEW: if you did world i think you'd get thousands of errors
Saul: i’ve been playing with world builds and meta-openembedded as well;
there’s a lot of issues
JPEW: like i said, we could just include the generic license text, but those
BSD ones are almost assuredly wrong because they’re not going to match
the generic BSD license we have
Ross: the generic one we ship is 3-clause so we could just tell it that
“BSD” means 3-clause?
JPEW: from what i’ve seen that’s not correct often enough
Ross: okay… and we are trying to generate correct data
JPEW: in the long run we should eliminate the bare BSD license. we should just
force people to specify the clause correctly. there are also a bunch of
one-off licensing things, not sure what to do about that
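In recipe terms, the distinction being discussed is:

# ambiguous, not a valid SPDX identifier
LICENSE = "BSD"
# explicit SPDX identifiers
LICENSE = "BSD-2-Clause"
LICENSE = "BSD-3-Clause"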
RP: there was an effort a while back to get rid of the generic BSD stuff, and
that went so far but it sounds like it hasn’t gone far enough. we should
go through and we should rationalize those. in spdx i think there’s a
way to do an external license? an additional license that is not spdx?
JPEW: yes, if we encounter a license that doesn’t have an spdx mapping
then we put a placeholder that is an external license but it’s still
“wrong”. but what we could do is search through the license search
path and pick out the generic license file that corresponds with that and
put that in there. and then that would cover all those unknown licenses
(e.g. bzip2 1.0.4). so it’ll know it can put that in as an external
reference. however i would rather not do that until all the BSD ones are
handled. it’ll eliminate the warning but then all these BSD ones would
be silently wrong
RP: i’m thinking about having something we can actually merge
JPEW: i think we can merge what we have, it will generate some warnings and
people should go fixup the licenses
RP: i’d like to get rid of the warnings before we merge it. if we allow
one warning on the AB, then it breeds quietly and then suddenly lots of
warnings get ignored
JPEW: i could comment out the warning. it does still generate the placeholder,
it’s just not a very good placeholder; it literally says: “this is a
BSD license”
RP: i think the spdx people would be interested in having a list of the
licenses required to create a basic linux system that aren’t in their
system
JPEW: the spdx spec does say that putting in a placeholder like that is okay,
the warning is just nice to let you know where you have to fix things up
RP: i think having the warning commented out for getting it merged at this
point is desirable at this point
JPEW: ok
RP: i saw Saul’s patches for native packages. what we’ve tried in the
past, and is only half complete, is rather than zeroing out the packages
field what we tried to do was preserve the packages field and then just do
detection on whether native was being inherited or not. sometimes there is
dependency information that is hidden in the RDEPENDS fields, and in order
to know which RDEPENDS fields to lookup we need the PACKAGES variable.
and in the native case if you zero out the PACKAGES variable then you no
longer know which packages that that thing might be producing to know
which variables to go query. the overrides changes might help clarify. but
the intent was to stop destroying the PACKAGES variable so we could use it
in more places
JPEW: if detecting whether or not it “inherits native” is the way to go
then i’m happy to go with that

TrevorW: there were further replies to your email about the pseudo problem on
the libc-help mailing list. one suggestion was to use ptrace, another was
to use seccomp, yet another was to use the newer seccomp notify mechanism.
also crOS does something similar to pseudo but uses lddtree.
RP: the seccomp notify is interesting, but there is some small print in the
man page of particular note: we can’t write data back to the process,
so while you can do all this cool stuff in the supervisory process, we
can only change return value, but not the return data. we can poke into
the process’s memory to read things but there’s all sorts of locking
constraints. you have to be very careful how you read the data because
you have to make sure that the process didn’t disappear while you were
halfway through reading from its memory. so because of that you could
never write data back to the process. pseudo is modifying data because it
wants to fake the file permissions and therefore it would have to write
data back and change the return data and i don’t think that would be
viable. so i don’t think the new module would work. i’m not familiar
with the existing seccomp stuff to know whether a module like that would
be able to do the intercept and be able to do the writing. i don’t think
doing this for every linux syscall (and all their quirks) is easier than
doing it for every libc call (and all their quirks). we’ll just end up
replacing one set of problems with another
JPEW: i suspect the advantage of doing it at the syscall level would be that
you’d catch things that don’t use the standard C library, e.g. Rust
RP: totally. there are advantages yes. you would catch things that are
statically linked (and don’t have dynamic library support) so yes
there are some advantages to doing that. whether it’s enough of an
advantage… the jury’s still out; not sure. ptrace would be far too
slow for what we need
TrevorW: what about if we used musl instead of glibc?
RP: that would give us a whole load of headaches because we would end up
writing an entire intercept library. all of these libraries are linked
against glibc, therefore pseudo links against glibc and intercepts glibc
syscalls. it would be interesting to see whether musl has a pthread
implementation that is simpler than glibc so it would let us statically
link it into libpseudo
TrevorW: then maybe the versioning problem wouldn’t be there?
RP: no, there are some problems it would solve, but there are some problems it
would create. libpseudo is a glibc intercept library, as built. because
it’s intercepting things on a glibc system. if you put musl in there
you’ll then have it linking against 2 different C libraries, which
means you’ll have the .start and .end sections from 2 different C
libraries involved. you can’t replace glibc with musl, you would have to
have both
Scott: with pseudo we’re intercepting things from the host side as well as
in the sysroot, correct? so the problem is on both sides because the host
tools are getting intercepted
RP: it doesn’t matter about the target because the host tools are linked
against glibc and are dynamic
Scott: does the buildtools tarball cover everything we need from the host?
RP: no, probably 90%, but not everything
Scott: would it help if it did cover everything, then we could say always
glibc 2.34 and not have the mismatch
RP: not sure. we don’t build our own git for example, and we do the same
for a couple others. we do 2 levels of build tools because we do the
basic one, then we do the one which has gcc and some of the “heavier”
stuff in. so at the moment we do have a split-level approach. regardless,
people could add to the host tools and bypass the thing, and it would have
to work regardless. unless we mandate that we only ever build using our
tools, but that feels like a backwards step
Scott: down the road, the problem goes away once everything is glibc 2.34?
RP: there are things like centos7 that hang around for a long time
Scott: and i’ve seen customers who play around with host tools, so it would
be problematic for sure
RP: it’s an interesting idea. you’re getting to the point where you’re
mandating the host environment, which i get nervous about. might as well
use a container or force everyone to use docker (lol)
Scott: down the road we could use pseudo with user-id name-spacing but at that
point we might as well use a full container
RP: it would be nice if those features were available in such a way that we
could actually use them for this. so far i haven’t seen anything that we
could use, but so far everything always needs privilege
Scott: it’s coming along, but not there yet
JPEW: podman might be a possibility, but it struggles with the number of file
descriptors that it can open
RP: the seccomp interface was designed by someone with a specific use-case,
i can’t help wonder if we couldn’t get a kernel interface designed
that would work for us. our requirements aren’t that high. and there are
other users of this sort of use-case (e.g. fakeroot) that might benefit as
well
Scott: if you use BPF there might be a whole bunch of people interested
RP: …and write it in Rust (lol)
TrevorW: it’s a coin toss
RP: sometimes the act of putting together the proposal and getting feedback;
it might not fly on the first attempt, but you would learn enough to make
a second attempt fly. that would be the long shot, but perhaps worth it;
it’s not out of this world and we’ve used fakeroot technology for most
of that time in one form or another
JPEW: according to the seccomp man page regarding seccomp notify, and i think
what it’s saying about the writing is that you can’t know, when you’re
writing back to the target process’s memory space, that it hasn’t
interrupted the syscall for the thing you’re writing, and thus you’re
corrupting something
RP: yes, basically you can’t use that to write the data back. and i
couldn’t spot any other mechanism that was making that available either.
the kernel should know, but perhaps the supervisor can’t know. i don’t
think a supervisor module could work, but i was curious about seccomp
itself, with the filtering, but i don’t think the program itself can
make its own library calls
JPEW: the supervisor seems quite limited; it’s just for blocking system
calls?
RP: yes, it just modifies return codes. we already have a model in pseudo
where everything gets serialized and stuffed over a socket to the actual
pseudo database. we’re mostly there with our code, it’s just a
question of whether seccomp can get us the rest of the way. i suspect with
seccomp having a security focus has a different set of constraints. what
we want might be counter to how a security mechanism needs to work.

Randy: we’re still finding edge cases with the overrides thing. it would be
nice to document the rationale for why this was done as a flag day instead
of a warning period?
RP: what would you warn on?
Randy: i’d like to document the reasoning
RP: the reason for the change is because bitbake could never tell what was an
override and what isn’t. so there was no way to tell where that colon
should or shouldn’t be. take SRC_URI, would you like a warning that the
underscore in SRC_URI may or may not be an override. that one is obvious,
but there are a number that aren’t. have you looked at the migration
guide?
Randy: no, i didn’t check that
RP: we did try to document it. we did find a quirk in bitbake that sometimes
because of the order of processing various configs/layers, it would
produce nicer error messages for some things but not for others. it would
have been nice if all the warning/error messages would be uniform and
clear, but they’re not
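For reference, the kind of change the migration guide describes looks like this (file and path names are just examples):

# old (underscore) syntax
SRC_URI_append_qemuarm = " file://qemuarm-fix.patch"
FILES_${PN} += "${datadir}/myapp"

# new (colon) syntax
SRC_URI:append:qemuarm = " file://qemuarm-fix.patch"
FILES:${PN} += "${datadir}/myapp"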

TimO: what’s the status of Rust on the centos7 worker?
RP: cargo-native fails to build with a glibc symbol mismatch error that looks
a lot like the uninative issues. the interesting thing is that centos7 is
using the buildtools tarball, so it does look like a bad interaction with
the buildtools tarball. but this doesn’t seem to happen anywhere else
we’re using the buildtools tarball (including my own local worker). so
there’s something odd the centos7 machine is doing that the others are
not. it looks like there’s something in the centos7 environment that’s
letting it use the wrong linker. because buildtools should be using its
own linker, and the host paths are all pointing at the buildtools linker.
and the buildtools compiler should know how to use the buildtools linker.
but something is escaping that somehow, but i’m struggling to prove
what’s going on. but i think we need to investigate this to understand
what’s going on to make sure it’s not going to affect other systems in
some weird way.

RP: we were having some issue with dunfell with meta-aws (yocto check layer),
there was a patch that was supposed to be on the AB but wasn’t. but we
got the patch in and it looks like everything is okay now
RE: awesome, thank you


Yocto Technical Team Minutes, Engineering Sync, for August 24, 2021

Trevor Woerner
 

Yocto Technical Team Minutes, Engineering Sync, for August 24, 2021
archive: https://docs.google.com/document/d/1ly8nyhO14kDNnFcW2QskANXW3ZT7QwKC5wWVDg9dDH4/edit

== disclaimer ==
Best efforts are made to ensure the below is accurate and valid. However,
errors sometimes happen. If any errors or omissions are found, please feel
free to reply to this email with any corrections.

== attendees ==
Trevor Woerner, Stephen Jolley, Peter Kjellerstedt, Randy MacLeod, Armin
Kuster, Jan-Simon Möller, Joshua Watt, Richard Elberger, Scott Murray,
Steve Sakoman, Richard Purdie, Saul Wold, Tim Orling, Alejandro Hernandez,
Bruce Ashfield, Denys Dmytriyenko, Jon Mason, Ross Burton, Trevor Gamblin

== project status ==
- now in feature freeze for 3.4 (honister)
- read-only prserv and switch to asyncio merged
- rust merge is problematic (issues with uninative), will need to be fixed in
next day or two to make it into 3.4
- glibc 2.34 causes significant issues for pseudo, this will get worse as more
host distros upgrade
- tune file refactorization merged
- still hoping to get some sbom stuff into 3.4

== discussion ==
RP: we’re now at feature freeze

RP: the asyncio stuff is finally working, thanks Scott

RP: the news isn’t so good with rust - there’s some weird uninative issue
(something to do with the linker relocations that we do). we were seeing
issues on debian 8, but it looks like we can reproduce that issue by
using the buildtools’ extended tarball as the compiler, which also
provides its own libc, which then seems to cause the problems. i could
get rid of the relocations that uninative causes, but at a cost of it not
working with the eSDK, but i decided to ignore that. but even if we do
that there’s always the relocation issue with the buildtools tarball
which we can’t avoid. for a while i could reproduce reliably, but then
it stopped and i can’t reproduce anymore
Randy: i tried reproducing but couldn't. my impression is that the rust
community is happy with meta-rust and use it for specific use-cases but
they don’t go beyond that very much (and therefore aren’t seeing
issues). even if we fixed the things you call blockers, i’d still call
it beta quality for oe-core if we merge it. do you want to merge it now
(as beta quality) or wait for the next window?
RP: there’s no winning scenario. if we merge it then i’m signing myself
up to maintain and fix it (esp before release). on the other hand if we
push it out then we’ll be in feature freeze and nobody will pay any
attention to it until later, then other things will bump its priority
down. i can see that there are some open issues dating back to 2016, that
obviously nobody cares much about, so pushing it out isn’t going to
change anything.
Randy: not having rust in is holding back a bunch of things, but i,
relatively, don’t know rust very well and without the rust community’s
help i don’t know how to move this forward. ideally someone with rust
experience could step up; maybe ARM?
Ross: we’d like to see it in core, we’re using it but with meta-rust so
we’re happy with it so far. my preference would be to hone it and push
it early in the next release cycle
Randy: schedules are dancing around, so we’ll try to get things moving along

RP: the pseudo glibc problem has me scared. any distro that upgrades to
glibc 2.34 (natively) will break. we have a ticking timebomb, and it was
discovered by our toolchain testing (thanks Ross)
RP: we make interesting assumptions with uninative and pseudo. we end up with
host tools that are linked, potentially, against a newer glibc, therefore
pseudo has to run as an LD_PRELOAD against multiple libc versions, so if
it links against a newer one but then has to run against an older one it
breaks with symbol location problems. we’ve had these issues before, and
we’ve implemented various fixes. libpseudo only links against libdl and
libpthread and we can’t get rid of those things (libdl because that’s
how it works (fundamentally loading libraries dynamically), and threads
because of the mutex that we use for locking). the release can’t go out
if, when people upgrade their host systems, it’s going to break; badly.
we’ve tried every technique that we’ve tried before and then some. in
2.34 all the symbols are merged back into the main library, so there are
no libpthread symbols, it’s all part of libc.so. in the past we’ve
been able to link against uninative 2.33 (libdl and libpthread) and then
link pseudo-native against those binaries. thereby force-linking against
older versions using the newer glibc headers (which is horrible). what
worries me is i’m basically the only one paying attention; i don’t
even have anyone to bounce ideas off of or talk to about it. so we have a
solution, it is horrendous, but it’s the only thing we’ve got right
now. so if there’s anyone who knows about weak linking or strong linking
or mutex locks without pthreads i’d like to talk to them.
JPEW: would you be opposed to making the direct kernel call to do the locking?
that would bypass pthreads
RP: i’m not averse to it, you mean the futex calls?
JPEW: yes
RP:  i’m not opposed, but i don’t think it’s as simple as making direct
calls to the kernel. i read up on it but decided implementing our own
locks wasn’t quite the direction i wanted to take. the number of ways to
get this wrong is… interesting. 
JPEW: i know the futex call does a million things, and that’s one of the
problems with it. i wonder if it would be possible to look at the pthreads
mutex code and copy the parts that deal with futex?
RP: i did think of doing that; just distilling the pthreads code into what
we need. we just need a very simple lock so it might be possible. may be
something we need to look at
PeterK: wouldn’t you still need to link against libdl
RP: yes, but the scary stuff that goes on is in pthreads (headers and
declarations). the libdl stuff is 3 function calls that are plain; no
dependencies, no crazy symbols, etc. long term, ideally, we’d get rid of
the libpthread dependency, then libdl should be comparatively simpler
TrevorW: i could take a stab at it, i’ve done dynamic library things before:
loading a library, looking for a symbol, doing one thing or another based
on whether it’s found
RP: it’s more complicated than that. what they’ve done in libc is
there’s now a libdl with weak globbing symbols that redirect the
previous symbols back to libc, so you only get a libc linkage. i haven’t
worked out how you’d force it to link to the libdl (which you have to
do if you run against an older binary). specifying versions is one thing
(easy to do), specifying the library… there’s no way to specify
the library, it’s hard-coded at link time… as far as i can tell.
the other viable solution (instead of the current one which is to use
an older libc and force the link) my other plan was to create a dummy
binary to link against that would put the symbols in the right place. so
we could just take the linker and generate a specially-crafted binary,
and then use it in the linking process to force libpseudo to look in the
correct form. however i realized that it was probably easier just for
testing purposes to download the glibc 2.33 binaries, rather than try to
create a specially-crafted one. so another thing to potentially look at
(besides those pre-built 2.33) would be a binary that would do the right
things. then we could do it as part of the build process. so that could be
something to look at
TrevorW: my first step would be to reduce the problem to a simple test case
RP: generating a simple test case isn’t so much the issue, it’s
the fact it only breaks when you have a build within a build.
but creating a test case would be easy. there is a bugzilla:
https://bugzilla.yoctoproject.org/show_bug.cgi?id=14521 longer term,
getting rid of the pthread dependency would be helpful then the libdl
thing would be relatively simple.

RP: i’ve talked about the things i know about which are gating m3, is there
anything i don’t know about or haven’t mentioned
JPEW: sbom stuff. it’s pretty hands-off, the only thing it touches that
might affect anyone is the package data extended.
RP: we should try that
JPEW: is everyone okay with it (Saul and Ross)? has anyone had a look at it? is
it ready to go in (i know there are still things to add)
Ross: looks good to me, the only thing i would mention is the path that’s
used, but there’s a fix for that
JPEW: yep
Ross: i haven’t run selftest myself, but i don’t think there’s any
massive problem with what’s there now
Saul: i agree with Ross, there is one thing, but we can work around it, so
i’m okay with it going in
RP: Anuj and I have started and killed loads of builds so quickly recently that the
AB is keeling over because it can’t delete things fast enough, so it’s
running out of space
SS: i think i’ve been contributing to that as well this week
JPEW: is it a matter of “rm -fr” being slow
RP: actually when we delete we actually move stuff to a junk area then do the
actual deletion at idle, but there hasn’t been enough idle lately, so
it’s running out of disk space
Randy: is this something that TrevorG should look at? i.e. I/O load too high
meaning builds won’t take place
RP: not sure how we’d go about solving it
TrevorG: i could look at it once i’m done my current stuff
RP: maybe adding a task that runs early in a build that would block the start
of new builds until a certain amount of resources are available
TrevorG: sounds good

JonM: with the last mesa update (2 days ago), anything that doesn’t have
hard float on arm won’t compile. i don’t know if we’re going to need
to have that as a requirement. it tries to do neon regardless of anything
else
RP: is it something they did intentionally, or by mistake?
JonM: according to the mesa build logs, they were trying to speed up their
build times by using neon instructions. this isn’t a problem if
you have semi-modern arm hardware. anything with cortex is going to
have hard float but we’re blowing up on the armv5 stuff because it’s
ancient and we’re intentionally using it for the soft-float
RP: is there something in mesa that we can configure to disable this
JonM: don’t think so, it looks like it’s just checking for arm and then
going ahead and doing it
RP: maybe Ross has a friend or two who we could ask. perhaps ask upstream why
the change was and if we couldn’t at least configure it
TrevorW: curiously enough i do know of at least 1 armv5 soc that does have
hard float (or vfp at least) because it is optional. but the vast majority
of them don’t do hard float. i’m wondering about the pi 0’s and the
pi 1’s, i believe those are armv6.
JonM: the qemu that we’re using has hard float natively
TrevorW: so you’re saying the pi’s shouldn’t be affected?
JonM: probably not. although it would be affected if you had one of those but
purposefully disabled hard float. you could configure yourself into a hole
RP: we should figure out if they did this intentionally or not, because it is
easy to do things like this unintentionally

RP: the tune updates seemed to have gone well
TrevorW: speaking of tunes i did run into one that doesn’t seem happy
(mips32r2el-24kc)
JonM: i could take a look at it

RP: speaking of older platforms, we’re seeing an issue with serial port
emulation on qemuppc that is causing lots of problems. paulg is looking
at it. hopefully we can get a band-aid that will keep the AB happy. i do
wonder how many people are using ppc, but every time i try to remove it i
get lots of pushback. it does show the project is multiplatform

PeterK: i did the conversion to the new override syntax the other day, we
now have a brand new syntax that is used for real overrides and wannabe
overrides (e.g. FILES:${PN} and RDEPENDS:). these look like real overrides
but they aren’t. ${PN} has to be first, but with real overrides the
order doesn't matter. also you can’t say the :append has to be first 
because it has to come after the override-wannabe
RP: i can see what you’re saying because the ordering is important. it’s
not fair to call them wannabe overrides because the code does treat them
as overrides
PeterK: but they, technically, don’t use the override mechanism, so you
can’t change the order of them
RP: you can, it’s just that they get appended to the overrides variable in
a limited context. e.g. when it’s writing the pn-package it will have
${PN} in overrides, when it’s writing the pn-debug it will put the ${PN}
in overrides. so they are used as overrides.
PeterK: yea, but there are a lot of places where you do things like
getvar_foo:${PN} to get these variables
RP: right. it is a compromise. going forward into the future when you do a
getvar_files:${PN}, behind the scenes we put ${PN} in overrides then
fetch that variable. in the future we can get creative and use this
more effectively. i can’t promise you what the future will look like,
but, code-wise, we had painted ourselves into a corner and we had to do
something. so i don’t think they should be considered wannabe-overrides,
they are overrides, they’re just used in a slightly different context
than, say, a machine override. i know what you mean about the :append
being a little bit tricky because i have seen a couple cases where some
code was using the alternative format you alluded to which doesn’t
quite make sense, but sorta does. the nice thing is we can at least now
detect this which gives us more options going forward. this opens up the
possibility to be more creative in the future, but it’s not like i have
a concrete plan yet going forward. in my spare time i have been looking
into the bitbake code, there’s a huge override data variable bitbake
uses globally and it was hard to tell what was a variable and what was an
override (e.g. SRC_URI). so we can move things from global scope to local
scope which will give us a cleaner syntax and make things faster. as a
worst case, even if there were no parsing advantages, it would at least
make the syntax cleaner, which i think is a huge win.
TrevorW: any plans to do the corner cases, e.g. layer.conf. these might not be
overrides in the code, technically, but conceptually they are overrides
RP: i do have a branch where i played with this (making layer.conf variables
overrides). there are some interesting side effects. yes, they do look
like overrides, but they aren’t ever used as overrides, which is why
they weren’t converted, and there would be problems if some of them were
converted because of the way they get used. the nice thing is that it is
a very specific namespace. the : change was huge and global, but this is
localized so it might not be too bad. maybe in the next release. there
are things to do with collections and things that perhaps could go away.
nobody today knows what a collection is, it’s only something you’d
know if you used bitbake 12 years ago.

SteveS: there are some updates we’re still waiting for on the AB restart
RP: there are patches to swapbot that need to be applied as well. remind
MichaelH. it’ll happen as part of the regular maintenance

JPEW: did you want to enable spdx output on the AB?
RP: we should at least have some tests for it
JPEW: there are a couple knobs to balance the time it takes to generate it vs
the amount of stuff you’re generating
RP: we should at least have something somewhere exercising those
TimO: recipetool/devtool don’t know about the spdx license identifier so
they failed to pick up the right license for a couple things i was looking
at recently
RP: please open a bug

TimO: OEHH tomorrow!

RP: there’s a patch on the list, involving changes to glibc testing that
concerns me. there has always been a dilemma regarding glibc’s testing:
whether to include as a ptest or run with its own test runner? in other
words, run it as a special case. and we already have a handful of special
cases: binutils, gcc, and glibc. they’re big and unwieldy and aren’t
easy to turn into ptests therefore we did run them standalone. the patch
enables turning it into a ptest. so we now have the options of running
it under system emulation using NFS, user-mode emulation, or using
ptests. i’m worried it enables too many options where we have too many
half-working solutions.
Randy: for people who are concerned about the integrity of the toolchains it
sounds like a good idea; more options sounds good
RP: options are good to a point, but if you have two things doing,
effectively, the same thing, then that can be problematic
Randy: is there a way to run the current glibc tests on a target?
RP: yes, not easy to setup, but can be done (give it an IP address etc)
Randy: maybe give it to the doc person (MO)
RP: there might be other higher priority things for docs right now
TrevorW: are the 2 sets of tests orthogonal?
RP: exact same tests, just run different ways

Denys: OEHH tomorrow, Asia-Pacific, 9pm UTC

Randy: do we have a test suite for self-hosted builds?
RP: Ross’s tests for buildtools is close to that
Randy: how do i find that? do you have a keyword?
RP: the way you would run it is: bitbake buildtools-extended-tarball -c
testsdk
Ross: it only builds libc, as it depends on how much of the build works
RP: it’s the closest thing we have, it could be easily extended

Denys: nomination period for OE TSC ends of today

TrevorW: Joshua: was your video posted?
JPEW: not yet, i think it should be soon
Ross: what i read said it should be soon
RP: something else i heard today says it would be soon, if not today. it was a
good presentation, thanks Joshua


Re: Bug in dunfell branch because of commit "sdk: fix relocate symlink failed"?

Steve Sakoman
 

On Thu, Sep 2, 2021 at 7:48 PM Matthias Klein
<matthias.klein@...> wrote:

Hello Steve,

Can you push the commit to the dunfell branch, or do I need to open an official bug somewhere, or what is the correct procedure?
I completed testing and the commit is in the group I sent to the list
for review earlier today. If there are no objections I'll send a pull
request early next week.

Steve

-----Ursprüngliche Nachricht-----
Von: Matthias Klein
Gesendet: Donnerstag, 2. September 2021 16:36
An: Steve Sakoman <steve@...>
Cc: yocto@...
Betreff: AW: [yocto] Bug in dunfell branch because of commit "sdk: fix relocate symlink failed"?

Hello Steve,

yes, that commit resolves the issue.

Best regards,
Matthias

-----Ursprüngliche Nachricht-----
Von: Steve Sakoman <steve@...>
Gesendet: Donnerstag, 2. September 2021 16:09
An: Matthias Klein <matthias.klein@...>
Cc: yocto@...
Betreff: Re: [yocto] Bug in dunfell branch because of commit "sdk: fix relocate symlink failed"?

On Thu, Sep 2, 2021 at 2:55 AM Matthias Klein <matthias.klein@...> wrote:

Hello,

the following commit needs the variable SDK_BUILD_PATH which doesn't
seem to exist in the dunfell branch:
https://git.yoctoproject.org/cgit/cgit.cgi/poky/commit/meta/files/toolchain-shar-relocate.sh?h=dunfell&id=d6f40be29bf56a835f5825692a22365f04aeb6c3
This leads to countless messages being displayed during the installation:

sed: -e expression #1, char 0: no previous regular expression
It appears that dunfell also should have:
https://git.openembedded.org/openembedded-core/commit/?id=bc4ee5453560dcefc4a4ecc5657df5cc1666e153

Could you see if this resolves the issue?

Steve

Shouldn't the process be aborted with an error message at the first problem?

Many greetings,
Matthias




PXE boot a yocto installed image and use it as install yocto on system

msg board
 

Hello,

I usually burn the hddimg onto a USB flash drive and use that drive to install Yocto on my systems. Now we have a few systems deployed which are non-Yocto. They can PXE boot. I wanted to use PXE to upgrade those systems to run Yocto. I am able to PXE boot a system with bzImage as the kernel and a normal NFS-mounted filesystem, but I have not found a way to PXE boot an installer image so that I can use it to install Yocto on these systems.

Any ideas?
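One untested sketch of doing that (file names below are placeholders): extract the kernel and the installer initrd from the existing hddimg, serve them over TFTP, and reuse the kernel command line from the install entry in the syslinux.cfg inside that hddimg, e.g. in pxelinux.cfg/default:

DEFAULT install
LABEL install
    KERNEL bzImage
    APPEND initrd=installer-initrd.img <arguments copied from the hddimg's install entry>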


Re: Bug in dunfell branch because of commit "sdk: fix relocate symlink failed"?

Matthias Klein
 

Hello Steve,

Can you push the commit to the dunfell branch, or do I need to open an official bug somewhere, or what is the correct procedure?

Best regards,
Matthias

-----Ursprüngliche Nachricht-----
Von: Matthias Klein
Gesendet: Donnerstag, 2. September 2021 16:36
An: Steve Sakoman <steve@...>
Cc: yocto@...
Betreff: AW: [yocto] Bug in dunfell branch because of commit "sdk: fix relocate symlink failed"?

Hello Steve,

yes, that commit resolves the issue.

Best regards,
Matthias

-----Ursprüngliche Nachricht-----
Von: Steve Sakoman <steve@...>
Gesendet: Donnerstag, 2. September 2021 16:09
An: Matthias Klein <matthias.klein@...>
Cc: yocto@...
Betreff: Re: [yocto] Bug in dunfell branch because of commit "sdk: fix relocate symlink failed"?

On Thu, Sep 2, 2021 at 2:55 AM Matthias Klein <matthias.klein@...> wrote:

Hello,

the following commit needs the variable SDK_BUILD_PATH which doesn't
seem to exist in the dunfell branch:
https://git.yoctoproject.org/cgit/cgit.cgi/poky/commit/meta/files/toolchain-shar-relocate.sh?h=dunfell&id=d6f40be29bf56a835f5825692a22365f04aeb6c3
This leads to countless messages being displayed during the installation:

sed: -e expression #1, char 0: no previous regular expression
It appears that dunfell also should have:
https://git.openembedded.org/openembedded-core/commit/?id=bc4ee5453560dcefc4a4ecc5657df5cc1666e153

Could you see if this resolves the issue?

Steve

Shouldn't the process be aborted with an error message at the first problem?

Many greetings,
Matthias




Minutes: Yocto Project Weekly Triage Meeting 9/2/2021

Trevor Gamblin
 

Wiki: https://wiki.yoctoproject.org/wiki/Bug_Triage

Attendees: Alex, Bruce, Joshua, Randy, Richard, Ross, Saul, Stephen, Steve, Tim, Trevor

ARs:

N/A

Notes:

- (carried over) Steve encountered build failures such as the one in https://errors.yoctoproject.org/Errors/Details/593109/ when attempting to run dunfell builds with the PARALLEL_MAKE load averaging added. WR is testing/investigating on internal Autobuilder instance - Trevor is still planning on looking into this!

Medium+ 3.4 Unassigned Enhancements/Bugs: 77 (No change)

Medium+ 3.99 Unassigned Enhancements/Bugs: 38 (No change)

AB-INT Bugs: 48 (No change)


Re: Bug in dunfell branch because of commit "sdk: fix relocate symlink failed"?

Matthias Klein
 

Hello Steve,

yes, that commit resolves the issue.

Best regards,
Matthias

-----Ursprüngliche Nachricht-----
Von: Steve Sakoman <steve@...>
Gesendet: Donnerstag, 2. September 2021 16:09
An: Matthias Klein <matthias.klein@...>
Cc: yocto@...
Betreff: Re: [yocto] Bug in dunfell branch because of commit "sdk: fix relocate symlink failed"?

On Thu, Sep 2, 2021 at 2:55 AM Matthias Klein <matthias.klein@...> wrote:

Hello,

the following commit needs the variable SDK_BUILD_PATH which doesn't
seem to exist in the dunfell branch:
https://git.yoctoproject.org/cgit/cgit.cgi/poky/commit/meta/files/toolchain-shar-relocate.sh?h=dunfell&id=d6f40be29bf56a835f5825692a22365f04aeb6c3
This leads to countless messages being displayed during the installation:

sed: -e expression #1, char 0: no previous regular expression
It appears that dunfell also should have:
https://git.openembedded.org/openembedded-core/commit/?id=bc4ee5453560dcefc4a4ecc5657df5cc1666e153

Could you see if this resolves the issue?

Steve

Shouldn't the process be aborted with an error message at the first problem?

Many greetings,
Matthias




Re: Bug in dunfell branch because of commit "sdk: fix relocate symlink failed"?

Steve Sakoman
 

On Thu, Sep 2, 2021 at 2:55 AM Matthias Klein
<matthias.klein@...> wrote:

Hello,

the following commit needs the variable SDK_BUILD_PATH which doesn't seem to exist in the dunfell branch: https://git.yoctoproject.org/cgit/cgit.cgi/poky/commit/meta/files/toolchain-shar-relocate.sh?h=dunfell&id=d6f40be29bf56a835f5825692a22365f04aeb6c3
This leads to countless messages being displayed during the installation:

sed: -e expression #1, char 0: no previous regular expression
It appears that dunfell also should have:
https://git.openembedded.org/openembedded-core/commit/?id=bc4ee5453560dcefc4a4ecc5657df5cc1666e153

Could you see if this resolves the issue?

Steve

Shouldn't the process be aborted with an error message at the first problem?

Many greetings,
Matthias




Bug in dunfell branch because of commit "sdk: fix relocate symlink failed"?

Matthias Klein
 

Hello,

the following commit needs the variable SDK_BUILD_PATH which doesn't seem to exist in the dunfell branch: https://git.yoctoproject.org/cgit/cgit.cgi/poky/commit/meta/files/toolchain-shar-relocate.sh?h=dunfell&id=d6f40be29bf56a835f5825692a22365f04aeb6c3
This leads to countless messages being displayed during the installation:

sed: -e expression #1, char 0: no previous regular expression
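
For what it's worth, that sed message is what an empty search pattern produces, which is consistent with SDK_BUILD_PATH expanding to nothing (an illustrative command only, not the actual script):

$ echo /opt/poky/3.1 | sed -e "s::/new/path:g"
sed: -e expression #1, char 0: no previous regular expression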

Shouldn't the process be aborted with an error message at the first problem?

Many greetings,
Matthias


[PATCH][gatesgarth] config.json: drop redundant meta-kernel mentions

Ross Burton <ross@...>
 

Signed-off-by: Ross Burton <ross.burton@...>
---
config.json | 3 +--
1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/config.json b/config.json
index 5dda653..bee5350 100644
--- a/config.json
+++ b/config.json
@@ -277,9 +277,8 @@
"TEMPLATE" : "ltp-qemu"
},
"meta-arm" : {
- "NEEDREPOS" : ["poky", "meta-arm", "meta-kernel"],
+ "NEEDREPOS" : ["poky", "meta-arm"],
"ADDLAYER" : [
- "${BUILDDIR}/../meta-kernel",
"${BUILDDIR}/../meta-arm/meta-arm-toolchain",
"${BUILDDIR}/../meta-arm/meta-arm",
"${BUILDDIR}/../meta-arm/meta-arm-bsp"
-- 
2.25.1
