[opkg-utils PATCH] Makefile: add opkg-feed to UTILS

Alex Stewart
 

* Add the opkg-feed script to UTILS so that it is installed with a `make
install`.

* Clean up the UTILS variable declaration to be a little more diffable.

Signed-off-by: Alex Stewart <alex.stewart@ni.com>
---
Makefile | 17 ++++++++++++++---
1 file changed, 14 insertions(+), 3 deletions(-)

diff --git a/Makefile b/Makefile
index 817a8c1..4049654 100644
--- a/Makefile
+++ b/Makefile
@@ -1,6 +1,17 @@
-UTILS = opkg-build opkg-unbuild opkg-make-index opkg.py opkg-list-fields \
- arfile.py opkg-buildpackage opkg-diff opkg-extract-file opkg-show-deps \
- opkg-compare-indexes update-alternatives
+UTILS = \
+ arfile.py \
+ opkg-build \
+ opkg-buildpackage \
+ opkg-compare-indexes \
+ opkg-diff \
+ opkg-extract-file \
+ opkg-feed \
+ opkg-list-fields \
+ opkg-make-index \
+ opkg-show-deps \
+ opkg-unbuild \
+ opkg.py \
+ update-alternatives

MANPAGES = opkg-build.1

--
2.25.0


Re: master/master-next missing from poky repo?

Armin Kuster
 

Thanks for working on a holiday, Michael.

- armin

On 2/17/20 2:49 PM, Michael Halstead wrote:
The caching issues are resolved as well as a tricky permissions issue. These repositories are operating correctly again.

Please let me know if issues persist.

-- 
Michael Halstead

On Sat, Feb 15, 2020 at 2:44 AM Richard Purdie <richard.purdie@...> wrote:
On Sat, 2020-02-15 at 11:41 +0100, Alexander Kanavin wrote:
> Something is wrong, as master and master-next are missing here?
> http://git.yoctoproject.org/cgit/cgit.cgi/poky/

There is something wrong with the http caching on the git server. The
branches are there, they're just not appearing immediately in the web
protocol. I know Michael is aware and has been working on it.

You can access them using the git protocol if that helps.

Cheers,

Richard



    


Re: master/master-next missing from poky repo?

Michael Halstead
 

The caching issues are resolved as well as a tricky permissions issue. These repositories are operating correctly again.

Please let me know if issues persist.

-- 
Michael Halstead

On Sat, Feb 15, 2020 at 2:44 AM Richard Purdie <richard.purdie@...> wrote:
On Sat, 2020-02-15 at 11:41 +0100, Alexander Kanavin wrote:
> Something is wrong, as master and master-next are missing here?
> http://git.yoctoproject.org/cgit/cgit.cgi/poky/

There is something wrong with the http caching on the git server. The
branches are there, they're just not appearing immediately in the web
protocol. I know Michael is aware and has been working on it.

You can access them using the git protocol if that helps.

Cheers,

Richard


Re: Creating a build system which can scale. #yocto

Richard Purdie
 

On Mon, 2020-02-17 at 04:27 -0800, philip.lewis@domino-uk.com wrote:
> Something using the built-in cache mirror in Yocto–there are a few
> ways it can do this, as it’s essentially a file share somewhere.
> https://pelux.io/2017/06/19/How-to-create-a-shared-sstate-dir.html
> for an example shows how to share it via NFS, but you can also use http or ftp.
Sharing sstate between the workers is the obvious win, as is rm_work to
reduce individual build sizes.
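
For concreteness, a minimal local.conf sketch of that shared-cache setup (the
mount point is a placeholder, assuming the NFS share is already mounted on
every worker):

  # hypothetical NFS mount shared by all build workers
  SSTATE_DIR = "/mnt/yocto-cache/sstate-cache"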

> Having a single cache largely solves the storage issue as there is
> only one cache, so having solved that issue, it introduces a few more
> questions and constraints:

> How do we manage the size of the cache?
> There’s no built-in expiry mechanism I could find. This means we’d
> probably have to create something ourselves (parse access logs from
> the server hosting the cache and apply a garbage collector process).
The system is set up to "touch" files it uses if it has write access, so
you can tell which artefacts are being used.
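
A minimal sketch of such a pruning job, assuming the cache lives at
/srv/sstate-cache (a hypothetical path) and that the filesystem keeps atime
up to date (i.e. is not mounted noatime):

  # delete sstate objects that have not been read in the last 30 days
  find /srv/sstate-cache -type f -atime +30 -delete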

> How/When do we update the cache?
> All environments contributing to the cache need to be identical (that
> ansible playbook just grabs the latest of everything) to avoid subtle
> differences in the build artefacts depending on which environment
> populated the cache.
All environments contributing to the cache don't have to be identical; we
aim to build reproducible binaries regardless of the host OS.

Obviously you reduce risk by doing so but I just wanted to be clear
that we have protection in place for this and sstate does support it.

> How much time will fetching the cache from a remote server add to the
> build?
Mostly depends on your interconnecting network speed.

Some mentioned NFS; we do support NFS for sstate, and our autobuilders make
extensive use of it.

Cheers,

Richard


Re: Creating a build system which can scale. #yocto

Rudolf J Streif
 

Hi Philip,

We have done this with many Yocto Project builds using AWS EC2, Docker, Gitlab and Artifactory.

Rest inlined below.

:rjs

On 2/17/20 4:27 AM, philip.lewis@... wrote:
Hi,

I'm looking for some advice about the best way to implement a build environment in the cloud for multiple dev teams which will scale as the number of dev teams grow.

Our devs are saying:

What do we want?

To scale our server-based build infrastructure, so that engineers can build branches using the same infrastructure that produces a releasable artefact, before pushing it into develop. As much automation of this as possible is desired..

It can be configured that any check in to branches can trigger a build. That is what we do with developers on their own branches as well as with the master branches. The master branch is the integration branch. Then there are release and development branches but they all use the same build environment.


Blocker: Can’t just scale current system – can’t keep throwing more hardware at it, particularly storage. The main contributor to storage requirements is using a local cache in each build workspace and there will be one workspace for each branch, per Jenkins agent: 3 teams x 10 branches per team x 70Gb per branch/workspace x number of build agents (let say 5) = 10Tb. As you can see this doesn’t scale well as we add branches, teams or build agents. Most of this 10Tb is the caches in each workspace, where most of the contents of each individual cache is identical.

A possible solution:

Disclaimer/admission: I’ve not really researched/considered _all_ possible solutions to the problem above, I just started searching and reading and came up with/felt led towards this. I think there is some value in spending some of the meeting exploring other options to see if anything sounds better (for what definition of better?).


We do this with Gitlab runners and working instances on EC2. Since it can take some time to spin up a new instance we hold a certain amount running during business hours. If more are needed more are spun up transparently. Of course this costs money in particular when large instances with a lot of memory and a lot of vCPUs are used. Instances can automatically be terminated if there is overcapacity. There are other cost control options. Docker images inside the instances provide the controlled build environment.

Something using the built-in cache mirror in Yocto–there are a few ways it can do this, as it’s essentially a file share somewhere. https://pelux.io/2017/06/19/How-to-create-a-shared-sstate-dir.html for an example shows how to share it via NFS, but you can also use http or ftp.

EC2 elastic storage works via NFS (pretty straight forward). Artifactory can be used too.

Having a single cache largely solves the storage issue as there is only one cache, so having solved that issue, it introduces a few more questions and constraints:

  1. How do we manage the size of the cache?

There’s no built-in expiry mechanism I could find. This means we’d probably have to create something ourselves (parse access logs from the server hosting the cache and apply a garbage collector process).

You have to prune it yourself. Typically based on age and when the development moves to a new release of YP.
  1. How/When do we update the cache?

All environments contributing to the cache need to be identical (that ansible playbook just grabs the latest of everything) to avoid subtle differences in the build artefacts depending on which environment populated the cache.

We do this with the release builds only.
  1. How much time will fetching the cache from a remote server add to the build?

I think this is probably something we will have to just live with, but if it’s all in the cloud the network speed between VMs is fast.

There is no generic answer to this. It depends on the storage and of course the networking infrastructure.

This shared cache solution removes the per-agent cost on storage, and also – to a varying extent – the per-branch costs (assuming that you’re not working on something at the top/start of the dependency tree) from the equation above.

 

Yes, that is the idea. Since the builds are running inside a Docker instance there is a local cache but it will be discarded when the container is discarded and the VM is spun down. Cache misses require additional time but that's the nature of it.

I’d love to see some other ideas as well as I worry I’m missing something easier or more obvious – better.

Any thoughts?
Thanks
Phill

 

 

:rjs


    
-- 
-----
Rudolf J Streif
CEO/CTO ibeeto
+1.855.442.3386 x700


Re: do we have a slack channel for yocto?

Alexander Kanavin
 

There is IRC: #yocto on Freenode.

Alex

On Mon, 17 Feb 2020 at 17:07, ILJUN LEE <iljun.lee@...> wrote:

Hi All,

 

Do we have a slack channel for yocto?

 

Thanks,

iljun


ILJUN LEE | Principal S/W Engineer II | 720-492-7975 (M)

CTECII,  Suite B - 8560 Upland Drive, Englewood, CO 80112

iljun.lee@...



do we have a slack channel for yocto?

ILJUN LEE
 

Hi All,

 

Do we have a slack channel for yocto?

 

Thanks,

iljun


ILJUN LEE | Principal S/W Engineer II | 720-492-7975 (M)

CTECII,  Suite B - 8560 Upland Drive, Englewood, CO 80112

iljun.lee@...



Re: Yocto [thud], [zeus] do_fetch and do_unpack failures with offline/online svn build! #yocto #python

Mikko Rapeli
 

On Mon, Feb 17, 2020 at 01:35:02PM +0000, Georgi Georgiev via Lists.Yoctoproject.Org wrote:
> Hi Mikko,
> Your patch is upstreamed and... it is actually released. If you see the error output you will see your codeline. I decided to try with your old version but the result is the same. Can you say something about the previous issue I have with the offline build?
> It seems svn is not very well supported by Yocto.
Ah, indeed.

svn is supported and for me it's working quite well. It has some issues
like download cache locking when multiple recipes include the same URLs.

Where possible, I've switched to using the http fetcher with tarballs for the
large binary blobs that were in svn, but then one needs to remember to include
version details in the file names...
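
As a rough illustration of that pattern (the server name, path and checksum
below are placeholders, not anything from this thread), the recipe then
fetches a versioned tarball instead of an svn URL:

  # hypothetical example: export the svn tree to a versioned tarball, serve it over http
  SRC_URI = "http://fileserver.example.com/exports/${BPN}-${PV}.tar.gz"
  SRC_URI[sha256sum] = "<sha256 of the exported tarball>"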

-Mikko


Re: Yocto [thud], [zeus] do_fetch and do_unpack failures with offline/online svn build! #yocto #python

Georgi Georgiev
 

Hi Mikko,
Your patch is upstreamed and... it is actually released. If you see the error output you will see your codeline. I decided to try with your old version but the result is the same. Can you say something about the previous issue I have with the offline build?
It seems svn is not very well supported by Yocto.

Cordially
Georgi

-----Original Message-----
From: yocto@lists.yoctoproject.org [mailto:yocto@lists.yoctoproject.org] On Behalf Of Mikko Rapeli
Sent: Monday, February 17, 2020 3:04 PM
To: Georgi Georgiev <Georgi.Georgiev@woodward.com>
Cc: raj.khem@gmail.com; yocto@lists.yoctoproject.org
Subject: [EXTERNAL] Re: [yocto] Yocto [thud], [zeus] do_fetch and do_unpack failures with offline/online svn build! #yocto #python

On Mon, Feb 17, 2020 at 12:56:13PM +0000, Georgi Georgiev via Lists.Yoctoproject.Org wrote:
Another serious issue with the svn fetcher, always occurring when doing incremental builds. I am putting it here because I think both issues are related. Please suggest what to do. The error we get is shown below. This time it happens always. The only workaround is to clean and rebuild. As soon as we make a modification to the recipe (a PV change only), it fails:

ERROR: ww-pacman-121204-r0 do_fetch: Error executing a python function in exec_python_func() autogenerated:

The stack trace of python calls that resulted in this exception/failure was:
File: 'exec_python_func() autogenerated', lineno: 2, function: <module>
0001:
*** 0002:base_do_fetch(d)
0003:
File: '/data/home/w23698/projects/proj/build/../sources/poky/meta/classes/base.bbclass', lineno: 163, function: base_do_fetch
0159: return
0160:
0161: try:
0162: fetcher = bb.fetch2.Fetch(src_uri, d)
*** 0163: fetcher.download()
0164: except bb.fetch2.BBFetchException as e:
0165: bb.fatal(str(e))
0166:}
0167:
File: '/data/home/w23698/projects/proj/sources/poky/bitbake/lib/bb/fetch2/__init__.py', lineno: 1678, function: download
1674: try:
1675: if not trusted_network(self.d, ud.url):
1676: raise UntrustedUrl(ud.url)
1677: logger.debug(1, "Trying Upstream")
*** 1678: m.download(ud, self.d)
1679: if hasattr(m, "build_mirror_data"):
1680: m.build_mirror_data(ud, self.d)
1681: localpath = ud.localpath
1682: # early checksum verify, so that if checksum mismatched,
File: '/data/home/w23698/projects/proj/sources/poky/bitbake/lib/bb/fetch2/svn.py', lineno: 150, function: download
0146: if not ("externals" in ud.parm and ud.parm["externals"] == "nowarn"):
0147: # Warn the user if this had externals (won't catch them all)
0148: output = runfetchcmd("svn propget svn:externals || true", d, workdir=ud.moddir)
0149: if output:
*** 0150: if "--ignore-externals" in svnfetchcmd.split():
0151: bb.warn("%s contains svn:externals." % ud.url)
0152: bb.warn("These should be added to the recipe SRC_URI as necessary.")
0153: bb.warn("svn fetch has ignored externals:\n%s" % output)
0154: bb.warn("To disable this warning add ';externals=nowarn' to the url.")
Exception: UnboundLocalError: local variable 'svnfetchcmd' referenced
before assignment

ERROR: Logfile of failure stored in:
/data/home/w23698/projects/proj/build/tmp/work/cortexa9hf-neon-poky-li
nux-gnueabi/ww-pacman/121204-r0/temp/log.do_fetch.18853
Hmm. Rings a bell. I run builds with a local non-upstreamed patch:

Author: Mikko Rapeli <mikko.rapeli@bmw.de>
AuthorDate: Fri Sep 6 14:15:20 2019 +0200
Commit: Mikko Rapeli <mikko.rapeli@bmw.de>
CommitDate: Fri Sep 6 14:25:15 2019 +0200

svn fetcher: allow "svn propget svn:externals" to fail

Not all servers and repositories have this property set
which results in failures like this when actual svn checkout
command succeeded:

svn: warning: W200017: Property 'svn:externals' not found on ''
svn: E200000: A problem occurred; see other errors for details

Signed-off-by: Mikko Rapeli <mikko.rapeli@bmw.de>

--- a/bitbake/lib/bb/fetch2/svn.py
+++ b/bitbake/lib/bb/fetch2/svn.py
@@ -145,7 +145,7 @@ class Svn(FetchMethod):

if not ("externals" in ud.parm and ud.parm["externals"] == "nowarn"):
# Warn the user if this had externals (won't catch them all)
- output = runfetchcmd("svn propget svn:externals", d, workdir=ud.moddir)
+ output = runfetchcmd("svn propget svn:externals ||
+ true", d, workdir=ud.moddir)
if output:
if "--ignore-externals" in svnfetchcmd.split():
bb.warn("%s contains svn:externals." % ud.url)

I think this would help.

Cheers,

-Mikko


-----Original Message-----
From: Georgi Georgiev
Sent: Wednesday, February 05, 2020 6:57 PM
To: Georgi Georgiev <Georgi.Georgiev@woodward.com>; Khem Raj
<raj.khem@gmail.com>; yocto@lists.yoctoproject.org
Subject: Re: [yocto] Yocto [thud], [zeus] do_fetch and do_unpack
failures with offline/online svn build! #yocto #python

Sorry Khem,
With esc character '\' before & it can't take the full path. So briefly:

Yocto build with char '\' before & in SRC_URI:
online - OK
offline - Error - svn: E170013: Unable to connect to a repository...
Yocto build without '\' in SRC_URI:
online and offline same error - do_unpack: Unpack failure for URL -
"package".tar.gz is present in DL_DIR

Thanks



-----Original Message-----
From: yocto@lists.yoctoproject.org
[mailto:yocto@lists.yoctoproject.org] On Behalf Of Georgi Georgiev via
Lists.Yoctoproject.Org
Sent: Wednesday, February 05, 2020 5:26 PM
To: Khem Raj <raj.khem@gmail.com>; yocto@lists.yoctoproject.org
Cc: yocto@lists.yoctoproject.org
Subject: Re: [EXTERNAL] Re: [yocto] Yocto [thud], [zeus] do_fetch and
do_unpack failures with offline/online svn build! #yocto #python

Hi Khem,

Yes, no issues with that. When I am connected to network it fetches the code with and without escape character before & in the path.
svn co " https://urldefense.proofpoint.com/v2/url?u=http-3A__cocosubversion_svn_Embedded_Valve-26Actuator_DVPII_trunk_SOCPACManEnvEngKeys&d=DwIGaQ&c=y6L7g950KfMp92YmLM0QlMdXcRn6b-Cq4AApnSJOenE&r=kHtJrQGzfH1ZmfsNkJpYuH-jtNpv8yMDkqAmsRP99mc&m=vXia-xlcxuLr2fSmBogZpMIaOntXLKWv3mkxcRgtUc4&s=UYH9lueDIeMGZndMxgbZgb76A5EfqN-MR58DBv1iriI&e= " works fine and fetches the package.

Georgi

-----Original Message-----
From: Khem Raj [mailto:raj.khem@gmail.com]
Sent: Friday, January 31, 2020 8:33 PM
To: Georgi Georgiev <Georgi.Georgiev@woodward.com>;
yocto@lists.yoctoproject.org
Subject: [EXTERNAL] Re: [yocto] Yocto [thud], [zeus] do_fetch and
do_unpack failures with offline/online svn build! #yocto #python

On 1/31/20 4:02 AM, georgi.georgiev via Lists.Yoctoproject.Org wrote:
Hello Community,
This is the third time I am asking for support on this issue. This
time I decided to use the web form.
In our project we have a requirement to be able to build the project
offline. E.g. on the field without any network connection. When we
are connected with the recipe mentioned below we don't have issues:

svn: E170013: Unable to connect to a repository at URL 'https://urldefense.proofpoint.com/v2/url?u=http-3A__cocosubversion_svn_Embedded_Valve-26Actuator_DVPII_trunk_SOCPACManEnvEngKeys_trunk&d=DwIGaQ&c=y6L7g950KfMp92YmLM0QlMdXcRn6b-Cq4AApnSJOenE&r=kHtJrQGzfH1ZmfsNkJpYuH-jtNpv8yMDkqAmsRP99mc&m=ocx6BoVSrDQGWUvUUVvgKtuJbT7eH7jFSjCy1Ys73Vw&s=OTunnmsi_tCUt3JytmU2Hs7i7Xnqhl8-2CMsqmzIl90&e= '
svn: E670003: Temporary failure in name resolution

Can you try checking out the repo outside of the fetcher and see if the machine can fetch the URL you are using in SRC_URI?

*SUMMARY = "PACMan - Parameter And Configuration MANager"* *LICENSE
=
"CLOSED"* *inherit systemd useradd* *REQUIRED_DISTRO_FEATURES =
"systemd"* *# SVN revision* *PV = "121026"* *# Name of SVN project*
*PACMAN_PROJ_NAME="SOCPACManEnvEngKeys"*
*SRC_URI =
"svn://cocosubversion/svn/Embedded/Valve%5C&Actuator/DVPII/trunk/$%7
BP
ACMAN_PROJ_NAME%7D;module=trunk;protocol=http;externals=allowed;rev=
$%
7BPV%7D%22* *SRC_URI += "file://ww-authpacman.service%22* *SRC_URI
+=
"file://ww-pacman.service%22* *S = "${WORKDIR}/trunk/"* *# ${PN}-sys:
content related to system, which goes to base rootfs (only .service
file and symlinks)* *# ${PN}:      real content which may go to
separate partition* *PACKAGES =+ " ${PN}-sys"* .........
When disconnect the network, erase sstate-cache, cache and tmp I see
log file attached (log.do_fetch.32757) and the following output:

*ERROR: ww-pacman-121026-r0 do_fetch: Fetcher failure: Fetch command
export PSEUDO_DISABLED=1; unset _PYTHON_SYSCONFIGDATA_NAME; export
DBUS_SESSION_BUS_ADDRESS="unix:path=/run/user/1000/bus"; export
SSH_AGENT_PID="11412"; export
SSH_AUTH_SOCK="/run/user/1000/keyring/ssh"; export
PATH="/home/w23698/projects/proj_dvp2/build_dvp2/tmp/sysroots-uninat
iv
e/x86_64-linux/usr/bin:/home/w23698/projects/proj_dvp2/sources/poky/
sc
ripts:/home/w23698/projects/proj_dvp2/build_dvp2/tmp/work/cortexa9hf
-n
eon-poky-linux-gnueabi/ww-pacman/121026-r0/recipe-sysroot-native/usr
/b
in/arm-poky-linux-gnueabi:/home/w23698/projects/proj_dvp2/build_dvp2
/t
mp/work/cortexa9hf-neon-poky-linux-gnueabi/ww-pacman/121026-r0/recip
e-
sysroot/usr/bin/crossscripts:/home/w23698/projects/proj_dvp2/build_d
vp
2/tmp/work/cortexa9hf-neon-poky-linux-gnueabi/ww-pacman/121026-r0/re
ci
pe-sysroot-native/usr/sbin:/home/w23698/projects/proj_dvp2/build_dvp
2/
tmp/work/cortexa9hf-neon-poky-linux-gnueabi/ww-pacman/121026-r0/reci
pe
-sysroot-native/usr/bin:/home/w23698/projects/proj_dvp2/build_dvp2/t
mp
/work/cortexa9hf-neon-poky-linux-gnueabi/ww-pacman/121026-r0/recipe-
sy
sroot-native/sbin:/home/w23698/projects/proj_dvp2/build_dvp2/tmp/wor
k/
cortexa9hf-neon-poky-linux-gnueabi/ww-pacman/121026-r0/recipe-sysroo
t-
native/bin:/home/w23698/projects/proj_dvp2/sources/poky/bitbake/bin:
/h ome/w23698/projects/proj_dvp2/build_dvp2/tmp/hosttools";
export HOME="/home/w23698"; /usr/bin/env svn --non-interactive
--trust-server-cert update --no-auth-cache -r 121026 failed with
exit code 1, output:* *Updating '.':*
*svn: E170013: Unable to connect to a repository at URL
'https://urldefense.proofpoint.com/v2/url?u=http-3A__cocosubversion_
sv
n_Embedded_Valve-26Actuator_DVPII_trunk_SOCPACManEnvEngKeys_trunk-27
-2
A&d=DwIGaQ&c=y6L7g950KfMp92YmLM0QlMdXcRn6b-Cq4AApnSJOenE&r=kHtJrQGzf
H1
ZmfsNkJpYuH-jtNpv8yMDkqAmsRP99mc&m=ocx6BoVSrDQGWUvUUVvgKtuJbT7eH7jFS
jC y1Ys73Vw&s=Ffv1JU1QYBh4g49fmoLnsDSFgMMBc_5MbOpy59QUS18&e=
*svn: E670003: Temporary failure in name resolution*
*ERROR: ww-pacman-121026-r0 do_fetch: Fetcher failure for URL:
'svn://cocosubversion/svn/Embedded/Valve%5C&Actuator/DVPII/trunk/SOCPACManEnvEngKeys;module=trunk;protocol=http;externals=allowed;rev=121026'.
Unable to fetch URL from any source.*
*ERROR: Logfile of failure stored in:
/home/w23698/projects/proj_dvp2/build_dvp2/tmp/work/cortexa9hf-neon-
po
ky-linux-gnueabi/ww-pacman/121026-r0/temp/log.do_fetch.32757*
*ERROR: Task
(/home/w23698/projects/proj_dvp2/build_dvp2/../sources/meta-ww-dvp2/
re
cipes-ww/ww-pacman/ww-pacman.bb:do_fetch)
failed with exit code '1'*

When we remove the '\' character in SRC_URI, e.g. to become:

SRC_URI = "svn://cocosubversion/svn/Embedded/Valve&Actuator/DVPII/trunk/${PACMAN_PROJ_NAME};module=trunk;protocol=http;externals=allowed;rev=${PV}"

In connected and not connected to network do_fetch() passes
successfully but I see one and same error (log.do_unpack.25226) output:

*ERROR: ww-pacman-121026-r0 do_unpack: Unpack failure for URL:
'svn://cocosubversion/svn/Embedded/Valve&Actuator/DVPII/trunk/SOCPACManEnvEngKeys;module=trunk;protocol=http;externals=allowed;rev=121026'.
Unpack command
PATH="/home/w23698/projects/proj_dvp2/build_dvp2/tmp/sysroots-uninative/x86_64-linux/usr/bin:/home/w23698/projects/proj_dvp2/sources/poky/scripts:/home/w23698/projects/proj_dvp2/build_dvp2/tmp/work/cortexa9hf-neon-poky-linux-gnueabi/ww-pacman/121026-r0/recipe-sysroot-native/usr/bin/arm-poky-linux-gnueabi:/home/w23698/projects/proj_dvp2/build_dvp2/tmp/work/cortexa9hf-neon-poky-linux-gnueabi/ww-pacman/121026-r0/recipe-sysroot/usr/bin/crossscripts:/home/w23698/projects/proj_dvp2/build_dvp2/tmp/work/cortexa9hf-neon-poky-linux-gnueabi/ww-pacman/121026-r0/recipe-sysroot-native/usr/sbin:/home/w23698/projects/proj_dvp2/build_dvp2/tmp/work/cortexa9hf-neon-poky-linux-gnueabi/ww-pacman/121026-r0/recipe-sysroot-native/usr/bin:/home/w23698/projects/proj_dvp2/build_dvp2/tmp/work/cortexa9hf-neon-poky-linux-gnueabi/ww-pacman/121026-r0/recipe-sysroot-native/sbin:/home/w23698/projects/proj_dvp2/build_dvp2/tmp/work/cortexa9hf-neon-poky-linux-gnueabi/ww-pacman/121026-r0/recipe-sysroot-native/bin:/home/w23698/projects/proj_dvp2/sources/poky/bitbake/bin:/home/w23698/projects/proj_dvp2/build_dvp2/tmp/hosttools"
tar xz --no-same-owner -f
/home/w23698/projects/proj_dvp2/build_dvp2/downloads/trunk_cocosubve
rs
ion_.svn.Embedded.Valve&Actuator.DVPII.trunk.SOCPACManEnvEngKeys_121
02
6_.tar.gz
failed with return value 127*
*ERROR: Logfile of failure stored in:
/home/w23698/projects/proj_dvp2/build_dvp2/tmp/work/cortexa9hf-neon-
po
ky-linux-gnueabi/ww-pacman/121026-r0/temp/log.do_unpack.25226*
*ERROR: Task
(/home/w23698/projects/proj_dvp2/build_dvp2/../sources/meta-ww-dvp2/
re
cipes-ww/ww-pacman/ww-pacman.bb:do_unpack)
failed with exit code '1'
*
I don't know if this matters, but the build machine is a bare-metal Ubuntu
18.04.3 LTS. In all cases the packed tar.gz remains in the downloads
directory with the same name!

Cordially,
Georgi






Re: Creating a build system which can scale. #yocto

Thomas Goodwin
 

Since Docker was mentioned, I use the community's CROPS containers via Docker in GitLab CI on a shared build server, providing the builders' downloads and sstate caches to the team to accelerate their own builds (these paths are volume-mounted to the runners).  One of the caveats to this approach is that if you use the containers on a shared build host, you should limit the individual builder's bitbake environment in terms of parallelization (PARALLEL_MAKE and the like).  This will prevent individual containers from causing one another to fail by not sharing effectively (yes, you can set GitLab docker runner limits, but those limits are invisible to the container).  The good news is that these variables are in the whitelist, so you do not have to set them in a conf file; exporting them in the build environment is enough, meaning each build runner can be tuned according to the build host executing it.
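
A minimal sketch of that per-runner tuning (the core counts are placeholders;
size them to whatever share of the host each runner is allowed):

  # exported in the runner's build environment before building; both variables
  # are whitelisted, so no conf file change is needed
  export BB_NUMBER_THREADS="4"
  export PARALLEL_MAKE="-j 4"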

I based my tuning on this person's work, https://elinux.org/images/d/d4/Goulart.pdf, a presentation from a few years back at an ELC event. It contains a significant amount of information about project flow and other things that you might also find interesting.

Cheers,

Thomas

On Mon, Feb 17, 2020 at 7:52 AM rpjday@... <rpjday@...> wrote:
On Mon, 17 Feb 2020, Quentin Schulz wrote:

> Hi Philip,
>
> *Very* quick and vague answer as it's not something I'm doing right now.
> I can only give hints to where to look next.
>
> On Mon, Feb 17, 2020 at 04:27:17AM -0800, philip.lewis@... wrote:
> > Hi,
> >
> > I'm looking for some advice about the best way to implement a
> > build environment in the cloud for multiple dev teams which will
> > scale as the number of dev teams grow.
> >
> > Our devs are saying:
> >
> > *What do we want?*
> >
> > To scale our server-based build infrastructure, so that engineers
> > can build branches using the same infrastructure that produces a
> > releasable artefact, before pushing it into develop. As much
> > automation of this as possible is desired..
> >
> > *Blocker* : Can’t just scale current system – can’t keep throwing
> > more hardware at it, particularly storage. The main contributor to
> > storage requirements is using a local cache in each build
> > workspace and there will be one workspace for each branch, per
> > Jenkins agent: 3 teams x 10 branches per team x 70Gb per
> > branch/workspace x number of build agents (let say 5) = 10Tb. As
> > you can see this doesn’t scale well as we add branches, teams or
> > build agents. Most of this 10Tb is the caches in each workspace,
> > where most of the contents of each individual cache is identical.
> >
>
> Have you had a look at INHERIT += "rm_work"? Should get rid of most of
> the space in the work directory (we use this one, tremendous benefit in
> terms of storage space).
>
> c.f.
> https://www.yoctoproject.org/docs/current/mega-manual/mega-manual.html#ref-classes-rm-work

  in addition, you can always override that build-wide setting with
RM_WORK_EXCLUDE if you want to keep generated work from a small set of
recipes for debugging.

rday


Re: Yocto [thud], [zeus] do_fetch and do_unpack failures with offline/online svn build! #yocto #python

Mikko Rapeli
 

On Mon, Feb 17, 2020 at 12:56:13PM +0000, Georgi Georgiev via Lists.Yoctoproject.Org wrote:
Another serious issue with the svn fetcher, always occurring when doing incremental builds. I am putting it here because I think both issues are related. Please suggest what to do. The error we get is shown below. This time it happens always. The only workaround is to clean and rebuild. As soon as we make a modification to the recipe (a PV change only), it fails:

ERROR: ww-pacman-121204-r0 do_fetch: Error executing a python function in exec_python_func() autogenerated:

The stack trace of python calls that resulted in this exception/failure was:
File: 'exec_python_func() autogenerated', lineno: 2, function: <module>
0001:
*** 0002:base_do_fetch(d)
0003:
File: '/data/home/w23698/projects/proj/build/../sources/poky/meta/classes/base.bbclass', lineno: 163, function: base_do_fetch
0159: return
0160:
0161: try:
0162: fetcher = bb.fetch2.Fetch(src_uri, d)
*** 0163: fetcher.download()
0164: except bb.fetch2.BBFetchException as e:
0165: bb.fatal(str(e))
0166:}
0167:
File: '/data/home/w23698/projects/proj/sources/poky/bitbake/lib/bb/fetch2/__init__.py', lineno: 1678, function: download
1674: try:
1675: if not trusted_network(self.d, ud.url):
1676: raise UntrustedUrl(ud.url)
1677: logger.debug(1, "Trying Upstream")
*** 1678: m.download(ud, self.d)
1679: if hasattr(m, "build_mirror_data"):
1680: m.build_mirror_data(ud, self.d)
1681: localpath = ud.localpath
1682: # early checksum verify, so that if checksum mismatched,
File: '/data/home/w23698/projects/proj/sources/poky/bitbake/lib/bb/fetch2/svn.py', lineno: 150, function: download
0146: if not ("externals" in ud.parm and ud.parm["externals"] == "nowarn"):
0147: # Warn the user if this had externals (won't catch them all)
0148: output = runfetchcmd("svn propget svn:externals || true", d, workdir=ud.moddir)
0149: if output:
*** 0150: if "--ignore-externals" in svnfetchcmd.split():
0151: bb.warn("%s contains svn:externals." % ud.url)
0152: bb.warn("These should be added to the recipe SRC_URI as necessary.")
0153: bb.warn("svn fetch has ignored externals:\n%s" % output)
0154: bb.warn("To disable this warning add ';externals=nowarn' to the url.")
Exception: UnboundLocalError: local variable 'svnfetchcmd' referenced before assignment

ERROR: Logfile of failure stored in: /data/home/w23698/projects/proj/build/tmp/work/cortexa9hf-neon-poky-linux-gnueabi/ww-pacman/121204-r0/temp/log.do_fetch.18853
Hmm. Rings a bell. I run builds with a local non-upstreamed patch:

Author: Mikko Rapeli <mikko.rapeli@bmw.de>
AuthorDate: Fri Sep 6 14:15:20 2019 +0200
Commit: Mikko Rapeli <mikko.rapeli@bmw.de>
CommitDate: Fri Sep 6 14:25:15 2019 +0200

svn fetcher: allow "svn propget svn:externals" to fail

Not all servers and repositories have this property set
which results in failures like this when actual svn checkout
command succeeded:

svn: warning: W200017: Property 'svn:externals' not found on ''
svn: E200000: A problem occurred; see other errors for details

Signed-off-by: Mikko Rapeli <mikko.rapeli@bmw.de>

--- a/bitbake/lib/bb/fetch2/svn.py
+++ b/bitbake/lib/bb/fetch2/svn.py
@@ -145,7 +145,7 @@ class Svn(FetchMethod):

if not ("externals" in ud.parm and ud.parm["externals"] == "nowarn"):
# Warn the user if this had externals (won't catch them all)
- output = runfetchcmd("svn propget svn:externals", d, workdir=ud.moddir)
+ output = runfetchcmd("svn propget svn:externals || true", d, workdir=ud.moddir)
if output:
if "--ignore-externals" in svnfetchcmd.split():
bb.warn("%s contains svn:externals." % ud.url)

I think this would help.

Cheers,

-Mikko


-----Original Message-----
From: Georgi Georgiev
Sent: Wednesday, February 05, 2020 6:57 PM
To: Georgi Georgiev <Georgi.Georgiev@woodward.com>; Khem Raj <raj.khem@gmail.com>; yocto@lists.yoctoproject.org
Subject: Re: [yocto] Yocto [thud], [zeus] do_fetch and do_unpack failures with offline/online svn build! #yocto #python

Sorry Khem,
With esc character '\' before & it can't take the full path. So briefly:

Yocto build with char '\' before & in SRC_URI:
online - OK
offline - Error - svn: E170013: Unable to connect to a repository...
Yocto build without '\' in SRC_URI:
online and offline same error - do_unpack: Unpack failure for URL - "package".tar.gz is present in DL_DIR

Thanks



-----Original Message-----
From: yocto@lists.yoctoproject.org [mailto:yocto@lists.yoctoproject.org] On Behalf Of Georgi Georgiev via Lists.Yoctoproject.Org
Sent: Wednesday, February 05, 2020 5:26 PM
To: Khem Raj <raj.khem@gmail.com>; yocto@lists.yoctoproject.org
Cc: yocto@lists.yoctoproject.org
Subject: Re: [EXTERNAL] Re: [yocto] Yocto [thud], [zeus] do_fetch and do_unpack failures with offline/online svn build! #yocto #python

Hi Khem,

Yes, no issues with that. When I am connected to network it fetches the code with and without escape character before & in the path.
svn co " https://urldefense.proofpoint.com/v2/url?u=http-3A__cocosubversion_svn_Embedded_Valve-26Actuator_DVPII_trunk_SOCPACManEnvEngKeys&d=DwIGaQ&c=y6L7g950KfMp92YmLM0QlMdXcRn6b-Cq4AApnSJOenE&r=kHtJrQGzfH1ZmfsNkJpYuH-jtNpv8yMDkqAmsRP99mc&m=vXia-xlcxuLr2fSmBogZpMIaOntXLKWv3mkxcRgtUc4&s=UYH9lueDIeMGZndMxgbZgb76A5EfqN-MR58DBv1iriI&e= " works fine and fetches the package.

Georgi

-----Original Message-----
From: Khem Raj [mailto:raj.khem@gmail.com]
Sent: Friday, January 31, 2020 8:33 PM
To: Georgi Georgiev <Georgi.Georgiev@woodward.com>; yocto@lists.yoctoproject.org
Subject: [EXTERNAL] Re: [yocto] Yocto [thud], [zeus] do_fetch and do_unpack failures with offline/online svn build! #yocto #python

On 1/31/20 4:02 AM, georgi.georgiev via Lists.Yoctoproject.Org wrote:
Hello Community,
This is the third time I am asking for support on this issue. This
time I decided to use the web form.
In our project we have a requirement to be able to build the project
offline. E.g. on the field without any network connection. When we are
connected with the recipe mentioned below we don't have issues:

svn: E170013: Unable to connect to a repository at URL 'https://urldefense.proofpoint.com/v2/url?u=http-3A__cocosubversion_svn_Embedded_Valve-26Actuator_DVPII_trunk_SOCPACManEnvEngKeys_trunk&d=DwIGaQ&c=y6L7g950KfMp92YmLM0QlMdXcRn6b-Cq4AApnSJOenE&r=kHtJrQGzfH1ZmfsNkJpYuH-jtNpv8yMDkqAmsRP99mc&m=ocx6BoVSrDQGWUvUUVvgKtuJbT7eH7jFSjCy1Ys73Vw&s=OTunnmsi_tCUt3JytmU2Hs7i7Xnqhl8-2CMsqmzIl90&e= '
svn: E670003: Temporary failure in name resolution

Can you try checking out the repo outside of the fetcher and see if the machine can fetch the URL you are using in SRC_URI?

*SUMMARY = "PACMan - Parameter And Configuration MANager"* *LICENSE =
"CLOSED"* *inherit systemd useradd* *REQUIRED_DISTRO_FEATURES =
"systemd"* *# SVN revision* *PV = "121026"* *# Name of SVN project*
*PACMAN_PROJ_NAME="SOCPACManEnvEngKeys"*
*SRC_URI =
"svn://cocosubversion/svn/Embedded/Valve%5C&Actuator/DVPII/trunk/$%7BP
ACMAN_PROJ_NAME%7D;module=trunk;protocol=http;externals=allowed;rev=$%
7BPV%7D%22* *SRC_URI += "file://ww-authpacman.service%22* *SRC_URI +=
"file://ww-pacman.service%22* *S = "${WORKDIR}/trunk/"* *# ${PN}-sys:
content related to system, which goes to base rootfs (only .service
file and symlinks)* *# ${PN}:      real content which may go to
separate partition* *PACKAGES =+ " ${PN}-sys"* .........
When disconnect the network, erase sstate-cache, cache and tmp I see
log file attached (log.do_fetch.32757) and the following output:

*ERROR: ww-pacman-121026-r0 do_fetch: Fetcher failure: Fetch command
export PSEUDO_DISABLED=1; unset _PYTHON_SYSCONFIGDATA_NAME; export
DBUS_SESSION_BUS_ADDRESS="unix:path=/run/user/1000/bus"; export
SSH_AGENT_PID="11412"; export
SSH_AUTH_SOCK="/run/user/1000/keyring/ssh"; export
PATH="/home/w23698/projects/proj_dvp2/build_dvp2/tmp/sysroots-uninativ
e/x86_64-linux/usr/bin:/home/w23698/projects/proj_dvp2/sources/poky/sc
ripts:/home/w23698/projects/proj_dvp2/build_dvp2/tmp/work/cortexa9hf-n
eon-poky-linux-gnueabi/ww-pacman/121026-r0/recipe-sysroot-native/usr/b
in/arm-poky-linux-gnueabi:/home/w23698/projects/proj_dvp2/build_dvp2/t
mp/work/cortexa9hf-neon-poky-linux-gnueabi/ww-pacman/121026-r0/recipe-
sysroot/usr/bin/crossscripts:/home/w23698/projects/proj_dvp2/build_dvp
2/tmp/work/cortexa9hf-neon-poky-linux-gnueabi/ww-pacman/121026-r0/reci
pe-sysroot-native/usr/sbin:/home/w23698/projects/proj_dvp2/build_dvp2/
tmp/work/cortexa9hf-neon-poky-linux-gnueabi/ww-pacman/121026-r0/recipe
-sysroot-native/usr/bin:/home/w23698/projects/proj_dvp2/build_dvp2/tmp
/work/cortexa9hf-neon-poky-linux-gnueabi/ww-pacman/121026-r0/recipe-sy
sroot-native/sbin:/home/w23698/projects/proj_dvp2/build_dvp2/tmp/work/
cortexa9hf-neon-poky-linux-gnueabi/ww-pacman/121026-r0/recipe-sysroot-
native/bin:/home/w23698/projects/proj_dvp2/sources/poky/bitbake/bin:/h
ome/w23698/projects/proj_dvp2/build_dvp2/tmp/hosttools";
export HOME="/home/w23698"; /usr/bin/env svn --non-interactive
--trust-server-cert update --no-auth-cache -r 121026 failed with exit
code 1, output:* *Updating '.':*
*svn: E170013: Unable to connect to a repository at URL
'https://urldefense.proofpoint.com/v2/url?u=http-3A__cocosubversion_sv
n_Embedded_Valve-26Actuator_DVPII_trunk_SOCPACManEnvEngKeys_trunk-27-2
A&d=DwIGaQ&c=y6L7g950KfMp92YmLM0QlMdXcRn6b-Cq4AApnSJOenE&r=kHtJrQGzfH1
ZmfsNkJpYuH-jtNpv8yMDkqAmsRP99mc&m=ocx6BoVSrDQGWUvUUVvgKtuJbT7eH7jFSjC
y1Ys73Vw&s=Ffv1JU1QYBh4g49fmoLnsDSFgMMBc_5MbOpy59QUS18&e=
*svn: E670003: Temporary failure in name resolution*
*ERROR: ww-pacman-121026-r0 do_fetch: Fetcher failure for URL:
'svn://cocosubversion/svn/Embedded/Valve%5C&Actuator/DVPII/trunk/SOCPACManEnvEngKeys;module=trunk;protocol=http;externals=allowed;rev=121026'.
Unable to fetch URL from any source.*
*ERROR: Logfile of failure stored in:
/home/w23698/projects/proj_dvp2/build_dvp2/tmp/work/cortexa9hf-neon-po
ky-linux-gnueabi/ww-pacman/121026-r0/temp/log.do_fetch.32757*
*ERROR: Task
(/home/w23698/projects/proj_dvp2/build_dvp2/../sources/meta-ww-dvp2/re
cipes-ww/ww-pacman/ww-pacman.bb:do_fetch)
failed with exit code '1'*

When we remove the '\' character in SRC_URI, e.g. to become:

SRC_URI = "svn://cocosubversion/svn/Embedded/Valve&Actuator/DVPII/trunk/${PACMAN_PROJ_NAME};module=trunk;protocol=http;externals=allowed;rev=${PV}"

In connected and not connected to network do_fetch() passes
successfully but I see one and same error (log.do_unpack.25226) output:

*ERROR: ww-pacman-121026-r0 do_unpack: Unpack failure for URL:
'svn://cocosubversion/svn/Embedded/Valve&Actuator/DVPII/trunk/SOCPACManEnvEngKeys;module=trunk;protocol=http;externals=allowed;rev=121026'.
Unpack command
PATH="/home/w23698/projects/proj_dvp2/build_dvp2/tmp/sysroots-uninative/x86_64-linux/usr/bin:/home/w23698/projects/proj_dvp2/sources/poky/scripts:/home/w23698/projects/proj_dvp2/build_dvp2/tmp/work/cortexa9hf-neon-poky-linux-gnueabi/ww-pacman/121026-r0/recipe-sysroot-native/usr/bin/arm-poky-linux-gnueabi:/home/w23698/projects/proj_dvp2/build_dvp2/tmp/work/cortexa9hf-neon-poky-linux-gnueabi/ww-pacman/121026-r0/recipe-sysroot/usr/bin/crossscripts:/home/w23698/projects/proj_dvp2/build_dvp2/tmp/work/cortexa9hf-neon-poky-linux-gnueabi/ww-pacman/121026-r0/recipe-sysroot-native/usr/sbin:/home/w23698/projects/proj_dvp2/build_dvp2/tmp/work/cortexa9hf-neon-poky-linux-gnueabi/ww-pacman/121026-r0/recipe-sysroot-native/usr/bin:/home/w23698/projects/proj_dvp2/build_dvp2/tmp/work/cortexa9hf-neon-poky-linux-gnueabi/ww-pacman/121026-r0/recipe-sysroot-native/sbin:/home/w23698/projects/proj_dvp2/build_dvp2/tmp/work/cortexa9hf-neon-poky-linux-gnueabi/ww-pacman/121026-r0/recipe-sysroot-native/bin:/home/w23698/projects/proj_dvp2/sources/poky/bitbake/bin:/home/w23698/projects/proj_dvp2/build_dvp2/tmp/hosttools"
tar xz --no-same-owner -f
/home/w23698/projects/proj_dvp2/build_dvp2/downloads/trunk_cocosubvers
ion_.svn.Embedded.Valve&Actuator.DVPII.trunk.SOCPACManEnvEngKeys_12102
6_.tar.gz
failed with return value 127*
*ERROR: Logfile of failure stored in:
/home/w23698/projects/proj_dvp2/build_dvp2/tmp/work/cortexa9hf-neon-po
ky-linux-gnueabi/ww-pacman/121026-r0/temp/log.do_unpack.25226*
*ERROR: Task
(/home/w23698/projects/proj_dvp2/build_dvp2/../sources/meta-ww-dvp2/re
cipes-ww/ww-pacman/ww-pacman.bb:do_unpack)
failed with exit code '1'
*
I don't know if this matters, but the build machine is a bare-metal Ubuntu
18.04.3 LTS. In all cases the packed tar.gz remains in the downloads
directory with the same name!

Cordially,
Georgi






Re: Yocto [thud], [zeus] do_fetch and do_unpack failures with offline/online svn build! #yocto #python

Georgi Georgiev
 

Another serious issue with the svn fetcher, always occurring when doing incremental builds. I am putting it here because I think both issues are related. Please suggest what to do. The error we get is shown below. This time it happens always. The only workaround is to clean and rebuild. As soon as we make a modification to the recipe (a PV change only), it fails:

ERROR: ww-pacman-121204-r0 do_fetch: Error executing a python function in exec_python_func() autogenerated:

The stack trace of python calls that resulted in this exception/failure was:
File: 'exec_python_func() autogenerated', lineno: 2, function: <module>
0001:
*** 0002:base_do_fetch(d)
0003:
File: '/data/home/w23698/projects/proj/build/../sources/poky/meta/classes/base.bbclass', lineno: 163, function: base_do_fetch
0159: return
0160:
0161: try:
0162: fetcher = bb.fetch2.Fetch(src_uri, d)
*** 0163: fetcher.download()
0164: except bb.fetch2.BBFetchException as e:
0165: bb.fatal(str(e))
0166:}
0167:
File: '/data/home/w23698/projects/proj/sources/poky/bitbake/lib/bb/fetch2/__init__.py', lineno: 1678, function: download
1674: try:
1675: if not trusted_network(self.d, ud.url):
1676: raise UntrustedUrl(ud.url)
1677: logger.debug(1, "Trying Upstream")
*** 1678: m.download(ud, self.d)
1679: if hasattr(m, "build_mirror_data"):
1680: m.build_mirror_data(ud, self.d)
1681: localpath = ud.localpath
1682: # early checksum verify, so that if checksum mismatched,
File: '/data/home/w23698/projects/proj/sources/poky/bitbake/lib/bb/fetch2/svn.py', lineno: 150, function: download
0146: if not ("externals" in ud.parm and ud.parm["externals"] == "nowarn"):
0147: # Warn the user if this had externals (won't catch them all)
0148: output = runfetchcmd("svn propget svn:externals || true", d, workdir=ud.moddir)
0149: if output:
*** 0150: if "--ignore-externals" in svnfetchcmd.split():
0151: bb.warn("%s contains svn:externals." % ud.url)
0152: bb.warn("These should be added to the recipe SRC_URI as necessary.")
0153: bb.warn("svn fetch has ignored externals:\n%s" % output)
0154: bb.warn("To disable this warning add ';externals=nowarn' to the url.")
Exception: UnboundLocalError: local variable 'svnfetchcmd' referenced before assignment

ERROR: Logfile of failure stored in: /data/home/w23698/projects/proj/build/tmp/work/cortexa9hf-neon-poky-linux-gnueabi/ww-pacman/121204-r0/temp/log.do_fetch.18853

-----Original Message-----
From: Georgi Georgiev
Sent: Wednesday, February 05, 2020 6:57 PM
To: Georgi Georgiev <Georgi.Georgiev@woodward.com>; Khem Raj <raj.khem@gmail.com>; yocto@lists.yoctoproject.org
Subject: Re: [yocto] Yocto [thud], [zeus] do_fetch and do_unpack failures with offline/online svn build! #yocto #python

Sorry Khem,
With esc character '\' before & it can't take the full path. So briefly:

Yocto build with char '\' before & in SRC_URI:
online - OK
offline - Error - svn: E170013: Unable to connect to a repository...
Yocto build without '\' in SRC_URI:
online and offline same error - do_unpack: Unpack failure for URL - "package".tar.gz is present in DL_DIR

Thanks



-----Original Message-----
From: yocto@lists.yoctoproject.org [mailto:yocto@lists.yoctoproject.org] On Behalf Of Georgi Georgiev via Lists.Yoctoproject.Org
Sent: Wednesday, February 05, 2020 5:26 PM
To: Khem Raj <raj.khem@gmail.com>; yocto@lists.yoctoproject.org
Cc: yocto@lists.yoctoproject.org
Subject: Re: [EXTERNAL] Re: [yocto] Yocto [thud], [zeus] do_fetch and do_unpack failures with offline/online svn build! #yocto #python

Hi Khem,

Yes, no issues with that. When I am connected to network it fetches the code with and without escape character before & in the path.
svn co " https://urldefense.proofpoint.com/v2/url?u=http-3A__cocosubversion_svn_Embedded_Valve-26Actuator_DVPII_trunk_SOCPACManEnvEngKeys&d=DwIGaQ&c=y6L7g950KfMp92YmLM0QlMdXcRn6b-Cq4AApnSJOenE&r=kHtJrQGzfH1ZmfsNkJpYuH-jtNpv8yMDkqAmsRP99mc&m=vXia-xlcxuLr2fSmBogZpMIaOntXLKWv3mkxcRgtUc4&s=UYH9lueDIeMGZndMxgbZgb76A5EfqN-MR58DBv1iriI&e= " works fine and fetches the package.

Georgi

-----Original Message-----
From: Khem Raj [mailto:raj.khem@gmail.com]
Sent: Friday, January 31, 2020 8:33 PM
To: Georgi Georgiev <Georgi.Georgiev@woodward.com>; yocto@lists.yoctoproject.org
Subject: [EXTERNAL] Re: [yocto] Yocto [thud], [zeus] do_fetch and do_unpack failures with offline/online svn build! #yocto #python

On 1/31/20 4:02 AM, georgi.georgiev via Lists.Yoctoproject.Org wrote:
Hello Community,
This is the third time I am asking for support on this issue. This
time I decided to use the web form.
In our project we have a requirement to be able to build the project
offline. E.g. on the field without any network connection. When we are
connected with the recipe mentioned below we don't have issues:

svn: E170013: Unable to connect to a repository at URL 'https://urldefense.proofpoint.com/v2/url?u=http-3A__cocosubversion_svn_Embedded_Valve-26Actuator_DVPII_trunk_SOCPACManEnvEngKeys_trunk&d=DwIGaQ&c=y6L7g950KfMp92YmLM0QlMdXcRn6b-Cq4AApnSJOenE&r=kHtJrQGzfH1ZmfsNkJpYuH-jtNpv8yMDkqAmsRP99mc&m=ocx6BoVSrDQGWUvUUVvgKtuJbT7eH7jFSjCy1Ys73Vw&s=OTunnmsi_tCUt3JytmU2Hs7i7Xnqhl8-2CMsqmzIl90&e= '
svn: E670003: Temporary failure in name resolution

Can you try checking out the repo outside of the fetcher and see if the machine can fetch the URL you are using in SRC_URI?

*SUMMARY = "PACMan - Parameter And Configuration MANager"* *LICENSE =
"CLOSED"* *inherit systemd useradd* *REQUIRED_DISTRO_FEATURES =
"systemd"* *# SVN revision* *PV = "121026"* *# Name of SVN project*
*PACMAN_PROJ_NAME="SOCPACManEnvEngKeys"*
*SRC_URI =
"svn://cocosubversion/svn/Embedded/Valve%5C&Actuator/DVPII/trunk/$%7BP
ACMAN_PROJ_NAME%7D;module=trunk;protocol=http;externals=allowed;rev=$%
7BPV%7D%22* *SRC_URI += "file://ww-authpacman.service%22* *SRC_URI +=
"file://ww-pacman.service%22* *S = "${WORKDIR}/trunk/"* *# ${PN}-sys:
content related to system, which goes to base rootfs (only .service
file and symlinks)* *# ${PN}:      real content which may go to
separate partition* *PACKAGES =+ " ${PN}-sys"* .........
When disconnect the network, erase sstate-cache, cache and tmp I see
log file attached (log.do_fetch.32757) and the following output:

*ERROR: ww-pacman-121026-r0 do_fetch: Fetcher failure: Fetch command
export PSEUDO_DISABLED=1; unset _PYTHON_SYSCONFIGDATA_NAME; export
DBUS_SESSION_BUS_ADDRESS="unix:path=/run/user/1000/bus"; export
SSH_AGENT_PID="11412"; export
SSH_AUTH_SOCK="/run/user/1000/keyring/ssh"; export
PATH="/home/w23698/projects/proj_dvp2/build_dvp2/tmp/sysroots-uninativ
e/x86_64-linux/usr/bin:/home/w23698/projects/proj_dvp2/sources/poky/sc
ripts:/home/w23698/projects/proj_dvp2/build_dvp2/tmp/work/cortexa9hf-n
eon-poky-linux-gnueabi/ww-pacman/121026-r0/recipe-sysroot-native/usr/b
in/arm-poky-linux-gnueabi:/home/w23698/projects/proj_dvp2/build_dvp2/t
mp/work/cortexa9hf-neon-poky-linux-gnueabi/ww-pacman/121026-r0/recipe-
sysroot/usr/bin/crossscripts:/home/w23698/projects/proj_dvp2/build_dvp
2/tmp/work/cortexa9hf-neon-poky-linux-gnueabi/ww-pacman/121026-r0/reci
pe-sysroot-native/usr/sbin:/home/w23698/projects/proj_dvp2/build_dvp2/
tmp/work/cortexa9hf-neon-poky-linux-gnueabi/ww-pacman/121026-r0/recipe
-sysroot-native/usr/bin:/home/w23698/projects/proj_dvp2/build_dvp2/tmp
/work/cortexa9hf-neon-poky-linux-gnueabi/ww-pacman/121026-r0/recipe-sy
sroot-native/sbin:/home/w23698/projects/proj_dvp2/build_dvp2/tmp/work/
cortexa9hf-neon-poky-linux-gnueabi/ww-pacman/121026-r0/recipe-sysroot-
native/bin:/home/w23698/projects/proj_dvp2/sources/poky/bitbake/bin:/h
ome/w23698/projects/proj_dvp2/build_dvp2/tmp/hosttools";
export HOME="/home/w23698"; /usr/bin/env svn --non-interactive
--trust-server-cert update --no-auth-cache -r 121026 failed with exit
code 1, output:* *Updating '.':*
*svn: E170013: Unable to connect to a repository at URL
'https://urldefense.proofpoint.com/v2/url?u=http-3A__cocosubversion_sv
n_Embedded_Valve-26Actuator_DVPII_trunk_SOCPACManEnvEngKeys_trunk-27-2
A&d=DwIGaQ&c=y6L7g950KfMp92YmLM0QlMdXcRn6b-Cq4AApnSJOenE&r=kHtJrQGzfH1
ZmfsNkJpYuH-jtNpv8yMDkqAmsRP99mc&m=ocx6BoVSrDQGWUvUUVvgKtuJbT7eH7jFSjC
y1Ys73Vw&s=Ffv1JU1QYBh4g49fmoLnsDSFgMMBc_5MbOpy59QUS18&e=
*svn: E670003: Temporary failure in name resolution*
*ERROR: ww-pacman-121026-r0 do_fetch: Fetcher failure for URL:
'svn://cocosubversion/svn/Embedded/Valve%5C&Actuator/DVPII/trunk/SOCPACManEnvEngKeys;module=trunk;protocol=http;externals=allowed;rev=121026'.
Unable to fetch URL from any source.*
*ERROR: Logfile of failure stored in:
/home/w23698/projects/proj_dvp2/build_dvp2/tmp/work/cortexa9hf-neon-po
ky-linux-gnueabi/ww-pacman/121026-r0/temp/log.do_fetch.32757*
*ERROR: Task
(/home/w23698/projects/proj_dvp2/build_dvp2/../sources/meta-ww-dvp2/re
cipes-ww/ww-pacman/ww-pacman.bb:do_fetch)
failed with exit code '1'*

When we remove the '\' character in SRC_URI, e.g. to become:

SRC_URI = "svn://cocosubversion/svn/Embedded/Valve&Actuator/DVPII/trunk/${PACMAN_PROJ_NAME};module=trunk;protocol=http;externals=allowed;rev=${PV}"

In connected and not connected to network do_fetch() passes
successfully but I see one and same error (log.do_unpack.25226) output:

*ERROR: ww-pacman-121026-r0 do_unpack: Unpack failure for URL:
'svn://cocosubversion/svn/Embedded/Valve&Actuator/DVPII/trunk/SOCPACManEnvEngKeys;module=trunk;protocol=http;externals=allowed;rev=121026'.
Unpack command
PATH="/home/w23698/projects/proj_dvp2/build_dvp2/tmp/sysroots-uninative/x86_64-linux/usr/bin:/home/w23698/projects/proj_dvp2/sources/poky/scripts:/home/w23698/projects/proj_dvp2/build_dvp2/tmp/work/cortexa9hf-neon-poky-linux-gnueabi/ww-pacman/121026-r0/recipe-sysroot-native/usr/bin/arm-poky-linux-gnueabi:/home/w23698/projects/proj_dvp2/build_dvp2/tmp/work/cortexa9hf-neon-poky-linux-gnueabi/ww-pacman/121026-r0/recipe-sysroot/usr/bin/crossscripts:/home/w23698/projects/proj_dvp2/build_dvp2/tmp/work/cortexa9hf-neon-poky-linux-gnueabi/ww-pacman/121026-r0/recipe-sysroot-native/usr/sbin:/home/w23698/projects/proj_dvp2/build_dvp2/tmp/work/cortexa9hf-neon-poky-linux-gnueabi/ww-pacman/121026-r0/recipe-sysroot-native/usr/bin:/home/w23698/projects/proj_dvp2/build_dvp2/tmp/work/cortexa9hf-neon-poky-linux-gnueabi/ww-pacman/121026-r0/recipe-sysroot-native/sbin:/home/w23698/projects/proj_dvp2/build_dvp2/tmp/work/cortexa9hf-neon-poky-linux-gnueabi/ww-pacman/121026-r0/recipe-sysroot-native/bin:/home/w23698/projects/proj_dvp2/sources/poky/bitbake/bin:/home/w23698/projects/proj_dvp2/build_dvp2/tmp/hosttools"
tar xz --no-same-owner -f
/home/w23698/projects/proj_dvp2/build_dvp2/downloads/trunk_cocosubvers
ion_.svn.Embedded.Valve&Actuator.DVPII.trunk.SOCPACManEnvEngKeys_12102
6_.tar.gz
failed with return value 127*
*ERROR: Logfile of failure stored in:
/home/w23698/projects/proj_dvp2/build_dvp2/tmp/work/cortexa9hf-neon-po
ky-linux-gnueabi/ww-pacman/121026-r0/temp/log.do_unpack.25226*
*ERROR: Task
(/home/w23698/projects/proj_dvp2/build_dvp2/../sources/meta-ww-dvp2/re
cipes-ww/ww-pacman/ww-pacman.bb:do_unpack)
failed with exit code '1'
*
I don't know if this matters, but the build machine is a bare-metal Ubuntu
18.04.3 LTS. In all cases the packed tar.gz remains in the downloads
directory with the same name!

Cordially,
Georgi






Re: Creating a build system which can scale. #yocto

Robert P. J. Day
 

On Mon, 17 Feb 2020, Quentin Schulz wrote:

Hi Philip,

*Very* quick and vague answer as it's not something I'm doing right now.
I can only give hints to where to look next.

On Mon, Feb 17, 2020 at 04:27:17AM -0800, philip.lewis@domino-uk.com wrote:
Hi,

I'm looking for some advice about the best way to implement a
build environment in the cloud for multiple dev teams which will
scale as the number of dev teams grow.

Our devs are saying:

*What do we want?*

To scale our server-based build infrastructure, so that engineers
can build branches using the same infrastructure that produces a
releasable artefact, before pushing it into develop. As much
automation of this as possible is desired..

*Blocker* : Can’t just scale current system – can’t keep throwing
more hardware at it, particularly storage. The main contributor to
storage requirements is using a local cache in each build
workspace and there will be one workspace for each branch, per
Jenkins agent: 3 teams x 10 branches per team x 70Gb per
branch/workspace x number of build agents (let say 5) = 10Tb. As
you can see this doesn’t scale well as we add branches, teams or
build agents. Most of this 10Tb is the caches in each workspace,
where most of the contents of each individual cache is identical.
Have you had a look at INHERIT += "rm_work"? Should get rid of most of
the space in the work directory (we use this one, tremendous benefit in
terms of storage space).

c.f.
https://www.yoctoproject.org/docs/current/mega-manual/mega-manual.html#ref-classes-rm-work
in addition, you can always override that build-wide setting with
RM_WORK_EXCLUDE if you want to keep generated work from a small set of
recipes for debugging.
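
A minimal local.conf sketch combining the two (the recipe name is a
hypothetical placeholder):

  INHERIT += "rm_work"
  # keep the full workdir of a recipe you are actively debugging
  RM_WORK_EXCLUDE += "my-recipe"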

rday


Re: Creating a build system which can scale. #yocto

Quentin Schulz
 

Hi Philip,

*Very* quick and vague answer as it's not something I'm doing right now.
I can only give hints to where to look next.

On Mon, Feb 17, 2020 at 04:27:17AM -0800, philip.lewis@domino-uk.com wrote:
Hi,

I'm looking for some advice about the best way to implement a build environment in the cloud for multiple dev teams which will scale as the number of dev teams grow.

Our devs are saying:

*What do we want?*

To scale our server-based build infrastructure, so that engineers can build branches using the same infrastructure that produces a releasable artefact, before pushing it into develop. As much automation of this as possible is desired..

*Blocker* : Can’t just scale current system – can’t keep throwing more hardware at it, particularly storage. The main contributor to storage requirements is using a local cache in each build workspace and there will be one workspace for each branch, per Jenkins agent: 3 teams x 10 branches per team x 70Gb per branch/workspace x number of build agents (let say 5) = 10Tb. As you can see this doesn’t scale well as we add branches, teams or build agents. Most of this 10Tb is the caches in each workspace, where most of the contents of each individual cache is identical.
Have you had a look at INHERIT += "rm_work"? Should get rid of most of
the space in the work directory (we use this one, tremendous benefit in
terms of storage space).

c.f. https://www.yoctoproject.org/docs/current/mega-manual/mega-manual.html#ref-classes-rm-work

Incidentally, it also highlights broken recipes (e.g. one getting files from other
sysroots/elsewhere in the FS).

*A possible solution:*

Disclaimer/admission: I’ve not really researched/considered _all_ possible solutions to the problem above; I just started searching and reading and came up with/felt led towards this. I think there is some value in spending some of the meeting exploring other options to see if anything sounds better (for what definition of better?).


Something using the built-in cache mirror in Yocto – there are a few ways to do this, as it’s essentially a file share somewhere. See https://pelux.io/2017/06/19/How-to-create-a-shared-sstate-dir.html for an example that shows how to share it via NFS, but you can also use http or ftp.

Having a single cache largely solves the storage issue as there is only one cache, so having solved that issue, it introduces a few more questions and constraints:

* How do we manage the size of the cache?

There’s no built-in expiry mechanism I could find. This means we’d probably have to create something ourselves (parse access logs from the server hosting the cache and apply a garbage collector process).
Provided you're not using a webserver with a cache (or a cache that is
refreshed every now and then), a cronjob with find -atime -delete and
you're good.
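
As a rough sketch of that kind of expiry job (the cache path and the
30-day threshold are assumptions, not anything decided in this thread):

# weekly cronjob (sketch): drop sstate objects not read for 30 days
find /srv/sstate-cache -type f -atime +30 -delete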

* How/When do we update the cache?

All environments contributing to the cache need to be identical (that ansible playbook just grabs the latest of everything) to avoid subtle differences in the build artefacts depending on which environment populated the cache.

* How much time will fetching the cache from a remote server add to the build?

I think this is probably something we will have to just live with, but if it’s all in the cloud the network speed between VMs is fast.
I remember (wrongly?) reading that sharing the sstate-cache over NFS isn't a very
good idea (latency outweighs the benefits in terms of storage/shared
sstate cache).

This shared cache solution removes the per-agent cost on storage, and also – to a varying extent – the per-branch costs (assuming that you’re not working on something at the top/start of the dependency tree) from the equation above.

I’d love to see some other ideas as well, as I worry I’m missing something easier, more obvious, or just better.
I'm not too sure I've understood the exact use case, but maybe you
would want to have a look at:

- shared DL_DIR (this one can be served over NFS, there isn't too much
access to it during a build).
- SSTATE_MIRRORS (c.f. https://www.yoctoproject.org/docs/current/mega-manual/mega-manual.html#var-SSTATE_MIRRORS),
basically a webserver serving the sstate-cache from an already-built
image/system. This is read-only and would make sense if your Jenkins is
building a system and then your devs are basing their work on top of it.
They would get the sstate-cache from your Jenkins and, AFAIK, it does not
duplicate the sstate-cache locally = more free storage space. A minimal
local.conf sketch of this and of DL_DIR follows after this list.
- investigate docker containers for a guaranteed identical build environment;
Pyrex has often been suggested on IRC.
https://github.com/garmin/pyrex/
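
The sketch referred to above (the NFS path and the mirror URL are
placeholders, not values from this thread):

# conf/local.conf (sketch)
# all build agents share one download directory, e.g. over NFS
DL_DIR = "/mnt/nfs/yocto-downloads"
# read-only sstate served over http by the CI builder
SSTATE_MIRRORS ?= "file://.* http://sstate.example.com/sstate/PATH;downloadfilename=PATH"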

That's all I could think of about your issue, I unfortunately do not
have more knowledge to share on that topic.

Good luck, let us know what you decided to do :)

Quentin


Creating a build system which can scale. #yocto

philip.lewis@...
 

Hi,

I'm looking for some advice about the best way to implement a build environment in the cloud for multiple dev teams which will scale as the number of dev teams grow.

Our devs are saying:

What do we want?

To scale our server-based build infrastructure, so that engineers can build branches using the same infrastructure that produces a releasable artefact, before pushing it into develop. As much automation of this as possible is desired.

Blocker: Can’t just scale the current system – can’t keep throwing more hardware at it, particularly storage. The main contributor to storage requirements is using a local cache in each build workspace and there will be one workspace for each branch, per Jenkins agent: 3 teams x 10 branches per team x 70GB per branch/workspace x number of build agents (let's say 5) = 10TB. As you can see this doesn’t scale well as we add branches, teams or build agents. Most of this 10TB is the caches in each workspace, where most of the contents of each individual cache is identical.

A possible solution:

Disclaimer/admission: I’ve not really researched/considered _all_ possible solutions to the problem above; I just started searching and reading and came up with/felt led towards this. I think there is some value in spending some of the meeting exploring other options to see if anything sounds better (for what definition of better?).

 

Something using the built-in cache mirror in Yocto – there are a few ways to do this, as it’s essentially a file share somewhere. See https://pelux.io/2017/06/19/How-to-create-a-shared-sstate-dir.html for an example that shows how to share it via NFS, but you can also use http or ftp.

Having a single cache largely solves the storage issue as there is only one cache, so having solved that issue, it introduces a few more questions and constraints:

  1. How do we manage the size of the cache?

There’s no built-in expiry mechanism I could find. This means we’d probably have to create something ourselves (parse access logs from the server hosting the cache and apply a garbage collector process).

  2. How/When do we update the cache?

All environments contributing to the cache need to be identical (that ansible playbook just grabs the latest of everything) to avoid subtle differences in the build artefacts depending on which environment populated the cache.

  3. How much time will fetching the cache from a remote server add to the build?

I think this is probably something we will have to just live with, but if it’s all in the cloud the network speed between VMs is fast.

This shared cache solution removes the per-agent cost on storage, and also – to a varying extent – the per-branch costs (assuming that you’re not working on something at the top/start of the dependency tree) from the equation above.

 

I’d love to see some other ideas as well, as I worry I’m missing something easier, more obvious, or just better.

Any thoughts?
Thanks
Phill

 

 


meta-java on git.yoctoproject.org has gone away.

Eilís Ní Fhlannagáin
 

It looks like there is an issue with cgit or something?

http://git.yoctoproject.org/cgit.cgi/meta-java

returns 500 Internal Server Error.


Which parameters are shared by kernel and u-boot?

JH
 

Hi,

Do the kernel and u-boot both need to be compiled with the u-boot
configuration files in order to include bootargs, mtdparts, etc.?

Thank you.

Kind regards,

- jh


Re: Modified GENIVI Cannelloni recipe with strange side effects

Zoran
 

The issue I see is that the following files have been built but NOT installed:

* libcannelloni-common.so.0
* libcannelloni-common.so.0.0.1
Not quite... The solution is outlined here (in function do_install):
+ ## ERROR: QA Issue: package cannelloni contains bad RPATH
+ ## quick fix is in a do_install or do_install_append do
+ chrpath -d ${D}${bindir}/cannelloni

https://github.com/ZoranStojsavljevic/meta-socketcan/blob/master/recipes-can/cannelloni/cannelloni.bb
https://github.com/ZoranStojsavljevic/meta-socketcan/blob/master/recipes-can/cannelloni/cannelloni.bb_GENIVI

I admit, your first email has shaken my head, so I can see things much
more clearly. :-)

My best guess is that this solution is just a workaround (not the final one),
since in ${D} I still have the following:

cannelloni-1.0: package cannelloni contains bad RPATH
/home/user/projects2/beaglebone-black/bbb-yocto/build/tmp/work/cortexa8hf-neon-poky-linux-gnueabi/cannelloni/1.0-r0/build:
in file /home/user/projects2/beaglebone-black/bbb-yocto/build/tmp/work/cortexa8hf-neon-poky-linux-gnueabi/cannelloni/1.0-r0/packages-split/cannelloni/usr/bin/cannelloni
[rpaths]

So, since my limited knowledge of bitbake build systems ends here,
somebody from among the YOCTO primes (potentially Khem Raj, Ross Burton, maybe
even Richard Purdie) should look more closely into this issue
(apologies for my unsolicited suggestions).

Laurent,

Once again, thank you for unselfish help,
Zoran
_______


On Fri, Feb 14, 2020 at 2:20 PM Laurent Gauthier
<laurent.gauthier@soccasys.com> wrote:

Hi Zoran,

You are almost there! I can feel it... :-)

The issue I see is that the following files have been built but NOT installed:

* libcannelloni-common.so.0
* libcannelloni-common.so.0.0.1

If you make sure that they are installed that should fix your issue.
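
A minimal sketch of what installing them could look like in the recipe
(the file names and the ${B} layout come from your find output further
down, so treat the exact paths as assumptions):

do_install_append() {
    # install the versioned library the build produced but the recipe skipped
    install -d ${D}${libdir}
    install -m 0755 ${B}/libcannelloni-common.so.0.0.1 ${D}${libdir}/
    ln -sf libcannelloni-common.so.0.0.1 ${D}${libdir}/libcannelloni-common.so.0
}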

Based on the info you provided no RDEPENDS seems to be required, as it
appears that everything is in one package named "cannelloni",
rather than a package for the main executable and separate packages for
the libraries.

Kind regards, Laurent.

On Fri, Feb 14, 2020 at 12:43 PM Zoran Stojsavljevic
<zoran.stojsavljevic@gmail.com> wrote:

Hello Laurent,

Many thanks to you for the help. :-)

I did some modifications, and now I have all the elements in there/in place:

[user@fedora31-ssd cannelloni]$ cd ../../../build/tmp
[user@fedora31-ssd tmp]$ find . -name libcannelloni*
./work/cortexa8hf-neon-poky-linux-gnueabi/cannelloni/1.0-r0/image/usr/lib/libcannelloni-common.so
./work/cortexa8hf-neon-poky-linux-gnueabi/cannelloni/1.0-r0/sysroot-destdir/usr/lib/libcannelloni-common.so
./work/cortexa8hf-neon-poky-linux-gnueabi/cannelloni/1.0-r0/package/usr/lib/.debug/libcannelloni-common.so
./work/cortexa8hf-neon-poky-linux-gnueabi/cannelloni/1.0-r0/package/usr/lib/libcannelloni-common.so
./work/cortexa8hf-neon-poky-linux-gnueabi/cannelloni/1.0-r0/packages-split/cannelloni/usr/lib/libcannelloni-common.so
./work/cortexa8hf-neon-poky-linux-gnueabi/cannelloni/1.0-r0/packages-split/cannelloni-dbg/usr/lib/.debug/libcannelloni-common.so
./work/cortexa8hf-neon-poky-linux-gnueabi/cannelloni/1.0-r0/build/libcannelloni-common.so.0
./work/cortexa8hf-neon-poky-linux-gnueabi/cannelloni/1.0-r0/build/libcannelloni-common.so.0.0.1
./work/cortexa8hf-neon-poky-linux-gnueabi/cannelloni/1.0-r0/build/libcannelloni-common.so
./sysroots-components/cortexa8hf-neon/cannelloni/usr/lib/libcannelloni-common.so

I miss the very end of your thoughts. Namely:

The name of the package containing the shared library is the name of the
xxx first-level directory "packages-split/xxx".
So, how should I write the RDEPENDS command?

Something as: RDEPENDS_${PN} = "???"

What should I put on the right side of the equation (according to the above traces)?

Thank you,
Zoran
_______

On Fri, Feb 14, 2020 at 11:49 AM Laurent Gauthier <laurent.gauthier@soccasys.com> wrote:

Hi Zoran,

The issue seems to be that the executable /usr/bin/cannelloni has a
reference to a shared library (libcannelloni-common.so.0) for which
the Yocto build system is not able to determine automatically which
package provides it.

Based on the name I would assume that this package should be created
by the same recipe that produces this executable (one recipe produces
multiple packages).

The most probable reason for this is that the new version of the
package you are trying to build does not install the "missing" shared
library properly. But here are some steps you could follow to try to
determine the stage of build/install/package where the shared library
goes missing.

To debug this I would suggest that you check that this
"libcannelloni-common.so.0" shared library is present in several
directories.

First in the build directory:

* /home/user/projects2/beaglebone-black/bbb-yocto/build/tmp/work/cortexa8hf-neon-poky-linux-gnueabi/cannelloni/1.0-r0/build

If it is not there that would be very surprising. I will assume that
it is present. Let us know if it is not.

Then the next location to check for this shared library is the following:

* /home/user/projects2/beaglebone-black/bbb-yocto/build/tmp/work/cortexa8hf-neon-poky-linux-gnueabi/cannelloni/1.0-r0/package

If the file is not there, then it means that the recipe did not
"install" it (as this directory is populated by do_install).

If the file is there then you can check if it is correctly assigned in
a package by determining if it is also found in:

* /home/user/projects2/beaglebone-black/bbb-yocto/build/tmp/work/cortexa8hf-neon-poky-linux-gnueabi/cannelloni/1.0-r0/packages-split

If the file is not there, then it means that the recipe did not
"package" it properly (as this directory is populated by do_package).
You should review the recipe for any anomaly in assigning installed
files to individual packages.

If the file is there then you probably should add the package that
contains the shared library in the RDEPENDS for the "cannelloni"
package.

The name of the package containing the shared library is the name of the
xxx first-level directory "packages-split/xxx".
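
As a purely hypothetical sketch (the ${PN}-lib package name is only an
illustration, not something the cannelloni recipe defines), the split and
the runtime dependency would look roughly like this in the recipe:

# split the shared library into its own package (name is an assumption)
PACKAGES =+ "${PN}-lib"
FILES_${PN}-lib = "${libdir}/libcannelloni-common.so.*"
# then the main package declares the runtime dependency on it
RDEPENDS_${PN} += "${PN}-lib"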

Not sure if that will solve your issue, but hopefully that will help.

Kind Regards, Laurent.

On Fri, Feb 14, 2020 at 11:27 AM Zoran <zoran.stojsavljevic@gmail.com> wrote:

Hello List,

I am trying to solve a very interesting ERROR I am getting with a slightly modified GENIVI Cannelloni recipe:
https://github.com/ZoranStojsavljevic/meta-socketcan/blob/master/recipes-can/cannelloni/cannelloni.bb

If I take the recipe as is, everything works fine, with:
## SRCREV = "${AUTOREV}"
SRCREV = "0fb6880b719b8acf2b4210b264b7140135e4be8a"

But if I swap the static hash for the latest auto hash (SRCREV = "${AUTOREV}"):
SRCREV = "${AUTOREV}"
## SRCREV = "0fb6880b719b8acf2b4210b264b7140135e4be8a"

I am getting these ERRORS, which seem very strange to me?!
_______

Sstate summary: Wanted 11 Found 6 Missed 5 Current 1398 (54% match, 99% complete)
NOTE: Executing Tasks
NOTE: Setscene tasks completed
ERROR: cannelloni-1.0-r0 do_package_qa: QA Issue: package cannelloni contains bad RPATH /home/user/projects2/beaglebone-black/bbb-yocto/build/tmp/work/cortexa8hf-neon-poky-linux-gnueabi/cannelloni/1.0-r0/build: in file /home/user/projects2/beaglebone-black/bbb-yocto/build/tmp/work/cortexa8hf-neon-poky-linux-gnueabi/cannelloni/1.0-r0/packages-split/cannelloni/usr/bin/cannelloni [rpaths]
ERROR: cannelloni-1.0-r0 do_package_qa: QA Issue: /usr/bin/cannelloni contained in package cannelloni requires libcannelloni-common.so.0, but no providers found in RDEPENDS_cannelloni? [file-rdeps]
ERROR: cannelloni-1.0-r0 do_package_qa: QA run found fatal errors. Please consider fixing them.
ERROR: Logfile of failure stored in: /home/user/projects2/beaglebone-black/bbb-yocto/build/tmp/work/cortexa8hf-neon-poky-linux-gnueabi/cannelloni/1.0-r0/temp/log.do_package_qa.255490
ERROR: Task (/home/user/projects2/beaglebone-black/bbb-yocto/meta-socketcan/recipes-can/cannelloni/cannelloni.bb:do_package_qa) failed with exit code '1'
NOTE: Tasks Summary: Attempted 3791 tasks of which 3788 didn't need to be rerun and 1 failed.
_______

Any advice on how to make the GENIVI Cannelloni recipe work with SRCREV = "${AUTOREV}"???

Thank you,
Zoran



--
Laurent Gauthier
Phone: +33 630 483 429
http://soccasys.com


--
Laurent Gauthier
Phone: +33 630 483 429
http://soccasys.com


[meta-security][PATCH] apparmor: update to tip

Armin Kuster
 

fixes Python3.8 configure issues

Signed-off-by: Armin Kuster <akuster808@gmail.com>
---
recipes-mac/AppArmor/apparmor_2.13.3.bb | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/recipes-mac/AppArmor/apparmor_2.13.3.bb b/recipes-mac/AppArmor/apparmor_2.13.3.bb
index fa60752..11c931d 100644
--- a/recipes-mac/AppArmor/apparmor_2.13.3.bb
+++ b/recipes-mac/AppArmor/apparmor_2.13.3.bb
@@ -25,7 +25,7 @@ SRC_URI = " \
file://run-ptest \
"

-SRCREV = "2f9d9ea7e01a115b29858455d3b1b5c6a0bab75c"
+SRCREV = "d779dbf88a664f06c1265b9e27b93f87de4cfe44"
S = "${WORKDIR}/git"

PARALLEL_MAKE = ""
--
2.17.1


[meta-rockchip][PATCH] Install appropriate fw_env.config for rk3288

Sergey Bostandzhyan
 

From: Sergey 'Jin' Bostandzhyan <jin@mediatomb.cc>

libubootenv provides fw_printenv/fw_setenv utilities, however they only
work if a correct /etc/fw_env.config file is available.

Signed-off-by: Sergey Bostandzhyan <jin@mediatomb.cc>
---
recipes-bsp/u-boot/libubootenv_%.bbappend | 5 +++++
1 file changed, 5 insertions(+)
create mode 100644 recipes-bsp/u-boot/libubootenv_%.bbappend

diff --git a/recipes-bsp/u-boot/libubootenv_%.bbappend b/recipes-bsp/u-boot/libubootenv_%.bbappend
new file mode 100644
index 0000000..b0c70ca
--- /dev/null
+++ b/recipes-bsp/u-boot/libubootenv_%.bbappend
@@ -0,0 +1,5 @@
+do_install_append_rk3288() {
+ install -d ${D}${sysconfdir}
+ echo "/dev/${RK_BOOT_DEVICE} 0x3f8000 0x8000" > ${D}${sysconfdir}/fw_env.config
+}
+
--
2.24.1
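
As a side note on the file being written above: fw_env.config lines follow
the "device env-offset env-size" format, so with RK_BOOT_DEVICE expanding to,
say, mmcblk2 (an assumption, the real value comes from the BSP) the installed
file and a quick check on the target would look like:

# /etc/fw_env.config -- device, environment offset, environment size
/dev/mmcblk2 0x3f8000 0x8000

# on the target, once the file above matches the board:
fw_printenv bootcmd
fw_setenv bootdelay 3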
