Re: Strange sporadic build issues (incremental builds in docker container)
Richard Purdie
On Wed, 2022-03-30 at 09:40 -0400, Trevor Woerner wrote:
Hi Matthias,The "good" news is I did work out how to reproduce this. bitbake keymaps -c clean bitbake keymaps bitbake keymaps -c unpack -f bitbake keymaps -c patch bitbake keymaps -c unpack -f bitbake keymaps -c patch I haven't looked at why but hopefully that helps us more forward with looking at the issue. The complications with S == WORKDIR were one of the reasons I did start work on patches to make it work better and maybe move fetching into a dedicated direction rather than WORKDIR and then symlink things. I never got that patch to work well enough to submit though (and it is too late for a major change like that in this release). Cheers, Richard |
Re: SCM usage in source urls and bandwidth
Richard Purdie
On Wed, 2022-03-30 at 10:05 -0400, Claude Bing wrote:
> On 3/30/22 09:53, Alexandre Belloni via lists.yoctoproject.org wrote:
> > On 30/03/2022 11:42:46+0100, Richard Purdie wrote:
> > > [list address fixed, sorry]
> > I would simply drop PREMIRRORS, this is actually a privacy concern for
> > some of our customers [...]
> Indeed, that would be concerning for us as well. Would it be possible to
> ignore PREMIRRORS based on the recipe layer?

We don't have any support for "per-layer" overrides at this time, which would be the way to do that. It is something I think we probably do want to consider adding, but I haven't had the bandwidth to look at it.

I'd note that these mirrors in PREMIRRORS are also in MIRRORS already in OE-Core, so there is a fallback; it just controls the order they're tried in.

Cheers,

Richard
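For context, dropping the premirror is something an individual build can already do in its own configuration. A minimal local.conf sketch, assuming the distro only sets PREMIRRORS as a weak (??=) default the way poky does:

    # Stop consulting downloads.yoctoproject.org before upstream SCMs;
    # the MIRRORS fallback in OE-Core is unaffected.
    PREMIRRORS = ""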
Re: SCM usage in source urls and bandwidth
Claude Bing
On 3/30/22 09:53, Alexandre Belloni via lists.yoctoproject.org wrote:
> On 30/03/2022 11:42:46+0100, Richard Purdie wrote:
> > [list address fixed, sorry]
> I would simply drop PREMIRRORS, this is actually a privacy concern for
> some of our customers [...]

Indeed, that would be concerning for us as well. Would it be possible to ignore PREMIRRORS based on the recipe layer?

Alternatively, we could create blocklists for heavy packages that need to fetch from upstream first, rather than drop PREMIRRORS completely. Sometimes having a secondary source can save valuable time when the upstream is not responsive.
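A minimal sketch of what such a per-recipe blocklist could look like, assuming PREMIRRORS honours pn- overrides like other variables do (the recipe names are just the heavy examples from this thread):

    # site.conf / local.conf sketch: let only the heaviest recipes go upstream first
    PREMIRRORS:pn-binutils = ""
    PREMIRRORS:pn-glibc = ""

The MIRRORS entries in OE-Core would still provide the secondary source if upstream is unresponsive.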
Re: SCM usage in source urls and bandwidth
Alexandre Belloni
On 30/03/2022 11:42:46+0100, Richard Purdie wrote:
> [list address fixed, sorry]

I would simply drop PREMIRRORS; this is actually a privacy concern for some of our customers, who didn't realize they were leaking the names of their internal git repositories to downloads.yoctoproject.org.

--
Alexandre Belloni, co-owner and COO, Bootlin
Embedded Linux and Kernel engineering
https://bootlin.com
Re: Strange sporadic build issues (incremental builds in docker container)
Trevor Woerner
Hi Matthias,
On Wed 2022-03-30 @ 06:32:00 AM, Matthias Klein wrote:
> Yes, you are right, it is mostly the same recipes that fail. But they
> also change from time to time.

And keymaps follows the exact same pattern as modutils-initscripts and initscripts; namely, their sources are entirely contained in-tree:

    keymaps/
    ├── files
    │   ├── GPLv2.patch
    │   └── keymap.sh
    └── keymaps_1.0.bb

    keymaps/keymaps_1.0.bb:
    SRC_URI = "file://keymap.sh \
               file://GPLv2.patch"

Any recipe that follows this pattern is susceptible; it's probably just a coincidence that most of my failures happened to be with the two recipes I mentioned.

This issue has revealed a bug, and fixing that bug would be great. That said, keymap.sh is a shell program written 12 years ago which hasn't changed since, and the GPL/COPYING file is only there for "reasons". The license file doesn't *need* to be moved into the build area for this recipe to get its job done (namely, installing keymap.sh into the image's sysvinit).

Best regards,
Trevor
Re: Fetching source from Private git repo
Poornesh G
Greetings!
Can anyone help me with the procedure for fetching source from a private GitLab account? Thanks in advance!
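For reference, a minimal sketch of the SRC_URI form commonly used for a private repository over ssh; the host, group and project names are placeholders, and it assumes the build host already has a working ssh key for that account:

    SRC_URI = "git://git@gitlab.example.com/mygroup/myproject.git;protocol=ssh;branch=main"
    SRCREV = "${AUTOREV}"  # or a pinned commit hash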
Re: SCM usage in source urls and bandwidth
Richard Purdie
On Wed, 2022-03-30 at 12:18 +0100, Ross Burton wrote:
> On Wed, 30 Mar 2022 at 12:10, Richard Purdie <richard.purdie@...> wrote:
> [...]

The code doesn't do "--depth=1".

https://git.yoctoproject.org/poky/commit/?id=27d56982c7ba05e86a100b0cca2411ee5ac7a85e

"""
This implements support for shallow mirror tarballs, not shallow clones.
Supporting shallow clones directly is not really doable for us, as we'd
need to hardcode the depth between branch HEAD and the SRCREV, and that
depth would change as the branch is updated.
"""

Put another way, you didn't specify a revision in your clone above, and if you try, it becomes rather tricky. To make this work we therefore need a mirror with the shallow tarballs on it.

Just for info: the binutils mirror tarball is ~1.3GB, the shallow tarball is 65MB.

Cheers,

Richard
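For anyone wanting to try this locally, the shallow mirror tarballs described above are produced by the bitbake git fetcher itself; a minimal configuration sketch (how the project's own mirror would be populated from these is a separate question):

    # local.conf / site.conf sketch: generate shallow mirror tarballs in DL_DIR
    BB_GIT_SHALLOW = "1"
    BB_GIT_SHALLOW_DEPTH = "1"
    BB_GENERATE_SHALLOW_TARBALLS = "1"
    BB_GENERATE_MIRROR_TARBALLS = "1"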
Re: SCM usage in source urls and bandwidth
Ross Burton <ross@...>
On Wed, 30 Mar 2022 at 12:10, Richard Purdie <richard.purdie@...> wrote:
> f) Switch the problematic recipes to use shallow clones with something like:

Even without premirrors this is a lot faster for glibc:

    $ time git clone git://sourceware.org/git/glibc.git
    Cloning into 'glibc'...
    remote: Enumerating objects: 6956, done.
    remote: Counting objects: 100% (6956/6956), done.
    remote: Compressing objects: 100% (2938/2938), done.
    remote: Total 670093 (delta 5328), reused 4750 (delta 3932), pack-reused 663137
    Receiving objects: 100% (670093/670093), 205.19 MiB | 16.39 MiB/s, done.
    Resolving deltas: 100% (573265/573265), done.
    Updating files: 100% (19011/19011), done.

    real    1m56.255s

    $ time git clone git://sourceware.org/git/glibc.git --depth 1
    Cloning into 'glibc'...
    remote: Enumerating objects: 18809, done.
    remote: Counting objects: 100% (18809/18809), done.
    remote: Compressing objects: 100% (9704/9704), done.
    remote: Total 18809 (delta 8812), reused 12185 (delta 7968), pack-reused 0
    Receiving objects: 100% (18809/18809), 41.79 MiB | 11.96 MiB/s, done.
    Resolving deltas: 100% (8812/8812), done.
    Updating files: 100% (19011/19011), done.

    real    0m8.701s

A full clone fetches 200MB and takes 2 minutes (a lot of that is actually resolving the deltas, not the fetch). A shallow clone of the current HEAD fetches 40MB and is done in 8 seconds. Why would we need a premirror?

Ross
Re: SCM usage in source urls and bandwidth
Richard Purdie
On Wed, 2022-03-30 at 11:42 +0100, Richard Purdie via lists.yoctoproject.org
wrote:
> What are our options? As far as I can see we could:
> [...]

I meant to add:

f) Switch the problematic recipes to use shallow clones with something like:

    BB_GIT_SHALLOW:pn-binutils = "1"
    BB_GIT_SHALLOW:pn-binutils-cross-${TARGET_ARCH} = "1"
    BB_GIT_SHALLOW:pn-binutils-cross-canadian-${TRANSLATED_TARGET_ARCH} = "1"
    BB_GIT_SHALLOW:pn-binutils-cross-testsuite = "1"
    BB_GIT_SHALLOW:pn-binutils-crosssdk-${SDK_SYS} = "1"
    BB_GIT_SHALLOW:pn-glibc = "1"

The challenge here is that, in order to be effective, there needs to be a PREMIRROR set up with the shallow tarballs on it. This means we couldn't do e) above and still have this have much effect unless we craft some very specific PREMIRROR entries too.

Cheers,

Richard
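A sketch of the kind of "very specific" PREMIRROR entry this would imply, assuming the shallow tarballs were published on some dedicated host (the URL is a placeholder, and the mirror-spec separators shown follow the traditional \n-terminated form):

    # Redirect only the two heavy SCM sources to the shallow-tarball host.
    PREMIRRORS:prepend = "\
        git://sourceware.org/git/binutils-gdb.git https://shallow.example.org/sources/ \n \
        git://sourceware.org/git/glibc.git https://shallow.example.org/sources/ \n \
    "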
SCM usage in source urls and bandwidth
Richard Purdie
[list address fixed, sorry]
We've been having bandwidth trouble with downloads.yoctoproject.org, so we did some quick analysis to see what the issue is. Basically, in speeding up the server, which was the rate limit, we hit the limits of the hosting pipe.

I'd note a few things:

a) it isn't the sstate mirroring, it is nearly all being used by downloads.
b) 25% of all our bandwidth is going on "git2_sourceware.org.git.binutils-gdb.git.tar.gz" - i.e. downloading the source mirror binutils tarball
c) 15% is on git2_sourceware.org.git.glibc.git.tar.gz, i.e. glibc
d) OE-Core has downloads.yoctoproject.org as a MIRROR
e) poky has it as a PREMIRROR

What are our options? As far as I can see we could:

a) Increase the pipe from downloads.yoctoproject.org, but that does come at a non-trivial cost to the project.
b) Seek help with hosting some of the larger mirror tarballs from people better able to host them and have that as a first premirror?
c) Switch the binutils and glibc recipes to tarballs and patches. I know Khem finds this less convenient and they keep moving back and forward, but we keep running into this issue and having to switch back from git.
d) To soften the blow of c) we could add devupstream support to the recipes? We could script updating the recipe to add the patches?
e) We could drop the PREMIRRORS from poky. This would stop the SCM targets from hitting our mirrors first. That does transfer load to the upstream project SCMs though, and I'm not sure that will be appreciated. I did send that patch; I'm not sure about it though.

We are going to need to do *something* though, as the current situation can't continue. I'm open to other ideas...

Cheers,

Richard
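Regarding option d), a minimal sketch of what devupstream support in a recipe usually looks like; the branch and revision here are placeholders rather than a concrete proposal for binutils/glibc:

    # Default builds keep using the release tarball; a git-based variant is opt-in.
    BBCLASSEXTEND = "devupstream:target"
    SRC_URI:class-devupstream = "git://sourceware.org/git/glibc.git;branch=master"
    SRCREV:class-devupstream = "${AUTOREV}"  # or a pinned commit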
Re: [OE-core] Which vendors maintain SDIO WiFi in mainline stable kernel
Quentin Schulz
Hi Jupiter,
On 3/29/22 07:16, JH wrote:
> Hi,

It is extremely rare for vendors not to have out-of-tree drivers or forked branches (I don't know of any, personally). Some vendors do end up upstreaming some of their patches in the end to reduce the amount of maintenance they have to do on their downstream drivers/kernel tree.

Upstreaming takes time, knowledge and soft (as in "communicating with people") skills that some vendors aren't willing to invest in. It's also usually not an urgent matter (as opposed to having *something* that works so they can sell the product ASAP). Also, the quality of vendor (i.e. downstream) code is often subpar (to be polite) and would not be accepted as-is in the upstream Linux kernel git repository. Finally, it is also a strategic choice for vendors to have an out-of-tree driver so that people stuck with an older kernel can still use the driver/product.

One simple example: a bad vendor sells you an SoC with a BSP supporting kernel 4.4 (let's say). Now you want to use a specific WiFi module with this SoC. Fortunately, there's upstream support for it, but only in 5.10 and later. Considering the number of changes between 4.4 and 5.10, you won't be able to easily backport the driver to work on 4.4. This means the WiFi module vendor loses you as a customer because you won't be able to use their solution.

Now, you could also have a nicer SoC vendor which provides you with a 5.10 kernel. However, there's an important fix available in 5.16 that isn't in the WiFi driver you have on 5.10. You could try to backport this yourself, but not all customers of said WiFi vendor are skilled enough to do this. The WiFi vendor needs to provide support for backporting this for the customer and/or deal with unhappy customers. However, with an out-of-tree driver with appropriate ifdefs everywhere to adapt to specific versions of the Linux kernel ABI, they have ONE driver that is known to work on many different Linux kernel versions. It also makes the maintenance of the driver much simpler for them. This also allows them to do releases much more often than the Linux kernel allows (one every 2-3 months).

Considering the usually poor quality of the code and possible lack of proper reviews, you might end up with regressions and, more importantly, security issues that will never be discovered because fewer eyes will be on the code. Out-of-tree drivers live on in a self-feeding loop of vendors not upstreaming stuff because they need to support other vendors not upstreaming, even if they wanted to in the first place.

Finally, I've had to patch about 3-4 WiFi drivers locally, and the changes weren't implemented by the vendor in the next releases. So you might just have issues other companies have fixed that were never reported or fixed by the vendor. (Note that upstreaming does not necessarily fix this issue; it just makes it, in theory, less likely to happen since more people are supposed to use the code than some vendor kernel.)

Also, some vendors are historically reluctant to contribute anything to the upstream Linux kernel, and support for their hardware was added by hobbyists or one of their clients, bearing the costs themselves.

> I looked at the following link, the mwifiex and mwifiex_sdio support

You'd need to request the newer versions of mwifiex from NXP (which acquired Marvell some years ago) or patch it yourself. Welcome to the world of downstream support :)

> Same to Qualcomm, the old Atheros WiFi modules are supported, the

Yocto only builds what you tell it to build.
The company I work for provides[1] Yocto support for a vendor kernel based on Android-flavored 4.4 for our System on Module, all of this on Honister (the latest Yocto release to date). (Note: we do actually support and encourage using mainline; GPU/VPU support was - years ago - just not comparable between vendor and upstream kernels.)

You just need to create your own recipe (or adapt an existing one) to point to the BSP components your vendor gave you (or whatever you want to use) and build it. Nothing forces you to use linux-yocto 5.14 or anything else.

[1] https://git.theobroma-systems.com/yocto-layers/meta-theobroma-systems-bsp.git/tree/recipes-kernel/linux/linux-tsd_4.4.bb

Cheers,
Quentin
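A rough sketch of the kind of vendor-kernel recipe being described; every name, URL and version below is a placeholder rather than anything taken from the linked layer:

    # linux-vendor_4.4.bb - hypothetical BSP kernel recipe sketch
    SUMMARY = "Vendor BSP kernel"
    LICENSE = "GPL-2.0-only"
    LIC_FILES_CHKSUM = "file://COPYING;md5=<md5 of COPYING in the vendor tree>"

    inherit kernel

    SRC_URI = "git://git.example.com/vendor/linux-vendor.git;protocol=https;branch=vendor-4.4 \
               file://defconfig"
    SRCREV = "${AUTOREV}"  # or pin to a released BSP tag
    LINUX_VERSION ?= "4.4"
    PV = "${LINUX_VERSION}+git${SRCPV}"

    COMPATIBLE_MACHINE = "my-machine"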
Running chromium with read-only-rootfs
benabid.houssem11@...
Hello friends, I have my own image based on the Wayland/Weston image. My image should use a read-only rootfs, so I use this:
But I have a problem when running chromium in my image. I think it can't work on a read-only rootfs, and I found that there's something called VOLATILE_BINDS that can help. Can someone help me configure VOLATILE_BINDS to get this working?

The error when running chromium in my image is:

    Error /root/.config/**/** is read-only

Thanks in advance
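For reference, a minimal sketch of the kind of VOLATILE_BINDS entry usually added for this; the paths are guesses based on the error above rather than a tested configuration, and it assumes the volatile-binds package is in the image:

    # local.conf / distro conf sketch: bind a writable volatile location over
    # the directory chromium tries to write to on the read-only rootfs.
    VOLATILE_BINDS:append = " /var/volatile/root-config /root/.config\n"

Each "source destination" pair is turned into a small bind-mount service at build time.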
Re: Strange sporadic build issues (incremental builds in docker container)
Matthias Klein
Hi Trevor,
thank you very much for the detailed answer.

Yes, you are right, it is mostly the same recipes that fail. But they also change from time to time. Today it happened to me even without Jenkins and Docker, simply in the console, with the recipe keymaps_1.0.bb.

For the nightly builds via Jenkins, my current workaround is to delete build/tmp beforehand. So far the problem has not occurred again.

Many greetings,
Matthias

-----Original Message-----
From: Trevor Woerner <twoerner@...>
Sent: Tuesday, 29 March 2022 18:23
To: Alexander Kanavin <alex.kanavin@...>
Cc: Matthias Klein <matthias.klein@...>; yocto@...
Subject: Re: [yocto] Strange sporadic build issues (incremental builds in docker container)

On Thu 2022-03-24 @ 09:31:25 AM, Alexander Kanavin wrote:
> I don't. You need to inspect the build tree to find clues why the

Yes, I've been seeing exactly these issues as well. I'm not using any sort of virtualization; I'm using Jenkins to do nightly builds directly on my host. My host machine is openSUSE 15.3. These problems started on Feb 21 for me.

Each of my builds starts by doing a "git pull" on each of the repositories, then kicks off a build if any of the repositories changed. A fresh build will always succeed. Doing a "clean" and rebuilding will (I believe) always succeed. My gut feeling is that it somehow has something to do with having an existing build, refreshing the repositories, then rebuilding.

I spent weeks trying to find a reproducer. I wrote a script to check out one version of the repositories (before), build, check out a newer version of the repositories (after), and rebuild. Even in cases where I used the exact same hashes that had failed in my Jenkins build, repeating 20 times, in some cases I wasn't able to reproduce the error. I was able to find one reproducer involving a build for an imx28evk MACHINE, but even then, out of 20 iterations, 13 were bad and 7 were good. I repeated that set of 20 builds many times and it was never 100% bad.

My investigations led me to believe that it might be related to rm_work and/or BB_NUMBER_THREADS/PARALLEL_MAKE. In my Jenkins builds I enable 'INHERIT += "rm_work"' and I also limit BB_NUMBER_THREADS and set PARALLEL_MAKE. On the cmdline I was able to reduce the number of failures (sometimes to none) by removing the rm_work and THREADS/PARALLEL settings, but never completely eliminate them. In Jenkins the build failures still felt as random as they were without the change, so I can't say it's having much effect in Jenkins, but it seems to have some effect on the cmdline.

I can say this with certainty: Matthias says it seems that the specific recipe that fails is random, but it's not. In every case the recipe that fails is a recipe whose source files are contained in the meta layer itself. For me the failing recipes were always:

    modutils-initscripts
    initscripts

If you look at the recipes for those packages, they do not have a SRC_URI that fetches code from some remote location and then uses quilt to apply some patches. In both cases all of the "source" code exists in the layer itself, and somehow quilt is involved in placing it in the build area. I have dozens and dozens of these failures recorded, and it is always with a recipe that follows that pattern. But 99%-ish of the failures are with the two packages I listed above. The failures aren't related to days when those packages change. The failures are just... sporadic.
So the issue is related to:
- recipes with in-layer sources
- quilt (being run twice (?))
- updating layers, and rebuilding in a build area with an existing build
- Feb 21 2022 (or thereabouts)

The issue might be related to:
- jenkins?
- my build host?
- rm_work?
- BB_NUMBER_THREADS?
- PARALLEL_MAKE?
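For reference, the knobs being toggled in the experiments above are plain local.conf settings; a minimal sketch with example values (not the ones from the builds described):

    INHERIT += "rm_work"
    BB_NUMBER_THREADS = "8"
    PARALLEL_MAKE = "-j 8"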
Re: OpenEmbedded Happy Hour March 30 5pm/1700 UTC
Denys Dmytriyenko
Reminder, OpenEmbedded Happy Hour is tomorrow. See you all there.
On Thu, Mar 24, 2022 at 06:41:58PM -0400, Denys Dmytriyenko wrote:
> All,
> [...]

--
Regards,
Denys Dmytriyenko <denis@...>
PGP: 0x420902729A92C964 - https://denix.org/0x420902729A92C964
Fingerprint: 25FC E4A5 8A72 2F69 1186 6D76 4209 0272 9A92 C964
[yocto 3.1] adding custom testsdk script in own layer
Karthik Poduval
Hi All,

We were trying to add a custom sdktest script as shown in the example at https://docs.yoctoproject.org/test-manual/intro.html#testsdk.

The script gets invoked when placed in:

    meta/lib/oeqa/sdk/cases/mysdktest.py

However, when placed under:

    <my layer>/lib/oeqa/sdk/cases/mysdktest.py

it does not get invoked when running "bitbake <my image> -c testsdk".

The testimage scripts do work when placed under <my layer>/lib/oeqa/runtime/cases/ as they are controlled by the TEST_SUITES variable.

Kindly advise on how to proceed.

--
Regards,
Karthik Poduval
Re: firewalld isssue
#yocto
On 2022-03-28 03:18, Nicolas Jeker wrote:
> On Sun, 2022-03-27 at 23:39 -0700, sateesh m wrote:
> > Hi Team,
> > [...]
> Judging by this stack exchange thread[1] from a quick search, you might
> [...]

Trevor was looking into this as well so I've CCed him.

../Randy

--
# Randy MacLeod
# Wind River Linux
[meta-security][PATCH 1/1] LICENSE: adopt SPDX standard names
Joe Slater <joe.slater@...>
From: Robert Yang <liezhi.yang@...>
Modify LICENSE for ding-libs and libmhash.

Signed-off-by: Joe Slater <joe.slater@...>
---
 recipes-security/libdhash/ding-libs_0.6.1.bb  | 2 +-
 recipes-security/libmhash/libmhash_0.9.9.9.bb | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/recipes-security/libdhash/ding-libs_0.6.1.bb b/recipes-security/libdhash/ding-libs_0.6.1.bb
index 6046fa0..843850f 100644
--- a/recipes-security/libdhash/ding-libs_0.6.1.bb
+++ b/recipes-security/libdhash/ding-libs_0.6.1.bb
@@ -2,7 +2,7 @@ SUMMARY = "Dynamic hash table implementation"
 DESCRIPTION = "Dynamic hash table implementation"
 HOMEPAGE = "https://fedorahosted.org/released/ding-libs"
 SECTION = "base"
-LICENSE = "GPLv3+"
+LICENSE = "GPL-3.0-or-later"
 LIC_FILES_CHKSUM = "file://COPYING;md5=d32239bcb673463ab874e80d47fae504"

 SRC_URI = "https://fedorahosted.org/released/${BPN}/${BP}.tar.gz"
diff --git a/recipes-security/libmhash/libmhash_0.9.9.9.bb b/recipes-security/libmhash/libmhash_0.9.9.9.bb
index 9b34cb1..35c5ff8 100644
--- a/recipes-security/libmhash/libmhash_0.9.9.9.bb
+++ b/recipes-security/libmhash/libmhash_0.9.9.9.bb
@@ -7,7 +7,7 @@ DESCRIPTION = "\
 "

 HOMEPAGE = "http://mhash.sourceforge.net/"
-LICENSE = "LGPLv2.0"
+LICENSE = "LGPL-2.0-only"
 LIC_FILES_CHKSUM = "file://COPYING;md5=3bf50002aefd002f49e7bb854063f7e7"

 S = "${WORKDIR}/mhash-${PV}"
--
2.35.1
Re: Yocto poky/meta/recipes-devtool/perl
Alexander Kanavin
Can you please attach log.do_patch where the problem can be seen?
Alex

On Tue, 29 Mar 2022 at 15:11, Mike Ulan <mausvt@...> wrote:
Re: [meta-security][PATCH 1/1] LICENSE: adopt standard SPDX names
Joe Slater <joe.slater@...>
I'll send again for ding-libs and libmhash. Joe
Re: [meta-security][PATCH 1/1] LICENSE: adopt standard SPDX names
On 3/29/22 09:18, Joe Slater wrote:
> Correct LICENSE for samhain, ecrypt-utils, ding-libs,
> [...]

Master-next has these:

https://git.yoctoproject.org/meta-security/commit/?h=master-next&id=ece41f7543bbd42c57f4208c7309f90cbd02e852

Looks like a few more need to be added based on these changes.

-armin