Re: [OE-core] Hardknott (GCC10) Compiler Issues

Zoran
 

> The target system should be independent of buildtools version and the target
> system should also be binary reproducible so if that were changing through
> changing buildtools tarball, that would be worrying in itself.
Even better, the rootfs built by Yocto could be used, but anyone can
build U-Boot and the kernel outside of Yocto, using their own cross
compilers (I use the Fedora 33 ARM cross compilers, since my Linux host is
Fedora 33).

Then install all the different components on the SD card for the
target system.

And see if the issue repeats itself...

Zee
_______

On Mon, Jun 28, 2021 at 2:49 PM Richard Purdie
<richard.purdie@...> wrote:

On Thu, 2021-06-24 at 21:48 -0700, Chuck Wolber wrote:
> All,
>
> Please accept my apologies in advance for the detailed submission. I think
> it is warranted in this case.
>
> There is something... "odd" about the GCC 10 compiler that is delivered with
> Hardknott. I am still chasing it down, so I am not yet ready to declare a
> root cause or submit a bug, but I am posting what I have now in case anyone
> has some insights to offer.
The issue you describe does sound strange. I was a little unclear about exactly
which combinations were passing/failing. Are you saying that some versions of
buildtools let the system work but some do not? We now have gcc 11 in master
so it would be interesting to know how things worked there and if any
regression was fixed.

I have also heard reports of issues with bison segfaulting from other sources
but I don't have anything I can point to specifically about it.

The target system should be independent of buildtools version and the target
system should also be binary reproducible so if that were changing through
changing buildtools tarball, that would be worrying in itself.

> P.P.S. For the sake of completeness, I had to add the following files to the buildtools-extended
> sysroot to fully complete the build of our images:
>
> /usr/include/magic.h -> util-linux "more" command requires this.
> /usr/include/zstd.h -> I do not recall which recipe required this.
> /usr/bin/free -> The OpenJDK 8 build scripts need this.
> /usr/include/sys/* -> openjdk-8-native
> /lib/libcap.so.2 -> The binutils "dir" command quietly breaks the build without this. I am not a fan of the
> lack of error checking in the binutils build...
> /usr/include/sensors/error.h and sensors.h -> mesa-native
> /usr/include/zstd_errors.h -> qemu-system-native
It is great to have this list; the non-jdk issues are probably ones we
should look at fixing in OE-Core. Do you mean binutils above for the dir command?

Cheers,




Re: [OE-core] Hardknott (GCC10) Compiler Issues

Zoran
 

At least it seems that GCC 10.2 is not the cause of the problem for my
cannelloni recipe issue:

https://github.com/mguentner/cannelloni/issues/35

The same error repeats itself with GCC 11.2 (in hardknott).

The issue is most probably caused by GCC optimization switches and include paths
in later cannelloni commits (after release 1.0.0, which compiles
fine with both 10.2 and 11.1).

Best Regards,
Zee
_______

On Mon, Jun 28, 2021 at 2:49 PM Richard Purdie
<richard.purdie@...> wrote:

On Thu, 2021-06-24 at 21:48 -0700, Chuck Wolber wrote:
> All,
>
> Please accept my apologies in advance for the detailed submission. I think
> it is warranted in this case.
>
> There is something... "odd" about the GCC 10 compiler that is delivered with
> Hardknott. I am still chasing it down, so I am not yet ready to declare a
> root cause or submit a bug, but I am posting what I have now in case anyone
> has some insights to offer.
The issue you describe does sound strange. I was a little unclear about exactly
which combinations were passing/failing. Are you saying that some versions of
buildtools let the system work but some do not? We now have gcc 11 in master
so it would be interesting to know how things worked there and if any
regression was fixed.

I have also heard reports of issues with bison segfaulting from other sources
but I don't have anything I can point to specifically about it.

The target system should be independent of buildtools version and the target
system should also be binary reproducible so if that were changing through
changing buildtools tarball, that would be worrying in itself.

> P.P.S. For the sake of completeness, I had to add the following files to the buildtools-extended
> sysroot to fully complete the build of our images:
>
> /usr/include/magic.h -> util-linux "more" command requires this.
> /usr/include/zstd.h -> I do not recall which recipe required this.
> /usr/bin/free -> The OpenJDK 8 build scripts need this.
> /usr/include/sys/* -> openjdk-8-native
> /lib/libcap.so.2 -> The binutils "dir" command quietly breaks the build without this. I am not a fan of the
> lack of error checking in the binutils build...
> /usr/include/sensors/error.h and sensors.h -> mesa-native
> /usr/include/zstd_errors.h -> qemu-system-native
It is great to have this list; the non-jdk issues are probably ones we
should look at fixing in OE-Core. Do you mean binutils above for the dir command?

Cheers,




Re: [OE-core] Hardknott (GCC10) Compiler Issues

Richard Purdie
 

On Thu, 2021-06-24 at 21:48 -0700, Chuck Wolber wrote:
> All,
>
> Please accept my apologies in advance for the detailed submission. I think
> it is warranted in this case.
>
> There is something... "odd" about the GCC 10 compiler that is delivered with
> Hardknott. I am still chasing it down, so I am not yet ready to declare a
> root cause or submit a bug, but I am posting what I have now in case anyone
> has some insights to offer.
The issue you describe does sound strange. I was a little unclear about exactly
which combinations were passing/failing. Are you saying that some versions of 
buildtools let the system work but some do not? We now have gcc 11 in master 
so it would be interesting to know how things worked there and if any 
regression was fixed.

I have also heard reports of issues with bison segfaulting from other sources
but I don't have anything I can point to specifically about it.

The target system should be independent of buildtools version and the target
system should also be binary reproducible so if that were changing through
changing buildtools tarball, that would be worrying in itself.

> P.P.S. For the sake of completeness, I had to add the following files to the buildtools-extended
> sysroot to fully complete the build of our images:
>
> /usr/include/magic.h -> util-linux "more" command requires this.
> /usr/include/zstd.h -> I do not recall which recipe required this.
> /usr/bin/free -> The OpenJDK 8 build scripts need this.
> /usr/include/sys/* -> openjdk-8-native
> /lib/libcap.so.2 -> The binutils "dir" command quietly breaks the build without this. I am not a fan of the
> lack of error checking in the binutils build...
> /usr/include/sensors/error.h and sensors.h -> mesa-native
> /usr/include/zstd_errors.h -> qemu-system-native
It is great to have this list; the non-jdk issues are probably ones we
should look at fixing in OE-Core. Do you mean binutils above for the dir command?

Cheers,


Re: [qa-build-notification] QA notification for completed autobuilder build (yocto-3.1.9.rc1)

Sangeeta Jain
 

Hello all,

This is the full report for yocto-3.1.9.rc1:
https://git.yoctoproject.org/cgit/cgit.cgi/yocto-testresults-contrib/tree/?h=intel-yocto-testresults

======= Summary ========
No high milestone defects.

No new issue found.

Thanks,
Sangeeta

-----Original Message-----
From: qa-build-notification@... <qa-build-
notification@...> On Behalf Of Pokybuild User
Sent: Wednesday, 23 June, 2021 12:33 AM
To: yocto@...
Cc: qa-build-notification@...
Subject: [qa-build-notification] QA notification for completed autobuilder build
(yocto-3.1.9.rc1)


A build flagged for QA (yocto-3.1.9.rc1) was completed on the autobuilder and is
available at:


https://autobuilder.yocto.io/pub/releases/yocto-3.1.9.rc1


Build hash information:

bitbake: 0e0af15b84e07e6763300dcd092b980086b9b9c4
meta-arm: 59974ccd5f1368b2a1c621ba3efd6d2c44c126dd
meta-gplv2: 60b251c25ba87e946a0ca4cdc8d17b1cb09292ac
meta-intel: d8bf86ae6288ae520b8ddd7209a0b448b9693f48
meta-mingw: 524de686205b5d6736661d4532f5f98fee8589b7
oecore: ac8181d9b9ad8360f7dba03aba8b00f008c6ebb4
poky: 43060f59ba60a0257864f1f7b25b51fac3f2d2cf



This is an automated message from the Yocto Project Autobuilder
Git: git://git.yoctoproject.org/yocto-autobuilder2
Email: richard.purdie@...







[meta-security][PATCH 4/4] initramfs-framework: rename files dir

Armin Kuster
 

Fixes:
ERROR: initramfs-framework-1.0-r4 do_fetch: Fetcher failure for URL: 'file://dmverity'. Unable to fetch URL from any source.

Signed-off-by: Armin Kuster <akuster808@...>
---
.../{initramfs-framework => initramfs-framework-dm}/dmverity | 0
recipes-core/initrdscripts/initramfs-framework.inc | 2 +-
2 files changed, 1 insertion(+), 1 deletion(-)
rename recipes-core/initrdscripts/{initramfs-framework => initramfs-framework-dm}/dmverity (100%)

diff --git a/recipes-core/initrdscripts/initramfs-framework/dmverity b/recipes-core/initrdscripts/initramfs-framework-dm/dmverity
similarity index 100%
rename from recipes-core/initrdscripts/initramfs-framework/dmverity
rename to recipes-core/initrdscripts/initramfs-framework-dm/dmverity
diff --git a/recipes-core/initrdscripts/initramfs-framework.inc b/recipes-core/initrdscripts/initramfs-framework.inc
index dad9c96..12010bf 100644
--- a/recipes-core/initrdscripts/initramfs-framework.inc
+++ b/recipes-core/initrdscripts/initramfs-framework.inc
@@ -1,4 +1,4 @@
-FILESEXTRAPATHS_prepend := "${THISDIR}/${PN}:"
+FILESEXTRAPATHS_prepend := "${THISDIR}/initramfs-framework-dm:"

SRC_URI_append = "\
file://dmverity \
--
2.17.1


[meta-security][PATCH 3/4] packagegroup-core-security: add sshguard

Armin Kuster
 

Signed-off-by: Armin Kuster <akuster808@...>
---
recipes-core/packagegroup/packagegroup-core-security.bb | 1 +
1 file changed, 1 insertion(+)

diff --git a/recipes-core/packagegroup/packagegroup-core-security.bb b/recipes-core/packagegroup/packagegroup-core-security.bb
index e7b6d9b..8e06f30 100644
--- a/recipes-core/packagegroup/packagegroup-core-security.bb
+++ b/recipes-core/packagegroup/packagegroup-core-security.bb
@@ -40,6 +40,7 @@ RDEPENDS_packagegroup-security-utils = "\
softhsm \
libest \
opendnssec \
+ sshguard \
${@bb.utils.contains_any("TUNE_FEATURES", "riscv32 ", "", " libseccomp",d)} \
${@bb.utils.contains("DISTRO_FEATURES", "pam", "sssd google-authenticator-libpam", "",d)} \
${@bb.utils.contains("DISTRO_FEATURES", "pax", "pax-utils packctl", "",d)} \
--
2.17.1


[meta-security][PATCH 2/4] sshguard: add package

Armin Kuster
 

Signed-off-by: Armin Kuster <akuster808@...>
---
recipes-security/sshguard/sshguard_2.4.2.bb | 11 +++++++++++
1 file changed, 11 insertions(+)
create mode 100644 recipes-security/sshguard/sshguard_2.4.2.bb

diff --git a/recipes-security/sshguard/sshguard_2.4.2.bb b/recipes-security/sshguard/sshguard_2.4.2.bb
new file mode 100644
index 0000000..bd7f979
--- /dev/null
+++ b/recipes-security/sshguard/sshguard_2.4.2.bb
@@ -0,0 +1,11 @@
+SUMMARY = "Intelligently block brute-force attacks by aggregating system logs"
+HOMEPAGE = "https://www.sshguard.net/"
+LIC_FILES_CHKSUM = "file://COPYING;md5=47a33fc98cd20713882c4d822a57bf4d"
+LICENSE = "BSD-1-Clause"
+
+
+SRC_URI="https://sourceforge.net/projects/sshguard/files/sshguard/${PV}/sshguard-${PV}.tar.gz"
+
+SRC_URI[sha256sum] = "2770b776e5ea70a9bedfec4fd84d57400afa927f0f7522870d2dcbbe1ace37e8"
+
+inherit autotools-brokensep
--
2.17.1


[meta-security][PATCH 1/4] initramfs-framework: fix typo in conditional

Armin Kuster
 

Signed-off-by: Armin Kuster <akuster808@...>
---
recipes-core/initrdscripts/initramfs-framework_1.0.bbappend | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/recipes-core/initrdscripts/initramfs-framework_1.0.bbappend b/recipes-core/initrdscripts/initramfs-framework_1.0.bbappend
index dc74e01..f5d476e 100644
--- a/recipes-core/initrdscripts/initramfs-framework_1.0.bbappend
+++ b/recipes-core/initrdscripts/initramfs-framework_1.0.bbappend
@@ -1 +1 @@
-require ${@bb.utils.contains('IMAGE_CLASSES', 'dm-verity', 'initramfs-framework.inc', '', d)}
+require ${@bb.utils.contains('IMAGE_CLASSES', 'dm-verity-img', 'initramfs-framework.inc', '', d)}
--
2.17.1
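(Editor's note: a short sketch of how the corrected conditional behaves, assuming standard bb.utils.contains() semantics: the third argument is returned when IMAGE_CLASSES contains the named class, the fourth otherwise. "dm-verity-img" is the class name the patch expects to find in IMAGE_CLASSES.)

# sketch: what the require line resolves to
#   IMAGE_CLASSES contains "dm-verity-img" -> require initramfs-framework.inc
#   otherwise                              -> require nothing (empty string)
require ${@bb.utils.contains('IMAGE_CLASSES', 'dm-verity-img', 'initramfs-framework.inc', '', d)}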


Re: [meta-rockchip][PATCH v2] console cleanup

Trevor Woerner
 

On Thu 2021-06-24 @ 08:39:59 AM, Trevor Woerner wrote:
Consolidate all the various console definitions to the common
conf/machine/include/rockchip-defaults.inc file and create
RK_CONSOLE_BAUD and RK_CONSOLE_DEVICE variables that can be
reused in the wks files.

The following variables were checked before and after this patch
to make sure they are sensible:
- SERIAL_CONSOLES
- RK_CONSOLE_DEVICE
- RK_CONSOLE_BAUD

A boot test was performed on the following boards to make sure
they all continue to boot to a cmdline:
- tinker-board
- rock-pi-e
- nanopi-m4-2gb
- rock64
- rock-pi-4b

Signed-off-by: Trevor Woerner <twoerner@...>
---
Changes from v1:
- In v1 I defined RK_CONSOLE_BAUD and RK_CONSOLE_DEVICE for each MACHINE
and then redefined SERIAL_CONSOLES to be the concatenation of these two
variables. Khem pointed out this is a bad approach because I'm redefining
an oe-core-defined variable that all BSPs expect.
- In v2 I set/consolidate SERIAL_CONSOLES for each MACHINE and then generate
RK_CONSOLE_BAUD and RK_CONSOLE_DEVICE based on the first-defined
<baud>;<device> pair found in SERIAL_CONSOLES; these generated variables are
then used in the wks files.
Applied to meta-rockchip master.
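(Editor's note: a minimal sketch of the first-defined-pair extraction described in the v2 changes above, shown with an assumed two-console value so the result of the inline Python is visible; the real assignments appear in the rockchip-wic.inc patch further below.)

# assumed example value (hypothetical) with two consoles defined:
SERIAL_CONSOLES ?= "1500000;ttyS2 115200;ttyS0"

# the first-defined <baud>;<device> pair is what ends up in the wks files:
#   "1500000;ttyS2 115200;ttyS0".split(';')[0]            -> "1500000" (RK_CONSOLE_BAUD)
#   "1500000;ttyS2 115200;ttyS0".split(';')[1].split()[0] -> "ttyS2"   (RK_CONSOLE_DEVICE)
RK_CONSOLE_BAUD ?= "${@d.getVar('SERIAL_CONSOLES').split(';')[0]}"
RK_CONSOLE_DEVICE ?= "${@d.getVar('SERIAL_CONSOLES').split(';')[1].split()[0]}"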


Re: what's the state of things with pushing the bounds on ASSUME_PROVIDED?

Chuck Wolber
 

On Fri, Jun 25, 2021 at 4:43 AM Richard Purdie <richard.purdie@...> wrote:

In summary, I see a lot of problems for what amounts to not much speed
gain. Particularly when we have a mechanism like sstate available
which allows binary reuse.

Very strong agreement here. My 2c is that Yocto/OE should be going in that direction even
further. One of the significant benefits of the OE build method is eliminating, to the greatest
extent possible, the (usually) undetectable influence of variations in the host platforms.

Any given distro is probably stable enough, but that does not guarantee a consistent result if
we attempted to build the same image on all available stable distros. We might get lucky and
actually achieve that, but I would not bet my life on it, particularly if we started using even more
native tooling.

"Stability is a local construct, not a global one."

For our own project, we have taken this as far as practical. We eliminated the third party
distro host platform (Ubuntu) about five years ago and built our host platform from Yocto/OE
sources. Each new version of our host platform is built from the previous one.

..Ch:W..

--
"Perfection must be reached by degrees; she requires the slow hand of time." - Voltaire


Re: Recipe for include-what-you-use and rpath problem #sdk

Khem Raj
 

On 6/25/21 7:00 AM, Francesco Cusolito wrote:
I was able to make it work correctly by enabling CMAKE_SKIP_RPATH.
Here is the working full recipe:
This is fine; if you are interested, submit it as a patch so it can be included in meta-python.

LICENSE = "NCSA"
LIC_FILES_CHKSUM = "file://LICENSE.TXT;md5=59d01ad98720f3c50d6a8a0ef3108c88 \
                    file://iwyu-check-license-header.py;md5=cdc4ab52c0b26e216cbf434649d30403"

SRC_URI = "git://github.com/include-what-you-use/include-what-you-use.git;protocol=https;branch=clang_10"

PV = "0.14+git${SRCPV}"
SRCREV = "0.14"

S = "${WORKDIR}/git"

DEPENDS = "clang"

inherit cmake python3native

EXTRA_OECMAKE_append_class-nativesdk = " \
    -DCMAKE_SKIP_RPATH:BOOL=ON \
    "

BBCLASSEXTEND_append = " \
    nativesdk \
    "


Re: Hardknott (GCC10) Compiler Issues

Zoran
 

> GCCVERSION = "9.%"

Basically, do NOT use this instruction anywhere. It clearly does NOT work?!

I replaced the whole gcc/ directory in poky/meta/recipes-devtools/gcc
for the hardknott branch.

Now I have the gcc_11.1 recipes (from the master branch) instead of gcc_10.2:
[vuser@fedora33-ssd projects_yocto]$ cd
bbb-yocto-hardknott/poky/meta/recipes-devtools/gcc
[vuser@fedora33-ssd gcc]$ ls -al
total 180
drwxr-xr-x. 3 vuser vboxusers 4096 Jun 25 13:50 .
drwxr-xr-x. 94 vuser vboxusers 4096 Jun 25 14:45 ..
drwxr-xr-x. 2 vuser vboxusers 4096 Jun 25 13:50 gcc
-rw-r--r--. 1 vuser vboxusers 800 Jun 25 13:50 gcc_11.1.bb
-rw-r--r--. 1 vuser vboxusers 5330 Jun 25 13:50 gcc-11.1.inc
-rw-r--r--. 1 vuser vboxusers 4560 Jun 25 13:50 gcc-common.inc
-rw-r--r--. 1 vuser vboxusers 4426 Jun 25 13:50 gcc-configure-common.inc
-rw-r--r--. 1 vuser vboxusers 66 Jun 25 13:50 gcc-cross_11.1.bb
-rw-r--r--. 1 vuser vboxusers 77 Jun 25 13:50 gcc-cross-canadian_11.1.bb
-rw-r--r--. 1 vuser vboxusers 6971 Jun 25 13:50 gcc-cross-canadian.inc
-rw-r--r--. 1 vuser vboxusers 6383 Jun 25 13:50 gcc-cross.inc
-rw-r--r--. 1 vuser vboxusers 73 Jun 25 13:50 gcc-crosssdk_11.1.bb
-rw-r--r--. 1 vuser vboxusers 429 Jun 25 13:50 gcc-crosssdk.inc
-rw-r--r--. 1 vuser vboxusers 9593 Jun 25 13:50 gcc-multilib-config.inc
-rw-r--r--. 1 vuser vboxusers 67 Jun 25 13:50 gcc-runtime_11.1.bb
-rw-r--r--. 1 vuser vboxusers 12398 Jun 25 13:50 gcc-runtime.inc
-rw-r--r--. 1 vuser vboxusers 271 Jun 25 13:50 gcc-sanitizers_11.1.bb
-rw-r--r--. 1 vuser vboxusers 4407 Jun 25 13:50 gcc-sanitizers.inc
-rw-r--r--. 1 vuser vboxusers 208 Jun 25 13:50 gcc-shared-source.inc
-rw-r--r--. 1 vuser vboxusers 113 Jun 25 13:50 gcc-source_11.1.bb
-rw-r--r--. 1 vuser vboxusers 1468 Jun 25 13:50 gcc-source.inc
-rw-r--r--. 1 vuser vboxusers 8598 Jun 25 13:50 gcc-target.inc
-rw-r--r--. 1 vuser vboxusers 4924 Jun 25 13:50 gcc-testsuite.inc
-rw-r--r--. 1 vuser vboxusers 143 Jun 25 13:50 libgcc_11.1.bb
-rw-r--r--. 1 vuser vboxusers 5175 Jun 25 13:50 libgcc-common.inc
-rw-r--r--. 1 vuser vboxusers 1785 Jun 25 13:50 libgcc.inc
-rw-r--r--. 1 vuser vboxusers 151 Jun 25 13:50 libgcc-initial_11.1.bb
-rw-r--r--. 1 vuser vboxusers 2020 Jun 25 13:50 libgcc-initial.inc
-rw-r--r--. 1 vuser vboxusers 68 Jun 25 13:50 libgfortran_11.1.bb
-rw-r--r--. 1 vuser vboxusers 2574 Jun 25 13:50 libgfortran.inc
[vuser@fedora33-ssd gcc]$

Waiting for the compilation results (it is still compiling).

Zee
_______


On Fri, Jun 25, 2021 at 10:15 AM Zoran via lists.yoctoproject.org
<zoran.stojsavljevic=gmail.com@...> wrote:

> I have no idea if this is possible in the current YOCTO development stage:
> GCCVERSION = "11.%"
> To do the FF to GCC 11.
WARNING: preferred version 11.% of gcc-runtime not available (for item libg2c)
WARNING: versions of gcc-runtime available: 10.2.0

For hardknott. I guess this answers my later question.

Let us see about my very first question!

BR,
Zee
_______

INCLUDED:
WARNING: preferred version 11.% of gcc-runtime not available (for item libssp-dev)
WARNING: versions of gcc-runtime available: 10.2.0
WARNING: preferred version 11.% of gcc-runtime not available (for item libg2c-dev)
WARNING: versions of gcc-runtime available: 10.2.0
WARNING: preferred version 11.% of gcc-runtime not available (for item libssp)
WARNING: versions of gcc-runtime available: 10.2.0

Build Configuration:
BB_VERSION = "1.50.0"
BUILD_SYS = "x86_64-linux"
NATIVELSBSTRING = "fedora-33"
TARGET_SYS = "arm-poky-linux-gnueabi"
MACHINE = "beaglebone"
DISTRO = "poky"
DISTRO_VERSION = "3.3.1"
TUNE_FEATURES = "arm vfp cortexa8 neon callconvention-hard"
TARGET_FPU = "hard"
meta
meta-poky
meta-yocto-bsp = "hardknott:74dbb08c3709fec6563ee65a3661f66fdcbb3e2f"
meta-jumpnow = "hardknott:ac90f018ebb9de8d6ac12f22368e004aa7be69a2"
meta-bbb = "hardknott:d838aa54e3ed81d08597c08e112fc8966aaa501d"
meta-oe
meta-python
meta-networking = "hardknott:aca88908fd329f5cef6f19995b072397fb2d8ec6"
meta-qt5 = "upstream/hardknott:a00af3eae082b772469d9dd21b2371dd4d237684"
meta-socketcan = "master:cefd86cd1def9ad2e63be527f8ce36a076d7e17c"

NOTE: Fetching uninative binary shim http://downloads.yoctoproject.org/releases/uninative/3.2/x86_64-nativesdk-libc.tar.xz;sha256sum=3ee8c7d55e2d4c7ae3887cddb97219f97b94efddfeee2e24923c0cb0e8ce84c6 (will check PREMIRRORS first)
Initialising tasks: 100% |###########################################################################################| Time: 0:00:11
Sstate summary: Wanted 1709 Local 0 Network 0 Missed 1709 Current 0 (0% match, 0% complete)
NOTE: Executing Tasks


On Fri, Jun 25, 2021 at 7:58 AM Zoran via lists.yoctoproject.org <zoran.stojsavljevic=gmail.com@...> wrote:

An interesting issue, and I think I hit it as well (my best guess).

Here is my issue:
https://github.com/mguentner/cannelloni/issues/35

> During the thud-to-hardknott upgrade process, we did nightly
> builds of the new hardknott based target image from our thud
> based SDK VM. I assumed that since GCC10 was being built
> as part of the build sysroot bootstrap process, we were getting
> a clean and consistent result irrespective of the underlying
> build server OS.
Maybe you can try the following: insert the following line in your
local.conf:

GCCVERSION = "9.%"

for hardknott release.
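(Editor's note: a minimal local.conf sketch of the pinning being suggested here, assuming the standard GCCVERSION preferred-version mechanism; as the warnings quoted further down show, a pin only takes effect if matching gcc recipes exist on the branch.)

# local.conf (sketch): pin the preferred gcc version for target builds
GCCVERSION = "9.%"
# hardknott itself only ships gcc 10.2 recipes, so 9.% or 11.% will
# only resolve if the corresponding recipes are added to the layers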

I need to try this myself, I just used gcc as is (default one which
comes with the release, I guess 10).

I have no idea if this is possible in the current YOCTO development stage:

GCCVERSION = "11.%"

To do the FF to GCC 11.

Zee
_______

On Fri, Jun 25, 2021 at 6:48 AM Chuck Wolber <chuckwolber@...> wrote:

All,

Please accept my apologies in advance for the detailed submission. I think it is warranted in this
case.

There is something... "odd" about the GCC 10 compiler that is delivered with Hardknott. I am still
chasing it down, so I am not yet ready to declare a root cause or submit a bug, but I am posting
what I have now in case anyone has some insights to offer.

For all I know it is something unusual that I am doing, but we have a lot of history with our
build/dev/release methods, so I would be surprised if that was actually the case. I have also
discussed aspects of this on IRC for the last few days, so some of this may be familiar to some
of you.

Background: We maintain a virtual machine SDK for our developers that is as close as possible to
the actual embedded hardware environment that we target. The SDK image is our baseline Linux
OS plus lots of the expected dev and debugging tools. The image deployed to our target devices is
the baseline Linux OS plus the core application suite. It is also important to note that we only
support the x86_64 machine architecture in our target devices and development workstations.

We also spin up and spin down the SDK VM for our nightly builds. This guarantees strict consistency
and eliminates lots of variables when we are trying to troubleshoot something hairy.

We just upgraded from Thud to Hardknott. This means we built our new Hardknott based SDK VM
image from our Thud based SDK VM (GCC 8 / glibc 2.28). When we attempted to build our target
device image in the new Hardknott based SDK VM, we consistently got a segfault when any build
task involves bison issuing a warning of some sort. I traced this down for a very long time and it
seemed to have something to do with the libtextstyle library from gettext and the way bison used it.
But I now believe this to be a red herring. Bison seems to be very fragile, but in this case,
that may have actually been a good thing.

After some experimentation I found that the issue went away when I dropped down to the 3.6.4
recipe of bison found at OE-Core:bc95820cd. But this did not sit right with me. There is no way I
should be the only person seeing this issue.

Then I tried an experiment... I assumed I was encountering a compiler bootstrap issue with such a
big jump (GCC8 -> GCC10), so I rebuilt our hardknott based SDK VM with the 3.3.1 version of
buildtools-extended. The build worked flawlessly, but when I booted into the new SDK VM and
kicked off the build I got the same result (bison segfault when any build warnings are encountered).

This is when I started to mentally put a few more details together with other post-upgrade issues that
had been discovered in our lab. We attributed them to garden variety API and behavioral changes
expected during a Yocto upgrade, but now I am not so sure.

During the thud-to-hardknott upgrade process, we did nightly builds of the new hardknott based
target image from our thud based SDK VM. I assumed that since GCC10 was being built as part of
the build sysroot bootstrap process, we were getting a clean and consistent result irrespective of the
underlying build server OS.

One of the issues we were seeing in the lab was a periodic hang during the initramfs phase of the
boot process. We run a couple of setup scripts to manage the sysroot before the switch_root, so it
is not unusual to see some "growing pains" after an upgrade. The hangs were random with no
obvious cause, but systemd is very weird anyway so we attributed it to a new dependency or race
condition that we had to address after going from systemd 239 to 247.

It is also worth noting that systemd itself was not hung, it responded to the 'ole "three finger salute"
and dutifully filled the screen with shutdown messages. It was just that the boot process randomly
stopped cold in initramfs before the switch root. We would also occasionally see systemd
complaining in the logs, "Starting requested but asserts failed".

Historically, when asserts fail, it is a sign of a much larger problem, so I did another experiment...

Since we could build our SDK VM successfully with buildtools-extended, why not build the target
images? So I did. After a day of testing in the lab, none of the testers have seen the boot hang up in
the initramfs stage, whereas before it was happening about 50% of the time. I need a good week of
successful test activity before I am willing to declare success, but the results were convincing
enough to make it worth this summary post.

I did an extensive amount of trial and error testing, including meticulously comparing
buildtools-extended with our own versions of the same files. The only intersection point was gcc.

The gcc delivered with buildtools-extended works great. When I build hardknott's gcc10 from the
gcc in buildtools-extended, we are not able to build our target images with the resulting compiler.
When I build our target images from the old thud environment, we get a mysterious hang and
systemd asserts triggering during boot. Since GCC10 is an intermediate piece of the build, it is
also implicated despite the native environment running GCC8.

I will continue to troubleshoot this but I was hoping for some insight (or gentle guidance if I am
making a silly mistake). Overall, I am at a loss to think of a reason why I should not be able to build
a compiler from the buildtools-extended compiler and then use it to reliably build our target images.

Thank you,

..Ch:W..


P.S. For those who are curious, we started out on Pyro hosted on Ubuntu 16.04. From there we made
the jump to self hosting when we used that environment to build a thud based VM SDK. After years of
successful builds, we are now in the process of upgrading to Hardknott.

P.P.S. For the sake of completeness, I had to add the following files to the buildtools-extended
sysroot to fully complete the build of our images:

/usr/include/magic.h -> util-linux "more" command requires this.
/usr/include/zstd.h -> I do not recall which recipe required this.
/usr/bin/free -> The OpenJDK 8 build scripts need this.
/usr/include/sys/* -> openjdk-8-native
/lib/libcap.so.2 -> The binutils "dir" command quietly breaks the build without this. I am not a fan of the
lack of error checking in the binutils build...
/usr/include/sensors/error.h and sensors.h -> mesa-native
/usr/include/zstd_errors.h -> qemu-system-native

--
"Perfection must be reached by degrees; she requires the slow hand of time." - Voltaire





Re: Recipe for include-what-you-use and rpath problem #sdk

Francesco Cusolito
 

I was able to make it work correctly by enabling CMAKE_SKIP_RPATH.
Here is the working full recipe:

LICENSE = "NCSA"
LIC_FILES_CHKSUM = "file://LICENSE.TXT;md5=59d01ad98720f3c50d6a8a0ef3108c88 \
                    file://iwyu-check-license-header.py;md5=cdc4ab52c0b26e216cbf434649d30403"

SRC_URI = "git://github.com/include-what-you-use/include-what-you-use.git;protocol=https;branch=clang_10"

PV = "0.14+git${SRCPV}"
SRCREV = "0.14"

S = "${WORKDIR}/git"

DEPENDS = "clang"

inherit cmake python3native

EXTRA_OECMAKE_append_class-nativesdk = " \
	-DCMAKE_SKIP_RPATH:BOOL=ON \
	"

BBCLASSEXTEND_append = " \
	nativesdk \
	"


[meta-rockchip][PATCH] conf/machine/include/rockchip-wic.inc: create

Trevor Woerner
 

Create a conf/machine/include/rockchip-wic.inc file to contain all the common
wic/wks things for easy inclusion by any MACHINEs that use wic for their image
creation.

NOTE: the wic image type of rock-pi-e changed from "wic.xz" to "wic" which
matches all the other meta-rockchip MACHINEs that use wic

The following variables were checked before and after to make sure they remain
correct/sensible:
- IMAGE_FSTYPES
- WKS_FILE_DEPENDS
- IMAGE_BOOT_FILES
- RK_CONSOLE_BAUD
- RK_CONSOLE_DEVICE
- RK_BOOT_DEVICE
- SERIAL_CONSOLES
- WICVARS

Build-tested for all currently-defined MACHINEs.

Boot-tested on the following boards to make sure they continue to boot to a
console correctly (core-image-base):
- tinker-board
- rock64
- rock-pi-4b
- rock-pi-e
- nanopi-m4-2gb

Signed-off-by: Trevor Woerner <twoerner@...>
---
conf/machine/firefly-rk3288.conf | 13 +----------
conf/machine/include/nanopi-m4.inc | 11 ---------
conf/machine/include/rk3328.inc | 1 +
conf/machine/include/rk3399.inc | 1 +
conf/machine/include/rock-pi-4.inc | 11 ---------
conf/machine/include/rockchip-defaults.inc | 14 -----------
conf/machine/include/rockchip-wic.inc | 27 ++++++++++++++++++++++
conf/machine/include/tinker.inc | 13 +----------
conf/machine/rock-pi-e.conf | 10 --------
conf/machine/rock64.conf | 11 ---------
conf/machine/vyasa-rk3288.conf | 13 +----------
11 files changed, 32 insertions(+), 93 deletions(-)
create mode 100644 conf/machine/include/rockchip-wic.inc

diff --git a/conf/machine/firefly-rk3288.conf b/conf/machine/firefly-rk3288.conf
index 2a5f0ba..138e840 100644
--- a/conf/machine/firefly-rk3288.conf
+++ b/conf/machine/firefly-rk3288.conf
@@ -7,20 +7,9 @@
#http://www.t-firefly.com/en/

require conf/machine/include/rk3288.inc
+require conf/machine/include/rockchip-wic.inc

KERNEL_DEVICETREE = "rk3288-firefly.dtb"
UBOOT_MACHINE = "firefly-rk3288_defconfig"

WKS_FILE ?= "firefly-rk3288.wks"
-IMAGE_FSTYPES += "wic wic.bmap"
-
-WKS_FILE_DEPENDS ?= " \
- mtools-native \
- dosfstools-native \
- virtual/bootloader \
- virtual/kernel \
- "
-IMAGE_BOOT_FILES ?= "\
- ${KERNEL_IMAGETYPE} \
- ${KERNEL_DEVICETREE} \
- "
diff --git a/conf/machine/include/nanopi-m4.inc b/conf/machine/include/nanopi-m4.inc
index 8a7c1d9..7ca91db 100644
--- a/conf/machine/include/nanopi-m4.inc
+++ b/conf/machine/include/nanopi-m4.inc
@@ -10,14 +10,3 @@ KERNEL_DEVICETREE = "rockchip/rk3399-nanopi-m4.dtb"

RK_BOOT_DEVICE = "mmcblk1"
WKS_FILE ?= "rock-pi-4.wks"
-IMAGE_FSTYPES += "wic wic.bmap"
-
-WKS_FILE_DEPENDS ?= " \
- mtools-native \
- dosfstools-native \
- virtual/bootloader \
- virtual/kernel \
- "
-IMAGE_BOOT_FILES ?= "\
- ${KERNEL_IMAGETYPE} \
- "
diff --git a/conf/machine/include/rk3328.inc b/conf/machine/include/rk3328.inc
index 5b11868..b0cafb5 100644
--- a/conf/machine/include/rk3328.inc
+++ b/conf/machine/include/rk3328.inc
@@ -8,6 +8,7 @@ DEFAULTTUNE ?= "cortexa53-crypto"
require conf/machine/include/soc-family.inc
require conf/machine/include/tune-cortexa53.inc
require conf/machine/include/rockchip-defaults.inc
+require conf/machine/include/rockchip-wic.inc

KBUILD_DEFCONFIG ?= "defconfig"
KERNEL_CLASSES = "kernel-fitimage"
diff --git a/conf/machine/include/rk3399.inc b/conf/machine/include/rk3399.inc
index 9f9f474..79e83e2 100644
--- a/conf/machine/include/rk3399.inc
+++ b/conf/machine/include/rk3399.inc
@@ -8,6 +8,7 @@ DEFAULTTUNE ?= "cortexa72-cortexa53-crypto"
require conf/machine/include/soc-family.inc
require conf/machine/include/tune-cortexa72-cortexa53.inc
require conf/machine/include/rockchip-defaults.inc
+require conf/machine/include/rockchip-wic.inc

KBUILD_DEFCONFIG ?= "defconfig"
KERNEL_CLASSES = "kernel-fitimage"
diff --git a/conf/machine/include/rock-pi-4.inc b/conf/machine/include/rock-pi-4.inc
index a3e60c7..92fc330 100644
--- a/conf/machine/include/rock-pi-4.inc
+++ b/conf/machine/include/rock-pi-4.inc
@@ -5,16 +5,5 @@ require conf/machine/include/rk3399.inc

RK_BOOT_DEVICE = "mmcblk1"
WKS_FILE ?= "rock-pi-4.wks"
-IMAGE_FSTYPES += "wic wic.bmap"
-
-WKS_FILE_DEPENDS ?= " \
- mtools-native \
- dosfstools-native \
- virtual/bootloader \
- virtual/kernel \
- "
-IMAGE_BOOT_FILES ?= "\
- ${KERNEL_IMAGETYPE} \
- "

MACHINE_EXTRA_RRECOMMENDS += "kernel-modules"
diff --git a/conf/machine/include/rockchip-defaults.inc b/conf/machine/include/rockchip-defaults.inc
index b0346c9..b41c523 100644
--- a/conf/machine/include/rockchip-defaults.inc
+++ b/conf/machine/include/rockchip-defaults.inc
@@ -23,17 +23,3 @@ XSERVER = " \
# misc
SERIAL_CONSOLES ?= "1500000;ttyS2"
IMAGE_FSTYPES += "ext4"
-
-# use the first-defined <baud>;<device> pair in SERIAL_CONSOLES
-# for the console parameter in the wks files
-RK_CONSOLE_BAUD ?= "${@d.getVar('SERIAL_CONSOLES').split(';')[0]}"
-RK_CONSOLE_DEVICE ?= "${@d.getVar('SERIAL_CONSOLES').split(';')[1].split()[0]}"
-
-# boot device (sd-card/emmc)
-RK_BOOT_DEVICE ??= "mmcblk0"
-
-WICVARS_append = " \
- RK_BOOT_DEVICE \
- RK_CONSOLE_BAUD \
- RK_CONSOLE_DEVICE \
- "
diff --git a/conf/machine/include/rockchip-wic.inc b/conf/machine/include/rockchip-wic.inc
new file mode 100644
index 0000000..0ee8c0e
--- /dev/null
+++ b/conf/machine/include/rockchip-wic.inc
@@ -0,0 +1,27 @@
+# common meta-rockchip wic/wks items
+
+IMAGE_FSTYPES += "wic wic.bmap"
+WKS_FILE_DEPENDS ?= " \
+ mtools-native \
+ dosfstools-native \
+ virtual/bootloader \
+ virtual/kernel \
+ "
+IMAGE_BOOT_FILES = " \
+ ${KERNEL_IMAGETYPE} \
+ ${@bb.utils.contains('KERNEL_IMAGETYPE', 'fitImage', '', '${KERNEL_DEVICETREE}', d)} \
+ "
+
+# use the first-defined <baud>;<device> pair in SERIAL_CONSOLES
+# for the console parameter in the wks files
+RK_CONSOLE_BAUD ?= "${@d.getVar('SERIAL_CONSOLES').split(';')[0]}"
+RK_CONSOLE_DEVICE ?= "${@d.getVar('SERIAL_CONSOLES').split(';')[1].split()[0]}"
+
+# boot device (sd-card/emmc)
+RK_BOOT_DEVICE ??= "mmcblk0"
+
+WICVARS_append = " \
+ RK_BOOT_DEVICE \
+ RK_CONSOLE_BAUD \
+ RK_CONSOLE_DEVICE \
+ "
diff --git a/conf/machine/include/tinker.inc b/conf/machine/include/tinker.inc
index e851b59..eaeb564 100644
--- a/conf/machine/include/tinker.inc
+++ b/conf/machine/include/tinker.inc
@@ -1,15 +1,4 @@
require conf/machine/include/rk3288.inc
+require conf/machine/include/rockchip-wic.inc

WKS_FILE ?= "tinker-board.wks"
-IMAGE_FSTYPES += "wic wic.bmap"
-
-WKS_FILE_DEPENDS ?= " \
- mtools-native \
- dosfstools-native \
- virtual/bootloader \
- virtual/kernel \
- "
-IMAGE_BOOT_FILES ?= "\
- ${KERNEL_IMAGETYPE} \
- ${KERNEL_DEVICETREE} \
- "
diff --git a/conf/machine/rock-pi-e.conf b/conf/machine/rock-pi-e.conf
index 38362a0..3fdbb5e 100644
--- a/conf/machine/rock-pi-e.conf
+++ b/conf/machine/rock-pi-e.conf
@@ -15,13 +15,3 @@ PREFERRED_PROVIDER_virtual/bootloader = "u-boot"
UBOOT_MACHINE = "rock-pi-e-rk3328_defconfig"

WKS_FILE = "rock-pi-e.wks"
-IMAGE_FSTYPES += "wic.xz wic.bmap"
-WKS_FILE_DEPENDS = " \
- mtools-native \
- dosfstools-native \
- virtual/bootloader \
- virtual/kernel \
- "
-IMAGE_BOOT_FILES ?= " \
- ${KERNEL_IMAGETYPE} \
- "
diff --git a/conf/machine/rock64.conf b/conf/machine/rock64.conf
index acda018..93e68e0 100644
--- a/conf/machine/rock64.conf
+++ b/conf/machine/rock64.conf
@@ -16,16 +16,5 @@ KERNEL_DEVICETREE = "rockchip/rk3328-rock64.dtb"
RK_BOOT_DEVICE ?= "mmcblk1"

WKS_FILE ?= "rock-pi-e.wks"
-IMAGE_FSTYPES += "wic wic.bmap"
-
-WKS_FILE_DEPENDS ?= " \
- mtools-native \
- dosfstools-native \
- virtual/bootloader \
- virtual/kernel \
- "
-IMAGE_BOOT_FILES ?= "\
- ${KERNEL_IMAGETYPE} \
- "

KBUILD_DEFCONFIG = "defconfig"
diff --git a/conf/machine/vyasa-rk3288.conf b/conf/machine/vyasa-rk3288.conf
index c92c821..3e1f395 100644
--- a/conf/machine/vyasa-rk3288.conf
+++ b/conf/machine/vyasa-rk3288.conf
@@ -6,6 +6,7 @@
#@DESCRIPTION: Amarula Vyasa is Rockchip RK3288 SOC based Single board computer with fully supported opensource software.

require conf/machine/include/rk3288.inc
+require conf/machine/include/rockchip-wic.inc

KERNEL_IMAGETYPE = "uImage"
KERNEL_DEVICETREE = "rk3288-vyasa.dtb"
@@ -15,15 +16,3 @@ UBOOT_MACHINE = "vyasa-rk3288_defconfig"

RK_BOOT_DEVICE = "mmcblk2"
WKS_FILE ?= "vyasa-rk3288.wks"
-IMAGE_FSTYPES += "wic wic.bmap"
-
-WKS_FILE_DEPENDS ?= " \
- mtools-native \
- dosfstools-native \
- virtual/bootloader \
- virtual/kernel \
- "
-IMAGE_BOOT_FILES ?= "\
- ${KERNEL_IMAGETYPE} \
- ${KERNEL_DEVICETREE} \
- "
--
2.30.0.rc0
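(Editor's note: a hedged reading of the IMAGE_BOOT_FILES conditional introduced in rockchip-wic.inc above; the assumption that no separate DTB is needed in the FIT case follows from the kernel-fitimage KERNEL_CLASSES set in the rk33xx includes, which normally bundles the DTB into the FIT image.)

# sketch of how IMAGE_BOOT_FILES resolves per KERNEL_IMAGETYPE:
#   KERNEL_IMAGETYPE = "fitImage" -> "fitImage"
#       (no separate DTB copied; assumed to be inside the FIT image)
#   KERNEL_IMAGETYPE = "zImage"   -> "zImage ${KERNEL_DEVICETREE}"
IMAGE_BOOT_FILES = " \
    ${KERNEL_IMAGETYPE} \
    ${@bb.utils.contains('KERNEL_IMAGETYPE', 'fitImage', '', '${KERNEL_DEVICETREE}', d)} \
    "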


Re: The do_populate_sdk is finishing OK even when there are errors present in the build

Fabio Berton
 

Hi Richard!

Ok, I'll prepare a patch, do more tests on my side and if everything works I'll send the patch to the OE-core list.

Is there any specific test, or just populate_sdk with core-image-base?

Thanks!

On Fri, Jun 25, 2021 at 8:48 AM Richard Purdie <richard.purdie@...> wrote:
On Thu, 2021-06-24 at 17:40 -0300, Fabio Berton wrote:
> Hi all!
>
> I'm running some test with do_populate_sdk task and I'm seeing this 
> on the log:
>
> check_data_file_clashes: Package kmsxx-dbg wants to install file /home/builder/build/tmp/work/foo-poky-
> linux/core-image-minimal/1.0-r0/sdk/image/opt/bar/sysroots/aarch64-poky-linux/usr/bin/.debug/kmstest
> But that file is already provided by package  * libdrm-dbg
>
> I also see this kind of message with other packages.
>
> Looking in the source code I found that the install_complementary 
> function runs this [1] with attempt_only=True, and if attempt_only is
> true, the log above is just a warning, as shown here [2].
>
> This [3] comment says that "will only attempt to install these packages, 
> if they don't exist then no error will occur."
>
> My question is how can I force an error and not just a warning when 
> running do_populate_sdk? 
>
> I understand that I can change [1] to run:
>
>   self.install(install_pkgs)
>
> so it'll set attempt_only to False, which is the default, but I
> think this will break some use cases.
>
> What is the correct behaviour here, see the warning messages and fix 
> the packages to avoid "file is already provided by package" messages, 
> every time I create a SDK or change in some way to see an error message
>  and stop SDK generation?
>
> What is the correct behavior here, inspect the warning messages, and
> fix the packages to avoid "file is already provided by package" messages,
> every time I create an SDK or change it in some way to see an error 
> message and stop the SDK generation?

It would probably be worth an experiment to see if we really do need the
attempt_only option set there any more. I'd hope it isn't needed now...

It is probably worth testing a patch on the autobuilder, assuming your
local tests with that pass. We'd need to check the different package
backends are ok with that.

Cheers,

Richard


Re: The do_populate_sdk is finishing OK even when there are errors present in the build

Richard Purdie
 

On Thu, 2021-06-24 at 17:40 -0300, Fabio Berton wrote:
Hi all!

I'm running some test with do_populate_sdk task and I'm seeing this 
on the log:

check_data_file_clashes: Package kmsxx-dbg wants to install file /home/builder/build/tmp/work/foo-poky-
linux/core-image-minimal/1.0-r0/sdk/image/opt/bar/sysroots/aarch64-poky-linux/usr/bin/.debug/kmstest
But that file is already provided by package  * libdrm-dbg

I also see this kind of message with other packages.

Looking in the source code I found that the install_complementary 
function runs this [1] with attempt_only=True, and if attempt_only is
true, the log above is just a warning, as shown here [2].

This [3] comment says that "will only attempt to install these packages, 
if they don't exist then no error will occur."

My question is how can I force an error and not just a warning when 
running do_populate_sdk? 

I understand that I can change [1] to run:

  self.install(install_pkgs)

so it'll set attempt_only to False, which is the default, but I
think this will break some use cases.

What is the correct behaviour here, see the warning messages and fix 
the packages to avoid "file is already provided by package" messages, 
every time I create a SDK or change in some way to see an error message
and stop SDK generation?

What is the correct behavior here, inspect the warning messages, and
fix the packages to avoid "file is already provided by package" messages,
every time I create an SDK or change it in some way to see an error 
message and stop the SDK generation?
It would probably be worth an experiment to see if we really do need the
attempt_only option set there any more. I'd hope it isn't needed now...

It is probably worth testing a patch on the autobuilder, assuming your
local tests with that pass. We'd need to check the different package
backends are ok with that.

Cheers,

Richard


Re: what's the state of things with pushing the bounds on ASSUME_PROVIDED?

Richard Purdie
 

On Thu, 2021-06-24 at 07:50 -0400, Robert P. J. Day wrote:
  i asked about this once upon a time, so i thought i'd follow up ...
given the fairly stable state of recent linux distros, is there any
standard for taking advantage of what *should* be robust native tools
rather than building them? (i'm ignoring taking advantage of sstate
and building SDKs and other clever speedups for now.)

  from scratch, i did a wind river (LINCD) build of
wrlinux-image-small (and i assume it would be much the same under
current oe-core), and i notice that numerous native tools were
compiled, including such standards as cmake, curl, elfutils ... the
list goes on and on.

  so other than the tools that are *required* to be installed, if i
mention that i am currently running ubuntu 20.04, is there any
indication as to which tools i'm relatively safe to take advantage
using ASSUME_PROVIDED and HOSTTOOLS? i realize that the versions built
will probably differ from the host versions, but it seems that if
there is an incompatibility, that would be fairly obvious in short
order.

  thoughts?
Quite often things aren't as simple as they first seem:

Elfutils has a history of interesting changes between versions so having 
our builds use a consistent version is good.

Some recipes build libs as well as binaries, e.g. the compression tools.
It's relatively easy to check that a binary is present; it is harder to check
the right -devel headers are present. That is a solvable problem but again, 
version consistency is good. If you require a HOSTTOOLS bin but our own
lib, you can get version mismatches.

We do patch some utilities for 'reasons' and having those patches missing
can be a pain and cause weird errors.

Reproducibility is also a concern, particularly if different versions of 
tools like flex/bison generated different code.

I also wonder who is going to support testing all these different options
and handle the resulting build failures and bugs being raised?

This list isn't definitive.


In summary, I see a lot of problems for what amounts to not much speed
gain. Particularly when we have a mechanism like sstate available
which allows binary reuse.

Cheers,

Richard
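(Editor's note: for readers unfamiliar with the mechanism under discussion, a minimal local.conf sketch of how host-provided tools are normally declared; the cmake entries are purely illustrative assumptions, and Richard's caveats above are exactly about whether any given entry is actually safe.)

# local.conf (sketch, illustrative entries only):
# tell bitbake to assume the host provides this -native recipe ...
ASSUME_PROVIDED += "cmake-native"
# ... and allow the corresponding host binary into the restricted PATH
HOSTTOOLS += "cmake"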


Re: Hardknott (GCC10) Compiler Issues

Zoran
 

> I have no idea if this is possible in the current YOCTO development stage:
> GCCVERSION = "11.%"
> To do the FF to GCC 11.>

WARNING: preferred version 11.% of gcc-runtime not available (for item libg2c)
WARNING: versions of gcc-runtime available: 10.2.0


For hardknott. I guess this answers my later question.

Let us see about my very first question!

BR,
Zee
_______

INCLUDED:
WARNING: preferred version 11.% of gcc-runtime not available (for item libssp-dev)
WARNING: versions of gcc-runtime available: 10.2.0
WARNING: preferred version 11.% of gcc-runtime not available (for item libg2c-dev)
WARNING: versions of gcc-runtime available: 10.2.0
WARNING: preferred version 11.% of gcc-runtime not available (for item libssp)
WARNING: versions of gcc-runtime available: 10.2.0

Build Configuration:
BB_VERSION           = "1.50.0"
BUILD_SYS            = "x86_64-linux"
NATIVELSBSTRING      = "fedora-33"
TARGET_SYS           = "arm-poky-linux-gnueabi"
MACHINE              = "beaglebone"
DISTRO               = "poky"
DISTRO_VERSION       = "3.3.1"
TUNE_FEATURES        = "arm vfp cortexa8 neon callconvention-hard"
TARGET_FPU           = "hard"
meta                
meta-poky            
meta-yocto-bsp       = "hardknott:74dbb08c3709fec6563ee65a3661f66fdcbb3e2f"
meta-jumpnow         = "hardknott:ac90f018ebb9de8d6ac12f22368e004aa7be69a2"
meta-bbb             = "hardknott:d838aa54e3ed81d08597c08e112fc8966aaa501d"
meta-oe              
meta-python          
meta-networking      = "hardknott:aca88908fd329f5cef6f19995b072397fb2d8ec6"
meta-qt5             = "upstream/hardknott:a00af3eae082b772469d9dd21b2371dd4d237684"
meta-socketcan       = "master:cefd86cd1def9ad2e63be527f8ce36a076d7e17c"

NOTE: Fetching uninative binary shim http://downloads.yoctoproject.org/releases/uninative/3.2/x86_64-nativesdk-libc.tar.xz;sha256sum=3ee8c7d55e2d4c7ae3887cddb97219f97b94efddfeee2e24923c0cb0e8ce84c6 (will check PREMIRRORS first)
Initialising tasks: 100% |###########################################################################################| Time: 0:00:11
Sstate summary: Wanted 1709 Local 0 Network 0 Missed 1709 Current 0 (0% match, 0% complete)
NOTE: Executing Tasks


On Fri, Jun 25, 2021 at 7:58 AM Zoran via lists.yoctoproject.org <zoran.stojsavljevic=gmail.com@...> wrote:
An interesting issue, and I think I hit it as well (my best guess).

Here is my issue:
https://github.com/mguentner/cannelloni/issues/35

> During the thud-to-hardknott upgrade process, we did nightly
> builds of the new hardknott based target image from our thud
> based SDK VM. I assumed that since GCC10 was being built
> as part of the build sysroot bootstrap process, we were getting
> a clean and consistent result irrespective of the underlying
> build server OS.

Maybe you can try the following: insert the following line in your
local.conf:

GCCVERSION = "9.%"

for hardknott release.

I need to try this myself, I just used gcc as is (default one which
comes with the release, I guess 10).

I have no idea if this is possible in the current YOCTO development stage:

GCCVERSION = "11.%"

To do the FF to GCC 11.

Zee
_______

On Fri, Jun 25, 2021 at 6:48 AM Chuck Wolber <chuckwolber@...> wrote:
>
> All,
>
> Please accept my apologies in advance for the detailed submission. I think it is warranted in this
> case.
>
> There is something... "odd" about the GCC 10 compiler that is delivered with Hardknott. I am still
> chasing it down, so I am not yet ready to declare a root cause or submit a bug, but I am posting
> what I have now in case anyone has some insights to offer.
>
> For all I know it is something unusual that I am doing, but we have a lot of history with our
> build/dev/release methods, so I would be surprised if that was actually the case. I have also
> discussed aspects of this on IRC for the last few days, so some of this may be familiar to some
> of you.
>
> Background: We maintain a virtual machine SDK for our developers that is as close as possible to
> the actual embedded hardware environment that we target. The SDK image is our baseline Linux
> OS plus lots of the expected dev and debugging tools. The image deployed to our target devices is
> the baseline Linux OS plus the core application suite. It is also important to note that we only
> support the x86_64 machine architecture in our target devices and development workstations.
>
> We also spin up and spin down the SDK VM for our nightly builds. This guarantees strict consistency
> and eliminates lots of variables when we are trying to troubleshoot something hairy.
>
> We just upgraded from Thud to Hardknott. This means we built our new Hardknott based SDK VM
> image from our Thud based SDK VM (GCC 8 / glibc 2.28). When we attempted to build our target
> device image in the new Hardknott based SDK VM, we consistently got a segfault when any build
> task involves bison issuing a warning of some sort. I traced this down for a very long time and it
> seemed to have something to do with the libtextstyle library from gettext and the way bison used it.
> But I now believe this to be a red herring. Bison seems to be very fragile, but in this case,
> that may have actually been a good thing.
>
> After some experimentation I found that the issue went away when I dropped down to the 3.6.4
> recipe of bison found at OE-Core:bc95820cd. But this did not sit right with me. There is no way I
> should be the only person seeing this issue.
>
> Then I tried an experiment... I assumed I was encountering a compiler bootstrap issue with such a
> big jump (GCC8 -> GCC10), so I rebuilt our hardknott based SDK VM with the 3.3.1 version of
> buildtools-extended. The build worked flawlessly, but when I booted into the new SDK VM and
> kicked off the build I got the same result (bison segfault when any build warnings are encountered).
>
> This is when I started to mentally put a few more details together with other post-upgrade issues that
> had been discovered in our lab. We attributed them to garden variety API and behavioral changes
> expected during a Yocto upgrade, but now I am not so sure.
>
> During the thud-to-hardknott upgrade process, we did nightly builds of the new hardknott based
> target image from our thud based SDK VM. I assumed that since GCC10 was being built as part of
> the build sysroot bootstrap process, we were getting a clean and consistent result irrespective of the
> underlying build server OS.
>
> One of the issues we were seeing in the lab was a periodic hang during the initramfs phase of the
> boot process. We run a couple of setup scripts to manage the sysroot before the switch_root, so it
> is not unusual to see some "growing pains" after an upgrade. The hangs were random with no
> obvious cause, but systemd is very weird anyway so we attributed it to a new dependency or race
> condition that we had to address after going from systemd 239 to 247.
>
> It is also worth noting that systemd itself was not hung, it responded to the 'ole "three finger salute"
> and dutifully filled the screen with shutdown messages. It was just that the boot process randomly
> stopped cold in initramfs before the switch root. We would also occasionally see systemd
> complaining in the logs, "Starting requested but asserts failed".
>
> Historically, when asserts fail, it is a sign of a much larger problem, so I did another experiment...
>
> Since we could build our SDK VM successfully with buildtools-extended, why not build the target
> images? So I did. After a day of testing in the lab, none of the testers have seen the boot hang up in
> the initramfs stage, whereas before it was happening about 50% of the time. I need a good week of
> successful test activity before I am willing to declare success, but the results were convincing
> enough to make it worth this summary post.
>
> I did an extensive amount of trial and error testing, including meticulously comparing
> buildtools-extended with our own versions of the same files. The only intersection point was gcc.
>
> The gcc delivered with buildtools-extended works great. When I build hardknott's gcc10 from the
> gcc in buildtools-extended, we are not able to build our target images with the resulting compiler.
> When I build our target images from the old thud environment, we get a mysterious hang and
> systemd asserts triggering during boot. Since GCC10 is an intermediate piece of the build, it is
> also implicated despite the native environment running GCC8.
>
> I will continue to troubleshoot this but I was hoping for some insight (or gentle guidance if I am
> making a silly mistake). Overall, I am at a loss to think of a reason why I should not be able to build
> a compiler from the buildtools-extended compiler and then use it to reliably build our target images.
>
> Thank you,
>
> ..Ch:W..
>
>
> P.S. For those who are curious, we started out on Pyro hosted on Ubuntu 16.04. From there we made
> the jump to self hosting when we used that environment to build a thud based VM SDK. After years of
> successful builds, we are now in the process of upgrading to Hardknott.
>
> P.P.S. For the sake of completeness, I had to add the following files to the buildtools-extended
> sysroot to fully complete the build of our images:
>
> /usr/include/magic.h -> util-linux "more" command requires this.
> /usr/include/zstd.h -> I do not recall which recipe required this.
> /usr/bin/free -> The OpenJDK 8 build scripts need this.
> /usr/include/sys/* -> openjdk-8-native
> /lib/libcap.so.2 -> The binutils "dir" command quietly breaks the build without this. I am not a fan of the
>                             lack of error checking in the binutils build...
> /usr/include/sensors/error.h and sensors.h -> mesa-native
> /usr/include/zstd_errors.h -> qemu-system-native
>
> --
> "Perfection must be reached by degrees; she requires the slow hand of time." - Voltaire
>
>
>




Re: Hardknott (GCC10) Compiler Issues

Zoran
 

An interesting issue, and I think I hit it as well (my best guess).

Here is my issue:
https://github.com/mguentner/cannelloni/issues/35

> During the thud-to-hardknott upgrade process, we did nightly
> builds of the new hardknott based target image from our thud
> based SDK VM. I assumed that since GCC10 was being built
> as part of the build sysroot bootstrap process, we were getting
> a clean and consistent result irrespective of the underlying
> build server OS.
Maybe you can try the following: insert the following line in your
local.conf:

GCCVERSION = "9.%"

for hardknott release.

I need to try this myself, I just used gcc as is (default one which
comes with the release, I guess 10).

I have no idea if this is possible in the current YOCTO development stage:

GCCVERSION = "11.%"

To do the FF to GCC 11.

Zee
_______

On Fri, Jun 25, 2021 at 6:48 AM Chuck Wolber <chuckwolber@...> wrote:

All,

Please accept my apologies in advance for the detailed submission. I think it is warranted in this
case.

There is something... "odd" about the GCC 10 compiler that is delivered with Hardknott. I am still
chasing it down, so I am not yet ready to declare a root cause or submit a bug, but I am posting
what I have now in case anyone has some insights to offer.

For all I know it is something unusual that I am doing, but we have a lot of history with our
build/dev/release methods, so I would be surprised if that was actually the case. I have also
discussed aspects of this on IRC for the last few days, so some of this may be familiar to some
of you.

Background: We maintain a virtual machine SDK for our developers that is as close as possible to
the actual embedded hardware environment that we target. The SDK image is our baseline Linux
OS plus lots of the expected dev and debugging tools. The image deployed to our target devices is
the baseline Linux OS plus the core application suite. It is also important to note that we only
support the x86_64 machine architecture in our target devices and development workstations.

We also spin up and spin down the SDK VM for our nightly builds. This guarantees strict consistency
and eliminates lots of variables when we are trying to troubleshoot something hairy.

We just upgraded from Thud to Hardknott. This means we built our new Hardknott based SDK VM
image from our Thud based SDK VM (GCC 8 / glibc 2.28). When we attempted to build our target
device image in the new Hardknott based SDK VM, we consistently got a segfault when any build
task involves bison issuing a warning of some sort. I traced this down for a very long time and it
seemed to have something to do with the libtextstyle library from gettext and the way bison used it.
But I now believe this to be a red herring. Bison seems to be very fragile, but in this case,
that may have actually been a good thing.

After some experimentation I found that the issue went away when I dropped down to the 3.6.4
recipe of bison found at OE-Core:bc95820cd. But this did not sit right with me. There is no way I
should be the only person seeing this issue.

Then I tried an experiment... I assumed I was encountering a compiler bootstrap issue with such a
big jump (GCC8 -> GCC10), so I rebuilt our hardknott based SDK VM with the 3.3.1 version of
buildtools-extended. The build worked flawlessly, but when I booted into the new SDK VM and
kicked off the build I got the same result (bison segfault when any build warnings are encountered).

This is when I started to mentally put a few more details together with other post-upgrade issues that
had been discovered in our lab. We attributed them to garden variety API and behavioral changes
expected during a Yocto upgrade, but now I am not so sure.

During the thud-to-hardknott upgrade process, we did nightly builds of the new hardknott based
target image from our thud based SDK VM. I assumed that since GCC10 was being built as part of
the build sysroot bootstrap process, we were getting a clean and consistent result irrespective of the
underlying build server OS.

One of the issues we were seeing in the lab was a periodic hang during the initramfs phase of the
boot process. We run a couple of setup scripts to manage the sysroot before the switch_root, so it
is not unusual to see some "growing pains" after an upgrade. The hangs were random with no
obvious cause, but systemd is very weird anyway so we attributed it to a new dependency or race
condition that we had to address after going from systemd 239 to 247.

It is also worth noting that systemd itself was not hung, it responded to the 'ole "three finger salute"
and dutifully filled the screen with shutdown messages. It was just that the boot process randomly
stopped cold in initramfs before the switch root. We would also occasionally see systemd
complaining in the logs, "Starting requested but asserts failed".

Historically, when asserts fail, it is a sign of a much larger problem, so I did another experiment...

Since we could build our SDK VM successfully with buildtools-extended, why not build the target
images? So I did. After a day of testing in the lab, none of the testers have seen the boot hang up in
the initramfs stage, whereas before it was happening about 50% of the time. I need a good week of
successful test activity before I am willing to declare success, but the results were convincing
enough to make it worth this summary post.

I did an extensive amount of trial and error testing, including meticulously comparing
buildtools-extended with our own versions of the same files. The only intersection point was gcc.

The gcc delivered with buildtools-extended works great. When I build hardknott's gcc10 from the
gcc in buildtools-extended, we are not able to build our target images with the resulting compiler.
When I build our target images from the old thud environment, we get a mysterious hang and
systemd asserts triggering during boot. Since GCC10 is an intermediate piece of the build, it is
also implicated despite the native environment running GCC8.

I will continue to troubleshoot this but I was hoping for some insight (or gentle guidance if I am
making a silly mistake). Overall, I am at a loss to think of a reason why I should not be able to build
a compiler from the buildtools-extended compiler and then use it to reliably build our target images.

Thank you,

..Ch:W..


P.S. For those who are curious, we started out on Pyro hosted on Ubuntu 16.04. From there we made
the jump to self hosting when we used that environment to build a thud based VM SDK. After years of
successful builds, we are now in the process of upgrading to Hardknott.

P.P.S. For the sake of completeness, I had to add the following files to the buildtools-extended
sysroot to fully complete the build of our images:

/usr/include/magic.h -> util-linux "more" command requires this.
/usr/include/zstd.h -> I do not recall which recipe required this.
/usr/bin/free -> The OpenJDK 8 build scripts need this.
/usr/include/sys/* -> openjdk-8-native
/lib/libcap.so.2 -> The binutils "dir" command quietly breaks the build without this. I am not a fan of the
lack of error checking in the binutils build...
/usr/include/sensors/error.h and sensors.h -> mesa-native
/usr/include/zstd_errors.h -> qemu-system-native

--
"Perfection must be reached by degrees; she requires the slow hand of time." - Voltaire



Re: TLV320AIC3104: tlv320aic3104 #kernel #yocto

Amrun Nisha.R
 

Hi Alexandre,

Thanks for your guidance. I have updated my device tree with a dummy codec, as in Linux. Still, the sound card is not properly added. When I tried to verify it using aplay, I got the error message "no sound card is found". Kindly advise.
