[meta-security][RFC 0/2] generic dm-verity support + BBB example

Bartosz Golaszewski
 

From: Bartosz Golaszewski <bgolaszewski@...>

I'm terribly sorry for spamming, but I eventually decided to resend this: not
only were the tags messed up, but I also added a v2 on top. This time it
should be good.

===

This series attempts to introduce support for dm-verity in meta-security.
It depends on a series[1] I submitted for OE-core introducing multi-stage
image deployment, which is currently pending review (although the general idea
was accepted by Richard). This new way of deploying image artifacts is aimed
at solving a circular dependency problem[2] which turned out to be impossible
to resolve if all artifacts are deployed at once by the do_image_complete task.

The first patch in this series introduces a generic bbclass that generates
dm-verity hash data and appends it to the end of the partition image.

The second patch adds support for an example verified-boot image for
BeagleBone Black, where the root dm-verity hash is stored inside the signed
fitImage in an initramfs that takes care of mounting the protected rootfs.

Patch 2/2 - while verified to work on BBB - should be generic enough to be
reusable across many platforms.
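
For illustration, a rough manual equivalent of what the new class automates
(a sketch only; the image name is an example and the 1024-byte data block
size mirrors the defaults used by the class):

IMG=core-image-full-cmdline-beaglebone-yocto.ext4
SIZE=$(stat --printf="%s" $IMG)   # the hash tree is appended right after the data
cp -a $IMG $IMG.verity
veritysetup --data-block-size=1024 --hash-offset=$SIZE format $IMG.verity $IMG.verity
# veritysetup prints the root hash, which must then be kept in a trusted
# (e.g. signed) location for use at boot time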

[1] https://www.mail-archive.com/openembedded-core@lists.openembedded.org/msg135694.html
[2] https://www.mail-archive.com/openembedded-core@lists.openembedded.org/msg134825.html

Bartosz Golaszewski (2):
classes: provide a class for generating dm-verity meta-data images
dm-verity: add a working example for BeagleBone Black

classes/dm-verity-img.bbclass | 88 +++++++++++++++++++
.../images/dm-verity-image-initramfs.bb | 26 ++++++
.../initrdscripts/initramfs-dm-verity.bb | 13 +++
.../initramfs-dm-verity/init-dm-verity.sh | 46 ++++++++++
wic/beaglebone-yocto-verity.wks.in | 15 ++++
5 files changed, 188 insertions(+)
create mode 100644 classes/dm-verity-img.bbclass
create mode 100644 recipes-core/images/dm-verity-image-initramfs.bb
create mode 100644 recipes-core/initrdscripts/initramfs-dm-verity.bb
create mode 100644 recipes-core/initrdscripts/initramfs-dm-verity/init-dm-verity.sh
create mode 100644 wic/beaglebone-yocto-verity.wks.in

--
2.25.0


Re: [OE-core][PATCH v2 0/2] generic dm-verity support + BBB example

Bartosz Golaszewski
 

On Fri, 10 Apr 2020 at 14:34, Bartosz Golaszewski <brgl@...> wrote:

Eek, this was supposed to be tagged [meta-security]. But since I'm
posting it as an RFC I won't be resending for now.

Bart


[OE-core][PATCH v2 2/2] dm-verity: add a working example for BeagleBone Black

Bartosz Golaszewski
 

From: Bartosz Golaszewski <bgolaszewski@...>

This adds the various bits and pieces needed to generate a working example
of a full chain of trust, up to the dm-verity-protected rootfs, on
BeagleBone Black.

The new initramfs is quite generic and should work for other SoCs as well
when using fitImage.

The following config can be used with current master poky,
meta-openembedded & meta-security to generate a BBB image using verified
boot and dm-verity.

UBOOT_SIGN_KEYDIR = "/tmp/test-keys/"
UBOOT_SIGN_KEYNAME = "dev"
UBOOT_SIGN_ENABLE = "1"
UBOOT_MKIMAGE_DTCOPTS = "-I dts -O dtb -p 2000"
UBOOT_MACHINE_beaglebone-yocto = "am335x_boneblack_vboot_config"

IMAGE_CLASSES += "dm-verity-img"
IMAGE_FSTYPES += "wic.xz ext4"

DM_VERITY_IMAGE = "core-image-full-cmdline"
DM_VERITY_IMAGE_TYPE = "ext4"

KERNEL_CLASSES += "kernel-fitimage"
KERNEL_IMAGETYPE_beaglebone-yocto = "fitImage"

IMAGE_INSTALL_remove = " kernel-image-zimage"
IMAGE_BOOT_FILES_remove = " zImage"
IMAGE_BOOT_FILES_append = " fitImage-${INITRAMFS_IMAGE}-${MACHINE}-${MACHINE};fitImage"

# Using systemd is not strictly needed but it deals nicely with a read-only
# filesystem by default.
DISTRO_FEATURES_append = " systemd"
DISTRO_FEATURES_BACKFILL_CONSIDERED += "sysvinit"
VIRTUAL-RUNTIME_init_manager = "systemd"
VIRTUAL-RUNTIME_initscripts = "systemd-compat-units"

INITRAMFS_IMAGE = "dm-verity-image-initramfs"
INITRAMFS_FSTYPES = "cpio.gz"
INITRAMFS_IMAGE_BUNDLE = "1"

WKS_FILE = "beaglebone-yocto-verity.wks.in"

KERNEL_FEATURES_append = " features/device-mapper/dm-verity.scc"
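
With the above (plus MACHINE = "beaglebone-yocto") in local.conf, the build
is then the usual one; sketch:

bitbake core-image-full-cmdline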

Signed-off-by: Bartosz Golaszewski <bgolaszewski@...>
---
.../images/dm-verity-image-initramfs.bb | 26 +++++++++++
.../initrdscripts/initramfs-dm-verity.bb | 13 ++++++
.../initramfs-dm-verity/init-dm-verity.sh | 46 +++++++++++++++++++
wic/beaglebone-yocto-verity.wks.in | 15 ++++++
4 files changed, 100 insertions(+)
create mode 100644 recipes-core/images/dm-verity-image-initramfs.bb
create mode 100644 recipes-core/initrdscripts/initramfs-dm-verity.bb
create mode 100644 recipes-core/initrdscripts/initramfs-dm-verity/init-dm-verity.sh
create mode 100644 wic/beaglebone-yocto-verity.wks.in

diff --git a/recipes-core/images/dm-verity-image-initramfs.bb b/recipes-core/images/dm-verity-image-initramfs.bb
new file mode 100644
index 0000000..f9ea376
--- /dev/null
+++ b/recipes-core/images/dm-verity-image-initramfs.bb
@@ -0,0 +1,26 @@
+DESCRIPTION = "Simple initramfs image for mounting the rootfs over the verity device mapper."
+
+# We want a clean, minimal image.
+IMAGE_FEATURES = ""
+
+PACKAGE_INSTALL = " \
+    initramfs-dm-verity \
+    base-files \
+    busybox \
+    util-linux-mount \
+    udev \
+    cryptsetup \
+    lvm2-udevrules \
+"
+
+# Can we somehow inspect reverse dependencies to avoid these variables?
+do_rootfs[depends] += "${DM_VERITY_IMAGE}:do_image_${DM_VERITY_IMAGE_TYPE}"
+
+IMAGE_FSTYPES = "${INITRAMFS_FSTYPES}"
+
+inherit core-image
+
+deploy_verity_hash() {
+    install -D -m 0644 ${DEPLOY_DIR_IMAGE}/${DM_VERITY_IMAGE}-${MACHINE}.${DM_VERITY_IMAGE_TYPE}.verity.env ${IMAGE_ROOTFS}/${datadir}/dm-verity.env
+}
+ROOTFS_POSTPROCESS_COMMAND += "deploy_verity_hash;"
diff --git a/recipes-core/initrdscripts/initramfs-dm-verity.bb b/recipes-core/initrdscripts/initramfs-dm-verity.bb
new file mode 100644
index 0000000..b614956
--- /dev/null
+++ b/recipes-core/initrdscripts/initramfs-dm-verity.bb
@@ -0,0 +1,13 @@
+SUMMARY = "Simple init script that uses devmapper to mount the rootfs in read-only mode protected by dm-verity"
+LICENSE = "MIT"
+LIC_FILES_CHKSUM = "file://${COREBASE}/meta/COPYING.MIT;md5=3da9cfbcb788c80a0384361b4de20420"
+
+SRC_URI = "file://init-dm-verity.sh"
+
+do_install() {
+    install -m 0755 ${WORKDIR}/init-dm-verity.sh ${D}/init
+    install -d ${D}/dev
+    mknod -m 622 ${D}/dev/console c 5 1
+}
+
+FILES_${PN} = "/init /dev/console"
diff --git a/recipes-core/initrdscripts/initramfs-dm-verity/init-dm-verity.sh b/recipes-core/initrdscripts/initramfs-dm-verity/init-dm-verity.sh
new file mode 100644
index 0000000..307d2c7
--- /dev/null
+++ b/recipes-core/initrdscripts/initramfs-dm-verity/init-dm-verity.sh
@@ -0,0 +1,46 @@
+#!/bin/sh
+
+PATH=/sbin:/bin:/usr/sbin:/usr/bin
+RDEV=""
+ROOT_DIR="/new_root"
+
+mkdir -p /proc
+mkdir -p /sys
+mkdir -p /run
+mkdir -p /tmp
+mount -t proc proc /proc
+mount -t sysfs sysfs /sys
+mount -t devtmpfs none /dev
+
+udevd --daemon
+udevadm trigger --type=subsystems --action=add
+udevadm trigger --type=devices --action=add
+udevadm settle --timeout=10
+
+for PARAM in $(cat /proc/cmdline); do
+    case $PARAM in
+        root=*)
+            RDEV=${PARAM#root=}
+            ;;
+    esac
+done
+
+if ! [ -b $RDEV ]; then
+    echo "Missing root command line argument!"
+    exit 1
+fi
+
+case $RDEV in
+    UUID=*)
+        RDEV=$(realpath /dev/disk/by-uuid/${RDEV#UUID=})
+        ;;
+esac
+
+. /usr/share/dm-verity.env
+
+echo "Mounting $RDEV over dm-verity as the root filesystem"
+
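+# Map the verity device: data and hash tree live on the same image, with the
+# hash tree starting at DATA_SIZE (see dm-verity-img.bbclass).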
+veritysetup --data-block-size=1024 --hash-offset=$DATA_SIZE create rootfs $RDEV $RDEV $ROOT_HASH
+mkdir -p $ROOT_DIR
+mount -o ro /dev/mapper/rootfs $ROOT_DIR
+exec switch_root $ROOT_DIR /sbin/init
diff --git a/wic/beaglebone-yocto-verity.wks.in b/wic/beaglebone-yocto-verity.wks.in
new file mode 100644
index 0000000..cd1702e
--- /dev/null
+++ b/wic/beaglebone-yocto-verity.wks.in
@@ -0,0 +1,15 @@
+# SPDX-License-Identifier: MIT
+#
+# Copyright (C) 2020 BayLibre SAS
+# Author: Bartosz Golaszewski <bgolaszewski@...>
+#
+# A dm-verity variant of the regular wks for beaglebone black. We need to fetch
+# the partition images from DEPLOY_DIR_IMAGE as the rootfs source plugin will
+# not recreate the exact block device corresponding to the hash tree. We must
+# not alter the label or any other setting on the image.
+#
+# This .wks only works with the dm-verity-img class.
+
+part /boot --source bootimg-partition --ondisk mmcblk0 --fstype=vfat --label boot --active --align 4 --size 16 --sourceparams="loader=u-boot" --use-uuid
+part / --source rawcopy --ondisk mmcblk0 --sourceparams="file=${DEPLOY_DIR_IMAGE}/${DM_VERITY_IMAGE}-${MACHINE}.${DM_VERITY_IMAGE_TYPE}.verity"
+bootloader --append="console=ttyS0,115200"
--
2.25.0


[OE-core][PATCH v2 1/2] classes: provide a class for generating dm-verity meta-data images

Bartosz Golaszewski
 

From: Bartosz Golaszewski <bgolaszewski@...>

This adds a class that generates conversions of ext[234] and btrfs
partition images with dm-verity hash data appended at the end, as well as a
corresponding .env file containing the root hash and data offset. That .env
file can be stored in a secure location (e.g. a signed fitImage) or signed
and verified at run-time on its own.

The class depends on two variables:
DM_VERITY_IMAGE:      the name of the main image (normally the one passed to
                      the bitbake command to build the main image)
DM_VERITY_IMAGE_TYPE: exactly one image type for which to generate
the protected image.
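
For reference, a sketch of checking the generated artifacts by hand
(assuming the 'verify' action of veritysetup from cryptsetup; the file names
are illustrative, and DATA_SIZE/ROOT_HASH come from the generated .env file):

. image.ext4.verity.env
veritysetup --data-block-size=1024 --hash-offset=$DATA_SIZE verify \
    image.ext4.verity image.ext4.verity $ROOT_HASH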

Signed-off-by: Bartosz Golaszewski <bgolaszewski@...>
---
classes/dm-verity-img.bbclass | 88 +++++++++++++++++++++++++++++++++++
1 file changed, 88 insertions(+)
create mode 100644 classes/dm-verity-img.bbclass

diff --git a/classes/dm-verity-img.bbclass b/classes/dm-verity-img.bbclass
new file mode 100644
index 0000000..1c0e29b
--- /dev/null
+++ b/classes/dm-verity-img.bbclass
@@ -0,0 +1,88 @@
+# SPDX-License-Identifier: MIT
+#
+# Copyright (C) 2020 BayLibre SAS
+# Author: Bartosz Golaszewski <bgolaszewski@...>
+#
+# This bbclass allows creating dm-verity protected partition images. It
+# generates a device image file with dm-verity hash data appended at the end
+# plus the corresponding .env file containing additional information needed
+# to mount the image, such as the root hash, in the form of shell variables.
+# To assure data integrity, the root hash must be stored in a trusted location
+# or cryptographically signed and verified.
+#
+# Usage:
+# DM_VERITY_IMAGE = "core-image-full-cmdline" # or other image
+# DM_VERITY_IMAGE_TYPE = "ext4" # or ext2, ext3 & btrfs
+# IMAGE_CLASSES += "dm-verity-img"
+#
+# The resulting image can then be used to implement the device mapper block
+# integrity checking on the target device.
+
+# Process the output from veritysetup and generate the corresponding .env
+# file. The output from veritysetup is not very machine-friendly, so we
+# convert it to a better format: the first line (which doesn't contain any
+# useful info) is dropped and the rest is fed to this function by verity_setup.
+process_verity() {
+    local ENV="$OUTPUT.env"
+
+    # Each line contains a key and a value string delimited by ':'. Read the
+    # two parts into separate variables and process them separately. For the
+    # key part: convert the names to upper case and replace spaces with
+    # underscores to create correct shell variable names. For the value part:
+    # just trim all white-space.
+    IFS=":"
+    while read KEY VAL; do
+        echo -ne "$KEY" | tr '[:lower:]' '[:upper:]' | sed 's/ /_/g' >> $ENV
+        echo -ne "=" >> $ENV
+        echo "$VAL" | tr -d " \t" >> $ENV
+    done
+
+    # Add partition size
+    echo "DATA_SIZE=$SIZE" >> $ENV
+
+    ln -sf $ENV ${IMAGE_BASENAME}-${MACHINE}.$TYPE.verity.env
+}
+
+verity_setup() {
+    local TYPE=$1
+    local INPUT=${IMAGE_NAME}${IMAGE_NAME_SUFFIX}.$TYPE
+    local SIZE=$(stat --printf="%s" $INPUT)
+    local OUTPUT=$INPUT.verity
+
+    cp -a $INPUT $OUTPUT
+
+    # Drop the first line of output (it doesn't contain any useful info)
+    # and feed the rest to process_verity.
+    veritysetup --data-block-size=1024 --hash-offset=$SIZE format $OUTPUT $OUTPUT | tail -n +2 | process_verity
+}
+
+VERITY_TYPES = "ext2.verity ext3.verity ext4.verity btrfs.verity"
+IMAGE_TYPES += "${VERITY_TYPES}"
+CONVERSIONTYPES += "verity"
+CONVERSION_CMD_verity = "verity_setup ${type}"
+CONVERSION_DEPENDS_verity = "cryptsetup-native"
+
+python __anonymous() {
+    verity_image = d.getVar('DM_VERITY_IMAGE')
+    verity_type = d.getVar('DM_VERITY_IMAGE_TYPE')
+    image_fstypes = d.getVar('IMAGE_FSTYPES')
+    pn = d.getVar('PN')
+
+    # Warn first if the class was inherited without being configured at all.
+    if not verity_image or not verity_type:
+        bb.warn('dm-verity-img class inherited but not used')
+        return
+
+    if verity_image != pn:
+        return # This doesn't concern this image
+
+    if len(verity_type.split()) != 1:
+        bb.fatal('DM_VERITY_IMAGE_TYPE must contain exactly one type')
+
+    d.appendVar('IMAGE_FSTYPES', ' %s.verity' % verity_type)
+
+    # If we're using wic we'll have to use partition images and not the rootfs
+    # source plugin, so add the appropriate dependency.
+    if 'wic' in image_fstypes:
+        dep = ' %s:do_image_%s' % (pn, verity_type)
+        d.appendVarFlag('do_image_wic', 'depends', dep)
+}
--
2.25.0


meta-intel: Override SERIAL_CONSOLES variable

Marek Belisko
 

Hi,

in meta-intel in machine configuration SERIAL_CONSOLES are defined as
: SERIAL_CONSOLES = "115200;ttyS0 115200;ttyS1 115200;ttyS2"

I would like to remove content of this variable (as on my target
system I get always this in journalctl):

Apr 06 11:12:54 intel-corei7-64 systemd[1]:
serial-getty@...: Succeeded. Apr 06 11:12:54 intel-corei7-64
systemd[1]: serial-getty@...: Succeeded. Apr 06 11:12:54
intel-corei7-64 systemd[1]: serial-getty@...: Service has no
hold-off time (RestartSec=0), scheduling restart. Apr 06 11:12:54
intel-corei7-64 systemd[1]: serial-getty@...: Scheduled
restart job, restart counter is at 62. Apr 06 11:12:54 intel-corei7-64
systemd[1]: serial-getty@...: Service has no hold-off time
(RestartSec=0), scheduling restart. Apr 06 11:12:54 intel-corei7-64
systemd[1]: serial-getty@...: Scheduled restart job, restart
counter is at 62. Apr 06 11:12:54 intel-corei7-64 systemd[1]: Stopped
Serial Getty on ttyS1. Apr 06 11:12:54 intel-corei7-64 systemd[1]:
Started Serial Getty on ttyS1. Apr 06 11:12:54 intel-corei7-64
systemd[1]: Stopped Serial Getty on ttyS2. Apr 06 11:12:54
intel-corei7-64 systemd[1]: Started Serial Getty on ttyS2.

so I do in local.conf:
SERIAL_CONSOLES_remove = "115200;ttyS0 115200;ttyS1 115200;ttyS2"

but then I get an issue when building systemd-serialgetty:
sed: -e expression #1, char 13: unterminated `s' command

because the variable looks like this when checked with bitbake -e:
SERIAL_CONSOLES=" "

Is there some other way to achieve this? Thanks.
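
Would blanking the variable with a higher-priority override work? An
untested sketch (forcevariable is the highest-priority entry in OVERRIDES,
so it should win over the machine configuration and leave no stray
whitespace behind):

SERIAL_CONSOLES_forcevariable = ""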

BR,

marek
--
as simple and primitive as possible
-------------------------------------------------
Marek Belisko - OPEN-NANDRA
Freelance Developer

Ruska Nova Ves 219 | Presov, 08005 Slovak Republic
Tel: +421 915 052 184
skype: marekwhite
twitter: #opennandra
web: http://open-nandra.com


Re: sstate causing stripped kernel vs symbols mismatch

Joshua Watt
 



On Thu, Apr 9, 2020, 12:52 PM Bruce Ashfield <bruce.ashfield@...> wrote:
On Thu, Apr 9, 2020 at 1:21 PM Sean McKay <sean.mckay@...> wrote:
>
> I don’t know offhand, but the kernel documentation seems relatively straightforward.
>
> I can start investigating in that direction and see how complex it looks like it’s going to be.
>

I can tweak linux-yocto in the direction of reproducibility without
much trouble (for the build part). But I'm a bit out of my normal flow
for testing that it really is reproducible. So if anyone can point me
at what they are running to currently test that .. I can do the build
part.

Reproducible builds are part of the standard OE QA tests. You can run them with:

 oe-selftest -r reproducible

It currently tests core-image-sato, which I thought would cover the kernel,
so I'm a little surprised it's not. Anyway, you can easily modify the
reproducible.py test file to build whatever you want, since doing the full
core-image-sato build can be pretty slow.




Re: sstate causing stripped kernel vs symbols mismatch

Sean McKay
 

The simplest thing I've found is checking/comparing the BuildID that GCC embeds in the ELF file after I force it to recompile, e.g.:
$ file tmp/work/qemux86_64-poky-linux/linux-yocto/5.2.28+gitAUTOINC+dd6019025c_992280855e-r0/linux-qemux86_64-standard-build/vmlinux | egrep -o "BuildID\[sha1\]=[0-9a-f]*"
BuildID[sha1]=9b1971fb286e78364246543583ed13600a7f8111

Is that what you were asking for?
Presumably we could also hash the actual vmlinux file for comparison at the do_compile stage, but I was originally comparing stripped vs unstripped, so I had to go by the BuildID.

Side note: from the kernel documentation, it looks like there are 4 main things that could affect reproducibility:
Timestamps, build directory, user account name, and hostname.
I assume they'd be easiest to tackle sequentially in that order.

This is the documentation I've been referencing:
https://www.kernel.org/doc/html/latest/kbuild/reproducible-builds.html
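
From that document, a rough sketch of the knobs for the first three (the
KBUILD_* variable names are from the doc; the values here are just
illustrative):

export KBUILD_BUILD_TIMESTAMP='Thu Apr 9 00:00:00 UTC 2020'
export KBUILD_BUILD_USER=oe-user
export KBUILD_BUILD_HOST=oe-host
# The build directory mostly leaks in via debug info; GCC's
# -fdebug-prefix-map can remap it.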

-Sean



Re: sstate causing stripped kernel vs symbols mismatch

Bruce Ashfield
 

On Thu, Apr 9, 2020 at 1:21 PM Sean McKay <sean.mckay@...> wrote:

> I don’t know offhand, but the kernel documentation seems relatively straightforward.
>
> I can start investigating in that direction and see how complex it looks like it’s going to be.

I can tweak linux-yocto in the direction of reproducibility without
much trouble (for the build part). But I'm a bit out of my normal flow
for testing that it really is reproducible. So if anyone can point me
at what they are running to currently test that .. I can do the build
part.

Bruce





Re: sstate causing stripped kernel vs symbols mismatch

Sean McKay
 

I don’t know offhand, but the kernel documentation seems relatively straightforward.

I can start investigating in that direction and see how complex it looks like it’s going to be.

 

When you say that reproducible builds are turned on by default, is there a flag somewhere that turns them off, which I'd need to gate these changes behind? Or can the changes be made global so that reproducibility can't (easily) be turned off?

 

 

Do we expect to generally be okay with letting this sort of race condition remain in sstate? I concede that it’s probably okay, since I think the kernel is the only thing with this kind of forking task tree behavior after do_compile, and if we get 100% reproducible builds working, it’s not overly relevant… but it seems like it probably deserves a warning somewhere in the documentation.

 

I can also bring this question to the next technical meeting (I know I just missed one) if it seems the sort of thing we need to get consensus on.

 

Cheers!

-Sean

 

 

 


Re: sstate causing stripped kernel vs symbols mismatch

Joshua Watt
 


On 4/9/20 11:42 AM, Sean McKay wrote:

> Anyone have any thoughts or guidance on this?
>
> It seems like a pretty major bug to me.
>
> We’re willing to put the work in to fix it, and if it’s not something the upstream community is interested in, I’ll just pick a solution for us and go with it.
>
> But if it’s something that we’d like me to upstream, I’d like some feedback on which path I should start walking down before I start taking things apart.


We have had a recent push for reproducible builds (and they are now enabled by default). Do you have any idea how much effort it would take to make the kernel build reproducibly? It's something we probably want anyway, and can add to the automated testing infrastructure to ensure it doesn't regress.




 



Re: sstate causing stripped kernel vs symbols mismatch

Sean McKay
 

Anyone have any thoughts or guidance on this?

It seems like a pretty major bug to me.

 

We’re willing to put the work in to fix it, and if it’s not something the upstream community is interested in, I’ll just pick a solution for us and go with it.

But if it’s something that we’d like me to upstream, I’d like some feedback on which path I should start walking down before I start taking things apart.

 

Cheers!

-Sean

 

From: yocto@... <yocto@...> On Behalf Of Sean McKay
Sent: Tuesday, April 7, 2020 12:03 PM
To: yocto@...
Subject: [yocto] sstate causing stripped kernel vs symbols mismatch

 

Hi all,

 

We’ve discovered that (quite frequently) the kernel that we deploy doesn’t match the unstripped one that we’re saving for debug symbols. I’ve traced the issue to a combination of an sstate miss for the kernel do_deploy step combined with an sstate hit for do_package_write_rpm. (Side note: we know we have issues with sstate reuse/stamps including things they shouldn’t, which is why we hit this so much. We’re working on that too.)

 

The result is that when our debug rootfs is created (where we added the kernel symbols), it’s got the version of the kernel from the sstate cached rpm files, but since do_deploy had an sstate miss, the entire kernel gets rebuilt to satisfy that dependency chain. Since the kernel doesn’t have reproducible builds working, the resulting pair of kernels don’t match each other for debug purposes.

 

So, I have two questions to start:

  1. What is the recommended way to be getting debug symbols for the kernel, since do_deploy doesn’t seem to have a debug counterpart (which is why we originally just set things up to add the rpm to the generated debug rootfs)
  2. Does this seem like a bug that should be fixed? If so, what would be the recommended solution (more thoughts below)?

 

Even if there’s a task somewhere that does what I’m looking for, this seems like a bit of a bug. I generally feel like we want to be able to trust sstate, so the fact that forking dependencies that each generate their own sstate objects can be out of sync is a bit scary.

I’ve thought of several ways around this, but I can’t say I like any of them.

  • (extremely gross hack) Create a new task to use instead of do_deploy that depends on do_package_write_rpm. Unpack the restored (or built) RPMs and use those blobs to deploy the kernel and symbols to the image directory.
  • (gross hack with painful effects on build time) Disable sstate for do_package_write_rpm and do_deploy. Possibly replace with sstate logic for the kernel’s do_install step (side question – why doesn’t do_install generate sstate? It seems like it should be able to, since the point is to drop everything into the image directory)
  • (possibly better, but sounds hard) Change the sstate logic so that if anything downstream of a do_compile task needs to be rerun, everything downstream of it needs to be rerun and sstate reuse for that recipe is not allowed (basically all or nothing sstate). Maybe with a flag that’s allowed in the bitbake file to indicate that a recipe does have reproducible builds and that different pieces are allowed to come from sstate in that case.
  • (fix the symptoms but not the problem) Figure out how to get linux-yocto building in a reproducible fashion and pretend the problem doesn’t exist.

 

 

If you’re interested, this is quite easy to reproduce – these are my repro steps

  • Check out a clean copy of zeus (22.0.2)
  • Add kernel-image to core-image-minimal in whatever fashion you choose (I just dumped it in the RDEPENDS for packagegroup-core-boot for testing)
  • bitbake core-image-minimal
  • bitbake -c clean core-image-minimal linux-yocto (or just wipe your whole build dir, since everything should come from sstate now)
  • Delete the sstate object(s) for linux-yocto’s deploy task.
  • bitbake core-image-minimal
  • Compare the BuildID hashes for the kernel in the two locations using file (you’ll need to use the kernel’s extract-vmlinux script to get it out of the bzImage)
    • file tmp/work/qemux86_64-poky-linux/core-image-minimal/1.0-r0/rootfs/boot/vmlinux-5.2.28-yocto-standard
    • ./tmp/work-shared/qemux86-64/kernel-source/scripts/extract-vmlinux tmp/deploy/images/qemux86-64/bzImage > vmlinux-deploy && file vmlinux-deploy

 

Anyone have thoughts or suggestions?

 

Cheers!

-Sean McKay


Re: Shorten booting time

Michael Nazzareno Trimarchi
 

Hi

Please give some information about your platform:
Kernel release
u-boot release
architecture
ubifs or not

Michael

On Thu, Apr 9, 2020 at 4:25 PM Anders Montonen <Anders.Montonen@...> wrote:

On 8 Apr 2020, at 14:59, JH <jupiter.hce@...> wrote:

I am running Yocto-built Linux on an ARM device; it takes more than a
minute to boot from NAND and start the kernel. Any good strategy to reduce
the ARM device booting time? I can think of cutting down unnecessary
configuration in the kernel.

If you’re using systemd, you can use systemd-analyze to get some boot performance statistics. They can help identify slow-starting services, dependency chains, and other bottlenecks.

-a
--
| Michael Nazzareno Trimarchi Amarula Solutions BV |
| COO - Founder Cruquiuskade 47 |
| +31(0)851119172 Amsterdam 1018 AM NL |
| [`as] http://www.amarulasolutions.com |


Re: QA notification for completed autobuilder build (yocto-3.1.rc2)

Sangeeta Jain
 

 

 

From: yocto@... <yocto@...> On Behalf Of akuster
Sent: Thursday, 9 April, 2020 9:58 AM
To: Jain, Sangeeta <sangeeta.jain@...>; Richard Purdie <richard.purdie@...>; yocto@...
Cc: otavio@...; yi.zhao@...; Sangal, Apoorv <apoorv.sangal@...>; Yeoh, Ee Peng <ee.peng.yeoh@...>; Chan, Aaron Chun Yew <aaron.chun.yew.chan@...>; akuster808@...; sjolley.yp.pm@...
Subject: Re: [yocto] QA notification for completed autobuilder build (yocto-3.1.rc2)

 

 

On 4/8/20 6:18 PM, Sangeeta Jain wrote:

Hi all,
 
Intel and WR YP QA is now running QA execution for YP build yocto-3.1.rc2.
We are planning to execute the following tests for this cycle:

OEQA manual tests for the following modules:
1. OE-Core
2. BSP-hw

Has the manual testing situation changed from the last QA run?

Yes, we managed to run some more manual test cases, though we still cannot access all the hardware.

Runtime auto tests for the following platforms:
1. MinnowTurbot 32-bit
2. Coffee Lake
3. NUC 7
4. NUC 6
5. Edgerouter
6. Beaglebone
 
ETA for completion is next Monday, April 13.

Sounds great.

- armin
Thanks,
Sangeeta
 
-----Original Message-----
From: Richard Purdie <richard.purdie@...>
Sent: Wednesday, 8 April, 2020 9:21 PM
To: pokybuild@...; yocto@...
Cc: otavio@...; yi.zhao@...; Sangal, Apoorv
<apoorv.sangal@...>; Yeoh, Ee Peng <ee.peng.yeoh@...>; Chan,
Aaron Chun Yew <aaron.chun.yew.chan@...>; akuster808@...;
sjolley.yp.pm@...; Jain, Sangeeta <sangeeta.jain@...>
Subject: Re: QA notification for completed autobuilder build (yocto-3.1.rc2)
 
On Wed, 2020-04-08 at 04:01 +0000, pokybuild@...
wrote:
A build flagged for QA (yocto-3.1.rc2) was completed on the
autobuilder and is available at:
 
 
    https://autobuilder.yocto.io/pub/releases/yocto-3.1.rc2
 
 
Build hash information:
 
bitbake: 4618da2094189e4d814b7d65672cb65c86c0626a
meta-gplv2: 60b251c25ba87e946a0ca4cdc8d17b1cb09292ac
meta-intel: bd539ea962ee285eb71053583e3c17fa166fc610
meta-mingw: 524de686205b5d6736661d4532f5f98fee8589b7
oecore: 1795f30d8ab73d35710ca99064c51190dc84853e
poky: 5d47cdf448b6cff5bb7cc5b0ba0426b8235ec478
 
 
 
There were two failures in this build due to collect-results failing. I fixed the
missing dependency on that autobuilder worker (there was already an open bug
but it wasn't fixed yet) and reran the collection scripts so the results were
added and handled.
 
Cheers,
 
Richard

Re: [yocto-autobuilder2][PATCH] config: Fix giturl for meta-virtualization Layer

Armin Kuster
 

On 4/1/20 7:27 PM, Aaron Chan wrote:
Could someone please fix this simple piece?


Re: Shorten booting time

Anders Montonen
 

On 8 Apr 2020, at 14:59, JH <jupiter.hce@...> wrote:

I am running Yocto-built Linux on an ARM device, and it takes more
than 1 minute to boot from NAND. Any good strategy to reduce the
device's booting time? I can think of cutting down unnecessary
config options in the kernel.
If you’re using systemd, you can use systemd-analyze to get some boot performance statistics. They can help identify slow-starting services, dependency chains, and other bottlenecks.
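For instance (standard systemd-analyze invocations; the exact numbers will of course vary per image):

# Time spent in firmware, loader, kernel and userspace:
systemd-analyze time
# Started units sorted by initialisation time:
systemd-analyze blame
# The chain of units gating the default target:
systemd-analyze critical-chain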

-a


Re: imx-boot do_compile failing with custom distro #yocto

stefan.wenninger@...
 

Hi Rudolf,
I did some digging in imx-atf_1.0.bb and saw that it copies bl31-imx8mq.bin to the imx-boot-tools folder in do_deploy().
In imx-boot_0.2.bb I can see that its do_compile is setup to run after imx-atf's do_deploy:
do_compile[depends] += " \
    virtual/bootloader:do_deploy \
    ${@' '.join('%s:do_deploy' % r for r in '${IMX_FIRMWARE}'.split() )} \
    imx-atf:do_deploy \
    ${IMX_M4_DEMOS} \
    ${@bb.utils.contains('COMBINED_FEATURES', 'optee', 'optee-os-imx:do_deploy', '', d)} \
"
As I have learned, that is exactly the mechanism used to make one recipe's task run only after a specific task in another recipe.
I think that if that piece of code was not there, my problem would be exactly what you described: imx-boot:do_compile executing after imx-atf:do_install but before imx-atf:do_deploy.

I followed your advice to build imx-atf on its own (which worked) and I think I know why my build failed:
I was not aware that imx-boot depends on imx-atf. While I was trying to get my new distro to work, I at one point deleted the imx-boot-tools folder manually, which I now realise was not a smart move.
Additionally, I ran bitbake -c clean imx-boot, but not bitbake -c clean imx-atf, because I was not aware that it existed.
That likely led to imx-boot being rebuilt on the next build while imx-atf was considered up-to-date by bitbake, so it didn't bother building it again.
But because I had manually removed some of the files that imx-atf produces, and upon which imx-boot depends, the build failed.
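(For anyone else who ends up in this state: something along these lines should force both recipes to be rebuilt from scratch, sstate included. A sketch, using the recipe names from above:)

bitbake -c cleansstate imx-atf imx-boot
bitbake imx-boot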

So in the end your explanations, hints and advice helped me understand more about how bitbake recipes interact with each other.
It would probably have been very difficult for you (or anyone) to spot my real problem, since you could never have anticipated me just taking a shotgun to the files in my deploy directory :D
Nevertheless I want to thank you for your help.

I consider this issue solved.

Stefan


Re: [yocto-autobuilder-helper][zeus][PATCH] config.json: Override BBTARGETS for meta-intel

Richard Purdie
 

On Thu, 2020-04-09 at 09:54 +0800, mohamad.noor.alim.hussin@... wrote:
From: Mohamad Noor Alim Hussin <mohamad.noor.alim.hussin@...>

Using meta-intel with core-image-sato-(sdk)-ptest
results in a hddimg size of more than 4 GB. Remove
that image type from testing.

hddimg is not built by default in dunfell so this
is applicable only to zeus.

Signed-off-by: Mohamad Noor Alim Hussin <mohamad.noor.alim.hussin@...>
---
config.json | 5 ++++-
1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/config.json b/config.json
index 7a88e6e..2989b8a 100644
--- a/config.json
+++ b/config.json
@@ -384,7 +384,10 @@
"NEEDREPOS" : ["poky", "meta-intel"],
"ADDLAYER" : ["${BUILDDIR}/../meta-intel"],
"MACHINE" : "intel-corei7-64",
- "TEMPLATE" : "arch-hw"
+ "TEMPLATE" : "arch-hw",
+ "step1": {
+ "BBTARGETS": "core-image-sato core-image-sato-dev core-image-sato-sdk core-image-minimal core-image-minimal-dev core-image-sato:do_populate_sdk"
+ }
},
"genericx86-64-alt" : {
"MACHINE" : "genericx86-64",
Whilst this will "fix" the problem, it does so only by hiding it. Any user
of meta-intel will run into a failure if they try to build this image, so
it may as well be disabled outright, since it won't work for anyone.

A better fix would be to disable hddimg for this image, then a user can
actually build it without failures.
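Something along these lines is what I'd expect that to look like, e.g. in the affected image recipes or a bbappend (a sketch; where exactly hddimg gets added is an assumption, and zeus still uses the old override syntax):

# Drop the >4GB hddimg from this image
IMAGE_FSTYPES_remove = "hddimg"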

It's Intel's layer so it's your call, but I'd want to fix this properly.

Cheers,

Richard


Re: QA notification for completed autobuilder build (yocto-3.1.rc2)

Armin Kuster
 



On 4/8/20 6:18 PM, Sangeeta Jain wrote:
Hi all,

Intel and WR YP QA is now running QA execution for YP build yocto-3.1.rc2.
We are planning to execute the following tests for this cycle:

OEQA manual tests for the following modules:
1. OE-Core
2. BSP-hw
Has the manual testing situation changed from the last QA run?

Runtime auto tests for the following platforms:
1. MinnowTurbot 32-bit
2. Coffee Lake
3. NUC 7
4. NUC 6
5. Edgerouter
6. Beaglebone

ETA for completion is next Monday, April 13.
Sounds great.

- armin

Thanks,
Sangeeta

-----Original Message-----
From: Richard Purdie <richard.purdie@...>
Sent: Wednesday, 8 April, 2020 9:21 PM
To: pokybuild@...; yocto@...
Cc: otavio@...; yi.zhao@...; Sangal, Apoorv
<apoorv.sangal@...>; Yeoh, Ee Peng <ee.peng.yeoh@...>; Chan,
Aaron Chun Yew <aaron.chun.yew.chan@...>; akuster808@...;
sjolley.yp.pm@...; Jain, Sangeeta <sangeeta.jain@...>
Subject: Re: QA notification for completed autobuilder build (yocto-3.1.rc2)

On Wed, 2020-04-08 at 04:01 +0000, pokybuild@...
wrote:
A build flagged for QA (yocto-3.1.rc2) was completed on the
autobuilder and is available at:


    https://autobuilder.yocto.io/pub/releases/yocto-3.1.rc2


Build hash information:

bitbake: 4618da2094189e4d814b7d65672cb65c86c0626a
meta-gplv2: 60b251c25ba87e946a0ca4cdc8d17b1cb09292ac
meta-intel: bd539ea962ee285eb71053583e3c17fa166fc610
meta-mingw: 524de686205b5d6736661d4532f5f98fee8589b7
oecore: 1795f30d8ab73d35710ca99064c51190dc84853e
poky: 5d47cdf448b6cff5bb7cc5b0ba0426b8235ec478


There were two failures in this build due to collect-results failing. I fixed the
missing dependency on that autobuilder worker (there was already an open bug
but it wasn't fixed yet) and reran the collection scripts so the results were
added and handled.

Cheers,

Richard


[yocto-autobuilder-helper][zeus][PATCH] config.json: Override BBTARGETS for meta-intel

Hussin, Mohamad Noor Alim
 

From: Mohamad Noor Alim Hussin <mohamad.noor.alim.hussin@...>

Using meta-intel with core-image-sato-(sdk)-ptest
results in a hddimg size of more than 4 GB. Remove
that image type from testing.

hddimg is not built by default in dunfell so this
is applicable only to zeus.

Signed-off-by: Mohamad Noor Alim Hussin <mohamad.noor.alim.hussin@...>
---
config.json | 5 ++++-
1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/config.json b/config.json
index 7a88e6e..2989b8a 100644
--- a/config.json
+++ b/config.json
@@ -384,7 +384,10 @@
"NEEDREPOS" : ["poky", "meta-intel"],
"ADDLAYER" : ["${BUILDDIR}/../meta-intel"],
"MACHINE" : "intel-corei7-64",
- "TEMPLATE" : "arch-hw"
+ "TEMPLATE" : "arch-hw",
+ "step1": {
+ "BBTARGETS": "core-image-sato core-image-sato-dev core-image-sato-sdk core-image-minimal core-image-minimal-dev core-image-sato:do_populate_sdk"
+ }
},
"genericx86-64-alt" : {
"MACHINE" : "genericx86-64",
--
2.20.1
