
Public key for U-Boot verified boot is not inserted in DTB when rebuilding from sstate

Christian Andersen
 

Hello,

I have a problem with U-Boot verified boot and the sstate caching of build artifacts.

On a clean rebuild (sstate and tmp directories deleted), the signed FIT image and U-Boot, including the public key, are created correctly.
But when I delete only the tmp directory and let bitbake recreate everything from sstate, the public key is missing from U-Boot.

The task sequence according to uboot-sign.bbclass is:

1. u-boot:do_deploy_dtb
2. u-boot:do_deploy
3. virtual/kernel:do_assemble_fitimage
4. u-boot:do_concat_dtb
5. u-boot:do_install

The problem seems to be that while the FIT image is assembled (from the kernel recipe), the U-Boot DTB in DEPLOY_DIR_IMAGE is modified and the public key is inserted. After that, U-Boot and the new DTB are concatenated. This happens for the U-Boot image in DEPLOYDIR as well as in DEPLOY_DIR_IMAGE.

The catch is that sstate caches the versions of U-Boot and the DTB at deploy time. Since deployment happens before the FIT image is assembled, the sstate cache ends up containing U-Boot and the DTB without the public key.

U-Boot unfortunately (silently!) disables verified boot when the public key is not available in the DTB.

I have already filed a bug (#12112) for this, but does anybody have an idea how to fix it easily (other than cleaning the sstate of U-Boot and the kernel after deleting the tmp directory)?

A possible solution would be to remove the dependency between the kernel and U-Boot. But in that case it would be necessary to insert the public key into the DTB while building U-Boot, without using the FIT image from the kernel build. Unfortunately, uboot-mkimage does not support this at the moment.
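
For reference, the cleanup workaround mentioned above amounts to something like this after wiping tmp (a sketch; your BSP may use a different U-Boot recipe name):

# drop the stale U-Boot/kernel sstate objects so both are rebuilt and re-signed
bitbake -c cleansstate u-boot virtual/kernel
bitbake u-boot virtual/kernel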


Regards
Christian

--
KOSTAL Industrie Elektrik GmbH
www.kostal-industrie-elektrik.com


KOSTAL Industrie Elektrik GmbH - Registered office: Lüdenscheid, Register Court: Iserlohn HRB 3924 - VAT No.: DE 813742170
Postal address: An der Bellmerei 10, D-58513 Lüdenscheid * Phone: +49 2351 16-0 * Fax: +49 2351 16-2400
Factory address: Lange Eck 11, D-58099 Hagen * Phone: +49 2331 8040-601 * Fax: +49 2331 8040-602
Management: Dr.-Ing. Dipl.-Wirt.Ing. Manfred Gerhard, Dipl.-Ing. Marwin Kinzl, Dipl.-Oec. Andreas Kostal


Re: which is the official(?) OE/YP openbmc layer?

Robert P. J. Day
 

On Thu, 21 Sep 2017, Burton, Ross wrote:

> On 21 September 2017 at 11:01, Robert P. J. Day <rpjday@crashcourse.ca> wrote:
>> colleague just yesterday asked me a couple questions about openbmc,
>> so i investigated the OE/YP layer, and i'm a bit confused ... the
>> official OE layers page here:
>>
>> https://layers.openembedded.org/layerindex/branch/master/layers/
>>
>> refers to a meta-openbmc layer at https://github.com/facebook/openbmc,
>> implying it's a facebook project, but github also hosts:
>>
>> https://github.com/openbmc
>>
>> can anyone clarify the relationship between those two? if there is
>> any?
>
> Oh I really hope that isn't a Google/IBM vs Facebook fork war.
>
> I think the best way to get an answer is to email both
> maintainers...

after just cursory examination of both repos, i have to say, neither
of them looks particularly well-organized. or am i just being overly
critical?

rday

--

========================================================================
Robert P. J. Day Ottawa, Ontario, CANADA
http://crashcourse.ca

Twitter: http://twitter.com/rpjday
LinkedIn: http://ca.linkedin.com/in/rpjday
========================================================================


Re: which is the official(?) OE/YP openbmc layer?

Ross Burton
 

On 21 September 2017 at 11:01, Robert P. J. Day <rpjday@...> wrote:
> colleague just yesterday asked me a couple questions about openbmc,
> so i investigated the OE/YP layer, and i'm a bit confused ... the
> official OE layers page here:
>
> https://layers.openembedded.org/layerindex/branch/master/layers/
>
> refers to a meta-openbmc layer at https://github.com/facebook/openbmc,
> implying it's a facebook project, but github also hosts:
>
> https://github.com/openbmc
>
> can anyone clarify the relationship between those two? if there is
> any?

Oh I really hope that isn't a Google/IBM vs Facebook fork war.

I think the best way to get an answer is to email both maintainers...

Ross


Re: [yocto-autobuilder][PATCH] CheckYoctoCompat.py: rename yocto-compat-layer to yocto-check-layer

Joshua Lock <joshua.g.lock@...>
 

On 21/09/17 09:47, Joshua Lock wrote:
> On 21/09/17 02:36, Stephano Cetola wrote:
>> This script name was changed in the following commit:
>>
>> b46e05677b342df44829ffe8bcfbfc954e906030
>>
>> This patch updates the script name to match.
>>
>> [YOCTO #12110]
>>
>> Signed-off-by: Stephano Cetola <stephano.cetola@linux.intel.com>
>> ---
>>  lib/python2.7/site-packages/autobuilder/buildsteps/CheckYoctoCompat.py | 3 ++-
>>  1 file changed, 2 insertions(+), 1 deletion(-)
>>
>> diff --git a/lib/python2.7/site-packages/autobuilder/buildsteps/CheckYoctoCompat.py b/lib/python2.7/site-packages/autobuilder/buildsteps/CheckYoctoCompat.py
>> index 134adaa51..62eddae50 100644
>> --- a/lib/python2.7/site-packages/autobuilder/buildsteps/CheckYoctoCompat.py
>> +++ b/lib/python2.7/site-packages/autobuilder/buildsteps/CheckYoctoCompat.py
>> @@ -41,11 +41,12 @@ class CheckYoctoCompat(BitbakeShellCommand):
>>          layerversioncore = int(self.getProperty("layerversion_core", "0"))
>>          # yocto-compat-layer-wrapper was introduced in Pyro
>> +        # it was renamed to yocto-check-layer-wrapper in Rocko
>>          if layerversioncore >= 10:
>>              command = ". ./oe-init-build-env;"
>>              for layer in self.layers:
>>                  layerpath = os.path.join(builddir, layer)
>> -                cmd = "yocto-compat-layer-wrapper {}".format(layerpath)
>> +                cmd = "yocto-check-layer-wrapper {}".format(layerpath)
>>                  cmd = cmd + " || export CL_FAIL=1;"
>>                  command = command + cmd
>>              command = command + 'if [ "$CL_FAIL" = "1" ]; then exit 1; fi;'
>
> This will result in failures on Pyro (layer version 10). We should either:
>
> a) bump the layer version check to only run this for Rocko (layer version 11)
> b) use different program names for layer version 10 vs. layer version 11.
>
> I'm inclined to suggest (a); the yocto-compat-layer scripts only really became useful in the Rocko cycle.

I went ahead and merged it with this change.

Joshua

which is the official(?) OE/YP openbmc layer?

Robert P. J. Day
 

colleague just yesterday asked me a couple questions about openbmc,
so i investigated the OE/YP layer, and i'm a bit confused ... the
official OE layers page here:

https://layers.openembedded.org/layerindex/branch/master/layers/

refers to a meta-openbmc layer at https://github.com/facebook/openbmc,
implying it's a facebook project, but github also hosts:

https://github.com/openbmc

can anyone clarify the relationship between those two? if there is
any?

rday

--

========================================================================
Robert P. J. Day Ottawa, Ontario, CANADA
http://crashcourse.ca

Twitter: http://twitter.com/rpjday
LinkedIn: http://ca.linkedin.com/in/rpjday
========================================================================


Re: [yocto-autobuilder][PATCH] CheckYoctoCompat.py: rename yocto-compat-layer to yocto-check-layer

Joshua Lock <joshua.g.lock@...>
 

On 21/09/17 02:36, Stephano Cetola wrote:
> This script name was changed in the following commit:
>
> b46e05677b342df44829ffe8bcfbfc954e906030
>
> This patch updates the script name to match.
>
> [YOCTO #12110]
>
> Signed-off-by: Stephano Cetola <stephano.cetola@linux.intel.com>
> ---
>  lib/python2.7/site-packages/autobuilder/buildsteps/CheckYoctoCompat.py | 3 ++-
>  1 file changed, 2 insertions(+), 1 deletion(-)
>
> diff --git a/lib/python2.7/site-packages/autobuilder/buildsteps/CheckYoctoCompat.py b/lib/python2.7/site-packages/autobuilder/buildsteps/CheckYoctoCompat.py
> index 134adaa51..62eddae50 100644
> --- a/lib/python2.7/site-packages/autobuilder/buildsteps/CheckYoctoCompat.py
> +++ b/lib/python2.7/site-packages/autobuilder/buildsteps/CheckYoctoCompat.py
> @@ -41,11 +41,12 @@ class CheckYoctoCompat(BitbakeShellCommand):
>          layerversioncore = int(self.getProperty("layerversion_core", "0"))
>          # yocto-compat-layer-wrapper was introduced in Pyro
> +        # it was renamed to yocto-check-layer-wrapper in Rocko
>          if layerversioncore >= 10:
>              command = ". ./oe-init-build-env;"
>              for layer in self.layers:
>                  layerpath = os.path.join(builddir, layer)
> -                cmd = "yocto-compat-layer-wrapper {}".format(layerpath)
> +                cmd = "yocto-check-layer-wrapper {}".format(layerpath)
>                  cmd = cmd + " || export CL_FAIL=1;"
>                  command = command + cmd
>              command = command + 'if [ "$CL_FAIL" = "1" ]; then exit 1; fi;'

This will result in failures on Pyro (layer version 10). We should either:

a) bump the layer version check to only run this for Rocko (layer version 11)
b) use different program names for layer version 10 vs. layer version 11.

I'm inclined to suggest (a); the yocto-compat-layer scripts only really became useful in the Rocko cycle.


Re: "(-)"??

Peter Kjellerstedt
 

> -----Original Message-----
> From: yocto-bounces@yoctoproject.org [mailto:yocto-bounces@yoctoproject.org]
> On Behalf Of Khem Raj
> Sent: 21 September 2017 07:15
> To: Takashi Matsuzawa <tmatsuzawa@xevo.com>; yocto@yoctoproject.org
> Subject: Re: [yocto] "(-)"??
>
> On 9/20/17 8:18 PM, Takashi Matsuzawa wrote:
>> Hello.
>> I am seeing that some of the recipes contain lines like below.
>>
>> COMPATIBLE_MACHINE = "(-)"
>>
>> Sorry for being a novice, but what is the intended effect of this line?
>> I can see from commit comments that this is for blacklisting, but I am
>> not sure how it works. Does it simply mean a '-' character?
>
> COMPATIBLE_MACHINE uses regexp syntax

Which actually makes that a pretty weird COMPATIBLE_MACHINE,
especially if it is intended for blacklisting. Since it matches any
machine with a dash in its name, it would match, e.g., qemux86-64 but
not qemux86. It would also happen to match about half of our machines,
which have dashes in their names.

A more appropriate way to blacklist machines using COMPATIBLE_MACHINE
would be something like:

COMPATIBLE_MACHINE = "null"

or:

COMPATIBLE_MACHINE = "nothing"

I found two occurrences of "(-)" being used as COMPATIBLE_MACHINE in
meta-openembedded for Morty and Pyro, but they have been removed for
Rocko. If you see them anywhere else, consider changing them.
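
As a quick illustration of why "(-)" behaves that way (a simplification;
bitbake applies COMPATIBLE_MACHINE as a regexp against the machine
overrides, and the exact anchoring it uses may differ):

$ python3 -c 'import re; print(bool(re.search("(-)", "qemux86")))'
False
$ python3 -c 'import re; print(bool(re.search("(-)", "qemux86-64")))'
True
$ python3 -c 'import re; print(bool(re.search("nothing", "qemux86-64")))'
False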

//Peter


Re: "(-)"??

Khem Raj
 

On 9/20/17 8:18 PM, Takashi Matsuzawa wrote:
> Hello.
> I am seeing that some of the recipes contain lines like below.
>
> COMPATIBLE_MACHINE = "(-)"
>
> Sorry for being a novice, but what is the intended effect of this line?
> I can see from commit comments that this is for blacklisting, but I am
> not sure how it works. Does it simply mean a '-' character?

COMPATIBLE_MACHINE uses regexp syntax.


"(-)"??

Takashi Matsuzawa <tmatsuzawa@...>
 

Hello.
I am seeing that some of the recipes contain lines like below.

> COMPATIBLE_MACHINE = "(-)"

Sorry for being a novice, but what is the intended effect of this line?
I can see from commit comments that this is for blacklisting, but I am not sure how it works. Does it simply mean a '-' character?



Re: Sysroot bug in bitbake or wrong configuration?

Andre McCurdy <armccurdy@...>
 

On Tue, Sep 19, 2017 at 11:43 PM, Svein Seldal <sveinse@seldal.com> wrote:

> I have the spu-image.bb recipe below, and running on Pyro, the recipe
> behaves differently if the recipe is run on a fresh system with no sstate
> elements, compared to a system that has an sstate cache present.
>
> The failure is that spu-image requires the host tool "uuidgen", and thus
> has DEPENDS on "util-linux-native".
DEPENDS is basically a shorthand for saying that the
do_populate_sysroot task for the recipe(s) listed in DEPENDS should be
run before the do_configure task of the current recipe.

Since image recipes don't have a do_configure task (or at least, they
do their work in tasks such as do_rootfs which don't depend on
do_configure), using the DEPENDS shorthand for setting dependencies
for the do_configure task doesn't work.

If an image recipe's do_rootfs or do_image tasks have dependencies
then they need to be expressed using the "longhand" format, for
example:

do_rootfs[depends] += "util-linux-native:do_populate_sysroot"

Unfortunately trying to use DEPENDS in an image recipe seems to be
quite a common mistake. Maybe we should try to make things a little
more user friendly by adding a sanity test to catch the problem? Or
perhaps do_rootfs should depend on a dummy do_configure task (and so
ensure that do_rootfs effectively sees dependencies expressed via
DEPENDS)?
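
Applied to the spu-image.bb recipe quoted below, that would be (a sketch,
matching the task names in the recipe):

do_spu_rootfs[depends] += "util-linux-native:do_populate_sysroot"
do_spu_image[depends] += "util-linux-native:do_populate_sysroot"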

> When "-c cleanall spu-image" is run prior to building spu-image, the
> recipe sysroot is properly initialized with util-linux-native and
> uuidgen is available in the task functions.
>
> If "-c clean" is run prior to the build, or tmp is simply deleted, the
> sysroot will not be properly initialized, uuidgen is not available, and
> the recipe fails.
>
> Is this a bug in bitbake or am I missing something in my recipe?
>
> Best regards,
> Svein Seldal


[yocto-autobuilder][PATCH] CheckYoctoCompat.py: rename yocto-compat-layer to yocto-check-layer

Stephano Cetola <stephano.cetola@...>
 

This script name was changed in the following commit:

b46e05677b342df44829ffe8bcfbfc954e906030

This patch updates the script name to match.

[YOCTO #12110]

Signed-off-by: Stephano Cetola <stephano.cetola@linux.intel.com>
---
lib/python2.7/site-packages/autobuilder/buildsteps/CheckYoctoCompat.py | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/lib/python2.7/site-packages/autobuilder/buildsteps/CheckYoctoCompat.py b/lib/python2.7/site-packages/autobuilder/buildsteps/CheckYoctoCompat.py
index 134adaa51..62eddae50 100644
--- a/lib/python2.7/site-packages/autobuilder/buildsteps/CheckYoctoCompat.py
+++ b/lib/python2.7/site-packages/autobuilder/buildsteps/CheckYoctoCompat.py
@@ -41,11 +41,12 @@ class CheckYoctoCompat(BitbakeShellCommand):
 
         layerversioncore = int(self.getProperty("layerversion_core", "0"))
         # yocto-compat-layer-wrapper was introduced in Pyro
+        # it was renamed to yocto-check-layer-wrapper in Rocko
         if layerversioncore >= 10:
             command = ". ./oe-init-build-env;"
             for layer in self.layers:
                 layerpath = os.path.join(builddir, layer)
-                cmd = "yocto-compat-layer-wrapper {}".format(layerpath)
+                cmd = "yocto-check-layer-wrapper {}".format(layerpath)
                 cmd = cmd + " || export CL_FAIL=1;"
                 command = command + cmd
             command = command + 'if [ "$CL_FAIL" = "1" ]; then exit 1; fi;'
--
2.14.1


Re: eSDK install script failure

Andrea Galbusera
 

Hi Paul,
thanks for explaining and helping to sort this out.

On Wed, Sep 20, 2017 at 11:54 AM, Paul Eggleton <paul.eggleton@...> wrote:
> Hi Andrea,
>
> On Wednesday, 20 September 2017 8:44:22 PM NZST Andrea Galbusera wrote:
>> Seeing the errors below while installing an eSDK. This is a routinely
>> generated VM that installs the eSDK from the installation script. The
>> errors appeared with the latest iteration of the eSDK script, which is
>> generated with almost up-to-date revisions from master. Of course I have
>> extra layers in the mix, but none of them apparently had relevant changes
>> since the last (working) iteration: mostly syncing to master branches
>> happened. Can anyone suggest how to investigate this further? What do
>> those unexpected tasks mean? I'm blocked on releasing this SDK to
>> developers and clues from experts would be very appreciated...
>>
>> ==> default: Checking sstate mirror object availability...
>> ==> default: done.
>> ==> default: ERROR: Task python-native.do_fetch attempted to execute
>> unexpectedly
>> ==> default: ERROR: Task python-native.do_prepare_recipe_sysroot attempted
>> to execute unexpectedly
>> ==> default: ERROR: Task python-native.do_unpack attempted to execute
>> unexpectedly
>> ==> default: ERROR: Task python-native.do_patch attempted to execute
>> unexpectedly
>> ==> default: ERROR: Task python-native.do_populate_lic attempted to execute
>> unexpectedly and should have been setscened
>> ==> default: ERROR: Task python-native.do_configure attempted to execute
>> unexpectedly
>> ==> default: ERROR: Task python-native.do_compile attempted to execute
>> unexpectedly
>> ==> default: ERROR: Task python-native.do_install attempted to execute
>> unexpectedly
>> ==> default: ERROR: Task python-native.do_populate_sysroot attempted to
>> execute unexpectedly and should have been setscened
>> ==> default: ERROR: SDK preparation failed: error log written to
>> /home/vagrant/poky_sdk/preparing_build_system.log
>
> Basically this means that these tasks tried to execute when really the
> results should have been restored from sstate.
>
> The cause of this type of error is one of three things:
>
> 1) The sstate archive corresponding to a task wasn't able to be fetched from
> the server (for a minimal eSDK) or wasn't present in the installer (for a
> full eSDK - less likely, as we basically do a trial run as part of building
> the eSDK in the first place)
>
> 2) The signature was somehow different to what it should have been. (Locked
> signatures are supposed to guard against this.)
>
> 3) A task that wasn't expected to execute did execute, and thus the sstate
> wasn't available.
>
> Given that this was python-native, which I would expect to be a normal part
> of the SDK, I would suspect #1. Is this a minimal or full eSDK (i.e. what is
> SDK_EXT_TYPE set to)?

That was a "full" eSDK. I noticed that the "same" eSDK installer from another
build host was not affected, and I'm rebuilding on the original host with even
more recent revisions to see whether it still happens. The failure with the
first installer was repeatable, hence I suspect an issue at SDK population
stage, not during installation.


Re: pyro openembedded gpsd update-rc.d problems with read-only-rootfs

Dan Walkes
 

On Wed, Sep 13, 2017 at 10:56 AM, Dan Walkes
<danwalkes@trellis-logic.com> wrote:
> On Mon, Sep 11, 2017 at 5:01 AM, Burton, Ross <ross.burton@intel.com> wrote:
>> On 10 September 2017 at 21:35, Dan Walkes <danwalkes@trellis-logic.com>
>> wrote:
>>> It looks like because the update-rc.d step fails, this setup gets moved
>>> into a gpsd post-install script, which won’t work because I’m
>>> configured to use a read-only root filesystem. So I need to find a
>>> way to keep the update-rc.d step from failing.
>>
>> The recipe shouldn't invoke update-alternatives directly, but use the
>> update-alternatives class instead.
>
> Thanks for the suggestion Ross.
>
> I didn't mention it before, but I had already attempted to make this
> change after I initially noticed the problem. See this commit:
> https://github.com/Trellis-Logic/meta-openembedded/commit/ddf008dbdae602dbe722f1fcb231f5549e75a586
>
> I didn't see any difference when I updated to use update-alternatives
> instead of invoking it directly.
>
> Since the error message was related to update-rc.d, I've also attempted
> to use the multi-update form of update-rc.d in the above commit. I
> thought that might be required when multiple packages were built from
> the same .bb file. However, I don't see a difference in the result with
> these changes either.
The fix was to specify INITSCRIPT_PACKAGES = "gpsd-conf", since the
gpsd-conf package is where the /etc/init.d/gpsd file is installed, per
inspection of the rpm files. See the patch at
https://github.com/Trellis-Logic/meta-openembedded/commit/d91bab137dfc4f3ce6526bd8a6e95e5de7658fd5
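
In bbappend terms, the fix amounts to something like the following (a sketch
based on the commit above; the INITSCRIPT_NAME value reflects the
/etc/init.d/gpsd script mentioned earlier and may differ in the actual patch):

INITSCRIPT_PACKAGES = "gpsd-conf"
INITSCRIPT_NAME_gpsd-conf = "gpsd"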

I will submit this patch to the Openembedded-devel list unless anyone
has other/different recommended changes.


Re: eSDK install script failure

Paul Eggleton
 

Hi Andrea,

On Wednesday, 20 September 2017 8:44:22 PM NZST Andrea Galbusera wrote:
> Seeing the errors below while installing an eSDK. This is a routinely
> generated VM that installs the eSDK from the installation script. The
> errors appeared with the latest iteration of the eSDK script, which is
> generated with almost up-to-date revisions from master. Of course I have
> extra layers in the mix, but none of them apparently had relevant changes
> since the last (working) iteration: mostly syncing to master branches
> happened. Can anyone suggest how to investigate this further? What do
> those unexpected tasks mean? I'm blocked on releasing this SDK to
> developers and clues from experts would be very appreciated...
>
> ==> default: Checking sstate mirror object availability...
> ==> default: done.
> ==> default: ERROR: Task python-native.do_fetch attempted to execute
> unexpectedly
> ==> default: ERROR: Task python-native.do_prepare_recipe_sysroot attempted
> to execute unexpectedly
> ==> default: ERROR: Task python-native.do_unpack attempted to execute
> unexpectedly
> ==> default: ERROR: Task python-native.do_patch attempted to execute
> unexpectedly
> ==> default: ERROR: Task python-native.do_populate_lic attempted to execute
> unexpectedly and should have been setscened
> ==> default: ERROR: Task python-native.do_configure attempted to execute
> unexpectedly
> ==> default: ERROR: Task python-native.do_compile attempted to execute
> unexpectedly
> ==> default: ERROR: Task python-native.do_install attempted to execute
> unexpectedly
> ==> default: ERROR: Task python-native.do_populate_sysroot attempted to
> execute unexpectedly and should have been setscened
> ==> default: ERROR: SDK preparation failed: error log written to
> /home/vagrant/poky_sdk/preparing_build_system.log
Basically this means that these tasks tried to execute when really the
results should have been restored from sstate.

The cause of this type of error is one of three things:

1) The sstate archive corresponding to a task wasn't able to be fetched from
the server (for a minimal eSDK) or wasn't present in the installer (for a
full eSDK - less likely, as we basically do a trial run as part of building
the eSDK in the first place)

2) The signature was somehow different to what it should have been. (Locked
signatures are supposed to guard against this.)

3) A task that wasn't expected to execute did execute, and thus the sstate
wasn't available.

Given that this was python-native, which I would expect to be a normal part
of the SDK, I would suspect #1. Is this a minimal or full eSDK (i.e. what is
SDK_EXT_TYPE set to)?
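
One quick way to check, run in the build directory that produced the
installer (a sketch; core-image-minimal here stands in for whatever image
the eSDK was built from):

$ bitbake -e core-image-minimal | grep '^SDK_EXT_TYPE='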

Cheers,
Paul

--

Paul Eggleton
Intel Open Source Technology Centre


[meta-selinux][PATCH] selinux-python: fix installed-vs-shipped warnings

wenzong.fan@...
 

From: Wenzong Fan <wenzong.fan@windriver.com>

Fix the warnings if ${libdir} = '/usr/lib64':
WARNING: selinux-python-2.7-r0 do_package: QA Issue: selinux-python: \
Files/directories were installed but not shipped in any package:
/usr/lib/python2.7/site-packages/sepolicy-1.1.egg-info
/usr/lib/python2.7/site-packages/sepolicy/__init__.py

Signed-off-by: Wenzong Fan <wenzong.fan@windriver.com>
---
recipes-security/selinux/selinux-python.inc | 1 +
1 file changed, 1 insertion(+)

diff --git a/recipes-security/selinux/selinux-python.inc b/recipes-security/selinux/selinux-python.inc
index 55060e3..4bc5cb5 100644
--- a/recipes-security/selinux/selinux-python.inc
+++ b/recipes-security/selinux/selinux-python.inc
@@ -102,6 +102,7 @@ FILES_${PN} += "\
 EXTRA_OEMAKE += "LIBSEPOLA=${STAGING_LIBDIR}/libsepol.a"
 do_install() {
 	oe_runmake DESTDIR=${D} \
+		LIBDIR='${D}${libdir}' \
 		PYTHONLIBDIR='${libdir}/python${PYTHON_BASEVERSION}/site-packages' \
 		install
 }
--
2.13.0


eSDK install script failure

Andrea Galbusera
 

Seeing the errors below while installing an eSDK. This is a routinely generated VM that installs the eSDK from the installation script. The errors appeared with the latest iteration of the eSDK script, which is generated with almost up-to-date revisions from master. Of course I have extra layers in the mix, but none of them apparently had relevant changes since the last (working) iteration: mostly syncing to master branches happened. Can anyone suggest how to investigate this further? What do those unexpected tasks mean? I'm blocked on releasing this SDK to developers and clues from experts would be very appreciated...

==> default: Checking sstate mirror object availability...
==> default: done.
==> default: ERROR: Task python-native.do_fetch attempted to execute unexpectedly
==> default: ERROR: Task python-native.do_prepare_recipe_sysroot attempted to execute unexpectedly
==> default: ERROR: Task python-native.do_unpack attempted to execute unexpectedly
==> default: ERROR: Task python-native.do_patch attempted to execute unexpectedly
==> default: ERROR: Task python-native.do_populate_lic attempted to execute unexpectedly and should have been setscened
==> default: ERROR: Task python-native.do_configure attempted to execute unexpectedly
==> default: ERROR: Task python-native.do_compile attempted to execute unexpectedly
==> default: ERROR: Task python-native.do_install attempted to execute unexpectedly
==> default: ERROR: Task python-native.do_populate_sysroot attempted to execute unexpectedly and should have been setscened
==> default: ERROR: SDK preparation failed: error log written to /home/vagrant/poky_sdk/preparing_build_system.log


devtool/sdk: multiple issues with sdk-update (and after update)

Krzysztof Kozlowski <krzk@...>
 

Hi all,

I am using Yocto Poky 2.3 (yocto-2.3-65-gcc48789276e0) and its
extensible SDK. Host is Ubuntu 16.04.3 LTS. I have multiple issues
with sdk-update:


1. $ devtool sdk-update
Fetching origin
fatal: unable to access
'https://foobar.com/~builder/releases/yocto-2.3/toolchain/updates/layers/.git/':
Problem with the SSL CA cert (path? access rights?)
error: Could not fetch origin

The workaround is to run:

GIT_SSL_CAINFO="/etc/ssl/certs/ca-certificates.crt" devtool sdk-update

but that is not all that convenient.
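
A slightly less fiddly alternative might be to persist the CA path in the
git configuration of the SDK's layers checkout (an untested sketch):

$ cd ~/proceq17_sdk/layers
$ git config http.sslCAInfo /etc/ssl/certs/ca-certificates.crt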


2. The SDK update partially succeeds, but apparently not all tasks are executed:

HEAD is now at 8189dd22fed5 init repo
NOTE: Preparing build system... (This may take some time.)
ERROR: Unexecuted tasks found in preparation log:
NOTE: Running task 1065 of 2619
(/home/krzk/proceq17_sdk/layers/poky/meta/recipes-graphics/freetype/freetype_2.7.1.bb:do_fetch)
NOTE: Running task 1077 of 2619
(/home/krzk/proceq17_sdk/layers/poky/meta/recipes-multimedia/libpng/libpng_1.6.28.bb:do_fetch)
...
...
...
NOTE: Running task 2619 of 2619
(/home/krzk/proceq17_sdk/layers/meta-oe/recipes-support/opencv/opencv_3.2.bb:do_packagedata)


It seems that update works... but not entirely.


3. devtool sdk-update sees new commits in the layers repository on the origin
remote, but runs just "git reset --hard", so it does not switch to them.

Fetching origin
From https://foobar.com/~builder/releases/yocto-2.3/toolchain/updates/layers/
f392bd369685..af2013cdfa56 master -> origin/master
HEAD is now at f392bd369685 init repo

Running git reset --hard will obviously not switch the HEAD from
f392bd369685 to af2013cdfa56.

My workaround here is to manually reset --hard origin/master and then
re-run the sdk-update.
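
In full, that is (a sketch; the layers repo path inside the installed SDK is
taken from the logs above):

$ cd ~/proceq17_sdk/layers
$ git fetch origin
$ git reset --hard origin/master
$ devtool sdk-update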


4. An SDK updated this way (with all the workarounds above) is different from
a fresh install. For example, some files in the sysroots are missing - parts
introduced at some point by changes in my software. Here the json-c.pc file
is missing:

$ cd UPDATED_SDK/workspace/sources
$ mkdir lib-eclipse && cd lib-eclipse
$ cmake -G "Eclipse CDT4 - Unix Makefiles" -DCMAKE_BUILD_TYPE=Debug ../lib
-- Checking for module 'json-c'
-- No package 'json-c' found
$ find UPDATED_SDK/ -name 'json-c.pc'
./tmp/work/cortexa5hf-neon-poky-linux-gnueabi/json-c/0.12-r0/image/usr/lib/pkgconfig/json-c.pc
./tmp/work/cortexa5hf-neon-poky-linux-gnueabi/json-c/0.12-r0/package/usr/lib/pkgconfig/json-c.pc
./tmp/work/cortexa5hf-neon-poky-linux-gnueabi/json-c/0.12-r0/sysroot-destdir/usr/lib/pkgconfig/json-c.pc
./tmp/work/cortexa5hf-neon-poky-linux-gnueabi/json-c/0.12-r0/packages-split/json-c-dev/usr/lib/pkgconfig/json-c.pc
./tmp/work/cortexa5hf-neon-poky-linux-gnueabi/json-c/0.12-r0/build/json-c.pc
./tmp/sysroots-components/cortexa5hf-neon/json-c/usr/lib/pkgconfig/json-c.pc

$ find FRESH_INSTALL/ -name 'json-c.pc'
./tmp/sysroots/col-vf50-proceq/usr/lib/pkgconfig/json-c.pc
./tmp/work/cortexa5hf-neon-poky-linux-gnueabi/json-c/0.12-r0/sysroot-destdir/usr/lib/pkgconfig/json-c.pc
./tmp/sysroots-components/cortexa5hf-neon/json-c/usr/lib/pkgconfig/json-c.pc

The devtool build works okay for both cases.


Any hints on these issues?

Best regards,
Krzysztof

P.S. I could not find a discussion list for the extensible SDK or devtool, so
I hope this is the right place...


Sysroot bug in bitbake or wrong configuration?

Svein Seldal
 

I have the spu-image.bb recipe below, and running on Pyro, the recipe behaves differently if it is run on a fresh system with no sstate elements, compared to a system that has an sstate cache present.

The failure is that spu-image requires the host tool "uuidgen", and thus has DEPENDS on "util-linux-native". When "-c cleanall spu-image" is run prior to building spu-image, the recipe sysroot is properly initialized with util-linux-native and uuidgen is available in the task functions.

If "-c clean" is run prior to the build, or tmp is simply deleted, the sysroot will not be properly initialized, uuidgen is not available, and the recipe fails.

Is this a bug in bitbake or am I missing something in my recipe?


Best regards,
Svein Seldal


# spu-image.bb
DESCRIPTION = "Upgrade Image"

LICENSE = "MIT"
LIC_FILES_CHKSUM = "file://${COREBASE}/LICENSE;md5=4d92cd373abda3937c2bc47fbc49d690"

DEPENDS = "util-linux-native"
INHIBIT_DEFAULT_DEPS = "1"

fakeroot do_spu_rootfs() {
uuidgen
}
addtask do_spu_rootfs before do_build

fakeroot do_spu_image () {
uuidgen
}
addtask do_spu_image after do_spu_rootfs before do_build

# It does not matter if these are noexec-ed or not
#do_fetch[noexec] = "1"
#do_unpack[noexec] = "1"
#do_patch[noexec] = "1"
#do_configure[noexec] = "1"
#do_compile[noexec] = "1"
#do_install[noexec] = "1"
#do_package[noexec] = "1"
#do_package_qa[noexec] = "1"
#do_packagedata[noexec] = "1"
#do_package_write_ipk[noexec] = "1"
#do_package_write_deb[noexec] = "1"
#do_package_write_rpm[noexec] = "1"


# 1) Running works fine
# bitbake -v spu-image |tee log1.txt
# cat log1.txt | grep -2 uuidgen
#
# 2) Cleaning
# bitbake -c clean spu-image
#
# 3) Rebuilding -- now fails
# bitbake -v spu-image |tee log2.txt
# cat log2.txt | grep -2 uuidgen
#
# 4) Sstate cleaning
# bitbake -c cleanall spu-image
#
# 5) Works again:
# bitbake -v spu-image |tee log3.txt
# cat log3.txt | grep -2 uuidgen


Re: Kernel Build Failures with Shared SSTATE

Manjukumar Harthikote Matha <MANJUKUM@...>
 

Hi Richard,

> -----Original Message-----
> From: yocto-bounces@yoctoproject.org [mailto:yocto-bounces@yoctoproject.org]
> On Behalf Of Schmitt, Richard
> Sent: Friday, July 14, 2017 8:23 AM
> To: yocto@yoctoproject.org
> Subject: [yocto] Kernel Build Failures with Shared SSTATE
>
> Hi,
>
> I had been running into kernel build failures on the morty branch when
> using shared sstate. First I'll describe the error, and then my solution.
>
> The first build that initializes the sstate cache works fine. Subsequent
> clean builds will fail. The failure would occur in the
> do_compile_kernelmodules task. The error would indicate a failure because
> tmp/work-shared/<machine>/kernel-build-artifacts was missing.
>
> My analysis concluded that the kernel build was restored from the cache,
> but it did not restore the kernel-build-artifacts needed by the
> do_compile_kernelmodules task.
>
> My solution was to include the following in a bbappend file for the kernel:
>
> SSTATETASKS += "do_shared_workdir"
>
> do_shared_workdir[sstate-plaindirs] = "${STAGING_KERNEL_BUILDDIR}"
>
> python do_shared_workdir_setscene () {
>     sstate_setscene(d)
> }
>
> I assume the correct way to fix this would be to update
> meta/classes/kernel.bbclass. It looks like there was some attempt to do
> something with the shared workdir, because there is a
> do_shared_workdir_setscene routine, but right now it just returns 1. Is
> that intentional? It seems wrong.

I am facing the same issue, but have seen only a few instances of the
failure, and have not been able to figure out exact steps to reproduce it.
Would it be better to remove the addtask for shared_workdir_setscene?
If you look at the do_deploy task in kernel.bbclass, it doesn't handle the
setscene task either.

Thanks,
Manju


[meta-raspberrypi][PATCH V2 4/4] xserver-xf86-config: Disable glamor for the modesetting driver on pi64

Khem Raj
 

Fixes an xorg-server crash with musl; see details at
https://github.com/voidlinux/void-packages/issues/6091

Signed-off-by: Khem Raj <raj.khem@gmail.com>
---
.../xserver-xf86-config/rpi/xorg.conf.d/10-noglamor.conf | 6 ++++++
recipes-graphics/xorg-xserver/xserver-xf86-config_0.1.bbappend | 9 +++++++--
2 files changed, 13 insertions(+), 2 deletions(-)
create mode 100644 recipes-graphics/xorg-xserver/xserver-xf86-config/rpi/xorg.conf.d/10-noglamor.conf

diff --git a/recipes-graphics/xorg-xserver/xserver-xf86-config/rpi/xorg.conf.d/10-noglamor.conf b/recipes-graphics/xorg-xserver/xserver-xf86-config/rpi/xorg.conf.d/10-noglamor.conf
new file mode 100644
index 0000000..1a562ea
--- /dev/null
+++ b/recipes-graphics/xorg-xserver/xserver-xf86-config/rpi/xorg.conf.d/10-noglamor.conf
@@ -0,0 +1,6 @@
+#
+Section "Device"
+	Identifier "modeset"
+	Driver "modesetting"
+	Option "AccelMethod" "None"
+EndSection
diff --git a/recipes-graphics/xorg-xserver/xserver-xf86-config_0.1.bbappend b/recipes-graphics/xorg-xserver/xserver-xf86-config_0.1.bbappend
index b361eef..7902f20 100644
--- a/recipes-graphics/xorg-xserver/xserver-xf86-config_0.1.bbappend
+++ b/recipes-graphics/xorg-xserver/xserver-xf86-config_0.1.bbappend
@@ -4,7 +4,9 @@ SRC_URI_append_rpi = " \
     file://xorg.conf.d/98-pitft.conf \
     file://xorg.conf.d/99-calibration.conf \
 "
-
+SRC_URI_append_libc-musl_raspberrypi3-64 = " \
+    file://xorg.conf.d/10-noglamor.conf \
+"
 do_install_append_rpi () {
     PITFT="${@bb.utils.contains("MACHINE_FEATURES", "pitft", "1", "0", d)}"
     if [ "${PITFT}" = "1" ]; then
@@ -13,5 +15,8 @@ do_install_append_rpi () {
         install -m 0644 ${WORKDIR}/xorg.conf.d/99-calibration.conf ${D}/${sysconfdir}/X11/xorg.conf.d/
     fi
 }
-
+do_install_append_libc-musl_raspberrypi3-64 () {
+    install -d ${D}/${sysconfdir}/X11/xorg.conf.d/
+    install -m 0644 ${WORKDIR}/xorg.conf.d/10-noglamor.conf ${D}/${sysconfdir}/X11/xorg.conf.d/
+}
 FILES_${PN}_rpi += "${sysconfdir}/X11/xorg.conf ${sysconfdir}/X11/xorg.conf.d/*"
--
2.14.1
