
[meta-mingw][PATCH] wayland: Disable DTD validation on i686 MinGW

Joshua Watt
 

DTD validation can't be built for i686 MinGW because the assembly file
used to encode the DTD string is incompatible (it works fine for x86_64
MinGW though).

Signed-off-by: Joshua Watt <JPEWhacker@...>
---
recipes-graphics/wayland/wayland_%.bbappend | 4 ++++
1 file changed, 4 insertions(+)

diff --git a/recipes-graphics/wayland/wayland_%.bbappend b/recipes-graphics/wayland/wayland_%.bbappend
index 3713f2d..bbb1c52 100644
--- a/recipes-graphics/wayland/wayland_%.bbappend
+++ b/recipes-graphics/wayland/wayland_%.bbappend
@@ -1,2 +1,6 @@
+# The assembly file that encodes the DTD string into wayland-scanner is not
+# compatible with i686 MinGW
+PACKAGECONFIG_remove_mingw32_i686 = "dtd-validation"
+
EXTRA_OECONF_class-nativesdk_mingw32 = "--disable-documentation --disable-libraries"

--
2.23.0


[meta-security][PATCH] meta-security: add layer index callouts

Armin Kuster
 

Signed-off-by: Armin Kuster <akuster808@...>
---
meta-integrity/conf/layer.conf | 2 ++
meta-security-compliance/conf/layer.conf | 2 ++
meta-tpm/conf/layer.conf | 1 +
3 files changed, 5 insertions(+)

diff --git a/meta-integrity/conf/layer.conf b/meta-integrity/conf/layer.conf
index 962424c..bfc9c6f 100644
--- a/meta-integrity/conf/layer.conf
+++ b/meta-integrity/conf/layer.conf
@@ -24,3 +24,5 @@ OE_TERMINAL_EXPORTS += "INTEGRITY_BASE"
LAYERSERIES_COMPAT_integrity = "zeus"
# ima-evm-utils depends on keyutils from meta-oe
LAYERDEPENDS_integrity = "core openembedded-layer"
+
+BBLAYERS_LAYERINDEX_NAME_integrity = "meta-integrity"
diff --git a/meta-security-compliance/conf/layer.conf b/meta-security-compliance/conf/layer.conf
index 0e93bd0..e346bf3 100644
--- a/meta-security-compliance/conf/layer.conf
+++ b/meta-security-compliance/conf/layer.conf
@@ -11,3 +11,5 @@ BBFILE_PRIORITY_scanners-layer = "10"
LAYERSERIES_COMPAT_scanners-layer = "zeus"

LAYERDEPENDS_scanners-layer = "core openembedded-layer meta-python"
+
+BBLAYERS_LAYERINDEX_NAME_integrity = "meta-security-compliance"
diff --git a/meta-tpm/conf/layer.conf b/meta-tpm/conf/layer.conf
index 3af2d95..175eba8 100644
--- a/meta-tpm/conf/layer.conf
+++ b/meta-tpm/conf/layer.conf
@@ -14,3 +14,4 @@ LAYERDEPENDS_tpm-layer = " \
core \
openembedded-layer \
"
+BBLAYERS_LAYERINDEX_NAME_tpm-layer = "meta-tpm"
--
2.17.1


sdk rpi3 & rpi4 different sysroot

Ed Vidal
 

Hi,
Any and all help is appreciated. Thanks in advance.
Question 1
I tried using the SDK to build icestorm (a Makefile project). The sysroots for the rpi3 and rpi4 SDKs are
missing an ftdi.h header. The target rpi4 builds icestorm okay; see the steps below.

Question 2
What is the procedure to update the meta-raspberrypi repo?
An issue with core-image-sato for rpi3 required adding
vidal@ws009:~/wkg/yocto-zeus-3.0/raspberrypi4/poky/meta-raspberrypi$ git diff
diff --git a/conf/machine/raspberrypi3.conf b/conf/machine/raspberrypi3.conf
index 581e47c..43a0a25 100644
--- a/conf/machine/raspberrypi3.conf
+++ b/conf/machine/raspberrypi3.conf
@@ -18,3 +18,4 @@ UBOOT_MACHINE = "rpi_3_32b_config"
SERIAL_CONSOLES ?= "115200;ttyS0"
 
ARMSTUB ?= "armstub7.bin"
+ENABLE_UART = "1"
For the rpi4 build I made the same change in raspberrypi4.conf:
diff --git a/conf/machine/raspberrypi4.conf b/conf/machine/raspberrypi4.conf
index 1bcf931..0b91515 100644
--- a/conf/machine/raspberrypi4.conf
+++ b/conf/machine/raspberrypi4.conf
@@ -18,3 +18,4 @@ SERIAL_CONSOLES ?= "115200;ttyS0"
 
VC4DTBO ?= "vc4-fkms-v3d"
ARMSTUB ?= "armstub7.bin"
+ENABLE_UART = "1"
Question 3
Why is the header file ftdi.h missing from the rpi4 SDK, and present but not used in the rpi3 SDK?

vidal@ws009:~/wkg/test-yocto-sdk$ git clone https://github.com/develone/icestorm.git

vidal@ws009:~/wkg/test-yocto-sdk$ . /opt/poky/3.0.1/rpi4/environment-setup-cortexa7t2hf-neon-vfpv4-poky-linux-gnueabi

vidal@ws009:~/wkg/test-yocto-sdk$ cd icestorm/

vidal@ws009:~/wkg/test-yocto-sdk$ echo ${CC}
arm-poky-linux-gnueabi-gcc -mthumb -mfpu=neon-vfpv4 -mfloat-abi=hard -mcpu=cortex-a7 --sysroot=/opt/poky/3.0.1/rpi4/sysroots/cortexa7t2hf-neon-vfpv4-poky-linux-gnueabi

vidal@ws009:~/wkg/test-yocto-sdk/icestorm$ time make -e

The build fails with the following error:
make[1]: Entering directory '/home/vidal/wkg/test-yocto-sdk/icestorm/iceprog'
arm-poky-linux-gnueabi-gcc  -mthumb -mfpu=neon-vfpv4 -mfloat-abi=hard -mcpu=cortex-a7 --sysroot=/opt/poky/3.0.1/rpi4/sysroots/cortexa7t2hf-neon-vfpv4-poky-linux-gnueabi  -O2 -pipe -g -feliminate-unused-debug-types    -c -o iceprog.o iceprog.c
iceprog.c:27:10: fatal error: ftdi.h: No such file or directory
   27 | #include <ftdi.h>
      |          ^~~~~~~~
compilation terminated.

Using the command below, I tried to locate ftdi.h; none was found in my rpi4 SDK sysroot.

vidal@ws009:~/wkg/test-yocto-sdk/icestorm$ find /opt/poky/3.0.1/rpi4/sysroots/cortexa7t2hf-neon-vfpv4-poky-linux-gnueabi/ -name ftdi.h

An ftdi.h file was found in my rpi3 SDK sysroot.
vidal@ws009:~/wkg/test-yocto-sdk/icestorm$ find /opt/poky/3.0.1/sysroots/cortexa7t2hf-neon-vfpv4-poky-linux-gnueabi/ -name ftdi.h
/opt/poky/3.0.1/sysroots/cortexa7t2hf-neon-vfpv4-poky-linux-gnueabi/usr/include/libftdi1/ftdi.h

The only differences between the raspberrypi3 and raspberrypi4 builds are shown below.
vidal@ws009:~/wkg/yocto-zeus-3.0/raspberrypi4/build$ diff ../../build/conf/local.conf conf/local.conf 
39c39
< MACHINE ??= "raspberrypi3"
---
> MACHINE ??= "raspberrypi4"
66a67
> DL_DIR ?= "/home/vidal//wkg/yocto-zeus-3.0/build/downloads"
Testing rpi3 sdk

vidal@ws009:~/wkg/test-yocto-sdk/icestorm$ . /opt/poky/3.0.1/environment-setup-cortexa7t2hf-neon-vfpv4-poky-linux-gnueabi

vidal@ws009:~/wkg/test-yocto-sdk/icestorm$ echo ${CC}
arm-poky-linux-gnueabi-gcc -mthumb -mfpu=neon-vfpv4 -mfloat-abi=hard -mcpu=cortex-a7 --sysroot=/opt/poky/3.0.1/sysroots/cortexa7t2hf-neon-vfpv4-poky-linux-gnueabi

vidal@ws009:~/wkg/test-yocto-sdk/icestorm$ make -e

The rpi3 sdk gets the same error as the rpi4 sdk (see above). 
target rpi4
real 23m38.686s
user 23m16.332s
sys 0m9.949s

AMD 6 core sdk
real 3m33.809s
user 3m33.304s
sys 0m0.556s


Let me know if I can provide additional information,
Regards
Edward Vidal Jr. e-mail develone@... 915-595-1613


OpenEmbedded Workshop at FOSDEM20 tickets

Jon Mason
 

We are happy to inform everyone that tickets to the inaugural
OpenEmbedded Workshop are now on sale. Early bird tickets are
available for a third off the regular price. Also, we are offering a
"Supporter Ticket". This is for those who are able to contribute more
to help support OpenEmbedded, and/or would like to see more events
similar to this in the future.
Both of these can be purchased at
https://pretix.eu/OpenEmbedded/oe-workshop-2020/

Get your tickets now, as we expect these to sell out quickly. Also,
early bird pricing ends on December 31st.

Finally, there are still a few slots open for Speakers. If you have a
talk you would like to give, go to
https://pretalx.com/oe-workshop-2020/cfp
CFP deadline is December 15th.

More information on the OpenEmbedded Workshop can be found at
https://pretalx.com/oe-workshop-2020/

Thank you,
The OpenEmbedded Board


Re: What should I expect when using SSTATE_MIRROR?

Mikko Rapeli
 

Hi,

On Fri, Dec 06, 2019 at 02:07:47PM +0100, Mans Zigher wrote:
Hi,

I am trying to use the sstate cache by using a SSTATE_MIRROR. I have
built everything from scratch once and pushed my sstate-cache
directory to the server. I then wen't over to another machine and did
a clean build but this time I made sure it used SSTATE_MIRROR but the
result was not what I expected

Sstate summary: Wanted 1255 Found 233 Missed 2044 Current 0 (18%
match, 0% complete)

I have tried it multiple times on different machines with the same
result; the only time it works as expected is when running the build on
the first machine, where the sstate cache originally came from. Am I
missing something? Or is this expected?
Maybe host tool versions differ, which triggers recompilation of things
like gcc, and thus everything gets rebuilt. You can use bitbake-diffsigs
to figure out why the rebuild was triggered.
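
For example (the recipe and task names below are only illustrative), bitbake-diffsigs can compare the latest signatures for a task, or two specific sigdata files:

```
# Compare the two most recent signature files for a given task:
bitbake-diffsigs -t linux-yocto do_compile

# Or diff two concrete sigdata files (paths elided):
bitbake-diffsigs tmp/stamps/.../do_compile.sigdata.<hash1> \
                 tmp/stamps/.../do_compile.sigdata.<hash2>
```

The output lists which variables or task dependencies changed between the two signatures, which usually points straight at the host tool or configuration difference.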

I work around this by using a build container based on LXC and Debian stable
everywhere, in CI and on developer machines with various Linux distros.

Hope this helps,

-Mikko


What should I expect when using SSTATE_MIRROR?

Mans Zigher <mans.zigher@...>
 

Hi,

I am trying to use the sstate cache by using a SSTATE_MIRROR. I have
built everything from scratch once and pushed my sstate-cache
directory to the server. I then went over to another machine and did
a clean build but this time I made sure it used SSTATE_MIRROR but the
result was not what I expected

Sstate summary: Wanted 1255 Found 233 Missed 2044 Current 0 (18%
match, 0% complete)

I have tried it multiple times on different machines with the same
result; the only time it works as expected is when running the build on
the first machine, where the sstate cache originally came from. Am I
missing something? Or is this expected?
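
(For context, a typical SSTATE_MIRRORS entry in local.conf looks something like the line below; the server URL is hypothetical, and PATH / downloadfilename=PATH are literal placeholders that bitbake substitutes at fetch time:)

```
SSTATE_MIRRORS ?= "file://.* http://sstate.example.com/sstate-cache/PATH;downloadfilename=PATH"
```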

Thanks

/Måns Zigher


How to integrate Python monorepo application? #yocto #python #monorepo

Georgii Staroselskii
 

A little context before the actual questions. As the project I have worked on grew, the number of repositories used for the application we're developing grew as well, to the point that it's become unbearable to keep making modifications to the codebase: when something changes in the core library in one repository, we need to make adjustments to the other Python middleware projects that use this library. While this may not sound that terrifying, managing it in Yocto has become a burden: on every version bump of the Python library we need to bump all of the dependent projects as well. After some thinking, we decided to try a monorepo approach to deal with the complexity. I'm omitting the logic behind choosing this approach, but I can delve into it if you think this is wrong.

1) Phase 1 was easy. Just bring every component of our middleware into one repository. Right now we have a big repository with more than 20 git submodules. The submodules will be gone once we finish the transition, but as the project doesn't stop while we're doing the transition, submodules were chosen to track the changes and keep the monorepo up to date. Every submodule is Python code with a setup.py/Pipfile et al. that does the bootstrapping.

2) Phase 2 is to integrate everything in Yocto. It has turned out to be more difficult than I had anticipated.

Right now we have this:

Application monorepo

Pipfile
python-library1/setup.py
python-library1/Makefile
python-library1/python-library1/<code>


python-library2/setup.py
python-library2/Makefile
python-library2/python-library2/<code>


...
python-libraryN/setup.py
python-libraryN/Makefile
python-libraryN/python-libraryN/<code>

Naturally, right now we have python-library1_<some_version>.bb, python-library2_<some_other_version>.bb, and so on in Yocto. Now we want to get rid of this version hell and stick to monorepo versions, so that we could have just a monorepo_1.0.bb that is updated when anything changes in any part.

Right now I have the layout described below.

monorepo.inc is as follows:

SRC_URI=<gitsm://the location of the monorepo,branch=...,>
S = "${WORKDIR}/git"
SRCREV = <hash>

python-libraryN.inc:

require monorepo.inc
S = "${WORKDIR}/git/python-libraryN"
inherit setuptools

#some libraries also use entry points and the recipe needs to inherit systemd to enable some services.

This approach works, but it has a very significant drawback I can't ignore: every time SRCREV (or PV/tag in the future) is changed, every recipe downloads the whole repository, which is quite big.

So, the question is how to structure the Yocto recipes for this Python monorepo. I'm looking for a way to get do_fetch and do_unpack to do their thing once, on the monorepo as a whole.

This section of the documentation makes me think that I'm approaching this task wrong. I was trying to decipher gcc-source.inc and gcc-shared-source.inc files in the poky but they are out of my depth for now and look a little bit overcomplicated for my use-case. So basically my question boils down to how to deal with multiple recipes referencing the same source. Any structural change to either the monorepo or the recipe structure can be done now, so I would be very grateful for any feedback.
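
One shape that might work, very loosely modelled on poky's gcc-source.inc split (everything below is a hypothetical, untested sketch; recipe names and paths are made up): a single "source" recipe owns do_fetch/do_unpack into a shared work directory, and each library recipe points S at a subdirectory of that checkout and declares a task-level dependency instead of fetching itself:

```
# monorepo-source.bb (hypothetical): the only recipe that fetches/unpacks
SRC_URI = "gitsm://example.com/monorepo;branch=master"
SRCREV = "<hash>"
WORKDIR = "${TMPDIR}/work-shared/monorepo"
S = "${WORKDIR}/git"
# Nothing to build or package in this recipe:
deltask do_configure
deltask do_compile
deltask do_install

# python-libraryN.bb (hypothetical): reuse the shared checkout
SRC_URI = ""
S = "${TMPDIR}/work-shared/monorepo/git/python-libraryN"
do_configure[depends] += "monorepo-source:do_unpack"
inherit setuptools
```

Whether overriding WORKDIR like this interacts cleanly with sstate is something to verify; gcc-source.inc and gcc-shared-source.inc are the in-tree precedent for this pattern.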

P.S. 1) The monorepo structure is not set in stone, so if you have any suggestions, I'm more than open to any criticism. 2) The .inc file structure might not be suitable for another reason: this stack is deployed across a dozen different devices, and some of those python-library_N.bb recipes have .bbappends in other layers that are custom to the devices. I mean that some of them might require different components of the system installed, or some config and file modifications.

Any suggestion on how to deal with the complexity of a big Python application in Yocto will be welcome. Thanks in advance.


[meta-security][PATCH 2/2] tpm2-abrmd: Port command line options to new version.

Diego Santa Cruz
 

From: Philip Tricca <flihp@...>

These have changed upstream.

Signed-off-by: Philip Tricca <flihp@...>
Signed-off-by: Diego Santa Cruz <Diego.SantaCruz@...>
---
meta-tpm/recipes-tpm2/tpm2-abrmd/files/tpm2-abrmd.default | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/meta-tpm/recipes-tpm2/tpm2-abrmd/files/tpm2-abrmd.default b/meta-tpm/recipes-tpm2/tpm2-abrmd/files/tpm2-abrmd.default
index 987978a..b4b3c20 100644
--- a/meta-tpm/recipes-tpm2/tpm2-abrmd/files/tpm2-abrmd.default
+++ b/meta-tpm/recipes-tpm2/tpm2-abrmd/files/tpm2-abrmd.default
@@ -1 +1 @@
-DAEMON_OPTS="--tcti=device --logger=syslog --max-connections=20 --max-transient-objects=20 --fail-on-loaded-trans"
+DAEMON_OPTS="--tcti=device --logger=syslog --max-connections=20 --max-transients=20 --flush-all"
--
2.18.1


[meta-security][PATCH 1/2] tpm2-abrmd-init.sh: fix for /dev/tpmrmX

Diego Santa Cruz
 

From: Trevor Woerner <twoerner@...>

Newer kernels, in addition to the traditional /dev/tpmX device nodes, are now
also creating /dev/tpmrmX device nodes. This causes this script to get
confused and abort, meaning tpm2-abrmd does not get started during boot.

Fix for https://github.com/flihp/meta-measured/issues/56

Signed-off-by: Trevor Woerner <twoerner@...>
Signed-off-by: Diego Santa Cruz <Diego.SantaCruz@...>
---
meta-tpm/recipes-tpm2/tpm2-abrmd/files/tpm2-abrmd-init.sh | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/meta-tpm/recipes-tpm2/tpm2-abrmd/files/tpm2-abrmd-init.sh b/meta-tpm/recipes-tpm2/tpm2-abrmd/files/tpm2-abrmd-init.sh
index c8dfb7d..9bb7da9 100644
--- a/meta-tpm/recipes-tpm2/tpm2-abrmd/files/tpm2-abrmd-init.sh
+++ b/meta-tpm/recipes-tpm2/tpm2-abrmd/files/tpm2-abrmd-init.sh
@@ -27,7 +27,7 @@ case "${1}" in
start)
echo -n "Starting $DESC: "

- if [ ! -e /dev/tpm* ]
+ if [ ! -e /dev/tpm? ]
then
echo "device driver not loaded, skipping."
exit 0
--
2.18.1
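
The failure mode the commit message describes can be reproduced in plain sh (a standalone demo, not the init script itself): when both tpm0 and tpmrm0 device nodes exist, the unquoted `tpm*` glob expands to two words, `test` receives too many arguments and errors out, so the script took the wrong branch.

```shell
#!/bin/sh
# Simulate /dev with both a tpm0 and a tpmrm0 node present.
dir=$(mktemp -d)
touch "$dir/tpm0" "$dir/tpmrm0"

# With `tpm*` the glob expands to two paths, so `[ ! -e ... ]` gets too
# many arguments and fails with an error (status > 1), taking the wrong branch:
if [ ! -e "$dir"/tpm* ] 2>/dev/null; then
    echo "no device found"
else
    echo "glob gave test too many arguments"
fi

# `?` matches exactly one character, so `tpm?` matches tpm0 but not tpmrm0:
set -- "$dir"/tpm?
echo "tpm? matched $# path(s)"
rm -rf "$dir"
```

With newer kernels creating both node types, the `tpm?` form keeps the expansion to a single word and the existence check behaves as intended.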


[meta-security][PATCH 0/2] tpm2-abrmd: startup fixes

Diego Santa Cruz
 

This ports patches from the meta-measured layer that fix tpm2-abrmd
not starting up.
The first patch is necessary on kernels 4.12 and later.
The second patch is necessary after the update to 2.3.0.

Philip Tricca (1):
tpm2-abrmd: Port command line options to new version.

Trevor Woerner (1):
tpm2-abrmd-init.sh: fix for /dev/tpmrmX

meta-tpm/recipes-tpm2/tpm2-abrmd/files/tpm2-abrmd-init.sh | 2 +-
meta-tpm/recipes-tpm2/tpm2-abrmd/files/tpm2-abrmd.default | 2 +-
2 files changed, 2 insertions(+), 2 deletions(-)

--
2.18.1


Re: http2 support issue in curl #yocto #raspberrypi

Uzair Mazhar
 

Hey Ross,
Thank you for your help. 
The problem was that I wasn't adding nghttp2 to PACKAGECONFIG. Once I did, it solved the http2 issue.

Regards,
Uzair


What influences the "latest version" shown in bitbake?

Sean McKay
 

Hi all,

 

I’m in the middle of our organization’s upgrade to the most recent version of poky (3.0) and I’m finding that more recent versions of recipes in our poky layer aren’t showing up as the most recent version (bitbake -s) or even accessible if I set the PREFERRED_VERSION in our distro’s .conf file.

 

For a specific example: We’re trying to stick with LTS kernel releases, so we want to move to linux-yocto 4.19. We have 4.14 in our current codebase, but it was manually pulled in (we had a rather old version of poky) so it’s in a different layer. Setting the PREFERRED_VERSION to 4.19% doesn’t result in its use, and 4.19 doesn’t even show up as an available version if I run bitbake -s. If I remove the 4.14 bb file, linux-yocto goes away completely as a buildable recipe. The only way that we’ve found that causes 4.19 to show up as an option is to add a (currently mostly empty) linux-yocto_4.19.bbappend file in one of the higher layers. (Note that there are 4.14.bbappends too, partially because of backported fixes, and partially because there are internal patches that we haven’t moved forward to 4.19 yet)
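
When debugging this, it can help to ask bitbake directly what it can see and which version won (the recipe names below are from the example above):

```
# List every version of the recipe found across the enabled layers,
# with the layer each one comes from:
bitbake-layers show-recipes linux-yocto

# Dump the resolved environment and check which preference took effect:
bitbake -e virtual/kernel | grep -E '^(PREFERRED_VERSION|PV)='
```

If 4.19 doesn't appear in the show-recipes output at all, the problem is layer/BBFILES visibility rather than version preference.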

 

Note that someone did manually set all the layer priorities in their respective .conf files in our codebase, but I didn’t think that would necessarily have any effect on the version detection.

 

Can anyone offer any guidance as to what’s causing this behavior? Is there documentation somewhere on how bitbake determines the most recent version?

 

Thanks!

-Sean McKay


Re: Question about shipping files to package.

Sean McKay
 

I’m not an expert on compiling kernel modules, just another user, so it’s entirely possible that someone more knowledgeable will come along and tell us that I’m wrong, but I can at least answer some of your questions to get you started.

 

With that said, first, I have to ask the simple stuff: when you say that no .ko is being created, where are you looking? If you’re not sure where you should be looking, I’d recommend doing a find inside your ${WORKDIR} to see whether or not you’ve got any .ko files. They won’t quite be out in plain sight.
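
For example (the exact paths are illustrative and depend on your MACHINE and recipe name), from the build directory:

```
# Any .ko produced anywhere under the recipe's workdir:
find tmp/work/*/kvm-kmod*/ -name '*.ko'
```

If that finds nothing, the compile step never produced a module, and the FILES setting isn’t the problem.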

 

In general (though again, I can’t speak for the module class), compilations take place in a build directory inside of the recipe’s workdir (${B} in bitbake parlance). The install process then copies the created files to a staging directory (${D} in bitbake parlance) that’s usually ${WORKDIR}/image.

 

Packaging is the process of taking those output files from ${D} and packaging them into one of a few well-defined packaging formats. If you’re using the defaults, this would be RPM. A single recipe can (and often does) produce multiple packages. When bitbake moves on to the packaging step (I’m simplifying a bit), it creates directories for each package that’s going to be generated inside of ${WORKDIR}/packages-split. It then looks at the FILES variable for the particular package it’s working on and copies any files that match the patterns in that variable into that package’s directory in packages-split. When it’s done copying files, each of those directories is turned into the appropriate package (probably .rpm) file.

So, for a somewhat more concrete example, when you said ‘FILES_${PN} += "/lib/modules/4.1.8-rt8+gbd51baf"’, you told bitbake that anything that matched /lib/modules/4.1.8-rt8+gbd51baf should get put in the ${PN} (main) package.

 

In the context of the error message you were previously receiving, “installed” is the do_install step, which is referring to that staging process (analogous to running ‘make install’ on something you just downloaded from the internet). Something is ‘shipped’ if it is packaged into one of the final rpm files.
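
As a side note, once a build has succeeded you can ask the shared pkgdata which package shipped a given file (the module path below is a guess based on the FILES line in the original mail):

```
oe-pkgdata-util find-path /lib/modules/4.1.8-rt8+gbd51baf/extra/kvm.ko
```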

 

Cheers!

-Sean McKay

 

From: yocto@... <yocto@...> On Behalf Of Wayne Li
Sent: Thursday, December 5, 2019 2:26 PM
To: bitbake-devel <bitbake-devel@...>; Yocto Project Discussion <yocto@...>
Subject: [yocto] Question about shipping files to package.

 

Dear Yocto Developers,

 

So I created a bitbake recipe to integrate kvm into my image as an out-of-tree kernel module.  Here's my recipe right now:

 

LICENSE = "GPLv2"
LIC_FILES_CHKSUM = "file://COPYING;md5=c616d0e7924e9e78ee192d99a3b26fbd"

inherit module

SRC_URI = "file:///homead/QorIQ-SDK-V2.0-20160527-yocto/sources/meta-virtualization/recipes-kernel/kvm-kmodule/kvm-kmod-3.10.21.tar.bz2"

S = "${WORKDIR}/kvm-kmod-3.10.21"

do_configure() {
    ./configure --arch=ppc64 --kerneldir=/homead/QorIQ-SDK-V2.0-20160527-yocto/build_t4240rdb-64b/tmp/work/t4240rdb_64b-fsl-linux/kernel-devsrc/1.0-r0/image/usr/src/kernel
}

FILES_${PN} += "/lib/modules/4.1.8-rt8+gbd51baf"

 

Bitbaking this recipe completes with no problems but no kernel module is created (compiling the source code should create a kernel module file kvm.ko).  I was wondering if the problem might be because of what I set FILES_${PN} to be.  Before I set the FILES variable I was getting an error saying something along the lines of, "Files/directories were installed but not shipped."  Then I more or less just guessed a directory and set my FILES variable to it and then the recipe finished bitbaking with no errors.

 

But now that the kvm.ko file isn't even being created, I am wondering if it might be because I set the FILES variable wrong?  What does the word "ship" mean?  And along those lines what exactly is a "package" in the setting of Yocto project?

 

-Thanks!, Wayne Li


Question about shipping files to package.

Wayne Li <waynli329@...>
 

Dear Yocto Developers,

So I created a bitbake recipe to integrate kvm into my image as an out-of-tree kernel module.  Here's my recipe right now:

LICENSE = "GPLv2"
LIC_FILES_CHKSUM = "file://COPYING;md5=c616d0e7924e9e78ee192d99a3b26fbd"

inherit module

SRC_URI = "file:///homead/QorIQ-SDK-V2.0-20160527-yocto/sources/meta-virtualization/recipes-kernel/kvm-kmodule/kvm-kmod-3.10.21.tar.bz2"

S = "${WORKDIR}/kvm-kmod-3.10.21"

do_configure() {
    ./configure --arch=ppc64 --kerneldir=/homead/QorIQ-SDK-V2.0-20160527-yocto/build_t4240rdb-64b/tmp/work/t4240rdb_64b-fsl-linux/kernel-devsrc/1.0-r0/image/usr/src/kernel
}

FILES_${PN} += "/lib/modules/4.1.8-rt8+gbd51baf"

Bitbaking this recipe completes with no problems but no kernel module is created (compiling the source code should create a kernel module file kvm.ko).  I was wondering if the problem might be because of what I set FILES_${PN} to be.  Before I set the FILES variable I was getting an error saying something along the lines of, "Files/directories were installed but not shipped."  Then I more or less just guessed a directory and set my FILES variable to it and then the recipe finished bitbaking with no errors.

But now that the kvm.ko file isn't even being created, I am wondering if it might be because I set the FILES variable wrong?  What does the word "ship" mean?  And along those lines what exactly is a "package" in the setting of Yocto project?

-Thanks!, Wayne Li


Re: http2 support issue in curl #yocto #raspberrypi

Uzair Mazhar
 

On Thu, Dec 5, 2019 at 07:48 PM, Ross Burton wrote:
Did you add meta-networking to bblayers.conf?
Yes! I have. 

I am going to try "PACKAGECONFIG[nghttp2] = "--with-nghttp2,--without-nghttp2,nghttp2"" thing with the help of documentation and will get back to you with my findings.

Thanks, 
Uzair


Re: [patchtest][PATCH 1/2] patchtest: Fix printing of exception tracebacks

Leonardo Sandoval
 

LGTM

On 12/5/2019 6:56 AM, Paul Barker wrote:
On Fri, 29 Nov 2019, at 16:09, Paul Barker wrote:
On Fri, 15 Nov 2019, at 13:23, Paul Barker wrote:
The addError() handler is called outside of an actual exception handler
so sys.exc_info() doesn't actually return an exception. This means that
traceback.print_exc() doesn't know what to print. Instead we need to use
traceback.print_exception() with the err object we've been given.

Signed-off-by: Paul Barker <paul@...>
---
patchtest | 3 +--
1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/patchtest b/patchtest
index 592f73e..59b19f5 100755
--- a/patchtest
+++ b/patchtest
@@ -101,8 +101,7 @@ def getResult(patch, mergepatch):

def addError(self, test, err):
self.test_error = True
- (ty, va, trace) = err
- logger.error(traceback.print_exc())
+ logger.error(traceback.print_exception(*err))

def addFailure(self, test, err):
self.test_failure = True
--
2.17.1

Ping on this and the following patch.
Cc'ing the maintainers listed in the readme along with Changqing Li as discussed during the Yocto Project call on Tuesday.

Could you confirm who's maintaining patchtest these days and update the readme? I've got a few ideas for enhancements and I may be able to assist with maintaining these repositories after the new year but I'd like to get these initial changes in first.


Re: [patchtest-oe][PATCH] test_patch_upstream_status: Be explicit about case sensitivity

Leonardo Sandoval
 

LGTM

On 12/5/2019 6:55 AM, Paul Barker wrote:
On Fri, 29 Nov 2019, at 16:09, Paul Barker wrote:
On Fri, 15 Nov 2019, at 13:09, Paul Barker wrote:
The case sensitivity of the checks for the "Upstream-Status" label
should match so that failure messages make sense. The checks in the
parse_upstream_status module are case sensitive and so the initial regex
check should also be made case sensitive.

A quick note about case sensitivity is added to the advice given if
"Upstream-Status" is not found.

Signed-off-by: Paul Barker <paul@...>
---
tests/test_patch_upstream_status.py | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/tests/test_patch_upstream_status.py
b/tests/test_patch_upstream_status.py
index a477dfb..ecccc58 100644
--- a/tests/test_patch_upstream_status.py
+++ b/tests/test_patch_upstream_status.py
@@ -23,7 +23,7 @@ import os

class PatchUpstreamStatus(base.Base):

- upstream_status_regex = re.compile("(?<=\+)Upstream.Status", re.IGNORECASE)
+ upstream_status_regex = re.compile("(?<=\+)Upstream.Status")

@classmethod
def setUpClassLocal(cls):
@@ -47,7 +47,7 @@ class PatchUpstreamStatus(base.Base):
payload = newpatch.__str__()
if not self.upstream_status_regex.search(payload):
self.fail('Added patch file is missing Upstream-Status
in the header',
- 'Add Upstream-Status: <Valid status> to the
header of %s' % newpatch.path,
+ 'Add Upstream-Status: <Valid status> (case
sensitive) to the header of %s' % newpatch.path,
data=[('Standard format',
self.standard_format), ('Valid status', self.valid_status)])
for line in payload.splitlines():
if self.patchmetadata_regex.match(line):
--
2.17.1

Ping.
Cc'ing the maintainers listed in the readme along with Changqing Li as discussed during the Yocto Project call on Tuesday.

Could you confirm who's maintaining patchtest-oe these days and update the readme? I've got a few ideas for enhancements and I may be able to assist with maintaining these repositories after the new year but I'd like to get these initial changes in first.


Re: [patchtest-oe][PATCH] test_patch_upstream_status: Be explicit about case sensitivity

Leonardo Sandoval
 

On 12/5/2019 6:55 AM, Paul Barker wrote:
On Fri, 29 Nov 2019, at 16:09, Paul Barker wrote:
On Fri, 15 Nov 2019, at 13:09, Paul Barker wrote:
The case sensitivity of the checks for the "Upstream-Status" label
should match so that failure messages make sense. The checks in the
parse_upstream_status module are case sensitive and so the initial regex
check should also be made case sensitive.

A quick note about case sensitivity is added to the advice given if
"Upstream-Status" is not found.

Signed-off-by: Paul Barker <paul@...>
---
tests/test_patch_upstream_status.py | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/tests/test_patch_upstream_status.py
b/tests/test_patch_upstream_status.py
index a477dfb..ecccc58 100644
--- a/tests/test_patch_upstream_status.py
+++ b/tests/test_patch_upstream_status.py
@@ -23,7 +23,7 @@ import os

class PatchUpstreamStatus(base.Base):

- upstream_status_regex = re.compile("(?<=\+)Upstream.Status", re.IGNORECASE)
+ upstream_status_regex = re.compile("(?<=\+)Upstream.Status")

@classmethod
def setUpClassLocal(cls):
@@ -47,7 +47,7 @@ class PatchUpstreamStatus(base.Base):
payload = newpatch.__str__()
if not self.upstream_status_regex.search(payload):
self.fail('Added patch file is missing Upstream-Status
in the header',
- 'Add Upstream-Status: <Valid status> to the
header of %s' % newpatch.path,
+ 'Add Upstream-Status: <Valid status> (case
sensitive) to the header of %s' % newpatch.path,
data=[('Standard format',
self.standard_format), ('Valid status', self.valid_status)])
for line in payload.splitlines():
if self.patchmetadata_regex.match(line):
--
2.17.1

Ping.
Cc'ing the maintainers listed in the readme along with Changqing Li as discussed during the Yocto Project call on Tuesday.
Sorry for the delay, Paul. Together with Paul E., I started patchtest, but since late 2017 I am no longer maintaining it. I can definitely help review patches and enhancements.


Could you confirm who's maintaining patchtest-oe these days and update the readme? I've got a few ideas for enhancements and I may be able to assist with maintaining these repositories after the new year but I'd like to get these initial changes in first.


Re: Missing layer in the layer index

Denys Dmytriyenko
 

On Thu, Dec 05, 2019 at 11:18:22AM +0000, Paul Barker wrote:
On Thu, 5 Dec 2019, at 10:07, Paul Eggleton wrote:
On Thursday, 5 December 2019 9:48:48 PM NZDT Paul Barker wrote:
On Thu, 5 Dec 2019, at 01:37, Paul Eggleton wrote:
On Wednesday, 4 December 2019 11:10:49 PM NZDT Nicolas Dechesne wrote:
I'd like to make sure meta-sancloud (https://github.com/SanCloudLtd/meta-sancloud) is listed in the layer index. I can't see it listed for either
of the branches we support (thud & rocko). However, when I try to add the
layer I get the error message "Layer with this Layer name already exists."

Perhaps this was already added with an old URL. Is there any way to get
this fixed up?
yes, this is the reason. It exists with the following URL:
https://bitbucket.sancloud.co.uk/scm/yb/meta-sancloud.git

The maintainer for that layer is not listed.. was it you?
Oddly there are no maintainer records and no layer branch records either,
hence why the layer doesn't show up properly. I'm unsure how it would have got
into that state or how long it's been there, but since it's pretty much
useless I have gone ahead and deleted it - Paul, could you please file your
submission again?
Thanks for that, it may have got broken in the layer index when we moved the repo from our private Bitbucket instance over to GitHub. I've resubmitted now.
Actually looking at the new repo I think I know what might have
happened. The layer index does not currently handle layers that don't
have a master branch perfectly; it could be that the original repo went
away and that caused it to remove all the layerbranches, and since
maintainers are per layerbranch they also got removed. (If a master
layerbranch is there it is protected from deletion even if upstream
master goes away, you just get a warning during parsing.)

I can understand why people don't want to have a master branch if they
aren't using it; that most layers have it was an assumption I made in
the earlier design. It's fixable but will take a bit of work to ensure
the correct behaviour. (This assumption is also reflected in the
submission process - by default a master layerbranch is created, but
for layers without a master branch as the approver you then have to go
in and switch the master branch to whatever the "main" branch is - I
just did that for this layer.)
Ah ok. I have a pet hate of layers with a master branch that doesn't
actually work with master of oe-core and other layers. I'd rather see a
repository with no master branch!

In this case we're building with meta-ti & meta-arago which only support
every other release. I know they also have a master branch but it's always
been broken when I've tried to use it in the past. I may see if I can give
it another try.
FWIW, both of those have been getting some love lately to at least build
against master, so you might want to try again. But in general, at least
meta-arago depends on other layers and components that are always lagging
behind and hence it's rather difficult to keep everything working with
rapidly changing master...

--
Denys


Re: http2 support issue in curl #yocto #raspberrypi

Ross Burton <ross.burton@...>
 

On 05/12/2019 11:23, Uzair Mazhar wrote:
As I mentioned, I cloned the warrior branch of poky, meta-raspberrypi and meta-openembedded.
The nghttp2 recipe is already in my repository under "meta-openembedded/meta-networking/recipes-support/nghttp2",
but the problem persists.
Did you add meta-networking to bblayers.conf?

I tried to add nghttp2 as RDEPENDS in curl and it compiled successfully but libnghttp2 was not part of my image.
Probably because the dependency ended up on a package you didn't actually install. Also, RDEPENDS are not in the sysroot, so the curl build wouldn't have been able to see nghttp2; you meant to add it to DEPENDS.

If I add nghttp2 to IMAGE_INSTALL_append in conf/local.conf, libnghttp2 becomes part of my image under /usr/lib, but again curl doesn't support http2.
Adding the library to the image won't change the curl library which has already been built without nghttp2.


Looking at the curl recipe, nghttp2 support is there but disabled out of the box, so you just need to turn the support on. The recipe has:

PACKAGECONFIG[nghttp2] = "--with-nghttp2,--without-nghttp2,nghttp2"

The documentation covers the methods available to turn on/off PACKAGECONFIG options:

https://www.yoctoproject.org/docs/latest/ref-manual/ref-manual.html#var-PACKAGECONFIG
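
For example (using the underscore override syntax current in zeus), one way to turn that option on from local.conf or a distro conf is:

```
PACKAGECONFIG_append_pn-curl = " nghttp2"
```

After that, curl is rebuilt with --with-nghttp2 and the nghttp2 runtime dependency is pulled into the image automatically.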

Ross
