Re: what to expect from distributed sstate cache?

Mans Zigher <mans.zigher@...>
 

Hi,

Thanks for the input. Regarding Docker: we build the Docker image
ourselves and use the same image for all nodes, so shouldn't they
be identical when the nodes start the containers?

Thanks,

On Wed, 27 May 2020 at 11:16, <Mikko.Rapeli@...> wrote:


Hi,

On Wed, May 27, 2020 at 10:58:55AM +0200, Mans Zigher wrote:
This is maybe more related to bitbake, but I'll start by posting it here.
I am trying to make use of a distributed sstate cache for the first
time, but I am getting some unexpected results and wanted to hear
whether my expectations are wrong. Everything works as expected when a
build node uses an sstate cache from itself: I do a clean build and
upload the sstate cache from that build to our mirror. If I then do a
complete build using the mirror, I get a 99% hit rate, which is what I
would expect. If I then start a build on a different node using the
same cache, I am only getting a 16% hit rate. I am running the builds
inside docker, so the environment should be identical. We have several
build nodes in our CI; they were actually cloned, and all of them have
the same HW. They all run the builds in docker, but it looks like they
cannot share the sstate cache and still get a 99% hit rate. This
suggests to me that the sstate cache hit rate is node-dependent, so a
cache cannot actually be shared between different nodes, which is not
what I expected. I have not been able to find any information about
this limitation. Any clarification regarding what to expect from the
sstate cache would be appreciated.
We do something similar, except that we rsync an sstate mirror to the
build nodes from the latest release before a build (and topics from
gerrit are merged onto the latest release too, to avoid the sstate and
the build tree getting too far out of sync).

bitbake-diffsigs can tell you why things get rebuilt. The answers
should be there.

Also note that docker images are not reproducible by default and might
end up having different patch versions of openssl etc. depending on
who built them and when. One way to work around this is to use e.g.
the snapshot.debian.org repos for Debian containers, with a
timestamped state of the full package repo used to generate the
container. I've done something similar, but manually, on top of
debootstrap to create a build rootfs tarball for lxc.

Hope this helps,

-Mikko


Re: what to expect from distributed sstate cache?

Mikko Rapeli
 

Hi,

On Wed, May 27, 2020 at 10:58:55AM +0200, Mans Zigher wrote:
This is maybe more related to bitbake, but I'll start by posting it here.
I am trying to make use of a distributed sstate cache for the first
time, but I am getting some unexpected results and wanted to hear
whether my expectations are wrong. Everything works as expected when a
build node uses an sstate cache from itself: I do a clean build and
upload the sstate cache from that build to our mirror. If I then do a
complete build using the mirror, I get a 99% hit rate, which is what I
would expect. If I then start a build on a different node using the
same cache, I am only getting a 16% hit rate. I am running the builds
inside docker, so the environment should be identical. We have several
build nodes in our CI; they were actually cloned, and all of them have
the same HW. They all run the builds in docker, but it looks like they
cannot share the sstate cache and still get a 99% hit rate. This
suggests to me that the sstate cache hit rate is node-dependent, so a
cache cannot actually be shared between different nodes, which is not
what I expected. I have not been able to find any information about
this limitation. Any clarification regarding what to expect from the
sstate cache would be appreciated.
We do something similar, except that we rsync an sstate mirror to the
build nodes from the latest release before a build (and topics from
gerrit are merged onto the latest release too, to avoid the sstate and
the build tree getting too far out of sync).
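
A minimal sketch of pointing a build at such an rsynced local mirror
(the local path is hypothetical):

# conf/local.conf or site.conf on each build node
SSTATE_MIRRORS ?= "file://.* file:///srv/sstate-mirror/PATH"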

bitbake-diffsigs can tell you why things get rebuilt. The answers
should be there.
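
A hedged usage sketch (the recipe and file names are illustrative):
compare the two most recent signatures bitbake recorded for a task, or
diff two sigdata files copied from the two build nodes:

# compare the latest two signatures recorded for a single task
bitbake-diffsigs -t zlib do_configure

# or compare specific sigdata/siginfo files from the two nodes
bitbake-diffsigs nodeA-sigdata-file nodeB-sigdata-file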

Also note that docker images are not reproducible by default and might
end up having different patch versions of openssl etc. depending on
who built them and when. One way to work around this is to use e.g.
the snapshot.debian.org repos for Debian containers, with a
timestamped state of the full package repo used to generate the
container. I've done something similar, but manually, on top of
debootstrap to create a build rootfs tarball for lxc.
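
An illustrative sketch of that approach (the Debian release and
snapshot timestamp are made up):

# Dockerfile: pin the package repo to a fixed point in time so every
# rebuild of the image sees identical package versions
FROM debian:buster
RUN echo "deb http://snapshot.debian.org/archive/debian/20200527T000000Z/ buster main" \
      > /etc/apt/sources.list && \
    apt-get -o Acquire::Check-Valid-Until=false update && \
    apt-get install -y --no-install-recommends build-essential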

Hope this helps,

-Mikko


Re: what to expect from distributed sstate cache?

Alexander Kanavin
 

The recommended setup is to use r/w NFS shared between the build machines, so they all contribute to the cache directly. And yes, if the inputs to a task are identical, then there should be a cache hit.
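
A minimal sketch of that setup (the mount point is hypothetical):

# conf/local.conf on every builder, with the NFS share mounted read/write
SSTATE_DIR = "/srv/nfs/sstate-cache"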

If you are getting cache misses where you expect a cache hit, then bitbake-diffsigs/bitbake-dumpsig may help to debug.

Alex


On Wed, 27 May 2020 at 10:59, Mans Zigher <mans.zigher@...> wrote:
Hi,

This is maybe more related to bitbake, but I'll start by posting it here.
I am trying to make use of a distributed sstate cache for the first
time, but I am getting some unexpected results and wanted to hear
whether my expectations are wrong. Everything works as expected when a
build node uses an sstate cache from itself: I do a clean build and
upload the sstate cache from that build to our mirror. If I then do a
complete build using the mirror, I get a 99% hit rate, which is what I
would expect. If I then start a build on a different node using the
same cache, I am only getting a 16% hit rate. I am running the builds
inside docker, so the environment should be identical. We have several
build nodes in our CI; they were actually cloned, and all of them have
the same HW. They all run the builds in docker, but it looks like they
cannot share the sstate cache and still get a 99% hit rate. This
suggests to me that the sstate cache hit rate is node-dependent, so a
cache cannot actually be shared between different nodes, which is not
what I expected. I have not been able to find any information about
this limitation. Any clarification regarding what to expect from the
sstate cache would be appreciated.

Thanks


what to expect from distributed sstate cache?

Mans Zigher <mans.zigher@...>
 

Hi,

This is maybe more related to bitbake, but I'll start by posting it here.
I am trying to make use of a distributed sstate cache for the first
time, but I am getting some unexpected results and wanted to hear
whether my expectations are wrong. Everything works as expected when a
build node uses an sstate cache from itself: I do a clean build and
upload the sstate cache from that build to our mirror. If I then do a
complete build using the mirror, I get a 99% hit rate, which is what I
would expect. If I then start a build on a different node using the
same cache, I am only getting a 16% hit rate. I am running the builds
inside docker, so the environment should be identical. We have several
build nodes in our CI; they were actually cloned, and all of them have
the same HW. They all run the builds in docker, but it looks like they
cannot share the sstate cache and still get a 99% hit rate. This
suggests to me that the sstate cache hit rate is node-dependent, so a
cache cannot actually be shared between different nodes, which is not
what I expected. I have not been able to find any information about
this limitation. Any clarification regarding what to expect from the
sstate cache would be appreciated.

Thanks


about when poky check sstate_cache #yocto

zhangyifan46@...
 

I read the code around the sstate_cache; let me briefly describe how I think it works. I am not sure if this is right.
As an example, suppose we have setscene tasks A_setscene and B_setscene, and tasks C and D that are not setscene tasks.
After we bitbake something, the runqueue starts and checks A and B to see if the sstate_cache exists; if it does, poky unpacks the cache, then the ordinary build process starts and skips A and B.
So the check for the existence and validity of the sstate_cache happens before the ordinary build process, instead of right before executing A and B (for example, if we have a dependency tree where A depends on X, the check happens before X instead of between X and A).
That is the result of my reading of the code. I also did a small experiment: in the middle of a clean bitbake build, I copied all the sstate_caches into the sstate_cache directory, and it seemed that bitbake skipped no tasks and no unpacking happened.

Finally, I have a question: can we move the checking and unpacking to, as in the example above, right before executing A, i.e. between X and A?
It means a lot to me, because sharing the sstate_cache may be helpful for a more efficient parallel build (instead of just parallel make jobs for gcc).
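
As a hedged illustration of the behaviour described above (option
availability depends on the bitbake version in use):

# list a recipe's tasks, including the *_setscene variants
bitbake -c listtasks zlib

# newer bitbake releases can run only the setscene (cache-restoring)
# tasks, which makes the check/unpack phase easy to observe
bitbake --setscene-only core-image-minimal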


Re: [PATCH yocto-autobuilder-helper] scripts: add a pair of scripts to set up and run Auto Upgrade Helper

Anibal Limon
 



On Mon, 18 May 2020 at 02:03, Nicolas Dechesne <nicolas.dechesne@...> wrote:


On Sun, May 17, 2020 at 5:21 PM Alexander Kanavin <alex.kanavin@...> wrote:
This allows automating its setup and execution on all autobuilder worker machines;
previously there was a static setup on a dedicated machine, which wasn't
great from a maintenance perspective.

To use:

scripts/setup-auh target_dir
scripts/run-auh target_dir

(run-auh can be run several times in a directory that
was previously set up)

Signed-off-by: Alexander Kanavin <alex.kanavin@...>
---
 scripts/auh-config/local.conf.append   |  3 +++
 scripts/auh-config/upgrade-helper.conf | 33 ++++++++++++++++++++++++++
 scripts/run-auh                        | 32 +++++++++++++++++++++++++
 scripts/setup-auh                      | 26 ++++++++++++++++++++
 4 files changed, 94 insertions(+)
 create mode 100644 scripts/auh-config/local.conf.append
 create mode 100644 scripts/auh-config/upgrade-helper.conf
 create mode 100755 scripts/run-auh
 create mode 100755 scripts/setup-auh

diff --git a/scripts/auh-config/local.conf.append b/scripts/auh-config/local.conf.append
new file mode 100644
index 0000000..9628737
--- /dev/null
+++ b/scripts/auh-config/local.conf.append
@@ -0,0 +1,3 @@
+
+INHERIT += "buildhistory"
+LICENSE_FLAGS_WHITELIST = "commercial"
diff --git a/scripts/auh-config/upgrade-helper.conf b/scripts/auh-config/upgrade-helper.conf
new file mode 100644
index 0000000..fbf5d8a
--- /dev/null
+++ b/scripts/auh-config/upgrade-helper.conf
@@ -0,0 +1,33 @@
+[maintainer_override]
+# mails for recipe upgrades will go to john.doe instead of jane.doe, etc
+#ross.burton@...=anibal.limon@...

unrelated.. but I just spotted this ^. I am cc'ing Anibal's new email address. it's in a few other places.

Right, looks good to me. Since Alexander cleaned up the AUH after some tinfoil improvements, he is the new maintainer :).

Cheers,
Anibal
 
 

+
+[settings]
+# recipes in blacklist will be skipped
+blacklist=linux-libc-headers linux-yocto alsa-utils-scripts build-appliance-image
+#blacklist=python python3 glibc gcc linux-libc-headers linux-yocto-rt linux-yocto linux-yocto-dev linux-yocto-tiny qt4-x11-free qt4-embedded qt4-x11-free qt4e-demo-image gnome-common gnome-desktop3 gnome-desktop-testing adt-installer build-appliance-image
+# only recipes belonging to maintainers in whitelist will be attempted
+#maintainers_whitelist=anibal.limon@...
+# SMTP server
+smtp=smtp1.yoctoproject.org:25
+# from whom should the mails arrive
+from=auh@...
+# who should get the status mail with statistics, at the end
+status_recipients=openembedded-core@...
+# clean sstate directory before upgrading
+#clean_sstate=yes
+# clean tmp directory before upgrading
+#clean_tmp=yes
+# machines to test build with
+#machines=qemux86 qemux86-64 qemuarm qemumips qemuppc
+#machines=qemux86
+
+buildhistory=yes
+#testimage=yes
+#testimage_name=core-image-minimal
+
+#workdir=/home/auh/work/
+#publish_work_url=https://logs.yoctoproject.org/auh/
+
+commit_revert_policy=all
+
diff --git a/scripts/run-auh b/scripts/run-auh
new file mode 100755
index 0000000..29a8044
--- /dev/null
+++ b/scripts/run-auh
@@ -0,0 +1,32 @@
+#!/bin/bash
+# Run Auto Upgrade Helper in a directory set up by setup_auh.
+#
+# Called with $1 - the directory where the setup was created
+
+if [ -z $1 ]; then
+  echo "Use: $0 auh_setup_dir"
+  exit 1
+fi
+
+full_dir=$(readlink -e $1)
+
+auh_dir=$full_dir/auto-upgrade-helper
+poky_dir=$full_dir/poky
+build_dir=$full_dir/build
+sstate_dir=$full_dir/build/sstate-cache
+
+pushd $poky_dir
+
+# Base the upgrades on poky master
+git fetch origin
+git checkout -B tmp-auh-upgrades origin/master
+
+source $poky_dir/oe-init-build-env $build_dir
+$auh_dir/upgradehelper.py -e all
+
+# clean up to avoid the disk filling up
+rm -rf $build_dir/tmp/
+rm -rf $build_dir/workspace/sources/*
+find $sstate_dir -atime +10 -delete
+
+popd
diff --git a/scripts/setup-auh b/scripts/setup-auh
new file mode 100755
index 0000000..23f3d44
--- /dev/null
+++ b/scripts/setup-auh
@@ -0,0 +1,26 @@
+#!/bin/bash
+# Initialize Auto Upgrade Helper in a directory.
+#
+# Called with $1 - the directory to place the setup
+CONFIG_DIR=`dirname $0`/auh-config
+
+if [ -z $1 ]; then
+  echo "Use: $0 target_dir"
+  exit 1
+fi
+
+mkdir -p $1
+pushd $1
+
+git clone git://git.yoctoproject.org/poky
+pushd poky
+git config user.email auh@...
+git config user.name "Auto Upgrade Helper"
+popd
+git clone git://git.yoctoproject.org/auto-upgrade-helper
+source poky/oe-init-build-env build
+mkdir -p upgrade-helper
+popd
+
+cp $CONFIG_DIR/upgrade-helper.conf $1/build/upgrade-helper
+cat $CONFIG_DIR/local.conf.append >> $1/build/conf/local.conf
--
2.26.2



Resolving unbuildable dependency chain: custom image (machine & kernel) and quilt-native #yocto

mark@...
 

Hi,
Thanks for all the effort that has gone into Yocto, especially the documentation.
I've set up a new machine definition (x86_64 arch), custom BSP, custom kernel and image.

Initially I had trouble establishing the dependency chain between the kernel and the machine.
Thank you to Diego Santa Cruz and Bruce Ashfield for their descriptions:
https://www.yoctoproject.org/pipermail/yocto/2019-October/047002.html

From these I was able to get the kernel BSP, BSP recipe and machine definition holding hands and playing nicely... up until these squabbles with quilt-native:

Building the image `recipes-core/images/bcknv-base.bb`:

$ bitbake bcknv-base
ERROR: Nothing PROVIDES 'quilt-native'
quilt-native was skipped: incompatible with machine bcknv (not in COMPATIBLE_MACHINE)
ERROR: Required build target 'bcknv-base' has no buildable providers.
Missing or unbuildable dependency chain was: ['bcknv-base', 'quilt-native']

Checking it was not just an issue with the image:

 $ bitbake virtual/kernel
ERROR: Nothing PROVIDES 'quilt-native'
quilt-native was skipped: incompatible with machine bcknv (not in COMPATIBLE_MACHINE)
ERROR: Required build target 'virtual/kernel' has no buildable providers.
Missing or unbuildable dependency chain was: ['virtual/kernel', 'quilt-native']

This is progress, since before that the unbuildable dependency chain was: ['bcknv-base', 'virtual/kernel']

I've been through the BSP and kernel dev manuals, as well as the mega manual, and couldn't see anywhere this is addressed.
Obviously I'm missing the tree for the forest.

I'd appreciate any hints, tips or suggestions on where I've gone wrong.

Kind regards
Mark
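
One common cause of this symptom, offered as a hedged sketch (an
assumption, not a confirmed diagnosis of this report): COMPATIBLE_MACHINE
set in a global file such as local.conf or the machine/distro .conf
applies to every recipe, including quilt-native, whereas it should
normally be set only in machine-specific recipes such as the kernel:

# in the kernel recipe (e.g. linux-bcknv.bb) only, not in a global .conf
COMPATIBLE_MACHINE = "bcknv"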


Re: gpg: can't connect to the agent: File name too long

Diego Santa Cruz
 

-----Original Message-----
From: yocto@... <yocto@...> On
Behalf Of Damien LEFEVRE via lists.yoctoproject.org
Sent: 26 May 2020 11:02
To: yocto@...
Subject: Re: [yocto] gpg: can't connect to the agent: File name too long

I think my problem is that the do_image_* tasks are running as fakeroot/pseudo.

Is there a way to run this task as a normal local user?

I read that, when not running as the local user, I should create the socket with
gpgconf --create-socketdir


But this fails too, although I set permissions for all on the gpg files and
directories:
'''
| gpgconf: socketdir is '/test-warrior/build-jetson-
xavier/tmp/work/jetson_xavier-poky-linux/test-image/1.0-
r0/my_img/home'
| gpgconf: no /run/user dir
| gpgconf: using homedir as fallback
| gpgconf: error creating socket directory
| gpgconf: fatal error (exit status 1)

'''

Basically I need to run gpg, as a normal user, after do_image_tegra.

Any hint?
The problem is that the paths to UNIX sockets are limited in length, and you are probably hitting that limit. The base classes take care of avoiding this, but I did hit the problem in a custom recipe that was using gpg directly.

I solved the problem, in the task shell function that was calling gpg, by using a host temporary directory (/var/tmp/...) as a throw-away GPG home directory.

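# a short host path keeps the gpg-agent socket path under the length limit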
gpgdir=`mktemp -td ${PN}-gpg.XXXXXX`
install -m 700 -d $gpgdir/home
gpg --batch --homedir $gpgdir/home ...
...
rm -rf $gpgdir

Hope that helps.

--
Diego Santa Cruz, PhD
Technology Architect
spinetix.com


Re: Overwrite a bbclass globally

Ayoub Zaki
 

Thanks for the idea of using BBPATH.

My change is kind of custom, and I don't think anyone else will make use of it.


Cheers

On Tue, May 26, 2020 at 4:42 PM Quentin Schulz <quentin.schulz@...> wrote:
Hi Ayoub,

On Tue, May 26, 2020 at 04:36:13PM +0200, Ayoub Zaki via lists.yoctoproject.org wrote:
> Hi,
>
> I would like to make changes to systemd.bbclass in my layer.
>
> I can create a new one, e.g. my-systemd.bbclass, to override the default
> one, but this will not work, since I want ALL recipes in all the layers
> I'm using to make use of it.
>
> are there ways to achieve this?
>

BBPATH[1] is what's used to locate bbclasses. It's defined in your
conf/layer.conf.

You can either make sure your layer is parsed before the one containing
the original systemd.bbclass (via BBLAYERS in conf/bblayers.conf), or
prepend to BBPATH instead of appending.

Although... it's usually bad practice, because it means that if I were
to use two layers doing the same thing (overriding the same bbclass),
the behavior is kind of undefined, depending on the order in which they
are parsed.

Ideally, you should contribute the modifications back to the upstream
systemd.bbclass; that way there is no need to duplicate it and override
it from somewhere else.

[1] https://www.yoctoproject.org/docs/current/mega-manual/mega-manual.html#var-BBPATH

Cheers,
Quentin


Re: Overwrite a bbclass globally

Robert P. J. Day
 

On Tue, 26 May 2020, Ayoub Zaki via lists.yoctoproject.org wrote:

Hi,
I would like to make changes to systemd.bbclass in my layer.

I can create a new one, e.g. my-systemd.bbclass, to override the
default one, but this will not work, since I want ALL recipes in all
the layers I'm using to make use of it.

are there ways to achieve this?
aside from whether there is a good way to do this, this seems to be
a really bad idea since recipes that inherit what they *think* is the
systemd class file will be getting something different; that's just
inviting all sorts of unintended bugs.

rday


Yocto Project Status WW20'21

Stephen Jolley
 

Current Dev Position: YP 3.2 M1

Next Deadline: YP 3.2 M1 build date 2020/6/16

 

Next Team Meetings:

 

Key Status/Updates:

  • YP 3.0.3 is in QA result review; there were ptest regressions, but these look intermittent in nature and hence unlikely to block the release
  • YP 2.7.4 should be built this week
  • One of the autobuilder issues, where workers were hanging, was tracked down to an NFS incompatibility between the NAS and newer kernel versions. The NAS was upgraded to resolve this, at the cost of some short-notice downtime.
  • There remain a large number of bugs that we’d ideally like to fix in 3.2 M1 (or M2), but they are “unassigned”: there is nobody to work on them. If anyone has time, looking at these bugs would be a great way to help us. See: https://wiki.yoctoproject.org/wiki/Bug_Triage#Medium.2B_3.2_Unassigned_Enhancements.2FBugs
  • There continue to be other intermittent autobuilder issues in both master and dunfell which we’re trying to track down. These look related to the code rather than the infrastructure.
  • The AUH is now implemented as an autobuilder job and will be run from there in the future.

 

YP 3.2 Milestone Dates:

  • YP 3.2 M1 build date 2020/6/16
  • YP 3.2 M1 Release date 2020/6/26
  • YP 3.2 M2 build date 2020/7/27
  • YP 3.2 M2 Release date 2020/8/7
  • YP 3.2 M3 build date 2020/8/31
  • YP 3.2 M3 Release date 2020/9/11
  • YP 3.2 M4 build date 2020/10/5
  • YP 3.2 M4 Release date 2020/10/30

 

Planned upcoming dot releases:

  • YP 3.0.3 is out of QA and the results are being reviewed.
  • YP 3.0.3 release date 2020/5/15
  • YP 2.7.4 build date 2020/5/18
  • YP 2.7.4 release date 2020/5/29
  • YP 3.1.1 build date 2020/6/29
  • YP 3.1.1 release date 2020/7/10
  • YP 3.0.4 build date 2020/8/10
  • YP 3.0.4 release date 2020/8/21
  • YP 3.1.2 build date 2020/9/14
  • YP 3.1.2 release date 2020/9/25

 

Tracking Metrics:

 

The Yocto Project’s technical governance is through its Technical Steering Committee, more information is available at:

https://wiki.yoctoproject.org/wiki/TSC

 

The Status reports are now stored on the wiki at: https://wiki.yoctoproject.org/wiki/Weekly_Status

 

[If anyone has suggestions for other information you’d like to see on this weekly status update, let us know!]

 

Thanks,

 

Stephen K. Jolley

Yocto Project Program Manager

Cell: (208) 244-4460

Email: sjolley.yp.pm@...

 


Re: Overwrite a bbclass globally

Quentin Schulz
 

Hi Ayoub,

On Tue, May 26, 2020 at 04:36:13PM +0200, Ayoub Zaki via lists.yoctoproject.org wrote:
Hi,

I would like to make changes to systemd.bbclass in my layer.

I can create a new one, e.g. my-systemd.bbclass, to override the default
one, but this will not work, since I want ALL recipes in all the layers
I'm using to make use of it.

are there ways to achieve this?
BBPATH[1] is what's used to locate bbclasses. It's defined in your
conf/layer.conf.

You can either make sure your layer is parsed before the one containing
the original systemd.bbclass (via BBLAYERS in conf/bblayers.conf), or
prepend to BBPATH instead of appending, as sketched below.
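
A minimal sketch (BBPATH is real; the layer name is made up): a layer's
conf/layer.conf usually appends itself to BBPATH, and prepending instead
makes its bbclass files win the lookup:

# conf/layer.conf of meta-custom
# the usual form appends:  BBPATH .= ":${LAYERDIR}"
# prepending makes this layer's classes/ directory searched first:
BBPATH =. "${LAYERDIR}:"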

Although... it's usually bad practice, because it means that if I were
to use two layers doing the same thing (overriding the same bbclass),
the behavior is kind of undefined, depending on the order in which they
are parsed.

Ideally, you should contribute the modifications back to the upstream
systemd.bbclass; that way there is no need to duplicate it and override
it from somewhere else.

[1] https://www.yoctoproject.org/docs/current/mega-manual/mega-manual.html#var-BBPATH

Cheers,
Quentin


Overwrite a bbclass globally

Ayoub Zaki
 

Hi,

I would like to make changes to systemd.bbclass in my layer.

I can create a new one, e.g. my-systemd.bbclass, to override the default one, but this will not work, since I want ALL recipes in all the layers I'm using to make use of it.

are there ways to achieve this?


Thank you !


Cheers



Re: gpg: can't connect to the agent: File name too long

Damien LEFEVRE
 

I think my problem is that the do_image_* tasks are running as fakeroot/pseudo.

Is there a way to run this task as a normal local user?

I read that, when not running as the local user, I should create the socket with
gpgconf --create-socketdir

But this fails too, although I set permissions for all on the gpg files and directories:
'''
| gpgconf: socketdir is '/test-warrior/build-jetson-xavier/tmp/work/jetson_xavier-poky-linux/test-image/1.0-r0/my_img/home'
| gpgconf: no /run/user dir
| gpgconf: using homedir as fallback
| gpgconf: error creating socket directory
| gpgconf: fatal error (exit status 1)
'''

Basically I need to run gpg, as a normal user, after do_image_tegra.

Any hint?


Re: overwrite LAYERSERIES_COMPAT_ for different layer

TRO <thomas.roos@...>
 

Hi,
thanks for the reply - I've put it in bblayers.conf and it's working.
cheers, Thomas
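
For illustration, the kind of line this refers to, as a hedged sketch
(the collection name and release series are made up; the suffix must
match the layer's BBFILE_COLLECTIONS entry):

# in conf/bblayers.conf, mark a third-party layer as compatible
LAYERSERIES_COMPAT_somelayer = "zeus dunfell"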


Enhancements/Bugs closed WW21!

Stephen Jolley
 

All,

Below are the owners of the enhancements and bugs closed during the last week!

Who                        Count
jean-marie.lemetayer@...      10
akuster808@...                 3
randy.macleod@...              2
otavio@...                     1
ross@...                       1
ricardo.ribalda@...            1
steve@...                      1
nicolas.dechesne@...           1
alex.kanavin@...               1
Grand Total                   21

Thanks,

 

Stephen K. Jolley

Yocto Project Program Manager

Cell: (208) 244-4460

Email: sjolley.yp.pm@...

 


Yocto Project Newcomer & Unassigned Bugs - Help Needed

Stephen Jolley
 

All,

 

The triage team is starting to collect and classify bugs which a newcomer to the project would be able to work on, in a way that means people can find them. They're being listed on the triage page under the appropriate heading:

 

https://wiki.yoctoproject.org/wiki/Bug_Triage#Newcomer_Bugs

 

The idea is that these bugs should be straightforward for someone without deep experience of the project to work on. If anyone can help, please take ownership of a bug and send patches! If anyone needs help/advice, there are people on IRC who can likely provide it, and some of the more experienced contributors will likely be happy to help too.

 

Also, the triage team meets weekly and does its best to handle the bugs reported in Bugzilla. The number of people attending that meeting has fallen, as has the number of people available to help fix bugs. One of the things we hear users report is that they don't know how to help. We (the triage team) are therefore going to start reporting on the currently 344 unassigned or newcomer bugs.

 

We're hoping people may be able to spare some time now and again to help out with these. Bugs are split into two types: "true bugs", where things don't work as they should, and "enhancements", which are features we'd want to add to the system. There are also roughly four different "priority" classes right now, "3.1", "3.2", "3.99" and "Future", with the more pressing/urgent issues in "3.1" and then "3.2".

 

Please review this link, and if a bug is something you would be able to help with, either take ownership of the bug or send me (sjolley.yp.pm@...) an e-mail with the bug number you would like, and I will assign it to you (please make sure you have a Bugzilla account). The list is at: https://wiki.yoctoproject.org/wiki/Bug_Triage_Archive#Unassigned_or_Newcomer_Bugs

 

Thanks,

 

Stephen K. Jolley

Yocto Project Program Manager

Cell: (208) 244-4460

Email: sjolley.yp.pm@...

 


Current high bug count owners for Yocto Project 3.2

Stephen Jolley
 

All,

Below is the list of the top 50 bug owners, as of the end of WW21, who have open medium or higher bugs and enhancements against YP 3.2. There are 110 possible work days left until the final release candidates for YP 3.2 need to be released.

Who                        Count
richard.purdie@...            30
david.reyna@...               19
bluelightning@...             17
akuster808@...                12
bruce.ashfield@...            12
kai.kang@...                  10
ross@...                       9
Qi.Chen@...                    9
trevor.gamblin@...             8
mark.morton@...                8
randy.macleod@...              7
JPEWhacker@...                 7
changqing.li@...               6
timothy.t.orling@...           6
michael@...                    5
rpjday@...                     4
pbarker@...                    4
anuj.mittal@...                4
mingli.yu@...                  4
yi.zhao@...                    3
raj.khem@...                   3
jon.mason@...                  3
kexin.hao@...                  3
alex.kanavin@...               3
hongxu.jia@...                 3
mostthingsweb@...              2
mark.hatle@...                 2
dl9pf@...                      2
seebs@...                      2
ycnakajsph@...                 2
kergoth@...                    2
alejandro@...                  2
chee.yang.lee@...              2
jaewon@...                     2
akuster@...                    2
jpuhlman@...                   2
joe.slater@...                 1
nicolas.dechesne@...           1
ydirson@...                    1
steve@...                      1
ricardo.ribalda@...            1
naveen.kumar.saini@...         1
denis@...                      1
kai.ruhnau@...                 1
jason.wessel@...               1
sakib.sajal@...                1
matthew.zeng@...               1
Martin.Jansa@...               1
liu.ming50@...                 1

Thanks,

 

Stephen K. Jolley

Yocto Project Program Manager

Cell: (208) 244-4460

Email: sjolley.yp.pm@...

 


Re: how to un-blacklist a blacklisted recipe?

Robert P. J. Day
 

On Mon, 25 May 2020, Martin Jansa wrote:

Yes, it's worth it. Thanks
done, and submitted. it was just my horrific luck that i was testing
that feature, and randomly grabbed two recipes that used standard
assignment. le *sigh* ...

rday


Re: how to un-blacklist a blacklisted recipe?

Martin Jansa
 

Yes, it's worth it. Thanks

On Mon, May 25, 2020 at 7:40 PM Robert P. J. Day <rpjday@...> wrote:
On Mon, 25 May 2020, Robert P. J. Day wrote:

> On Mon, 25 May 2020, Martin Jansa wrote:
>
> > Yes, in local.conf or distro config, both work fine. Maybe the
> > PNBLACKLIST you're trying to un-blacklist is using normal assignment
> > instead of a weak one? See
>
>   i just now determined that that is exactly what is happening --
> tried it with the nanopb recipe from meta-oe, which contains:
>
>   PNBLACKLIST[nanopb] = "Needs forward porting to use python3"
>
> switched that to weak assignment, all good. does that imply that,
> other than in perhaps exceptional circumstances, *all* blacklisting
> should use weak assignment?

  never mind, just took a look at the commit which does exactly this.
is it worth submitting a patch to tweak the few hard assignments?

rday
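
To illustrate the distinction discussed in this thread (the recipe name
comes from it; the local.conf line is a sketch): a weak default can be
cleared from local.conf, whereas a hard assignment cannot:

# in the layer (meta-oe), weak assignment leaves room for the user:
PNBLACKLIST[nanopb] ?= "Needs forward porting to use python3"

# in conf/local.conf, un-blacklist by clearing the value:
PNBLACKLIST[nanopb] = ""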
