
[yocto-autobuilder2][PATCH 1/2] config.py: define workers for honister

Anuj Mittal
 

Define the worker list for honister so we can test reliably now that
release support is ending.

Signed-off-by: Anuj Mittal <anuj.mittal@...>
---
config.py | 1 +
1 file changed, 1 insertion(+)

diff --git a/config.py b/config.py
index f36c273..e7539d9 100644
--- a/config.py
+++ b/config.py
@@ -152,6 +152,7 @@ all_workers = workers + workers_bringup + workers_buildperf + workers_arm

# Worker filtering for older releases
workers_prev_releases = {
+ "honister" : ("alma8", "centos7", "centos8", "debian8", "debian9", "debian10", "debian11", "fedora29", "fedora30", "fedora31", "fedora32", "fedora33", "fedora34", "fedora35", "opensuse150", "opensuse151", "opensuse152", "opensuse153", "ubuntu1604", "ubuntu1804", "ubuntu1904", "ubuntu2004", "ubuntu2110", "ubuntu2204", "perf-"),
"hardknott" : ("centos7", "centos8", "debian8", "debian9", "debian10", "debian11", "fedora31", "fedora32", "fedora33", "fedora34", "opensuse152", "ubuntu1604", "ubuntu1804", "ubuntu2004", "perf-"),
"gatesgarth" : ("centos7", "centos8", "debian8", "debian9", "debian10", "fedora30", "fedora31", "fedora32", "opensuse150", "opensuse151", "opensuse152", "ubuntu1604", "ubuntu1804", "ubuntu1904", "ubuntu2004", "perf-"),
"dunfell" : (""alma8", centos7", "centos8", "debian8", "debian9", "debian10", "debian11", "fedora29", "fedora30", "fedora31", "fedora32", "fedora33", "fedora34", "fedora35", "opensuse150", "opensuse151", "opensuse152", "opensuse153", "ubuntu1604", "ubuntu1804", "ubuntu1904", "ubuntu2004", "perf-"),
--
2.35.3


SHA384 signature for FIT images

Gangadhar N
 

Hi,
I want to use SHA384 instead of SHA256 to sign FIT images. 

diff --git a/poky/meta/classes/kernel-fitimage.bbclass b/poky/meta/classes/kernel-fitimage.bbclass
index bb2f3c4cc..d4f9dddf2 100644
--- a/poky/meta/classes/kernel-fitimage.bbclass
+++ b/poky/meta/classes/kernel-fitimage.bbclass
@@ -51,13 +51,13 @@ python __anonymous () {
 UBOOT_MKIMAGE_DTCOPTS ??= ""

 # fitImage Hash Algo
-FIT_HASH_ALG ?= "sha256"
+FIT_HASH_ALG ?= "sha384"

 # fitImage Signature Algo
 FIT_SIGN_ALG ?= "rsa2048"

 # Generate keys for signing fitImage
-FIT_GENERATE_KEYS ?= "0"
+FIT_GENERATE_KEYS ?= "1"

 # Size of private key in number of bits
 FIT_SIGN_NUMBITS ?= "2048"


I get the error below:
ERROR: linux-obmc-5.8.17+gitAUTOINC+c26e1233f9-r0 do_assemble_fitimage: Execution of '/home/gangadhar/openbmc/build/tmp/work/linux-gnueabi/linux-obmc/5.8.17+gitAUTOINC+c26e1233f9-r0/temp/run.do_assemble_fitimage.17762' failed with exit code 255:
none
fit-image.its:8.26-20.19: Warning (unit_address_vs_reg): /images/kernel@1: node has a unit name, but no reg property
fit-image.its:17.32-19.27: Warning (unit_address_vs_reg): /images/kernel@1/hash@1: node has a unit name, but no reg property
fit-image.its:21.29-31.19: Warning (unit_address_vs_reg): /images/fdt@...: node has a unit name, but no reg property
fit-image.its:28.32-30.27: Warning (unit_address_vs_reg): /images/fdt@.../hash@1: node has a unit name, but no reg property
fit-image.its:36.30-50.19: Warning (unit_address_vs_reg): /configurations/conf@...: node has a unit name, but no reg property
fit-image.its:42.32-44.27: Warning (unit_address_vs_reg): /configurations/conf@.../hash@1: node has a unit name, but no reg property
fit-image.its:45.37-49.27: Warning (unit_address_vs_reg): /configurations/conf@.../signature@1: node has a unit name, but no reg property
uboot-mkimage Can't add hashes to FIT blob: -93
Unsupported hash algorithm (sha384) for 'hash@1' hash node in 'kernel@1' image node
WARNING: exit code 255 from a shell command.
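
For reference, since these variables are set with weak default assignment (?=) in kernel-fitimage.bbclass, the same values can also be set from local.conf (or a kernel .bbappend) instead of patching the class - just a sketch, and it does not by itself address the mkimage error above:

    # local.conf (untested sketch, same values as in the diff above)
    FIT_HASH_ALG = "sha384"
    FIT_GENERATE_KEYS = "1"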

Thanks & Regards,
Gangadhar


do_patch failing when executed multiple times in the same S=WORKDIR Was: [yocto] Strange sporadic build issues (incremental builds in docker container)

Martin Jansa
 

On Wed, Mar 30, 2022 at 11:29 PM Trevor Woerner <twoerner@...> wrote:
On Wed 2022-03-30 @ 04:08:31 PM, Richard Purdie wrote:
> On Wed, 2022-03-30 at 09:40 -0400, Trevor Woerner wrote:
> > Hi Matthias,
> >
> > On Wed 2022-03-30 @ 06:32:00 AM, Matthias Klein wrote:
> > > Yes, you are right, it is mostly the same recipes that fail. But they also change from time to time.
> > > Today it happened to me even without Jenkins and Docker, normally in the console with the recipe keymaps_1.0.bb.
> >
> > And keymaps follows the exact same pattern as modutils-initscripts and
> > initscripts; namely that their sources are entirely contained in-tree:
> >
> >     keymaps/
> >     ├── files
> >     │   ├── GPLv2.patch
> >     │   └── keymap.sh
> >     └── keymaps_1.0.bb
> >
> >     keymaps/keymaps_1.0.bb
> >      23 SRC_URI = "file://keymap.sh \
> >      24            file://GPLv2.patch"
> >
> > Any recipe that follows this pattern is susceptible, it's probably just a
> > coincidence that most of my failures happened to be with the two recipes I
> > mentioned.
> >
> > This issue has revealed a bug, and fixing that bug would be great. However,
> > the thing is, keymap.sh is a shell program written 12 years ago which hasn't
> > changed since. The GPL/COPYING file is only there for "reasons". The license
> > file doesn't *need* to be moved into the build area for this recipe to get its
> > job done (namely installing keymap.sh into the image's sysvinit).
>
> The "good" news is I did work out how to reproduce this.
>
> bitbake keymaps -c clean
> bitbake keymaps
> bitbake keymaps -c unpack -f
> bitbake keymaps -c patch
> bitbake keymaps -c unpack -f
> bitbake keymaps -c patch

Awesome! That is a very simple and quick reproducer!

> I haven't looked at why but hopefully that helps us move forward with looking
> at the issue.
>
> The complications with S == WORKDIR were one of the reasons I did start work on
> patches to make it work better and maybe move fetching into a dedicated
> directory rather than WORKDIR and then symlink things. I never got that patch to
> work well enough to submit though (and it is too late for a major change like
> that in this release).

As per our conversation I quickly tried the following (not that I expected
this to be a final solution, but just a poking-around kind of thing):

        diff --git a/meta/classes/base.bbclass b/meta/classes/base.bbclass
        index cc81461473..503da61b3d 100644
        --- a/meta/classes/base.bbclass
        +++ b/meta/classes/base.bbclass
        @@ -170,6 +170,7 @@ do_unpack[dirs] = "${WORKDIR}"
         do_unpack[cleandirs] = "${@d.getVar('S') if os.path.normpath(d.getVar('S')) != os.path.normpath(d.getVar('WORKDIR')) else os.path.join('${S}', 'patches')}"

         python base_do_unpack() {
        +    bb.utils.remove(d.getVar('B') + "/.pc", recurse=True)
             src_uri = (d.getVar('SRC_URI') or "").split()
             if not src_uri:
                 return

And it changed the error message from:

        $ bitbake keymaps -c patch
        ...
        ERROR: keymaps-1.0-r31 do_patch: Applying patch 'GPLv2.patch' on target directory '/z/build-master/quilt-fix/qemux86/nodistro/build/tmp-glibc/work/qemux86-oe-linux/keymaps/1.0-r31'
        CmdError('quilt --quiltrc /z/build-master/quilt-fix/qemux86/nodistro/build/tmp-glibc/work/qemux86-oe-linux/keymaps/1.0-r31/recipe-sysroot-native/etc/quiltrc push', 0, 'stdout:
        stderr: File series fully applied, ends at patch GPLv2.patch
        ')

to:

        $ bitbake keymaps -c patch
        ...
        ERROR: keymaps-1.0-r31 do_patch: Applying patch 'GPLv2.patch' on target directory '/z/build-master/quilt-fix/qemux86/nodistro/build/tmp-glibc/work/qemux86-oe-linux/keymaps/1.0-r31'
        CmdError('quilt --quiltrc /z/build-master/quilt-fix/qemux86/nodistro/build/tmp-glibc/work/qemux86-oe-linux/keymaps/1.0-r31/recipe-sysroot-native/etc/quiltrc push', 0, 'stdout: Applying patch GPLv2.patch
        The next patch would create the file COPYING,
        which already exists!  Applying it anyway.
        patching file COPYING
        Hunk #1 FAILED at 1.
        1 out of 1 hunk FAILED -- rejects in file COPYING
        Patch GPLv2.patch can be reverse-applied

        stderr: ')

progress? https://www.reddit.com/r/ProgrammerHumor/comments/8j5qim/progress/

+oe-core ML as it isn't poky/yocto specific

Just a small update as multiple people mentioned this (in case I don't send the final fix later today).

There are a couple of recipes affected by this, e.g. keymaps (.patch already removed in oe-core), makedevs (.patch removal sent to ML yesterday https://lists.openembedded.org/g/openembedded-core/message/166172) and devmem2 (https://lists.openembedded.org/g/openembedded-devel/message/97270), but there are other recipes with S = "${WORKDIR}" where you can trigger this, e.g. by having a .patch file in a DISTRO layer .bbappend (e.g. tzdata with webOS https://github.com/webosose/meta-webosose/blob/06e5298d9f5c47679b679081d9930f8d1c776142/meta-webos/recipes-extended/tzdata/tzdata.bbappend#L10)

This do_patch issue is caused by:
introduced in kirkstone with:

I'm still looking how to fix this properly, but the shortest sequence to reproduce this is just
bitbake keymaps -c patch
bitbake keymaps -c unpack -f
bitbake keymaps -c patch

And the change in quilt behavior is causing QuiltTree.Clean (quilt pop -a -f) in:

to fail with "No series file found" before undoing the patches in WORKDIR.

Removing ".pc" as Trevor tried above doesn't help, because we really need quilt's help to undo the patches (in this case to delete COPYING file from WORKDIR before applying the .patch which tries to add it again), because do_unpack cannot just wipe S and start over (because S == WORKDIR) - not selectively removing the files listed in SRC_URI, because COPYING file isn't listed there.

Using skip_series_check in 'quilt pop' (partially reverting the change from upstream) helps a bit, but might be difficult to upstream.

Will send a fix later today or next week.

Cheers,


Re: Need help in namespace journal implementation

Prashant Badsheshi <prashantsbemail@...>
 

Can someone help here?

"I am trying to add 'namespace journal' logging for debug purpose"


On Wed, May 25, 2022 at 7:29 PM Prashant Badsheshi via lists.yoctoproject.org <prashantsbemail=gmail.com@...> wrote:

Hi,

I am working on a Yocto-based project and I am trying to add namespace journal logging for debugging purposes.

Can anyone share the steps to create namespace journal logging in a Yocto-based project?

It would also be helpful if there are any examples implementing namespace journals.

 

Thanks,

Prashant





Re: Kirkstone 4.0.1 - Exception: NameError: name 'json_summary_name' is not defined

Darcy Watkins
 

Hi Steve,

 

Awesome!  Thanks, I will watch for it.

 

 

 

Regards,

 

Darcy

 

Darcy Watkins ::  Senior Staff Engineer, Firmware

 

SIERRA WIRELESS

Direct  +1 604 233 7989   ::  Fax  +1 604 231 1109  ::  Main  +1 604 231 1100

13811 Wireless Way  :: Richmond, BC Canada V6V 3A4


dwatkins@... :: www.sierrawireless.com

 

From: Steve Sakoman <steve@...>
Date: Wednesday, May 25, 2022 at 12:58 PM
To: Darcy Watkins <dwatkins@...>
Cc: yocto@... <yocto@...>
Subject: Re: [yocto] Kirkstone 4.0.1 - Exception: NameError: name 'json_summary_name' is not defined

On Wed, May 25, 2022 at 9:34 AM Darcy Watkins
<dwatkins@...> wrote:
>
> Hi,
>
>
>
> After sync-up with kirkstone 4.0.1, I get the following error…
>
>
>
> Image CVE report stored in: /home/dwatkins/workspace/mgos/voyager1/build/tmp/deploy/images/mg90/omg-supplement-mfwimages-mg90-20220525182226.rootfs.cve
>
> ERROR: omg-supplement-mfwimages-1.0-r0 do_rootfs: Error executing a python function in exec_func_python() autogenerated:

There is a fix for this issue on list for review, it should be
available in the kirkstone branch later today.

Steve

> The stack trace of python calls that resulted in this exception/failure was:
>
> File: 'exec_func_python() autogenerated', lineno: 2, function: <module>
>
>      0001:
>
>  *** 0002:cve_check_write_rootfs_manifest(d)
>
>      0003:
>
> File: '/home/dwatkins/workspace/mgos/voyager1/upstream/yocto/poky/meta/classes/cve-check.bbclass', lineno: 213, function: cve_check_write_rootfs_manifest
>
>      0209:
>
>      0210:        link_path = os.path.join(deploy_dir, "%s.json" % link_name)
>
>      0211:        manifest_path = d.getVar("CVE_CHECK_MANIFEST_JSON")
>
>      0212:        bb.note("Generating JSON CVE manifest")
>
>  *** 0213:        generate_json_report(json_summary_name, json_summary_link_name)
>
>      0214:        bb.plain("Image CVE JSON report stored in: %s" % link_path)
>
>      0215:}
>
>      0216:
>
>      0217:ROOTFS_POSTPROCESS_COMMAND:prepend = "${@'cve_check_write_rootfs_manifest; ' if d.getVar('CVE_CHECK_CREATE_MANIFEST') == '1' else ''}"
>
> Exception: NameError: name 'json_summary_name' is not defined
>
>
>
> ERROR: Logfile of failure stored in: /home/dwatkins/workspace/mgos/voyager1/build/tmp/work/mg90-poky-linux-gnueabi/omg-supplement-mfwimages/1.0-r0/temp/log.do_rootfs.16510
>
> ERROR: Task (/home/dwatkins/workspace/mgos/voyager1/meta-mgos-core/recipes-images/images/omg-supplement-mfwimages.bb:do_rootfs) failed with exit code '1'
>
> NOTE: Tasks Summary: Attempted 8400 tasks of which 7718 didn't need to be rerun and 1 failed.
>
> NOTE: Generating JSON CVE summary
>
> CVE report summary created at: /home/dwatkins/workspace/mgos/voyager1/build/tmp/log/cve/cve-summary.json
>
>
>
>
>
> 645c157befa (Davide Gardenal        2022-05-03 09:51:43 +0200 130)         json_summary_link_name = os.path.join(cvelogpath, d.getVar("CVE_CHECK_SUMMARY_FILE_NAME_JSON"))
>
> 645c157befa (Davide Gardenal        2022-05-03 09:51:43 +0200 131)         json_summary_name = os.path.join(cvelogpath, "%s-%s.json" % (cve_summary_name, timestamp))
>
> 645c157befa (Davide Gardenal        2022-05-03 09:51:43 +0200 132)         generate_json_report(json_summary_name, json_summary_link_name)
>
> 645c157befa (Davide Gardenal        2022-05-03 09:51:43 +0200 133)         bb.plain("CVE report summary created at: %s" % json_summary_link_name)
>
>
>
>
>
> 645c157befa (Davide Gardenal        2022-05-03 09:51:43 +0200 210)         link_path = os.path.join(deploy_dir, "%s.json" % link_name)
>
> 645c157befa (Davide Gardenal        2022-05-03 09:51:43 +0200 211)         manifest_path = d.getVar("CVE_CHECK_MANIFEST_JSON")
>
> 777f1d42b62 (Marta Rybczynska       2022-03-29 14:54:31 +0200 212)         bb.note("Generating JSON CVE manifest")
>
> 645c157befa (Davide Gardenal        2022-05-03 09:51:43 +0200 213)         generate_json_report(json_summary_name, json_summary_link_name)
>
> 645c157befa (Davide Gardenal        2022-05-03 09:51:43 +0200 214)         bb.plain("Image CVE JSON report stored in: %s" % link_path)
>
>
>
>
>
> I am not sure whether we need to locally set up the “json_summary_name” in “python cve_check_write_rootfs_manifest ()” as was done inside “python cve_save_summary_handler ()”, or if something different was supposed to be passed to “generate_json_report()” instead.
>
>
>
>
>
> Regards,
>
>
>
> Darcy
>
>
>
> Darcy Watkins ::  Senior Staff Engineer, Firmware
>
>
>
> SIERRA WIRELESS
>
> Direct  +1 604 233 7989   ::  Fax  +1 604 231 1109  ::  Main  +1 604 231 1100
>
> 13811 Wireless Way  :: Richmond, BC Canada V6V 3A4
>
>
> dwatkins@... :: www.sierrawireless.com
>
>
>
>


Re: Kirkstone 4.0.1 - Exception: NameError: name 'json_summary_name' is not defined

Steve Sakoman
 

On Wed, May 25, 2022 at 9:34 AM Darcy Watkins
<dwatkins@...> wrote:

Hi,



After sync-up with kirkstone 4.0.1, I get the following error…



Image CVE report stored in: /home/dwatkins/workspace/mgos/voyager1/build/tmp/deploy/images/mg90/omg-supplement-mfwimages-mg90-20220525182226.rootfs.cve

ERROR: omg-supplement-mfwimages-1.0-r0 do_rootfs: Error executing a python function in exec_func_python() autogenerated:
There is a fix for this issue on list for review, it should be
available in the kirkstone branch later today.

Steve

The stack trace of python calls that resulted in this exception/failure was:

File: 'exec_func_python() autogenerated', lineno: 2, function: <module>

0001:

*** 0002:cve_check_write_rootfs_manifest(d)

0003:

File: '/home/dwatkins/workspace/mgos/voyager1/upstream/yocto/poky/meta/classes/cve-check.bbclass', lineno: 213, function: cve_check_write_rootfs_manifest

0209:

0210: link_path = os.path.join(deploy_dir, "%s.json" % link_name)

0211: manifest_path = d.getVar("CVE_CHECK_MANIFEST_JSON")

0212: bb.note("Generating JSON CVE manifest")

*** 0213: generate_json_report(json_summary_name, json_summary_link_name)

0214: bb.plain("Image CVE JSON report stored in: %s" % link_path)

0215:}

0216:

0217:ROOTFS_POSTPROCESS_COMMAND:prepend = "${@'cve_check_write_rootfs_manifest; ' if d.getVar('CVE_CHECK_CREATE_MANIFEST') == '1' else ''}"

Exception: NameError: name 'json_summary_name' is not defined



ERROR: Logfile of failure stored in: /home/dwatkins/workspace/mgos/voyager1/build/tmp/work/mg90-poky-linux-gnueabi/omg-supplement-mfwimages/1.0-r0/temp/log.do_rootfs.16510

ERROR: Task (/home/dwatkins/workspace/mgos/voyager1/meta-mgos-core/recipes-images/images/omg-supplement-mfwimages.bb:do_rootfs) failed with exit code '1'

NOTE: Tasks Summary: Attempted 8400 tasks of which 7718 didn't need to be rerun and 1 failed.

NOTE: Generating JSON CVE summary

CVE report summary created at: /home/dwatkins/workspace/mgos/voyager1/build/tmp/log/cve/cve-summary.json





645c157befa (Davide Gardenal 2022-05-03 09:51:43 +0200 130) json_summary_link_name = os.path.join(cvelogpath, d.getVar("CVE_CHECK_SUMMARY_FILE_NAME_JSON"))

645c157befa (Davide Gardenal 2022-05-03 09:51:43 +0200 131) json_summary_name = os.path.join(cvelogpath, "%s-%s.json" % (cve_summary_name, timestamp))

645c157befa (Davide Gardenal 2022-05-03 09:51:43 +0200 132) generate_json_report(json_summary_name, json_summary_link_name)

645c157befa (Davide Gardenal 2022-05-03 09:51:43 +0200 133) bb.plain("CVE report summary created at: %s" % json_summary_link_name)





645c157befa (Davide Gardenal 2022-05-03 09:51:43 +0200 210) link_path = os.path.join(deploy_dir, "%s.json" % link_name)

645c157befa (Davide Gardenal 2022-05-03 09:51:43 +0200 211) manifest_path = d.getVar("CVE_CHECK_MANIFEST_JSON")

777f1d42b62 (Marta Rybczynska 2022-03-29 14:54:31 +0200 212) bb.note("Generating JSON CVE manifest")

645c157befa (Davide Gardenal 2022-05-03 09:51:43 +0200 213) generate_json_report(json_summary_name, json_summary_link_name)

645c157befa (Davide Gardenal 2022-05-03 09:51:43 +0200 214) bb.plain("Image CVE JSON report stored in: %s" % link_path)





I am not sure whether we need to locally set up the “json_summary_name” in “python cve_check_write_rootfs_manifest ()” as was done inside “python cve_save_summary_handler ()”, or if something different was supposed to be passed to “generate_json_report()” instead.





Regards,



Darcy



Darcy Watkins :: Senior Staff Engineer, Firmware



SIERRA WIRELESS

Direct +1 604 233 7989 :: Fax +1 604 231 1109 :: Main +1 604 231 1100

13811 Wireless Way :: Richmond, BC Canada V6V 3A4


dwatkins@... :: www.sierrawireless.com




Kirkstone 4.0.1 - Exception: NameError: name 'json_summary_name' is not defined

Darcy Watkins
 

Hi,

 

After sync-up with kirkstone 4.0.1, I get the following error…

 

Image CVE report stored in: /home/dwatkins/workspace/mgos/voyager1/build/tmp/deploy/images/mg90/omg-supplement-mfwimages-mg90-20220525182226.rootfs.cve

ERROR: omg-supplement-mfwimages-1.0-r0 do_rootfs: Error executing a python function in exec_func_python() autogenerated:

 

The stack trace of python calls that resulted in this exception/failure was:

File: 'exec_func_python() autogenerated', lineno: 2, function: <module>

     0001:

 *** 0002:cve_check_write_rootfs_manifest(d)

     0003:

File: '/home/dwatkins/workspace/mgos/voyager1/upstream/yocto/poky/meta/classes/cve-check.bbclass', lineno: 213, function: cve_check_write_rootfs_manifest

     0209:

     0210:        link_path = os.path.join(deploy_dir, "%s.json" % link_name)

     0211:        manifest_path = d.getVar("CVE_CHECK_MANIFEST_JSON")

     0212:        bb.note("Generating JSON CVE manifest")

 *** 0213:        generate_json_report(json_summary_name, json_summary_link_name)

     0214:        bb.plain("Image CVE JSON report stored in: %s" % link_path)

     0215:}

     0216:

     0217:ROOTFS_POSTPROCESS_COMMAND:prepend = "${@'cve_check_write_rootfs_manifest; ' if d.getVar('CVE_CHECK_CREATE_MANIFEST') == '1' else ''}"

Exception: NameError: name 'json_summary_name' is not defined

 

ERROR: Logfile of failure stored in: /home/dwatkins/workspace/mgos/voyager1/build/tmp/work/mg90-poky-linux-gnueabi/omg-supplement-mfwimages/1.0-r0/temp/log.do_rootfs.16510

ERROR: Task (/home/dwatkins/workspace/mgos/voyager1/meta-mgos-core/recipes-images/images/omg-supplement-mfwimages.bb:do_rootfs) failed with exit code '1'

NOTE: Tasks Summary: Attempted 8400 tasks of which 7718 didn't need to be rerun and 1 failed.

NOTE: Generating JSON CVE summary

CVE report summary created at: /home/dwatkins/workspace/mgos/voyager1/build/tmp/log/cve/cve-summary.json

 

 

645c157befa (Davide Gardenal        2022-05-03 09:51:43 +0200 130)         json_summary_link_name = os.path.join(cvelogpath, d.getVar("CVE_CHECK_SUMMARY_FILE_NAME_JSON"))

645c157befa (Davide Gardenal        2022-05-03 09:51:43 +0200 131)         json_summary_name = os.path.join(cvelogpath, "%s-%s.json" % (cve_summary_name, timestamp))

645c157befa (Davide Gardenal        2022-05-03 09:51:43 +0200 132)         generate_json_report(json_summary_name, json_summary_link_name)

645c157befa (Davide Gardenal        2022-05-03 09:51:43 +0200 133)         bb.plain("CVE report summary created at: %s" % json_summary_link_name)

 

 

645c157befa (Davide Gardenal        2022-05-03 09:51:43 +0200 210)         link_path = os.path.join(deploy_dir, "%s.json" % link_name)

645c157befa (Davide Gardenal        2022-05-03 09:51:43 +0200 211)         manifest_path = d.getVar("CVE_CHECK_MANIFEST_JSON")

777f1d42b62 (Marta Rybczynska       2022-03-29 14:54:31 +0200 212)         bb.note("Generating JSON CVE manifest")

645c157befa (Davide Gardenal        2022-05-03 09:51:43 +0200 213)         generate_json_report(json_summary_name, json_summary_link_name)

645c157befa (Davide Gardenal        2022-05-03 09:51:43 +0200 214)         bb.plain("Image CVE JSON report stored in: %s" % link_path)

 

 

I am not sure whether we need to locally set up the “json_summary_name” in “python cve_check_write_rootfs_manifest ()” as was done inside “python cve_save_summary_handler ()”, or if something different was supposed to be passed to “generate_json_report()” instead.
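
For illustration, a minimal untested sketch of what I mean by setting it up locally, mirroring the lines from “cve_save_summary_handler ()” shown in the blame output above (the cvelogpath/timestamp variables are my assumption of what that handler uses, copied by hand rather than taken from an actual fix):

    # sketch only - mirrors cve_save_summary_handler(), not an actual fix
    import datetime
    cvelogpath = d.getVar("CVE_CHECK_SUMMARY_DIR")
    cve_summary_name = d.getVar("CVE_CHECK_SUMMARY_FILE_NAME")
    timestamp = datetime.datetime.now().strftime('%Y%m%d%H%M%S')
    json_summary_link_name = os.path.join(cvelogpath, d.getVar("CVE_CHECK_SUMMARY_FILE_NAME_JSON"))
    json_summary_name = os.path.join(cvelogpath, "%s-%s.json" % (cve_summary_name, timestamp))
    generate_json_report(json_summary_name, json_summary_link_name)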

 

 

Regards,

 

Darcy

 

Darcy Watkins ::  Senior Staff Engineer, Firmware

 

SIERRA WIRELESS

Direct  +1 604 233 7989   ::  Fax  +1 604 231 1109  ::  Main  +1 604 231 1100

13811 Wireless Way  :: Richmond, BC Canada V6V 3A4


dwatkins@... :: www.sierrawireless.com


Re: [RFC][WIP][honister] kernel-lab manual

Michael Opdenacker
 

Hi Tim

Many thanks for these instructions, and sorry for the late reply.
However, I wouldn't have forgotten to review it if you had copied the
docs@ mailing list ;-)

On 5/12/22 20:10, Tim Orling wrote:
I have the reStructuredText conversion far enough along for the
'kernel-lab' to share it now. Because I was last working on this for
Yocto Project Summit 2021.11, the current qemux86 base is on
'honister' (although I am upgrading it to the honister-3.4.4 tag).

Please realize there is a lot of history to this material and some of
it was done by folks that have left this mortal coil and some respect
for that posterity is included in this work. We can change and morph
in the future, once it has been captured close to what it is here.

I also have a separate workflow going for the Yocto Project Summit
2022.05 which is in Google Slides and is qemuarm64 based
('kirkstone'). Eventually I will find the time to update the
kernel-lab manual to follow suit, but our collective discussion may
impact that.

You can take a look at YP Summit 2021.11 to see a preview of what is
coming for YP Summit 2022.05 (once I figure out the pesky
printk/pr_info issue):
https://elinux.org/images/b/be/Yps2021.11-handson-kernel.pdf

Current working branch of kernel-lab manual:
https://github.com/moto-timo/yocto-docs/tree/timo/honister/kernel-lab

And the accompanying metadata training materials:
https://github.com/moto-timo/kernel-lab-layers/tree/wip-honister

The intent is that for a given release of the docs, we would have
exercises for  LTS, Stable and Mainline (really this means
current-stable, not -dev). Currently, LTS would be 5.10, Stable would
be 5.15 and Mainline would be 5.17.

The whole set of instructions looks very good and ready for inclusion once
the mentioned repository for the lab layers exists.

I'm starting to run them.

How should we proceed? I'd suggest that we:

* Publish the repository for the lab layers at the specified location
* Submit the sources to the docs@ mailing list for public review. I
have a few minor issues to report, and this could happen then.

What do you think?

So far, there's just one thing that bothers me a bit: the .bb or .conf
files that we are supposed to open could usefully be shown directly in
the documentation. It looks a bit strange to talk about the contents of
a file without showing it at the same time. I know there's a risk of
them getting out of sync with the actual sources.

Maybe we can find a way to include the contents of files from branches
in cloned repositories. This would be handy in many places...
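
For instance, if the lab layer were checked out next to the documentation sources, something along the lines of Sphinx's literalinclude directive might do it (just a sketch - the path below is made up):

    .. literalinclude:: ../kernel-lab-layers/meta-lab/conf/layer.conf
       :lines: 1-10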

Thanks again,
Cheers
Michael.

--
Michael Opdenacker, Bootlin
Embedded Linux and Kernel engineering
https://bootlin.com


Re: CVE metrics tracking from the autobuilder

Richard Purdie
 

Hi Anuj,

On Wed, 2022-05-25 at 14:38 +0000, Mittal, Anuj wrote:
On Wed, 2022-05-25 at 14:30 +0100, Richard Purdie wrote:


This is working for dunfell/kirkstone/master. It is enabled for
honister but doesn't work since the json CVE output for honister
isn't
there.

Not sure if we want to add the json CVE output to honister to enable
that for the short time that release has left?
Yeah, there is only a week left and I wasn't planning to take those
patches in my final pull request.

I will just disable it then, thanks for the info.

Cheers,

Richard


Re: CVE metrics tracking from the autobuilder

Anuj Mittal
 

Hi Richard,

On Wed, 2022-05-25 at 14:30 +0100, Richard Purdie wrote:
I'm happy to say that automatic CVE metric tracking is now on the
autobuilder and automatically feeding to:

https://autobuilder.yocto.io/pub/non-release/patchmetrics/

and the git repository that backs it:

https://git.yoctoproject.org/yocto-metrics/log/
This is very nice.


This is working for dunfell/kirkstone/master. It is enabled for
honister but doesn't work since the json CVE output for honister
isn't
there.

Not sure if we want to add the json CVE output to honister to enable
that for the short time that release has left?
Yeah, there is only a week left and I wasn't planning to take those
patches in my final pull request.

Thanks,

Anuj



I plan to run the autobuilder job powering this nightly.

Currently it adds a json file for each run into the yocto-metrics
repository. These are 6MB each though so we're going to get into
silly
amounts of data rather quickly so I may have to adjust it to just
write
the latest. It would also help the size to use tabs instead of spaces
for indentation.

The autobuilder job currently throws warnings but I think Ross said
he'd send a patch to allow that to be configurable.

Also, this doesn't send the CVE emails Steve currently sends. It
would
be possible to add, I'm hoping someone might like to send some
patches!

Cheers,

Richard





Need help in namespace journal implementation

Prashant Badsheshi <prashantsbemail@...>
 

Hi,

I am working on a Yocto-based project and I am trying to add namespace journal logging.

Can anyone share the steps to create namespace journal logging in a Yocto-based project?

It would also be helpful if there are any examples implementing namespace journals.
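
For context, what I am referring to is the plain systemd journal-namespace mechanism, roughly as below (the unit and namespace names are made up); what I am unsure about is how to wire this up cleanly in a Yocto recipe:

    # my-app.service (sketch; unit and namespace names are made up)
    [Service]
    ExecStart=/usr/bin/my-app
    LogNamespace=debugns

    # and on the target, read that namespace with:
    #   journalctl --namespace=debugns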

 

Thanks,

Prashant


CVE metrics tracking from the autobuilder

Richard Purdie
 

I'm happy to say that automatic CVE metric tracking is now on the
autobuilder and automatically feeding to:

https://autobuilder.yocto.io/pub/non-release/patchmetrics/

and the git repository that backs it:

https://git.yoctoproject.org/yocto-metrics/log/

This is working for dunfell/kirkstone/master. It is enabled for
honister but doesn't work since the json CVE output for honister isn't
there.

Not sure if we want to add the json CVE output to honister to enable
that for the short time that release has left?

I plan to run the autobuilder job powering this nightly.

Currently it adds a json file for each run into the yocto-metrics
repository. These are 6MB each though so we're going to get into silly
amounts of data rather quickly so I may have to adjust it to just write
the latest. It would also help the size to use tabs instead of spaces
for indentation.
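
(Assuming the script writes the file with json.dump(), that should just be a one-line change - sketch only, the file name and the data variable are placeholders:)

    import json

    with open("cve-metrics.json", "w") as f:
        json.dump(data, f, indent="\t")   # tabs instead of spaces
        # or drop the whitespace entirely:
        # json.dump(data, f, separators=(",", ":"))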

The autobuilder job currently throws warnings but I think Ross said
he'd send a patch to allow that to be configurable.

Also, this doesn't send the CVE emails Steve currently sends. It would
be possible to add, I'm hoping someone might like to send some patches!

Cheers,

Richard


Re: [meta-security][PATCH] meta-parsec: Update Parsec runtime tests

Armin Kuster
 

Very nice. This is much better than what I did.

many thanks,
Armin

On 5/24/22 11:05, Anton Antonov wrote:
Signed-off-by: Anton Antonov <Anton.Antonov@...>
---
meta-parsec/README.md | 65 +++++++++
meta-parsec/lib/oeqa/runtime/cases/parsec.py | 135 ++++++++++++++++--
.../images/security-parsec-image.bb | 5 +-
.../packagegroup-security-parsec.bb | 1 -
meta-tpm/classes/sanity-meta-tpm.bbclass | 4 +-
5 files changed, 191 insertions(+), 19 deletions(-)

diff --git a/meta-parsec/README.md b/meta-parsec/README.md
index 97026ea..f720cd2 100644
--- a/meta-parsec/README.md
+++ b/meta-parsec/README.md
@@ -88,6 +88,71 @@ https://github.com/meta-rust/cargo-bitbake
2. Run cargo-bitbake inside the repository. It will produce a BB file.
3. Create a new include file with SRC_URI and LIC_FILES_CHKSUM from the BB file.
+Automated Parsec testing with runqemu
+=====================================
+
+ The Yocto build system has the ability to run a series of automated tests for qemu images.
+All the tests are actually commands run on the target system over ssh.
+
+ Meta-parsec includes automated unittests which run end to end Parsec tests.
+The tests are run against:
+- all providers pre-configured in the Parsec config file included in the image.
+- PKCS11 and TPM providers with software backends if softhsm and
+ swtpm packages included in the image.
+
+Meta-parsec also contains a recipe for `security-parsec-image` image with Parsec,
+softhsm and swtpm included.
+
+ Please notice that the account you use to run bitbake should have access to `/dev/kvm`.
+You might need to change permissions or add the account into `kvm` unix group.
+
+1. Testing Parsec with your own image where `parsec-service` and `parsec-tool` are already included.
+
+- Add into your `local.conf`:
+```
+INHERIT += "testimage"
+TEST_SUITES = "ping ssh parsec"
+```
+- Build your image
+```bash
+bitbake <your-image>
+```
+- Run tests
+```bash
+bitbake <your-image> -c testimage
+```
+
+2. Testing Parsec with pre-defined `security-parsec-image` image.
+
+- Add into your `local.conf`:
+```
+DISTRO_FEATURES += " tpm2"
+INHERIT += "testimage"
+TEST_SUITES = "ping ssh parsec"
+```
+- Build security-parsec-image image
+```bash
+bitbake security-parsec-image
+```
+- Run tests
+```bash
+bitbake security-parsec-image -c testimage
+```
+
+Output of a successfull tests run should look similar to:
+```
+RESULTS:
+RESULTS - ping.PingTest.test_ping: PASSED (0.05s)
+RESULTS - ssh.SSHTest.test_ssh: PASSED (0.25s)
+RESULTS - parsec.ParsecTest.test_all_providers: PASSED (1.84s)
+RESULTS - parsec.ParsecTest.test_pkcs11_provider: PASSED (2.91s)
+RESULTS - parsec.ParsecTest.test_tpm_provider: PASSED (3.33s)
+SUMMARY:
+security-parsec-image () - Ran 5 tests in 8.386s
+security-parsec-image - OK - All required tests passed (successes=5, skipped=0, failures=0, errors=0)
+```
+
+
Manual testing with runqemu
===========================
diff --git a/meta-parsec/lib/oeqa/runtime/cases/parsec.py b/meta-parsec/lib/oeqa/runtime/cases/parsec.py
index 547f74c..d3d3f2e 100644
--- a/meta-parsec/lib/oeqa/runtime/cases/parsec.py
+++ b/meta-parsec/lib/oeqa/runtime/cases/parsec.py
@@ -1,33 +1,138 @@
# Copyright (C) 2022 Armin Kuster <akuster808@...>
+# Copyright (C) 2022 Anton Antonov <Anton.Antonov@...>
#
import re
+from tempfile import mkstemp
from oeqa.runtime.case import OERuntimeTestCase
from oeqa.core.decorator.depends import OETestDepends
from oeqa.runtime.decorator.package import OEHasPackage
+from oeqa.core.decorator.data import skipIfNotFeature
class ParsecTest(OERuntimeTestCase):
+ @classmethod
+ def setUpClass(cls):
+ cls.toml_file = '/etc/parsec/config.toml'
+
+ def setUp(self):
+ super(ParsecTest, self).setUp()
+ if 'systemd' in self.tc.td['DISTRO_FEATURES']:
+ self.parsec_status='systemctl status -l parsec'
+ self.parsec_reload='systemctl restart parsec'
+ else:
+ self.parsec_status='pgrep -l parsec'
+ self.parsec_reload='/etc/init.d/parsec reload'
+
+ def copy_subconfig(self, cfg, provider):
+ """ Copy a provider configuration to target and append it to Parsec config """
+
+ tmp_fd, tmp_path = mkstemp()
+ with os.fdopen(tmp_fd, 'w') as f:
+ f.write('\n'.join(cfg))
+
+ (status, output) = self.target.copyTo(tmp_path, "%s-%s" % (self.toml_file, provider))
+ self.assertEqual(status, 0, msg='File could not be copied.\n%s' % output)
+ status, output = self.target.run('cat %s-%s >>%s' % (self.toml_file, provider, self.toml_file))
+ os.remove(tmp_path)
+
+ def check_parsec_providers(self, provider=None, prov_id=None):
+ """ Get Parsec providers list and check for one if defined """
+
+ status, output = self.target.run(self.parsec_status)
+ self.assertEqual(status, 0, msg='Parsec service is not running.\n%s' % output)
+
+ status, output = self.target.run('parsec-tool list-providers')
+ self.assertEqual(status, 0, msg='Cannot get a list of Parsec providers.\n%s' % output)
+ if provider and prov_id:
+ self.assertIn("ID: 0x0%d (%s provider)" % (prov_id, provider),
+ output, msg='%s provider is not configured.' % provider)
+
+ def run_cli_tests(self, prov_id=None):
+ """ Run Parsec CLI end-to-end tests against one or all providers """
+
+ status, output = self.target.run('parsec-cli-tests.sh %s' % ("-%d" % prov_id if prov_id else ""))
+ self.assertEqual(status, 0, msg='Parsec CLI tests failed.\n %s' % output)
+
@OEHasPackage(['parsec-service'])
@OETestDepends(['ssh.SSHTest.test_ssh'])
- def test_parsec_service(self):
- toml_file = '/etc/parsec/config.tom'
- status, output = self.target.run('echo library_path = "/usr/lib/softhsm/libsofthsm2.so" >> %s' %(toml_file))
- status, output = self.target.run('echo slot_number = 0 >> %s' %(toml_file))
- status, output = self.target.run('echo user_pin = "123456" >> %s' %(toml_file))
+ def test_all_providers(self):
+ """ Test Parsec service with all pre-defined providers """
+
+ self.check_parsec_providers()
+ self.run_cli_tests()
+
+ def configure_tpm_provider(self):
+ """ Create Parsec TPM provider configuration """
+
+ cfg = [
+ '',
+ '[[provider]]',
+ 'name = "tpm-provider"',
+ 'provider_type = "Tpm"',
+ 'key_info_manager = "sqlite-manager"',
+ 'tcti = "swtpm:port=2321"',
+ 'owner_hierarchy_auth = ""',
+ ]
+ self.copy_subconfig(cfg, "TPM")
+
cmds = [
- '/etc/init.d/parsec stop',
- 'sleep 5',
- 'softhsm2-util --init-token --slot 0 --label "Parsec Service" --pin 123456 --so-pin 123456',
- 'for d in /var/lib/softhsm/tokens/*; do chown -R parsec $d; done',
'mkdir /tmp/myvtpm',
- 'swtpm socket --tpmstate dir=/tmp/myvtpm --tpm2 --ctrl type=tcp,port=2322 --server type=tcp,port=2321 --flags not-need-init &',
- 'export TPM2TOOLS_TCTI="swtpm:port=2321"',
- 'tpm2_startup -c',
- 'sleep 2',
- '/etc/init.d/parsec start',
- 'parsec-cli-tests.sh'
+ 'swtpm socket -d --tpmstate dir=/tmp/myvtpm --tpm2 --ctrl type=tcp,port=2322 --server type=tcp,port=2321 --flags not-need-init',
+ 'tpm2_startup -c -T "swtpm:port=2321"',
+ self.parsec_reload,
]
for cmd in cmds:
status, output = self.target.run(cmd)
self.assertEqual(status, 0, msg='\n'.join([cmd, output]))
+
+ @OEHasPackage(['parsec-service'])
+ @OEHasPackage(['swtpm'])
+ @skipIfNotFeature('tpm2','Test parsec_tpm_provider requires tpm2 to be in DISTRO_FEATURES')
+ @OETestDepends(['ssh.SSHTest.test_ssh', 'parsec.ParsecTest.test_all_providers'])
+ def test_tpm_provider(self):
+ """ Configure and test Parsec TPM provider with swtpm as a backend """
+
+ prov_id = 3
+ self.configure_tpm_provider()
+ self.check_parsec_providers("TPM", prov_id)
+ self.run_cli_tests(prov_id)
+
+ def configure_pkcs11_provider(self):
+ """ Create Parsec PKCS11 provider configuration """
+
+ status, output = self.target.run('softhsm2-util --init-token --free --label "Parsec Service" --pin 123456 --so-pin 123456')
+ self.assertEqual(status, 0, msg='Failed to init PKCS11 token.\n%s' % output)
+
+ slot = re.search('The token has been initialized and is reassigned to slot (\d*)', output)
+ if slot is None:
+ self.fail('Failed to get PKCS11 slot serial number.\n%s' % output)
+ self.assertNotEqual(slot.group(1), None, msg='Failed to get PKCS11 slot serial number.\n%s' % output)
+
+ cfg = [
+ '',
+ '[[provider]]',
+ 'name = "pkcs11-provider"',
+ 'provider_type = "Pkcs11"',
+ 'key_info_manager = "sqlite-manager"',
+ 'library_path = "/usr/lib/softhsm/libsofthsm2.so"',
+ 'slot_number = %s' % slot.group(1),
+ 'user_pin = "123456"',
+ 'allow_export = true',
+ ]
+ self.copy_subconfig(cfg, "PKCS11")
+
+ status, output = self.target.run('for d in /var/lib/softhsm/tokens/*; do chown -R parsec $d; done')
+ status, output = self.target.run(self.parsec_reload)
+ self.assertEqual(status, 0, msg='Failed to reload Parsec.\n%s' % output)
+
+ @OEHasPackage(['parsec-service'])
+ @OEHasPackage(['softhsm'])
+ @OETestDepends(['ssh.SSHTest.test_ssh', 'parsec.ParsecTest.test_all_providers'])
+ def test_pkcs11_provider(self):
+ """ Configure and test Parsec PKCS11 provider with softhsm as a backend """
+
+ prov_id = 2
+ self.configure_pkcs11_provider()
+ self.check_parsec_providers("PKCS #11", prov_id)
+ self.run_cli_tests(prov_id)
diff --git a/meta-parsec/recipes-core/images/security-parsec-image.bb b/meta-parsec/recipes-core/images/security-parsec-image.bb
index 2ddc543..7add74b 100644
--- a/meta-parsec/recipes-core/images/security-parsec-image.bb
+++ b/meta-parsec/recipes-core/images/security-parsec-image.bb
@@ -1,4 +1,4 @@
-DESCRIPTION = "A small image for building meta-parsec packages"
+DESCRIPTION = "A small image for testing Parsec service with MbedCrypto, TPM and PKCS11 providers"
inherit core-image
@@ -10,7 +10,8 @@ IMAGE_INSTALL = "\
packagegroup-security-tpm2 \
packagegroup-security-parsec \
swtpm \
- os-release"
+ softhsm \
+ os-release"
export IMAGE_BASENAME = "security-parsec-image"
diff --git a/meta-parsec/recipes-core/packagegroups/packagegroup-security-parsec.bb b/meta-parsec/recipes-core/packagegroups/packagegroup-security-parsec.bb
index b6c4f59..0af9c3d 100644
--- a/meta-parsec/recipes-core/packagegroups/packagegroup-security-parsec.bb
+++ b/meta-parsec/recipes-core/packagegroups/packagegroup-security-parsec.bb
@@ -11,7 +11,6 @@ PACKAGES = "\
SUMMARY:packagegroup-security-parsec = "Security Parsec"
RDEPENDS:packagegroup-security-parsec = "\
- softhsm \
parsec-tool \
parsec-service \
"
diff --git a/meta-tpm/classes/sanity-meta-tpm.bbclass b/meta-tpm/classes/sanity-meta-tpm.bbclass
index 2f8b52d..1ab03c8 100644
--- a/meta-tpm/classes/sanity-meta-tpm.bbclass
+++ b/meta-tpm/classes/sanity-meta-tpm.bbclass
@@ -2,7 +2,9 @@ addhandler tpm_machinecheck
tpm_machinecheck[eventmask] = "bb.event.SanityCheck"
python tpm_machinecheck() {
skip_check = e.data.getVar('SKIP_META_TPM_SANITY_CHECK') == "1"
- if 'tpm' not in e.data.getVar('DISTRO_FEATURES').split() and not skip_check:
+ if 'tpm' not in e.data.getVar('DISTRO_FEATURES').split() and \
+ 'tpm2' not in e.data.getVar('DISTRO_FEATURES').split() and \
+ not skip_check:
bb.warn("You have included the meta-tpm layer, but \
'tpm or tpm2' has not been enabled in your DISTRO_FEATURES. Some bbappend files \
and preferred version setting may not take effect. See the meta-tpm README \


[meta-security][PATCH] meta-parsec: Update Parsec runtime tests

Anton Antonov
 

Signed-off-by: Anton Antonov <Anton.Antonov@...>
---
meta-parsec/README.md | 65 +++++++++
meta-parsec/lib/oeqa/runtime/cases/parsec.py | 135 ++++++++++++++++--
.../images/security-parsec-image.bb | 5 +-
.../packagegroup-security-parsec.bb | 1 -
meta-tpm/classes/sanity-meta-tpm.bbclass | 4 +-
5 files changed, 191 insertions(+), 19 deletions(-)

diff --git a/meta-parsec/README.md b/meta-parsec/README.md
index 97026ea..f720cd2 100644
--- a/meta-parsec/README.md
+++ b/meta-parsec/README.md
@@ -88,6 +88,71 @@ https://github.com/meta-rust/cargo-bitbake
2. Run cargo-bitbake inside the repository. It will produce a BB file.
3. Create a new include file with SRC_URI and LIC_FILES_CHKSUM from the BB file.

+Automated Parsec testing with runqemu
+=====================================
+
+ The Yocto build system has the ability to run a series of automated tests for qemu images.
+All the tests are actually commands run on the target system over ssh.
+
+ Meta-parsec includes automated unittests which run end to end Parsec tests.
+The tests are run against:
+- all providers pre-configured in the Parsec config file included in the image.
+- PKCS11 and TPM providers with software backends if softhsm and
+ swtpm packages included in the image.
+
+Meta-parsec also contains a recipe for `security-parsec-image` image with Parsec,
+softhsm and swtpm included.
+
+ Please notice that the account you use to run bitbake should have access to `/dev/kvm`.
+You might need to change permissions or add the account into `kvm` unix group.
+
+1. Testing Parsec with your own image where `parsec-service` and `parsec-tool` are already included.
+
+- Add into your `local.conf`:
+```
+INHERIT += "testimage"
+TEST_SUITES = "ping ssh parsec"
+```
+- Build your image
+```bash
+bitbake <your-image>
+```
+- Run tests
+```bash
+bitbake <your-image> -c testimage
+```
+
+2. Testing Parsec with pre-defined `security-parsec-image` image.
+
+- Add into your `local.conf`:
+```
+DISTRO_FEATURES += " tpm2"
+INHERIT += "testimage"
+TEST_SUITES = "ping ssh parsec"
+```
+- Build security-parsec-image image
+```bash
+bitbake security-parsec-image
+```
+- Run tests
+```bash
+bitbake security-parsec-image -c testimage
+```
+
+Output of a successfull tests run should look similar to:
+```
+RESULTS:
+RESULTS - ping.PingTest.test_ping: PASSED (0.05s)
+RESULTS - ssh.SSHTest.test_ssh: PASSED (0.25s)
+RESULTS - parsec.ParsecTest.test_all_providers: PASSED (1.84s)
+RESULTS - parsec.ParsecTest.test_pkcs11_provider: PASSED (2.91s)
+RESULTS - parsec.ParsecTest.test_tpm_provider: PASSED (3.33s)
+SUMMARY:
+security-parsec-image () - Ran 5 tests in 8.386s
+security-parsec-image - OK - All required tests passed (successes=5, skipped=0, failures=0, errors=0)
+```
+
+
Manual testing with runqemu
===========================

diff --git a/meta-parsec/lib/oeqa/runtime/cases/parsec.py b/meta-parsec/lib/oeqa/runtime/cases/parsec.py
index 547f74c..d3d3f2e 100644
--- a/meta-parsec/lib/oeqa/runtime/cases/parsec.py
+++ b/meta-parsec/lib/oeqa/runtime/cases/parsec.py
@@ -1,33 +1,138 @@
# Copyright (C) 2022 Armin Kuster <akuster808@...>
+# Copyright (C) 2022 Anton Antonov <Anton.Antonov@...>
#
import re
+from tempfile import mkstemp

from oeqa.runtime.case import OERuntimeTestCase
from oeqa.core.decorator.depends import OETestDepends
from oeqa.runtime.decorator.package import OEHasPackage
+from oeqa.core.decorator.data import skipIfNotFeature

class ParsecTest(OERuntimeTestCase):
+ @classmethod
+ def setUpClass(cls):
+ cls.toml_file = '/etc/parsec/config.toml'
+
+ def setUp(self):
+ super(ParsecTest, self).setUp()
+ if 'systemd' in self.tc.td['DISTRO_FEATURES']:
+ self.parsec_status='systemctl status -l parsec'
+ self.parsec_reload='systemctl restart parsec'
+ else:
+ self.parsec_status='pgrep -l parsec'
+ self.parsec_reload='/etc/init.d/parsec reload'
+
+ def copy_subconfig(self, cfg, provider):
+ """ Copy a provider configuration to target and append it to Parsec config """
+
+ tmp_fd, tmp_path = mkstemp()
+ with os.fdopen(tmp_fd, 'w') as f:
+ f.write('\n'.join(cfg))
+
+ (status, output) = self.target.copyTo(tmp_path, "%s-%s" % (self.toml_file, provider))
+ self.assertEqual(status, 0, msg='File could not be copied.\n%s' % output)
+ status, output = self.target.run('cat %s-%s >>%s' % (self.toml_file, provider, self.toml_file))
+ os.remove(tmp_path)
+
+ def check_parsec_providers(self, provider=None, prov_id=None):
+ """ Get Parsec providers list and check for one if defined """
+
+ status, output = self.target.run(self.parsec_status)
+ self.assertEqual(status, 0, msg='Parsec service is not running.\n%s' % output)
+
+ status, output = self.target.run('parsec-tool list-providers')
+ self.assertEqual(status, 0, msg='Cannot get a list of Parsec providers.\n%s' % output)
+ if provider and prov_id:
+ self.assertIn("ID: 0x0%d (%s provider)" % (prov_id, provider),
+ output, msg='%s provider is not configured.' % provider)
+
+ def run_cli_tests(self, prov_id=None):
+ """ Run Parsec CLI end-to-end tests against one or all providers """
+
+ status, output = self.target.run('parsec-cli-tests.sh %s' % ("-%d" % prov_id if prov_id else ""))
+ self.assertEqual(status, 0, msg='Parsec CLI tests failed.\n %s' % output)
+
@OEHasPackage(['parsec-service'])
@OETestDepends(['ssh.SSHTest.test_ssh'])
- def test_parsec_service(self):
- toml_file = '/etc/parsec/config.tom'
- status, output = self.target.run('echo library_path = "/usr/lib/softhsm/libsofthsm2.so" >> %s' %(toml_file))
- status, output = self.target.run('echo slot_number = 0 >> %s' %(toml_file))
- status, output = self.target.run('echo user_pin = "123456" >> %s' %(toml_file))
+ def test_all_providers(self):
+ """ Test Parsec service with all pre-defined providers """
+
+ self.check_parsec_providers()
+ self.run_cli_tests()
+
+ def configure_tpm_provider(self):
+ """ Create Parsec TPM provider configuration """
+
+ cfg = [
+ '',
+ '[[provider]]',
+ 'name = "tpm-provider"',
+ 'provider_type = "Tpm"',
+ 'key_info_manager = "sqlite-manager"',
+ 'tcti = "swtpm:port=2321"',
+ 'owner_hierarchy_auth = ""',
+ ]
+ self.copy_subconfig(cfg, "TPM")
+
cmds = [
- '/etc/init.d/parsec stop',
- 'sleep 5',
- 'softhsm2-util --init-token --slot 0 --label "Parsec Service" --pin 123456 --so-pin 123456',
- 'for d in /var/lib/softhsm/tokens/*; do chown -R parsec $d; done',
'mkdir /tmp/myvtpm',
- 'swtpm socket --tpmstate dir=/tmp/myvtpm --tpm2 --ctrl type=tcp,port=2322 --server type=tcp,port=2321 --flags not-need-init &',
- 'export TPM2TOOLS_TCTI="swtpm:port=2321"',
- 'tpm2_startup -c',
- 'sleep 2',
- '/etc/init.d/parsec start',
- 'parsec-cli-tests.sh'
+ 'swtpm socket -d --tpmstate dir=/tmp/myvtpm --tpm2 --ctrl type=tcp,port=2322 --server type=tcp,port=2321 --flags not-need-init',
+ 'tpm2_startup -c -T "swtpm:port=2321"',
+ self.parsec_reload,
]

for cmd in cmds:
status, output = self.target.run(cmd)
self.assertEqual(status, 0, msg='\n'.join([cmd, output]))
+
+ @OEHasPackage(['parsec-service'])
+ @OEHasPackage(['swtpm'])
+ @skipIfNotFeature('tpm2','Test parsec_tpm_provider requires tpm2 to be in DISTRO_FEATURES')
+ @OETestDepends(['ssh.SSHTest.test_ssh', 'parsec.ParsecTest.test_all_providers'])
+ def test_tpm_provider(self):
+ """ Configure and test Parsec TPM provider with swtpm as a backend """
+
+ prov_id = 3
+ self.configure_tpm_provider()
+ self.check_parsec_providers("TPM", prov_id)
+ self.run_cli_tests(prov_id)
+
+ def configure_pkcs11_provider(self):
+ """ Create Parsec PKCS11 provider configuration """
+
+ status, output = self.target.run('softhsm2-util --init-token --free --label "Parsec Service" --pin 123456 --so-pin 123456')
+ self.assertEqual(status, 0, msg='Failed to init PKCS11 token.\n%s' % output)
+
+ slot = re.search('The token has been initialized and is reassigned to slot (\d*)', output)
+ if slot is None:
+ self.fail('Failed to get PKCS11 slot serial number.\n%s' % output)
+ self.assertNotEqual(slot.group(1), None, msg='Failed to get PKCS11 slot serial number.\n%s' % output)
+
+ cfg = [
+ '',
+ '[[provider]]',
+ 'name = "pkcs11-provider"',
+ 'provider_type = "Pkcs11"',
+ 'key_info_manager = "sqlite-manager"',
+ 'library_path = "/usr/lib/softhsm/libsofthsm2.so"',
+ 'slot_number = %s' % slot.group(1),
+ 'user_pin = "123456"',
+ 'allow_export = true',
+ ]
+ self.copy_subconfig(cfg, "PKCS11")
+
+ status, output = self.target.run('for d in /var/lib/softhsm/tokens/*; do chown -R parsec $d; done')
+ status, output = self.target.run(self.parsec_reload)
+ self.assertEqual(status, 0, msg='Failed to reload Parsec.\n%s' % output)
+
+ @OEHasPackage(['parsec-service'])
+ @OEHasPackage(['softhsm'])
+ @OETestDepends(['ssh.SSHTest.test_ssh', 'parsec.ParsecTest.test_all_providers'])
+ def test_pkcs11_provider(self):
+ """ Configure and test Parsec PKCS11 provider with softhsm as a backend """
+
+ prov_id = 2
+ self.configure_pkcs11_provider()
+ self.check_parsec_providers("PKCS #11", prov_id)
+ self.run_cli_tests(prov_id)
diff --git a/meta-parsec/recipes-core/images/security-parsec-image.bb b/meta-parsec/recipes-core/images/security-parsec-image.bb
index 2ddc543..7add74b 100644
--- a/meta-parsec/recipes-core/images/security-parsec-image.bb
+++ b/meta-parsec/recipes-core/images/security-parsec-image.bb
@@ -1,4 +1,4 @@
-DESCRIPTION = "A small image for building meta-parsec packages"
+DESCRIPTION = "A small image for testing Parsec service with MbedCrypto, TPM and PKCS11 providers"

inherit core-image

@@ -10,7 +10,8 @@ IMAGE_INSTALL = "\
packagegroup-security-tpm2 \
packagegroup-security-parsec \
swtpm \
- os-release"
+ softhsm \
+ os-release"

export IMAGE_BASENAME = "security-parsec-image"

diff --git a/meta-parsec/recipes-core/packagegroups/packagegroup-security-parsec.bb b/meta-parsec/recipes-core/packagegroups/packagegroup-security-parsec.bb
index b6c4f59..0af9c3d 100644
--- a/meta-parsec/recipes-core/packagegroups/packagegroup-security-parsec.bb
+++ b/meta-parsec/recipes-core/packagegroups/packagegroup-security-parsec.bb
@@ -11,7 +11,6 @@ PACKAGES = "\

SUMMARY:packagegroup-security-parsec = "Security Parsec"
RDEPENDS:packagegroup-security-parsec = "\
- softhsm \
parsec-tool \
parsec-service \
"
diff --git a/meta-tpm/classes/sanity-meta-tpm.bbclass b/meta-tpm/classes/sanity-meta-tpm.bbclass
index 2f8b52d..1ab03c8 100644
--- a/meta-tpm/classes/sanity-meta-tpm.bbclass
+++ b/meta-tpm/classes/sanity-meta-tpm.bbclass
@@ -2,7 +2,9 @@ addhandler tpm_machinecheck
tpm_machinecheck[eventmask] = "bb.event.SanityCheck"
python tpm_machinecheck() {
skip_check = e.data.getVar('SKIP_META_TPM_SANITY_CHECK') == "1"
- if 'tpm' not in e.data.getVar('DISTRO_FEATURES').split() and not skip_check:
+ if 'tpm' not in e.data.getVar('DISTRO_FEATURES').split() and \
+ 'tpm2' not in e.data.getVar('DISTRO_FEATURES').split() and \
+ not skip_check:
bb.warn("You have included the meta-tpm layer, but \
'tpm or tpm2' has not been enabled in your DISTRO_FEATURES. Some bbappend files \
and preferred version setting may not take effect. See the meta-tpm README \
--
2.25.1


OpenEmbedded Happy Hour May 25 5pm/1700 UTC

Tim Orling
 

All,

You are cordially invited to the next OpenEmbedded Happy Hour on May 25
for Europe/Americas time zones @ 1700/5pm UTC (1pm ET / 10am PT).


Regards,
Tim "moto-timo" Orling


[meta-selinux][master][kirkstone][PATCH 2/2] refpolicy: add file context for findfs alternative

Yi Zhao
 

Add file context for findfs alternative which is provided by util-linux.

Signed-off-by: Yi Zhao <yi.zhao@...>
---
...s-apply-policy-to-findfs-alternative.patch | 29 +++++++++++++++++++
.../refpolicy/refpolicy_common.inc | 1 +
2 files changed, 30 insertions(+)
create mode 100644 recipes-security/refpolicy/refpolicy/0069-fc-fstools-apply-policy-to-findfs-alternative.patch

diff --git a/recipes-security/refpolicy/refpolicy/0069-fc-fstools-apply-policy-to-findfs-alternative.patch b/recipes-security/refpolicy/refpolicy/0069-fc-fstools-apply-policy-to-findfs-alternative.patch
new file mode 100644
index 0000000..6535a4b
--- /dev/null
+++ b/recipes-security/refpolicy/refpolicy/0069-fc-fstools-apply-policy-to-findfs-alternative.patch
@@ -0,0 +1,29 @@
+From 3e3ec39659ae068d20efbb5f13054d90960c3c3f Mon Sep 17 00:00:00 2001
+From: Yi Zhao <yi.zhao@...>
+Date: Thu, 19 May 2022 16:51:49 +0800
+Subject: [PATCH] fc/fstools: apply policy to findfs alternative
+
+Add file context for findfs alternative which is provided by util-linux.
+
+Upstream-Status: Inappropriate [embedded specific]
+
+Signed-off-by: Yi Zhao <yi.zhao@...>
+---
+ policy/modules/system/fstools.fc | 1 +
+ 1 file changed, 1 insertion(+)
+
+diff --git a/policy/modules/system/fstools.fc b/policy/modules/system/fstools.fc
+index bef711850..91be0ef3d 100644
+--- a/policy/modules/system/fstools.fc
++++ b/policy/modules/system/fstools.fc
+@@ -77,6 +77,7 @@
+ /usr/sbin/fdisk -- gen_context(system_u:object_r:fsadm_exec_t,s0)
+ /usr/sbin/fdisk\.util-linux -- gen_context(system_u:object_r:fsadm_exec_t,s0)
+ /usr/sbin/findfs -- gen_context(system_u:object_r:fsadm_exec_t,s0)
++/usr/sbin/findfs\.util-linux -- gen_context(system_u:object_r:fsadm_exec_t,s0)
+ /usr/sbin/fsck.* -- gen_context(system_u:object_r:fsadm_exec_t,s0)
+ /usr/sbin/gdisk -- gen_context(system_u:object_r:fsadm_exec_t,s0)
+ /usr/sbin/hdparm -- gen_context(system_u:object_r:fsadm_exec_t,s0)
+--
+2.25.1
+
diff --git a/recipes-security/refpolicy/refpolicy_common.inc b/recipes-security/refpolicy/refpolicy_common.inc
index 1d5a5c0..bb0c0dd 100644
--- a/recipes-security/refpolicy/refpolicy_common.inc
+++ b/recipes-security/refpolicy/refpolicy_common.inc
@@ -84,6 +84,7 @@ SRC_URI += " \
file://0066-systemd-add-missing-file-context-for-run-systemd-net.patch \
file://0067-systemd-add-file-contexts-for-systemd-network-genera.patch \
file://0068-systemd-udev-allow-udev-to-read-systemd-networkd-run.patch \
+ file://0069-fc-fstools-apply-policy-to-findfs-alternative.patch \
"

S = "${WORKDIR}/refpolicy"
--
2.25.1


[meta-selinux][master][kirkstone][PATCH 1/2] refpolicy: backport patches to fix policy issues for systemd 250

Yi Zhao
 

Backport the following patches to fix systemd-resolved and
systemd-networkd policy issues:
systemd-systemd-resolved-is-linked-to-libselinux.patch
sysnetwork-systemd-allow-DNS-resolution-over-io.syst.patch
term-init-allow-systemd-to-watch-and-watch-reads-on-.patch
systemd-add-file-transition-for-systemd-networkd-run.patch
systemd-add-missing-file-context-for-run-systemd-net.patch
systemd-add-file-contexts-for-systemd-network-genera.patch
systemd-udev-allow-udev-to-read-systemd-networkd-run.patch

Signed-off-by: Yi Zhao <yi.zhao@...>
---
...emd-resolved-is-linked-to-libselinux.patch | 33 +++++++
...md-allow-DNS-resolution-over-io.syst.patch | 63 +++++++++++++
...systemd-to-watch-and-watch-reads-on-.patch | 94 +++++++++++++++++++
...-transition-for-systemd-networkd-run.patch | 32 +++++++
...ing-file-context-for-run-systemd-net.patch | 29 ++++++
...-contexts-for-systemd-network-genera.patch | 38 ++++++++
...ow-udev-to-read-systemd-networkd-run.patch | 34 +++++++
.../refpolicy/refpolicy_common.inc | 7 ++
8 files changed, 330 insertions(+)
create mode 100644 recipes-security/refpolicy/refpolicy/0062-systemd-systemd-resolved-is-linked-to-libselinux.patch
create mode 100644 recipes-security/refpolicy/refpolicy/0063-sysnetwork-systemd-allow-DNS-resolution-over-io.syst.patch
create mode 100644 recipes-security/refpolicy/refpolicy/0064-term-init-allow-systemd-to-watch-and-watch-reads-on-.patch
create mode 100644 recipes-security/refpolicy/refpolicy/0065-systemd-add-file-transition-for-systemd-networkd-run.patch
create mode 100644 recipes-security/refpolicy/refpolicy/0066-systemd-add-missing-file-context-for-run-systemd-net.patch
create mode 100644 recipes-security/refpolicy/refpolicy/0067-systemd-add-file-contexts-for-systemd-network-genera.patch
create mode 100644 recipes-security/refpolicy/refpolicy/0068-systemd-udev-allow-udev-to-read-systemd-networkd-run.patch

diff --git a/recipes-security/refpolicy/refpolicy/0062-systemd-systemd-resolved-is-linked-to-libselinux.patch b/recipes-security/refpolicy/refpolicy/0062-systemd-systemd-resolved-is-linked-to-libselinux.patch
new file mode 100644
index 0000000..e0db7d3
--- /dev/null
+++ b/recipes-security/refpolicy/refpolicy/0062-systemd-systemd-resolved-is-linked-to-libselinux.patch
@@ -0,0 +1,33 @@
+From 52a4222397f5d3b28ca15a45bb2ace209a4afc3e Mon Sep 17 00:00:00 2001
+From: Kenton Groombridge <me@...>
+Date: Thu, 31 Mar 2022 13:09:10 -0400
+Subject: [PATCH] systemd: systemd-resolved is linked to libselinux
+
+systemd-resolved as of systemd 250 fails to start with this error:
+
+Failed to initialize SELinux labeling handle: No such file or directory
+
+Upstream-Status: Backport
+[https://github.com/SELinuxProject/refpolicy/commit/3a22db2410de479e5baa88f3f668a7a4ac198950]
+
+Signed-off-by: Kenton Groombridge <me@...>
+Signed-off-by: Yi Zhao <yi.zhao@...>
+---
+ policy/modules/system/systemd.te | 1 +
+ 1 file changed, 1 insertion(+)
+
+diff --git a/policy/modules/system/systemd.te b/policy/modules/system/systemd.te
+index 8cea6baa1..beb301cc6 100644
+--- a/policy/modules/system/systemd.te
++++ b/policy/modules/system/systemd.te
+@@ -1261,6 +1261,7 @@ fs_getattr_cgroup(systemd_resolved_t)
+
+ init_dgram_send(systemd_resolved_t)
+
++seutil_libselinux_linked(systemd_resolved_t)
+ seutil_read_file_contexts(systemd_resolved_t)
+
+ systemd_log_parse_environment(systemd_resolved_t)
+--
+2.25.1
+
diff --git a/recipes-security/refpolicy/refpolicy/0063-sysnetwork-systemd-allow-DNS-resolution-over-io.syst.patch b/recipes-security/refpolicy/refpolicy/0063-sysnetwork-systemd-allow-DNS-resolution-over-io.syst.patch
new file mode 100644
index 0000000..63da7cd
--- /dev/null
+++ b/recipes-security/refpolicy/refpolicy/0063-sysnetwork-systemd-allow-DNS-resolution-over-io.syst.patch
@@ -0,0 +1,63 @@
+From 1ba0911e157c64ea15636c5707f38f1bdc9a46c8 Mon Sep 17 00:00:00 2001
+From: Kenton Groombridge <me@...>
+Date: Wed, 27 Apr 2022 01:09:52 -0400
+Subject: [PATCH] sysnetwork, systemd: allow DNS resolution over
+ io.systemd.Resolve
+
+Upstream-Status: Backport
+[https://github.com/SELinuxProject/refpolicy/commit/1a0acc9c0d8c7c49ad4ca2cabd44bc66450f45e0]
+
+Signed-off-by: Kenton Groombridge <me@...>
+Signed-off-by: Yi Zhao <yi.zhao@...>
+---
+ policy/modules/system/sysnetwork.if | 1 +
+ policy/modules/system/systemd.if | 21 +++++++++++++++++++++
+ 2 files changed, 22 insertions(+)
+
+diff --git a/policy/modules/system/sysnetwork.if b/policy/modules/system/sysnetwork.if
+index 8664a67c8..140d48508 100644
+--- a/policy/modules/system/sysnetwork.if
++++ b/policy/modules/system/sysnetwork.if
+@@ -844,6 +844,7 @@ interface(`sysnet_dns_name_resolve',`
+ ifdef(`init_systemd',`
+ optional_policy(`
+ systemd_dbus_chat_resolved($1)
++ systemd_stream_connect_resolved($1)
+ ')
+ # This seems needed when the mymachines NSS module is used
+ optional_policy(`
+diff --git a/policy/modules/system/systemd.if b/policy/modules/system/systemd.if
+index 5f2038f22..9143fb4c0 100644
+--- a/policy/modules/system/systemd.if
++++ b/policy/modules/system/systemd.if
+@@ -1835,6 +1835,27 @@ interface(`systemd_tmpfilesd_managed',`
+ ')
+ ')
+
++#######################################
++## <summary>
++## Connect to systemd resolved over
++## /run/systemd/resolve/io.systemd.Resolve .
++## </summary>
++## <param name="domain">
++## <summary>
++## Domain allowed access.
++## </summary>
++## </param>
++#
++interface(`systemd_stream_connect_resolved',`
++ gen_require(`
++ type systemd_resolved_t;
++ type systemd_resolved_runtime_t;
++ ')
++
++ files_search_runtime($1)
++ stream_connect_pattern($1, systemd_resolved_runtime_t, systemd_resolved_runtime_t, systemd_resolved_t)
++')
++
+ ########################################
+ ## <summary>
+ ## Send and receive messages from
+--
+2.25.1
+
diff --git a/recipes-security/refpolicy/refpolicy/0064-term-init-allow-systemd-to-watch-and-watch-reads-on-.patch b/recipes-security/refpolicy/refpolicy/0064-term-init-allow-systemd-to-watch-and-watch-reads-on-.patch
new file mode 100644
index 0000000..88f070d
--- /dev/null
+++ b/recipes-security/refpolicy/refpolicy/0064-term-init-allow-systemd-to-watch-and-watch-reads-on-.patch
@@ -0,0 +1,94 @@
+From 50670946f04257cc2110facbc61884e2cf0d8327 Mon Sep 17 00:00:00 2001
+From: Kenton Groombridge <me@...>
+Date: Fri, 6 May 2022 21:16:29 -0400
+Subject: [PATCH] term, init: allow systemd to watch and watch reads on
+ unallocated ttys
+
+As of systemd 250, systemd needs to be able to add a watch on and watch
+reads on unallocated ttys in order to start getty.
+
+systemd[55548]: getty@...: Failed to set up standard input: Permission denied
+systemd[55548]: getty@...: Failed at step STDIN spawning /sbin/agetty: Permission denied
+
+time->Fri May 6 21:17:58 2022
+type=PROCTITLE msg=audit(1651886278.452:1770): proctitle="(agetty)"
+type=PATH msg=audit(1651886278.452:1770): item=0 name="/dev/tty1" inode=18 dev=00:05 mode=020620 ouid=0 ogid=5 rdev=04:01 obj=system_u:object_r:tty_device_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
+type=CWD msg=audit(1651886278.452:1770): cwd="/"
+type=SYSCALL msg=audit(1651886278.452:1770): arch=c000003e syscall=254 success=no exit=-13 a0=3 a1=60ba5c21e020 a2=18 a3=23 items=1 ppid=1 pid=55551 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(agetty)" exe="/lib/systemd/systemd" subj=system_u:system_r:init_t:s0 key=(null)
+type=AVC msg=audit(1651886278.452:1770): avc: denied { watch watch_reads } for pid=55551 comm="(agetty)" path="/dev/tty1" dev="devtmpfs" ino=18 scontext=system_u:system_r:init_t:s0 tcontext=system_u:object_r:tty_device_t:s0 tclass=chr_file permissive=0
+
+Upstream-Status: Backport
+[https://github.com/SELinuxProject/refpolicy/commit/308ab9f69a4623f5dace8da151e70c6316f055a8]
+
+Signed-off-by: Kenton Groombridge <me@...>
+Signed-off-by: Yi Zhao <yi.zhao@...>
+---
+ policy/modules/kernel/terminal.if | 38 +++++++++++++++++++++++++++++++
+ policy/modules/system/init.te | 2 ++
+ 2 files changed, 40 insertions(+)
+
+diff --git a/policy/modules/kernel/terminal.if b/policy/modules/kernel/terminal.if
+index e8c0735eb..6e9f654ac 100644
+--- a/policy/modules/kernel/terminal.if
++++ b/policy/modules/kernel/terminal.if
+@@ -1287,6 +1287,44 @@ interface(`term_dontaudit_use_unallocated_ttys',`
+ dontaudit $1 tty_device_t:chr_file rw_chr_file_perms;
+ ')
+
++########################################
++## <summary>
++## Watch unallocated ttys.
++## </summary>
++## <param name="domain">
++## <summary>
++## Domain allowed access.
++## </summary>
++## </param>
++#
++interface(`term_watch_unallocated_ttys',`
++ gen_require(`
++ type tty_device_t;
++ ')
++
++ dev_list_all_dev_nodes($1)
++ allow $1 tty_device_t:chr_file watch;
++')
++
++########################################
++## <summary>
++## Watch reads on unallocated ttys.
++## </summary>
++## <param name="domain">
++## <summary>
++## Domain allowed access.
++## </summary>
++## </param>
++#
++interface(`term_watch_reads_unallocated_ttys',`
++ gen_require(`
++ type tty_device_t;
++ ')
++
++ dev_list_all_dev_nodes($1)
++ allow $1 tty_device_t:chr_file watch_reads;
++')
++
+ ########################################
+ ## <summary>
+ ## Get the attributes of all tty device nodes.
+diff --git a/policy/modules/system/init.te b/policy/modules/system/init.te
+index 5a19f0e43..24cef0924 100644
+--- a/policy/modules/system/init.te
++++ b/policy/modules/system/init.te
+@@ -518,6 +518,8 @@ ifdef(`init_systemd',`
+ term_create_devpts_dirs(init_t)
+ term_create_ptmx(init_t)
+ term_create_controlling_term(init_t)
++ term_watch_unallocated_ttys(init_t)
++ term_watch_reads_unallocated_ttys(init_t)
+
+ # udevd is a "systemd kobject uevent socket activated daemon"
+ udev_create_kobject_uevent_sockets(init_t)
+--
+2.25.1
+
diff --git a/recipes-security/refpolicy/refpolicy/0065-systemd-add-file-transition-for-systemd-networkd-run.patch b/recipes-security/refpolicy/refpolicy/0065-systemd-add-file-transition-for-systemd-networkd-run.patch
new file mode 100644
index 0000000..1029490
--- /dev/null
+++ b/recipes-security/refpolicy/refpolicy/0065-systemd-add-file-transition-for-systemd-networkd-run.patch
@@ -0,0 +1,32 @@
+From 6f8a8ecd8bafd6e8a3515b53db2a2982a02ff254 Mon Sep 17 00:00:00 2001
+From: Kenton Groombridge <me@...>
+Date: Thu, 31 Mar 2022 13:22:37 -0400
+Subject: [PATCH] systemd: add file transition for systemd-networkd runtime
+
+systemd-networkd creates the /run/systemd/network directory which should
+be labeled appropriately.
+
+Upstream-Status: Backport
+[https://github.com/SELinuxProject/refpolicy/commit/663b62f27cb12c22f056eba9326cf3f7f78d8a9e]
+
+Signed-off-by: Kenton Groombridge <me@...>
+Signed-off-by: Yi Zhao <yi.zhao@...>
+---
+ policy/modules/system/systemd.te | 1 +
+ 1 file changed, 1 insertion(+)
+
+diff --git a/policy/modules/system/systemd.te b/policy/modules/system/systemd.te
+index beb301cc6..654c6a42a 100644
+--- a/policy/modules/system/systemd.te
++++ b/policy/modules/system/systemd.te
+@@ -917,6 +917,7 @@ auth_use_nsswitch(systemd_networkd_t)
+
+ init_dgram_send(systemd_networkd_t)
+ init_read_state(systemd_networkd_t)
++init_runtime_filetrans(systemd_networkd_t, systemd_networkd_runtime_t, dir)
+
+ logging_send_syslog_msg(systemd_networkd_t)
+
+--
+2.25.1
+
diff --git a/recipes-security/refpolicy/refpolicy/0066-systemd-add-missing-file-context-for-run-systemd-net.patch b/recipes-security/refpolicy/refpolicy/0066-systemd-add-missing-file-context-for-run-systemd-net.patch
new file mode 100644
index 0000000..f84eb4a
--- /dev/null
+++ b/recipes-security/refpolicy/refpolicy/0066-systemd-add-missing-file-context-for-run-systemd-net.patch
@@ -0,0 +1,29 @@
+From 2e3f371b59bee343c42e4c69495df0f3719b6e24 Mon Sep 17 00:00:00 2001
+From: Kenton Groombridge <me@...>
+Date: Sat, 2 Apr 2022 15:44:01 -0400
+Subject: [PATCH] systemd: add missing file context for /run/systemd/network
+
+Upstream-Status: Backport
+[https://github.com/SELinuxProject/refpolicy/commit/f2fe1ae15485da7b6269b7d0d7dbed9a834f1876]
+
+Signed-off-by: Kenton Groombridge <me@...>
+Signed-off-by: Yi Zhao <yi.zhao@...>
+---
+ policy/modules/system/systemd.fc | 1 +
+ 1 file changed, 1 insertion(+)
+
+diff --git a/policy/modules/system/systemd.fc b/policy/modules/system/systemd.fc
+index 34db8c034..d21914227 100644
+--- a/policy/modules/system/systemd.fc
++++ b/policy/modules/system/systemd.fc
+@@ -85,6 +85,7 @@ HOME_DIR/\.local/share/systemd(/.*)? gen_context(system_u:object_r:systemd_data
+
+ /run/systemd/ask-password(/.*)? gen_context(system_u:object_r:systemd_passwd_runtime_t,s0)
+ /run/systemd/ask-password-block(/.*)? gen_context(system_u:object_r:systemd_passwd_runtime_t,s0)
++/run/systemd/network(/.*)? gen_context(system_u:object_r:systemd_networkd_runtime_t,s0)
+ /run/systemd/resolve(/.*)? gen_context(system_u:object_r:systemd_resolved_runtime_t,s0)
+ /run/systemd/seats(/.*)? gen_context(system_u:object_r:systemd_sessions_runtime_t,s0)
+ /run/systemd/sessions(/.*)? gen_context(system_u:object_r:systemd_sessions_runtime_t,s0)
+--
+2.25.1
+
diff --git a/recipes-security/refpolicy/refpolicy/0067-systemd-add-file-contexts-for-systemd-network-genera.patch b/recipes-security/refpolicy/refpolicy/0067-systemd-add-file-contexts-for-systemd-network-genera.patch
new file mode 100644
index 0000000..0aaf096
--- /dev/null
+++ b/recipes-security/refpolicy/refpolicy/0067-systemd-add-file-contexts-for-systemd-network-genera.patch
@@ -0,0 +1,38 @@
+From 143d339b2e6611c56cd0210279757ebee9632731 Mon Sep 17 00:00:00 2001
+From: Kenton Groombridge <me@...>
+Date: Thu, 19 May 2022 11:42:51 -0400
+Subject: [PATCH] systemd: add file contexts for systemd-network-generator
+
+Upstream-Status: Backport
+[https://github.com/SELinuxProject/refpolicy/commit/73adba0a39b7409bc4bbfa0e962108c2b1e5f2a5]
+
+Thanks-To: Zhao Yi
+Signed-off-by: Kenton Groombridge <me@...>
+Signed-off-by: Yi Zhao <yi.zhao@...>
+---
+ policy/modules/system/systemd.fc | 2 ++
+ 1 file changed, 2 insertions(+)
+
+diff --git a/policy/modules/system/systemd.fc b/policy/modules/system/systemd.fc
+index d21914227..1a35bd65c 100644
+--- a/policy/modules/system/systemd.fc
++++ b/policy/modules/system/systemd.fc
+@@ -35,6 +35,7 @@
+ /usr/lib/systemd/systemd-machined -- gen_context(system_u:object_r:systemd_machined_exec_t,s0)
+ /usr/lib/systemd/systemd-modules-load -- gen_context(system_u:object_r:systemd_modules_load_exec_t,s0)
+ /usr/lib/systemd/systemd-networkd -- gen_context(system_u:object_r:systemd_networkd_exec_t,s0)
++/usr/lib/systemd/systemd-network-generator -- gen_context(system_u:object_r:systemd_networkd_exec_t,s0)
+ /usr/lib/systemd/systemd-pstore -- gen_context(system_u:object_r:systemd_pstore_exec_t,s0)
+ /usr/lib/systemd/systemd-resolved -- gen_context(system_u:object_r:systemd_resolved_exec_t,s0)
+ /usr/lib/systemd/systemd-rfkill -- gen_context(system_u:object_r:systemd_rfkill_exec_t,s0)
+@@ -60,6 +61,7 @@ HOME_DIR/\.local/share/systemd(/.*)? gen_context(system_u:object_r:systemd_data
+ /usr/lib/systemd/system/systemd-backlight.* -- gen_context(system_u:object_r:systemd_backlight_unit_t,s0)
+ /usr/lib/systemd/system/systemd-binfmt.* -- gen_context(system_u:object_r:systemd_binfmt_unit_t,s0)
+ /usr/lib/systemd/system/systemd-networkd.* gen_context(system_u:object_r:systemd_networkd_unit_t,s0)
++/usr/lib/systemd/system/systemd-network-generator.* gen_context(system_u:object_r:systemd_networkd_unit_t,s0)
+ /usr/lib/systemd/system/systemd-rfkill.* -- gen_context(system_u:object_r:systemd_rfkill_unit_t,s0)
+ /usr/lib/systemd/system/systemd-socket-proxyd\.service -- gen_context(system_u:object_r:systemd_socket_proxyd_unit_file_t,s0)
+
+--
+2.25.1
+
diff --git a/recipes-security/refpolicy/refpolicy/0068-systemd-udev-allow-udev-to-read-systemd-networkd-run.patch b/recipes-security/refpolicy/refpolicy/0068-systemd-udev-allow-udev-to-read-systemd-networkd-run.patch
new file mode 100644
index 0000000..259863c
--- /dev/null
+++ b/recipes-security/refpolicy/refpolicy/0068-systemd-udev-allow-udev-to-read-systemd-networkd-run.patch
@@ -0,0 +1,34 @@
+From 6508bc8a3440525384fcfcd8ad55a4cd5c79b912 Mon Sep 17 00:00:00 2001
+From: Kenton Groombridge <me@...>
+Date: Thu, 19 May 2022 11:43:44 -0400
+Subject: [PATCH] systemd, udev: allow udev to read systemd-networkd runtime
+
+udev searches for .link files and applies custom udev rules to devices
+as they come up.
+
+Upstream-Status: Backport
+[https://github.com/SELinuxProject/refpolicy/commit/998ef975f38c70d57e7220b88ae5e62c88ebb770]
+
+Thanks-To: Zhao Yi
+Signed-off-by: Kenton Groombridge <me@...>
+Signed-off-by: Yi Zhao <yi.zhao@...>
+---
+ policy/modules/system/udev.te | 2 ++
+ 1 file changed, 2 insertions(+)
+
+diff --git a/policy/modules/system/udev.te b/policy/modules/system/udev.te
+index 4c5a690fb..8e243c0f2 100644
+--- a/policy/modules/system/udev.te
++++ b/policy/modules/system/udev.te
+@@ -270,6 +270,8 @@ ifdef(`init_systemd',`
+ systemd_read_hwdb(udev_t)
+ systemd_read_logind_sessions_files(udev_t)
+ systemd_read_logind_runtime_files(udev_t)
++ # udev searches for .link files and applies custom udev rules
++ systemd_read_networkd_runtime(udev_t)
+
+ optional_policy(`
+ init_dbus_chat(udev_t)
+--
+2.25.1
+
diff --git a/recipes-security/refpolicy/refpolicy_common.inc b/recipes-security/refpolicy/refpolicy_common.inc
index 96d0da1..1d5a5c0 100644
--- a/recipes-security/refpolicy/refpolicy_common.inc
+++ b/recipes-security/refpolicy/refpolicy_common.inc
@@ -77,6 +77,13 @@ SRC_URI += " \
file://0059-policy-modules-system-setrans-allow-setrans_t-use-fd.patch \
file://0060-policy-modules-system-systemd-make-_systemd_t-MLS-tr.patch \
file://0061-policy-modules-system-logging-make-syslogd_runtime_t.patch \
+ file://0062-systemd-systemd-resolved-is-linked-to-libselinux.patch \
+ file://0063-sysnetwork-systemd-allow-DNS-resolution-over-io.syst.patch \
+ file://0064-term-init-allow-systemd-to-watch-and-watch-reads-on-.patch \
+ file://0065-systemd-add-file-transition-for-systemd-networkd-run.patch \
+ file://0066-systemd-add-missing-file-context-for-run-systemd-net.patch \
+ file://0067-systemd-add-file-contexts-for-systemd-network-genera.patch \
+ file://0068-systemd-udev-allow-udev-to-read-systemd-networkd-run.patch \
"

S = "${WORKDIR}/refpolicy"
--
2.25.1
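
As a rough smoke test of these backports on a systemd 250 image (a sketch only; it assumes auditd and the SELinux utilities are installed, and the exact services present will depend on the image):

  # The affected services should now start cleanly
  systemctl status systemd-resolved systemd-networkd

  # The runtime directory created by systemd-networkd should be labeled
  # systemd_networkd_runtime_t, per the new file transition and file context
  ls -dZ /run/systemd/network

  # No related AVC denials should remain since boot
  ausearch -m AVC -ts boot | grep -E 'systemd-(resolved|networkd)|agetty'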


Re: [ANNOUNCEMENT] Yocto Project 4.0.1 is Released

Lee Chee Yang
 

> Now that we also have release notes in the documentation (see
> https://docs.yoctoproject.org/migration-guides/release-notes-3.4.2.html
> for example, and the source code on
> https://git.yoctoproject.org/yocto-docs/tree/documentation/migration-
> guides/release-notes-3.4.2.rst),
> what about modifying the scripts to generate such notes directly in Sphinx
> syntax, and right before a new release is made, add them to the
> documentation directory?

This is on my to-do list.

Chee Yang
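
A purely illustrative sketch of the kind of output being suggested, namely a skeleton release-notes file in Sphinx/RST syntax written straight into the migration-guides directory; the real release scripts, file layout and section headings may well differ:

  # Hypothetical helper: emit an RST release-notes skeleton for one release
  REL=4.0.1
  OUT=documentation/migration-guides/release-notes-${REL}.rst
  {
      echo "Release notes for Yocto Project ${REL}"
      echo "======================================"
      echo
      echo "Security Fixes in ${REL}"
      echo "------------------------"
      echo
      echo "Fixes in ${REL}"
      echo "---------------"
  } > "${OUT}"

Such a helper could then be run as part of release preparation and the generated file added to yocto-docs, as the quoted suggestion describes.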


Enhancements/Bugs closed WW21

Stephen Jolley
 

All,

Below are the owners of the enhancements or bugs closed during the last week!

Who                            Count
michael.opdenacker@...             2
mhalstead@...                      1
Grand Total                        3

Thanks,

 

Stephen K. Jolley

Yocto Project Program Manager

Cell:  (208) 244-4460
Email: sjolley.yp.pm@...

 


Current high bug count owners for Yocto Project 4.1

Stephen Jolley
 

All,

Below is the list of the top 36 bug owners, as of the end of WW21, who have open medium or higher bugs and enhancements against YP 4.1. There are 110 possible work days left until the final release candidates for YP 4.1 need to be released.

Who                            Count
michael.opdenacker@...            38
ross.burton@...                   23
david.reyna@...                   21
bruce.ashfield@...                20
randy.macleod@...                 17
sakib.sajal@...                   12
richard.purdie@...                12
JPEWhacker@...                     9
tim.orling@...                     8
saul.wold@...                      7
kai.kang@...                       4
jon.mason@...                      4
pavel@...                          4
mhalstead@...                      3
akuster808@...                     3
Qi.Chen@...                        2
abongwabonalais@...                2
tvgamblin@...                      2
hongxu.jia@...                     2
pgowda.cve@...                     2
Aryaman.Gupta@...                  2
liezhi.yang@...                    1
raj.khem@...                       1
martin.beeger@...                  1
shachar@...                        1
Martin.Jansa@...                   1
alexandre.belloni@...              1
aehs29@...                         1
nicolas.dechesne@...               1
sundeep.kokkonda@...               1
thomas.perrot@...                  1
mostthingsweb@...                  1
jay.shen.teoh@...                  1
kexin.hao@...                      1
open.source@...                    1
alejandro@...                      1
Grand Total                      212

Thanks,

 

Stephen K. Jolley

Yocto Project Program Manager

Cell:  (208) 244-4460
Email: sjolley.yp.pm@...