[PATCH yocto-autobuilder-helper 1/6] scripts: run-docs-build: factor out all bitbake branches building
Quentin Schulz
From: Quentin Schulz <quentin.schulz@...>
master and master-next only differ from other branches by their output
directory name. Let's put everything in common and only have a check on
whether the branch is master or master-next and modify the output dir in
those cases.

Cc: Quentin Schulz <foss+yocto@...>
Signed-off-by: Quentin Schulz <quentin.schulz@...>
---
 scripts/run-docs-build | 26 +++++++++++---------------
 1 file changed, 11 insertions(+), 15 deletions(-)

diff --git a/scripts/run-docs-build b/scripts/run-docs-build
index b9b331b..f7b5f97 100755
--- a/scripts/run-docs-build
+++ b/scripts/run-docs-build
@@ -30,33 +30,29 @@ echo Extracing old content from archive
 tar -xJf $docbookarchive
 cd $bbdocs
-echo Building bitbake master branch
-git checkout master
-make clean
-make publish
 mkdir $outputdir/bitbake
-cp -r ./_build/final/* $outputdir/bitbake
-git checkout master-next
-echo Building bitbake master-next branch
-make clean
-make publish
-mkdir $outputdir/bitbake/next
-cp -r ./_build/final/* $outputdir/bitbake/next
-
-# stable branches
 # A decision was made to keep updating all the Sphinx generated docs for the moment,
 # even the ones corresponding to no longer supported releases
 # https://lists.yoctoproject.org/g/docs/message/2193
 # We copy the releases.rst file from master so that all versions of the docs
 # see the latest releases.
-for branch in 1.46 1.48 1.50 1.52; do
+for branch in 1.46 1.48 1.50 1.52 master master-next; do
     echo Building bitbake $branch branch
     git checkout $branch
     git checkout master releases.rst
     make clean
     make publish
-    mkdir $outputdir/bitbake/$branch
+
+    if [ "$branch" = "master-next" ]; then
+        branch="next"
+        mkdir $outputdir/bitbake/$branch
+    elif [ "$branch" = "master" ]; then
+        branch=""
+    else
+        mkdir $outputdir/bitbake/$branch
+    fi
+
     cp -r ./_build/final/* $outputdir/bitbake/$branch
     git reset --hard
 done
--
2.35.1
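For readers skimming the archive, the loop that results from applying this patch would look roughly like the sketch below. It is reconstructed from the hunk above; indentation and surrounding context are approximations, not a verbatim copy of the final script.

for branch in 1.46 1.48 1.50 1.52 master master-next; do
    echo Building bitbake $branch branch
    git checkout $branch
    git checkout master releases.rst
    make clean
    make publish

    # master publishes into $outputdir/bitbake, master-next into
    # $outputdir/bitbake/next, stable branches into $outputdir/bitbake/$branch
    if [ "$branch" = "master-next" ]; then
        branch="next"
        mkdir $outputdir/bitbake/$branch
    elif [ "$branch" = "master" ]; then
        branch=""
    else
        mkdir $outputdir/bitbake/$branch
    fi

    cp -r ./_build/final/* $outputdir/bitbake/$branch
    git reset --hard
done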
|
|
Re: [meta-security][PATCH] python3-privacyidea_3.6.2: remove more py3 that got dropped
Richard Purdie
On Fri, 2022-03-18 at 09:30 -0700, Armin Kuster wrote:
> Signed-off-by: Armin Kuster <akuster808@...>

I'd hold off on that, these just moved to core?

Cheers,

Richard
|
|
[meta-security][PATCH] python3-privacyidea_3.6.2: remove more py3 that got dropped
Signed-off-by: Armin Kuster <akuster808@...>
---
 recipes-security/mfa/python3-privacyidea_3.6.2.bb | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/recipes-security/mfa/python3-privacyidea_3.6.2.bb b/recipes-security/mfa/python3-privacyidea_3.6.2.bb
index 40f6d15..3a09b80 100644
--- a/recipes-security/mfa/python3-privacyidea_3.6.2.bb
+++ b/recipes-security/mfa/python3-privacyidea_3.6.2.bb
@@ -24,7 +24,7 @@ FILES:${PN} += " ${prefix}/etc/privacyidea/* ${datadir}/lib/privacyidea/*"
 RDEPENDS:${PN} += " bash perl freeradius-mysql freeradius-utils"
 RDEPENDS:${PN} += "python3 python3-alembic python3-babel python3-bcrypt"
-RDEPENDS:${PN} += "python3-beautifulsoup4 python3-cbor2 python3-certifi python3-cffi python3-chardet"
+RDEPENDS:${PN} += "python3-beautifulsoup4 python3-cbor2 python3-certifi python3-cffi"
 RDEPENDS:${PN} += "python3-click python3-configobj python3-croniter python3-cryptography python3-defusedxml"
 RDEPENDS:${PN} += "python3-ecdsa python3-flask python3-flask-babel python3-flask-migrate"
 RDEPENDS:${PN} += "python3-flask-script python3-flask-sqlalchemy python3-flask-versioned"
@@ -33,6 +33,6 @@ RDEPENDS:${PN} += "python3-itsdangerous python3-jinja2 python3-ldap python3-lxml
 RDEPENDS:${PN} += "python3-markupsafe python3-netaddr python3-oauth2client python3-passlib python3-pillow"
 RDEPENDS:${PN} += "python3-pyasn1 python3-pyasn1-modules python3-pycparser python3-pyjwt python3-pymysql"
 RDEPENDS:${PN} += "python3-pyopenssl python3-pyrad python3-dateutil python3-editor python3-gnupg"
-RDEPENDS:${PN} += "python3-pytz python3-pyyaml python3-qrcode python3-redis python3-requests python3-rsa"
-RDEPENDS:${PN} += "python3-six python3-smpplib python3-soupsieve python3-soupsieve "
+RDEPENDS:${PN} += "python3-pytz python3-pyyaml python3-qrcode python3-redis python3-rsa"
+RDEPENDS:${PN} += "python3-six python3-soupsieve python3-soupsieve "
 RDEPENDS:${PN} += "python3-sqlalchemy python3-sqlsoup python3-urllib3 python3-werkzeug"
--
2.25.1
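Since the commit message only states that these modules were dropped, a quick way to confirm whether any configured layer still provides them is bitbake-layers (standard usage; the two recipe names are simply the ones removed above):

bitbake-layers show-recipes python3-chardet
bitbake-layers show-recipes python3-smpplib

If they have indeed moved to oe-core, as Richard suggests in the reply above, they would show up there and could stay in RDEPENDS; if nothing provides them, removing them as done here avoids a dangling runtime dependency.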
|
|
Re: QA notification for completed autobuilder build (yocto-3.1.15.rc1)
Teoh, Jay Shen
Hi All,
This is the full report for yocto-3.1.15.rc1:
https://git.yoctoproject.org/cgit/cgit.cgi/yocto-testresults-contrib/tree/?h=intel-yocto-testresults

======= Summary ========
No high milestone defects.

No new issue found.

Thanks,
Jay
-----Original Message-----
|
|
Multiconfig dependency
Hello guys!
Could you please help me with a multiconfig setup? I have a "default" configuration with systemd, and an "initramfs" configuration with BusyBox and other settings.

I use the following targets/recipes with the initramfs configuration:
1. core-image-rootfs - packs the core-image-minimal ext4 image into a Debian package;
2. initramfs-flasher-image - image that contains core-image-rootfs.

Default configuration:
1. core-image-minimal - main rootfs;
2. flasher - packs the initramfs-flasher-image squashfs image into a Debian package;
3. app-flasher - special application that carries the squashfs file from the flasher package.

Everything works fine if I do a clean build. But if I change something for core-image-minimal (like IMAGE_INSTALL) and run:

bitbake app-flasher

only core-image-minimal is rebuilt - core-image-rootfs, initramfs-flasher-image, flasher and app-flasher are not updated. Here is my multiconfig dependency in core-image-rootfs.bb (see the syntax sketch below):

do_install[mcdepends] = "mc:initramfs::core-image-minimal:do_image_complete"
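For reference, the documented general shape of a multiconfig task dependency looks like the sketch below (this is syntax illustration only, not a verified fix for the setup above; the recipe and task names are the ones from the question, and BBMULTICONFIG is shown with an assumed value):

# In a recipe built under the "initramfs" multiconfig, depend on a task of a
# recipe built under the default (unnamed) multiconfig:
do_image[mcdepends] = "mc:initramfs::core-image-minimal:do_image_complete"

# The non-default configurations also have to be enabled, e.g. in local.conf:
BBMULTICONFIG = "initramfs"

The two multiconfig fields name, in order, the configuration the depending recipe is built in and the configuration the dependency comes from; the default (unnamed) configuration is written as an empty field, as in the line quoted above.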
|
|
do_image_wic error with custom machine
Goubaux, Nicolas
Hello,
I would like to implement two custom machines based on genericx86_64, but I have an issue with do_image_wic:

Couldn't get bitbake variable from /home/vagrant/poky/build/tmp/work/machine1-poky-linux/project1-image-sms/1.0-r0/rootfs/var/dolby.env.
File /home/vagrant/poky/build/tmp/work/machine1-poky-linux/project1-image-sms/1.0-r0/rootfs/var/dolby.env doesn't exist.

Can you help me fix it?

---------- meta-product1/machine/machine1.conf ----------
#@TYPE: Machine
#@NAME: Product1 Board
#@DESCRIPTION: Machine configuration for the Product1

DEFAULTTUNE ?= "core2-64"
TUNE_PKGARCH_tune-core2-64 = "core2-64"
#PACKAGE_ARCH = "x86_64"
#PACKAGE_ARCHS_append = " genericx86_64"

require conf/machine/include/tune-core2.inc
require conf/machine/include/genericx86-common.inc

KMACHINE_machine1 = "x86_64"
PREFERRED_PROVIDER_virtual/kernel = "linux-yocto"
PREFERRED_VERSION_linux-yocto = "5.10%"
SERIAL_CONSOLES_CHECK = "ttyS0"

#For runqemu
QB_SYSTEM_NAME = "qemu-system-x86_64"

WKS_FILE ?= "${MACHINE}.wks"

IMAGE_INSTALL_append = " packagegroup-sms-apps packagegroup-sms-tools packagegroup-sms-lib packagegroup-sms-dev"

hostname:pn-base-files = "product1-sms"

---------- Log ----------
+ BUILDDIR=/home/vagrant/poky/build PSEUDO_UNLOAD=1 wic create /home/vagrant/poky/build/../../layers/meta-project1-sms/scripts/lib/wic/canned-wks/project1-sms-x86_64.wks --vars /home/vagrant/poky/build/tmp/sysroots/project1-sms-x86_64/imgdata/ -e project1-image-sms -o /home/vagrant/poky/build/tmp/work/machine1-poky-linux/project1-image-sms/1.0-r0/build-wic/ -w /home/vagrant/poky/build/tmp/work/machine1-poky-linux/project1-image-sms/1.0-r0/tmp-wic
INFO: Creating image(s)...
Couldn't get bitbake variable from /home/vagrant/poky/build/tmp/work/machine1-poky-linux/project1-image-sms/1.0-r0/rootfs/var/dolby.env.
File /home/vagrant/poky/build/tmp/work/machine1-poky-linux/project1-image-sms/1.0-r0/rootfs/var/dolby.env doesn't exist.
Traceback (most recent call last):
  File "/home/vagrant/poky/scripts/wic", line 542, in <module>
    sys.exit(main(sys.argv[1:]))
  File "/home/vagrant/poky/scripts/wic", line 537, in main
    return hlp.invoke_subcommand(args, parser, hlp.wic_help_usage, subcommands)
  File "/home/vagrant/poky/scripts/lib/wic/help.py", line 83, in invoke_subcommand
    subcmd[0](args, usage)
  File "/home/vagrant/poky/scripts/wic", line 219, in wic_create_subcommand
    engine.wic_create(wks_file, rootfs_dir, bootimg_dir, kernel_dir,
  File "/home/vagrant/poky/scripts/lib/wic/engine.py", line 190, in wic_create
    plugin.do_create()
  File "/home/vagrant/poky/scripts/lib/wic/plugins/imager/direct.py", line 97, in do_create
    self.create()
  File "/home/vagrant/poky/scripts/lib/wic/plugins/imager/direct.py", line 181, in create
    self._image.prepare(self)
  File "/home/vagrant/poky/scripts/lib/wic/plugins/imager/direct.py", line 356, in prepare
    part.prepare(imager, imager.workdir, imager.oe_builddir,
  File "/home/vagrant/poky/scripts/lib/wic/partition.py", line 182, in prepare
    plugin.do_prepare_partition(self, srcparams_dict, creator,
  File "/home/vagrant/poky/scripts/lib/wic/plugins/source/rootfs.py", line 96, in do_prepare_partition
    part.rootfs_dir = cls.__get_rootfs_dir(rootfs_dir)
  File "/home/vagrant/poky/scripts/lib/wic/plugins/source/rootfs.py", line 57, in __get_rootfs_dir
    if not os.path.isdir(image_rootfs_dir):
  File "/home/vagrant/poky/build/tmp/work/machine1-poky-linux/project1-image-sms/1.0-r0/recipe-sysroot-native/usr/lib/python3.9/genericpath.py", line 42, in isdir
    st = os.stat(s)
TypeError: stat: path should be string, bytes, os.PathLike or integer, not NoneType
+ bb_sh_exit_handler
+ ret=1
+ [ 1 != 0 ]
+ echo WARNING: exit code 1 from a shell command.
WARNING: exit code 1 from a shell command.
+ exit 1
ERROR: project1-image-sms-1.0-r0 do_image_wic: ExecutionError('/home/vagrant/poky/build/tmp/work/machine1-poky-linux/project1-image-sms/1.0-r0/temp/run.do_image_wic.4026945', 1, None, None)
ERROR: Logfile of failure stored in: /home/vagrant/poky/build/tmp/work/machine1-poky-linux/project1-image-sms/1.0-r0/temp/log.do_image_wic.4026945
ERROR: Task (/home/vagrant/poky/build/../../layers/meta-project1-sms/recipes-core/images/project1-image-sms.bb:do_image_wic) failed with exit code '1'

—
Nicolas G.
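The traceback shows the rootfs source plugin ending up with no rootfs directory at all (os.stat() is called on None), i.e. wic could not resolve the image rootfs from the environment data it was given. Not a diagnosis of the dolby.env message above, but for comparison, a minimal EFI .wks in the style of the stock genericx86 one relies only on the standard bootimg-efi and rootfs source plugins (labels, alignment and the bootloader line below are illustrative assumptions, not taken from project1-sms-x86_64.wks):

part /boot --source bootimg-efi --sourceparams="loader=grub-efi" --label boot --active --align 1024
part / --source rootfs --fstype=ext4 --label root --align 1024
bootloader --ptable gpt --timeout=5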
|
|
Re: Minutes: Yocto Project Weekly Triage Meeting 3/17/2022
On 2022-03-17 11:28, Trevor Gamblin wrote:
Done. ../Randy
--
# Randy MacLeod
# Wind River Linux
|
|
Re: Minutes: Yocto Project Weekly Triage Meeting 3/17/2022
Stephen Jolley
AR completed.
Stephen
From: Trevor Gamblin <trevor.gamblin@...>
Sent: Thursday, March 17, 2022 8:28 AM
To: Yocto-mailing-list <yocto@...>
Cc: sjolley.yp.pm@...; Richard Purdie <richard.purdie@...>; alexandre.belloni@...; luca.ceresoli@...; MacLeod, Randy <Randy.MacLeod@...>; Wold, Saul <saul.wold@...>; tim.orling@...; daiane.angolini@...; Ross Burton <ross@...>; Steve Sakoman <steve@...>
Subject: Minutes: Yocto Project Weekly Triage Meeting 3/17/2022
Wiki: https://wiki.yoctoproject.org/wiki/Bug_Triage

Attendees: Alexandre, Daiane, Luca, Randy, Richard, Ross, Saul, Stephen, Steve, Tim, Trevor

ARs:
- Randy to move remaining Medium+ M3s to M1 (and move to newcomer bugs category, where appropriate)
- Stephen to create an issue for Michael to run the milestone naming script (3.6 to 4.1 and 3.99 to 4.99)
- Everyone to review assigned Old Milestone M3 bugs and move to later milestones

Notes:
- ~43% of AB workers have been switched to SSDs. Failure rate appears lower, but still TBD. More coming soon!

Medium+ 3.5 Unassigned Enhancements/Bugs: 68 (Last week 73)
Medium+ 3.6 Unassigned Enhancements/Bugs: 10 (Last week 2)
Medium+ 3.99 Unassigned Enhancements/Bugs: 38 (Last week 38)
AB Bugs: 72 (Last week 71)
|
|
Minutes: Yocto Project Weekly Triage Meeting 3/17/2022
Trevor Gamblin
Wiki: https://wiki.yoctoproject.org/wiki/Bug_Triage

Attendees: Alexandre, Daiane, Luca, Randy, Richard, Ross, Saul, Stephen, Steve, Tim, Trevor

ARs:
- Randy to move remaining Medium+ M3s to M1 (and move to newcomer bugs category, where appropriate)
- Stephen to create an issue for Michael to run the milestone naming script (3.6 to 4.1 and 3.99 to 4.99)
- Everyone to review assigned Old Milestone M3 bugs and move to later milestones

Notes:
- ~43% of AB workers have been switched to SSDs. Failure rate appears lower, but still TBD. More coming soon!

Medium+ 3.5 Unassigned Enhancements/Bugs: 68 (Last week 73)
Medium+ 3.6 Unassigned Enhancements/Bugs: 10 (Last week 2)
Medium+ 3.99 Unassigned Enhancements/Bugs: 38 (Last week 38)
AB Bugs: 72 (Last week 71)
|
|
Re: runqemu - ERROR - Failed to run qemu: qemu-system-aarch64: Virtio VGA not available.
Edgar Mobile
Ah, that's interesting information.
The dmesg log gives the impression that virgl starts correctly and in the normal shell the examples work flawlessly. The problems start once I tell weston to use the ivi-shell instead of the desktop shell.
From: Alexander Kanavin <alex.kanavin@...>
Sent: Thursday, March 17, 2022 1:51 PM
To: Edgar Mobile <heideggm@...>
Cc: yocto@... <yocto@...>
Subject: Re: [yocto] runqemu - ERROR - Failed to run qemu: qemu-system-aarch64: Virtio VGA not available.

There is no hardware acceleration with bochs at all, if you want it,
you need to make virtio/virgl driver work.

Alex

On Thu, 17 Mar 2022 at 14:02, Edgar Mobile <heideggm@...> wrote:
>
> Do you know if bochs driver is available and active for yocto 3.4 or 3.5?
>
> ________________________________
> From: Alexander Kanavin <alex.kanavin@...>
> Sent: Thursday, March 17, 2022 11:26 AM
> To: Edgar Mobile <heideggm@...>
> Cc: yocto@... <yocto@...>
> Subject: Re: [yocto] runqemu - ERROR - Failed to run qemu: qemu-system-aarch64: Virtio VGA not available.
>
> As I told you, we do not support or test this combination. Which means
> that figuring out what the error messages mean and how to fix them is
> on you - patches welcome.
>
> Alex
>
> On Thu, 17 Mar 2022 at 11:41, Edgar Mobile <heideggm@...> wrote:
> >
> > I tried that first and it was horribly slow. That's why I try hardware acceleration now.
> >
> > Do you _know_ it doesn't work? If yes, why?
> >
> > ________________________________
> > From: Alexander Kanavin <alex.kanavin@...>
> > Sent: Thursday, March 17, 2022 10:33 AM
> > To: Edgar Mobile <heideggm@...>
> > Cc: yocto@... <yocto@...>
> > Subject: Re: [yocto] runqemu - ERROR - Failed to run qemu: qemu-system-aarch64: Virtio VGA not available.
> >
> > If you want an aarch guest on x86, then drop the gl option from
> > runqemu. This will fall back to software rendering.
> >
> > Alex
> >
> > On Thu, 17 Mar 2022 at 10:33, Edgar Mobile <heideggm@...> wrote:
> > >
> > > Sorry, but I need an Aarch64 guest.
> > >
> > > Ok, using a newer qemu I now encounter the following problem:
> > >
> > > root@qemuarm64:/usr/bin# XDG_RUNTIME_DIR=/run/user/0 ./eglinfo
> > > EGL client extensions string:
> > > EGL_EXT_client_extensions EGL_EXT_device_base
> > > EGL_EXT_device_enumeration EGL_EXT_device_query EGL_EXT_platform_base
> > > EGL_KHR_client_get_all_proc_addresses EGL_KHR_debug
> > > EGL_EXT_platform_device EGL_EXT_platform_wayland
> > > EGL_KHR_platform_wayland EGL_EXT_platform_x11 EGL_KHR_platform_x11
> > > EGL_MESA_platform_xcb EGL_MESA_platform_gbm EGL_KHR_platform_gbm
> > > EGL_MESA_platform_surfaceless
> > >
> > > GBM platform:
> > > pci id for fd 3: 1234:1111, driver (null)
> > > MESA-LOADER: failed to open bochs-drm: /usr/lib/dri/bochs-drm_dri.so: cannot open shared object file: No such file or directory (search paths /usr/lib/dri)
> > > failed to load driver: bochs-drm
> > > ...
> > >
> > >
> > > What is this bochs-drm_dri.so and does Yocto / the Mesa in Yocto provide it?
> > >
> > > ________________________________
> > > From: Alexander Kanavin <alex.kanavin@...>
> > > Sent: Wednesday, March 16, 2022 2:51 PM
> > > To: Edgar Mobile <heideggm@...>
> > > Cc: yocto@... <yocto@...>
> > > Subject: Re: [yocto] runqemu - ERROR - Failed to run qemu: qemu-system-aarch64: Virtio VGA not available.
> > >
> > > This configuration is not tested. If you want accelerated gl, build
> > > for the qemux86-64 target.
> > >
> > > Alex
> > >
> > > On Wed, 16 Mar 2022 at 12:46, Edgar Mobile <heideggm@...> wrote:
> > > >
> > > > Greetings,
> > > >
> > > > I tried to run an Aarch64 Yocto with qemu on amd 64 Host. For that purpose, I built core-image-weston from Hardknott following the manual
> > > >
> > > > https://www.mail-archive.com/yocto@.../msg07306.html
> > > >
> > > > I then try to run
> > > >
> > > > runqemu sdl gl
> > > >
> > > > But it always aborts with
> > > >
> > > > runqemu - ERROR - Failed to run qemu: qemu-system-aarch64: Virtio VGA not available.
> > > >
> > > > What can I do?
> > > >
> > > > Regards
> > > >
> > > >
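Condensing the thread so far into command form (machine and image names follow the original post; whether virgl acceleration can be made to work for an aarch64 guest on an x86-64 host is exactly the open question here):

# software-rendering fallback suggested earlier in the thread - no gl option
runqemu qemuarm64 core-image-weston sdl

# the accelerated invocation from the first message, which aborts with
# "Virtio VGA not available" on the Hardknott qemu
runqemu qemuarm64 core-image-weston sdl gl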
|
|
Re: runqemu - ERROR - Failed to run qemu: qemu-system-aarch64: Virtio VGA not available.
Alexander Kanavin
There is no hardware acceleration with bochs at all, if you want it,
you need to make virtio/virgl driver work.

Alex
On Thu, 17 Mar 2022 at 14:02, Edgar Mobile <heideggm@...> wrote:
|
|
Re: runqemu - ERROR - Failed to run qemu: qemu-system-aarch64: Virtio VGA not available.
Edgar Mobile
Do you know if bochs driver is available and active for yocto 3.4 or 3.5?
From: Alexander Kanavin <alex.kanavin@...>
Sent: Thursday, March 17, 2022 11:26 AM
To: Edgar Mobile <heideggm@...>
Cc: yocto@... <yocto@...>
Subject: Re: [yocto] runqemu - ERROR - Failed to run qemu: qemu-system-aarch64: Virtio VGA not available.

As I told you, we do not support or test this combination. Which means
that figuring out what the error messages mean and how to fix them is
on you - patches welcome.

Alex

On Thu, 17 Mar 2022 at 11:41, Edgar Mobile <heideggm@...> wrote:
>
> I tried that first and it was horribly slow. That's why I try hardware acceleration now.
>
> Do you _know_ it doesn't work? If yes, why?
>
> ________________________________
> From: Alexander Kanavin <alex.kanavin@...>
> Sent: Thursday, March 17, 2022 10:33 AM
> To: Edgar Mobile <heideggm@...>
> Cc: yocto@... <yocto@...>
> Subject: Re: [yocto] runqemu - ERROR - Failed to run qemu: qemu-system-aarch64: Virtio VGA not available.
>
> If you want an aarch guest on x86, then drop the gl option from
> runqemu. This will fall back to software rendering.
>
> Alex
>
> On Thu, 17 Mar 2022 at 10:33, Edgar Mobile <heideggm@...> wrote:
> >
> > Sorry, but I need an Aarch64 guest.
> >
> > Ok, using a newer qemu I now encounter the following problem:
> >
> > root@qemuarm64:/usr/bin# XDG_RUNTIME_DIR=/run/user/0 ./eglinfo
> > EGL client extensions string:
> > EGL_EXT_client_extensions EGL_EXT_device_base
> > EGL_EXT_device_enumeration EGL_EXT_device_query EGL_EXT_platform_base
> > EGL_KHR_client_get_all_proc_addresses EGL_KHR_debug
> > EGL_EXT_platform_device EGL_EXT_platform_wayland
> > EGL_KHR_platform_wayland EGL_EXT_platform_x11 EGL_KHR_platform_x11
> > EGL_MESA_platform_xcb EGL_MESA_platform_gbm EGL_KHR_platform_gbm
> > EGL_MESA_platform_surfaceless
> >
> > GBM platform:
> > pci id for fd 3: 1234:1111, driver (null)
> > MESA-LOADER: failed to open bochs-drm: /usr/lib/dri/bochs-drm_dri.so: cannot open shared object file: No such file or directory (search paths /usr/lib/dri)
> > failed to load driver: bochs-drm
> > ...
> >
> >
> > What is this bochs-drm_dri.so and does Yocto / the Mesa in Yocto provide it?
> >
> > ________________________________
> > From: Alexander Kanavin <alex.kanavin@...>
> > Sent: Wednesday, March 16, 2022 2:51 PM
> > To: Edgar Mobile <heideggm@...>
> > Cc: yocto@... <yocto@...>
> > Subject: Re: [yocto] runqemu - ERROR - Failed to run qemu: qemu-system-aarch64: Virtio VGA not available.
> >
> > This configuration is not tested. If you want accelerated gl, build
> > for the qemux86-64 target.
> >
> > Alex
> >
> > On Wed, 16 Mar 2022 at 12:46, Edgar Mobile <heideggm@...> wrote:
> > >
> > > Greetings,
> > >
> > > I tried to run an Aarch64 Yocto with qemu on amd 64 Host. For that purpose, I built core-image-weston from Hardknott following the manual
> > >
> > > https://www.mail-archive.com/yocto@.../msg07306.html
> > >
> > > I then try to run
> > >
> > > runqemu sdl gl
> > >
> > > But it always aborts with
> > >
> > > runqemu - ERROR - Failed to run qemu: qemu-system-aarch64: Virtio VGA not available.
> > >
> > > What can I do?
> > >
> > > Regards
> > >
> > >
|
|
Re: runqemu - ERROR - Failed to run qemu: qemu-system-aarch64: Virtio VGA not available.
Alexander Kanavin
As I told you, we do not support or test this combination. Which means
that figuring out what the error messages mean and how to fix them is
on you - patches welcome.

Alex
On Thu, 17 Mar 2022 at 11:41, Edgar Mobile <heideggm@...> wrote:
|
|
Re: runqemu - ERROR - Failed to run qemu: qemu-system-aarch64: Virtio VGA not available.
Edgar Mobile
I tried that first and it was horribly slow. That's why I try hardware acceleration now.
Do you _know_ it doesn't work? If yes, why?
From: Alexander Kanavin <alex.kanavin@...>
Sent: Thursday, March 17, 2022 10:33 AM
To: Edgar Mobile <heideggm@...>
Cc: yocto@... <yocto@...>
Subject: Re: [yocto] runqemu - ERROR - Failed to run qemu: qemu-system-aarch64: Virtio VGA not available.

If you want an aarch guest on x86, then drop the gl option from
runqemu. This will fall back to software rendering.

Alex

On Thu, 17 Mar 2022 at 10:33, Edgar Mobile <heideggm@...> wrote:
>
> Sorry, but I need an Aarch64 guest.
>
> Ok, using a newer qemu I now encounter the following problem:
>
> root@qemuarm64:/usr/bin# XDG_RUNTIME_DIR=/run/user/0 ./eglinfo
> EGL client extensions string:
> EGL_EXT_client_extensions EGL_EXT_device_base
> EGL_EXT_device_enumeration EGL_EXT_device_query EGL_EXT_platform_base
> EGL_KHR_client_get_all_proc_addresses EGL_KHR_debug
> EGL_EXT_platform_device EGL_EXT_platform_wayland
> EGL_KHR_platform_wayland EGL_EXT_platform_x11 EGL_KHR_platform_x11
> EGL_MESA_platform_xcb EGL_MESA_platform_gbm EGL_KHR_platform_gbm
> EGL_MESA_platform_surfaceless
>
> GBM platform:
> pci id for fd 3: 1234:1111, driver (null)
> MESA-LOADER: failed to open bochs-drm: /usr/lib/dri/bochs-drm_dri.so: cannot open shared object file: No such file or directory (search paths /usr/lib/dri)
> failed to load driver: bochs-drm
> ...
>
>
> What is this bochs-drm_dri.so and does Yocto / the Mesa in Yocto provide it?
>
> ________________________________
> From: Alexander Kanavin <alex.kanavin@...>
> Sent: Wednesday, March 16, 2022 2:51 PM
> To: Edgar Mobile <heideggm@...>
> Cc: yocto@... <yocto@...>
> Subject: Re: [yocto] runqemu - ERROR - Failed to run qemu: qemu-system-aarch64: Virtio VGA not available.
>
> This configuration is not tested. If you want accelerated gl, build
> for the qemux86-64 target.
>
> Alex
>
> On Wed, 16 Mar 2022 at 12:46, Edgar Mobile <heideggm@...> wrote:
> >
> > Greetings,
> >
> > I tried to run an Aarch64 Yocto with qemu on amd 64 Host. For that purpose, I built core-image-weston from Hardknott following the manual
> >
> > https://www.mail-archive.com/yocto@.../msg07306.html
> >
> > I then try to run
> >
> > runqemu sdl gl
> >
> > But it always aborts with
> >
> > runqemu - ERROR - Failed to run qemu: qemu-system-aarch64: Virtio VGA not available.
> >
> > What can I do?
> >
> > Regards
> >
> >
|
|
Re: runqemu - ERROR - Failed to run qemu: qemu-system-aarch64: Virtio VGA not available.
Alexander Kanavin
If you want an aarch guest on x86, then drop the gl option from
runqemu. This will fall back to software rendering.

Alex
On Thu, 17 Mar 2022 at 10:33, Edgar Mobile <heideggm@...> wrote:
|
|
Re: runqemu - ERROR - Failed to run qemu: qemu-system-aarch64: Virtio VGA not available.
Edgar Mobile
Sorry, but I need an Aarch64 guest.
Ok, using a newer qemu I now encounter the following problem:
root@qemuarm64:/usr/bin# XDG_RUNTIME_DIR=/run/user/0 ./eglinfo
EGL client extensions string:
EGL_EXT_client_extensions EGL_EXT_device_base
EGL_EXT_device_enumeration EGL_EXT_device_query EGL_EXT_platform_base
EGL_KHR_client_get_all_proc_addresses EGL_KHR_debug
EGL_EXT_platform_device EGL_EXT_platform_wayland
EGL_KHR_platform_wayland EGL_EXT_platform_x11 EGL_KHR_platform_x11
EGL_MESA_platform_xcb EGL_MESA_platform_gbm EGL_KHR_platform_gbm
EGL_MESA_platform_surfaceless

GBM platform:
pci id for fd 3: 1234:1111, driver (null)
MESA-LOADER: failed to open bochs-drm: /usr/lib/dri/bochs-drm_dri.so: cannot open shared object file: No such file or directory (search paths /usr/lib/dri)
failed to load driver: bochs-drm
...
What is this bochs-drm_dri.so and does Yocto / the Mesa in Yocto provide it?
From: Alexander Kanavin <alex.kanavin@...>
Sent: Wednesday, March 16, 2022 2:51 PM
To: Edgar Mobile <heideggm@...>
Cc: yocto@... <yocto@...>
Subject: Re: [yocto] runqemu - ERROR - Failed to run qemu: qemu-system-aarch64: Virtio VGA not available.

This configuration is not tested. If you want accelerated gl, build
for the qemux86-64 target.

Alex

On Wed, 16 Mar 2022 at 12:46, Edgar Mobile <heideggm@...> wrote:
>
> Greetings,
>
> I tried to run an Aarch64 Yocto with qemu on amd 64 Host. For that purpose, I built core-image-weston from Hardknott following the manual
>
> https://www.mail-archive.com/yocto@.../msg07306.html
>
> I then try to run
>
> runqemu sdl gl
>
> But it always aborts with
>
> runqemu - ERROR - Failed to run qemu: qemu-system-aarch64: Virtio VGA not available.
>
> What can I do?
>
> Regards
>
>
|
|
[meta-openssl102-fips][dunfell][PATCH 2/2] openssh: Adapt the patch for CVE-2020-14145 fix on poky/dunfell
From: Harshal Gohel <harshaldhruvkumar.gohel@...>
openssh-8.2p1-fips.patch does not apply after the CVE-2020-14145 patch
introduced in poky (f5882b194b58b6bbb06db511a2c3612f5d6430fd).
CVE-2020-14145 added comments and introduced new code in sshconnect2.c.
This adaptation corrects the diff offsets and replaces each occurrence of
`options.hostkeyalgorithms` with the FIPS_mode() conditional, just as in
the original patch.
---
 .../openssh/0001-openssh-8.2p1-fips.patch | 31 ++++++++++++++-----
 1 file changed, 24 insertions(+), 7 deletions(-)

diff --git a/recipes-connectivity/openssh/openssh/0001-openssh-8.2p1-fips.patch b/recipes-connectivity/openssh/openssh/0001-openssh-8.2p1-fips.patch
index c1de130..5b8814d 100644
--- a/recipes-connectivity/openssh/openssh/0001-openssh-8.2p1-fips.patch
+++ b/recipes-connectivity/openssh/openssh/0001-openssh-8.2p1-fips.patch
@@ -27,10 +27,10 @@ Signed-off-by: Yi Zhao <yi.zhao@...>
  servconf.c    | 15 ++++++++++-----
  ssh-keygen.c  | 16 +++++++++++++++-
  ssh.c         | 16 ++++++++++++++++
- sshconnect2.c |  8 ++++++--
+ sshconnect2.c | 14 ++++++++++----
  sshd.c        | 19 +++++++++++++++++++
  sshkey.c      |  4 ++++
- 16 files changed, 178 insertions(+), 23 deletions(-)
+ 16 files changed, 182 insertions(+), 25 deletions(-)
 
 diff --git a/Makefile.in b/Makefile.in
 index e754947..57f94f4 100644
@@ -408,7 +408,7 @@ index 15aee56..49331fc 100644
   * Discard other fds that are hanging around. These can cause problem
   * with backgrounded ssh processes started by ControlPersist.
  diff --git a/sshconnect2.c b/sshconnect2.c
-index af00fb3..639fc51 100644
+index 5df94779..df3cd317 100644
 --- a/sshconnect2.c
 +++ b/sshconnect2.c
 @@ -44,6 +44,8 @@
  #include "openbsd-compat/sys-queue.h"
  #include "xmalloc.h"
-@@ -119,7 +121,8 @@ order_hostkeyalgs(char *host, struct sockaddr *hostaddr, u_short port)
- 	for (i = 0; i < options.num_system_hostfiles; i++)
- 		load_hostkeys(hostkeys, hostname, options.system_hostfiles[i]);
+@@ -139,12 +141,14 @@ order_hostkeyalgs(char *host, struct sockaddr *hostaddr, u_short port)
+ 	 * certificate type, as sshconnect.c will downgrade certs to
+ 	 * plain keys if necessary.
+ 	 */
+-	best = first_alg(options.hostkeyalgorithms);
++	best = first_alg(FIPS_mode()
++		? KEX_FIPS_PK_ALG : options.hostkeyalgorithms);
+ 	if (lookup_key_in_hostkeys_by_type(hostkeys,
+ 	    sshkey_type_plain(sshkey_type_from_name(best)), NULL)) {
+ 		debug3("%s: have matching best-preference key type %s, "
+ 		    "using HostkeyAlgorithms verbatim", __func__, best);
+-	ret = xstrdup(options.hostkeyalgorithms);
++	ret = xstrdup(FIPS_mode()
++		? KEX_FIPS_PK_ALG : options.hostkeyalgorithms);
+ 		goto out;
+ 	}
+@@ -152,7 +156,8 @@ order_hostkeyalgs(char *host, struct sockaddr *hostaddr, u_short port)
+ 	 * Otherwise, prefer the host key algorithms that match known keys
+ 	 * while keeping the ordering of HostkeyAlgorithms as much as possible.
+ 	 */
 -	oavail = avail = xstrdup(options.hostkeyalgorithms);
 +	oavail = avail = xstrdup((FIPS_mode()
 +		? KEX_FIPS_PK_ALG : options.hostkeyalgorithms));
  	maxlen = strlen(avail) + 1;
  	first = xmalloc(maxlen);
  	last = xmalloc(maxlen);
-@@ -179,7 +182,8 @@ ssh_kex2(struct ssh *ssh, char *host, struct sockaddr *hostaddr, u_short port)
+@@ -214,7 +219,8 @@ ssh_kex2(struct ssh *ssh, char *host, struct sockaddr *hostaddr, u_short port)
  	/* Expand or fill in HostkeyAlgorithms */
  	all_key = sshkey_alg_list(0, 0, 1, ',');
  	if (kex_assemble_names(&options.hostkeyalgorithms,
--
2.25.1

--
- Harshal Gohel
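As an aside - and not necessarily how this adaptation was produced - when a recipe patch stops applying because the upstream source moved on, one common way to regenerate it with fresh offsets is the devtool workflow (the layer path below is illustrative):

devtool modify openssh                          # unpack and patch the source into a workspace git tree
# re-apply or fix up the FIPS changes on top of the new source, then commit them
devtool finish openssh ../meta-openssl102-fips  # write the refreshed patch back into the layer

Hand-editing the hunk offsets, as done in this patch, achieves the same end result.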
|
|
[meta-openssl102-fips][dunfell][PATCH 1/2] conf: Make layer compatible with dunfell
From: Harshal Gohel <harshaldhruvkumar.gohel@...>
Create branch "dunfell" from 634d497355f4169237b97a57a2f32486b0972167
---
 conf/layer.conf | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/conf/layer.conf b/conf/layer.conf
index 892cf79..fe6d6db 100644
--- a/conf/layer.conf
+++ b/conf/layer.conf
@@ -10,7 +10,7 @@ BBFILE_PRIORITY_meta-openssl-one-zero-two-fips = "5"
 
 LAYERVERSION_meta-openssl-one-zero-two-fips = "1"
 
-LAYERSERIES_COMPAT_meta-openssl-one-zero-two-fips = "zeus"
+LAYERSERIES_COMPAT_meta-openssl-one-zero-two-fips = "dunfell"
 
 LAYERPATH_meta-openssl-one-zero-two-fips = "${LAYERDIR}"
--
2.25.1

--
- Harshal Gohel
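Minor side note, not part of the patch itself: LAYERSERIES_COMPAT takes a space-separated list of release series, so a layer that needed to keep zeus support while adding dunfell could (hypothetically) declare both instead of replacing the value:

LAYERSERIES_COMPAT_meta-openssl-one-zero-two-fips = "zeus dunfell"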
|
|
Re: runqemu - ERROR - Failed to run qemu: qemu-system-aarch64: Virtio VGA not available.
Alexander Kanavin
This configuration is not tested. If you want accelerated gl, build
for the qemux86-64 target.

Alex
On Wed, 16 Mar 2022 at 12:46, Edgar Mobile <heideggm@...> wrote:
|
|
[meta-selinux][dunfell][PATCH] openssh: don't overwrite sshd_config unconditionally
Ranjitsinh Rathod
From: Nisha Parrakat <Nisha.Parrakat@...>
The current implementation was overwriting sshd_config and sshd, assuming
PAM is needed by default. openssh should use the default sshd_config
packaged with the component if no distro-specific needs are present, and
not overwrite the full sshd_config file.

1. If PAM is enabled as a distro feature, enable the UsePAM option in
   sshd_config.
2. Moved the sshd file to the pam directory so that when pam is enabled,
   the default from poky is replaced by installing this one.

Signed-off-by: Ranjitsinh Rathod <ranjitsinh.rathod@...>
Signed-off-by: Ranjitsinh Rathod <ranjitsinhrathod1991@...>
---
 .../openssh/files/{ => pam}/sshd   |   0
 .../openssh/files/sshd_config      | 118 ------------------
 .../openssh/openssh_%.bbappend     |  14 +++
 3 files changed, 14 insertions(+), 118 deletions(-)
 rename recipes-connectivity/openssh/files/{ => pam}/sshd (100%)
 delete mode 100644 recipes-connectivity/openssh/files/sshd_config

diff --git a/recipes-connectivity/openssh/files/sshd b/recipes-connectivity/openssh/files/pam/sshd
similarity index 100%
rename from recipes-connectivity/openssh/files/sshd
rename to recipes-connectivity/openssh/files/pam/sshd
diff --git a/recipes-connectivity/openssh/files/sshd_config b/recipes-connectivity/openssh/files/sshd_config
deleted file mode 100644
index 1c33ad0..0000000
--- a/recipes-connectivity/openssh/files/sshd_config
+++ /dev/null
@@ -1,118 +0,0 @@
-# $OpenBSD: sshd_config,v 1.102 2018/02/16 02:32:40 djm Exp $
-
-# This is the sshd server system-wide configuration file. See
-# sshd_config(5) for more information.
-
-# This sshd was compiled with PATH=/usr/bin:/bin:/usr/sbin:/sbin
-
-# The strategy used for options in the default sshd_config shipped with
-# OpenSSH is to specify options with their default value where
-# possible, but leave them commented. Uncommented options override the
-# default value.
-
-#Port 22
-#AddressFamily any
-#ListenAddress 0.0.0.0
-#ListenAddress ::
-
-#HostKey /etc/ssh/ssh_host_rsa_key
-#HostKey /etc/ssh/ssh_host_ecdsa_key
-#HostKey /etc/ssh/ssh_host_ed25519_key
-
-# Ciphers and keying
-#RekeyLimit default none
-
-# Logging
-#SyslogFacility AUTH
-#LogLevel INFO
-
-# Authentication:
-
-#LoginGraceTime 2m
-#PermitRootLogin prohibit-password
-#StrictModes yes
-#MaxAuthTries 6
-#MaxSessions 10
-
-#PubkeyAuthentication yes
-
-# The default is to check both .ssh/authorized_keys and .ssh/authorized_keys2
-# but this is overridden so installations will only check .ssh/authorized_keys
-#AuthorizedKeysFile .ssh/authorized_keys
-
-#AuthorizedPrincipalsFile none
-
-#AuthorizedKeysCommand none
-#AuthorizedKeysCommandUser nobody
-
-# For this to work you will also need host keys in /etc/ssh/ssh_known_hosts
-#HostbasedAuthentication no
-# Change to yes if you don't trust ~/.ssh/known_hosts for
-# HostbasedAuthentication
-#IgnoreUserKnownHosts no
-# Don't read the user's ~/.rhosts and ~/.shosts files
-#IgnoreRhosts yes
-
-# To disable tunneled clear text passwords, change to no here!
-#PasswordAuthentication yes
-#PermitEmptyPasswords no
-
-# Change to yes to enable challenge-response passwords (beware issues with
-# some PAM modules and threads)
-ChallengeResponseAuthentication no
-
-# Kerberos options
-#KerberosAuthentication no
-#KerberosOrLocalPasswd yes
-#KerberosTicketCleanup yes
-#KerberosGetAFSToken no
-
-# GSSAPI options
-#GSSAPIAuthentication no
-#GSSAPICleanupCredentials yes
-
-# Set this to 'yes' to enable PAM authentication, account processing,
-# and session processing. If this is enabled, PAM authentication will
-# be allowed through the ChallengeResponseAuthentication and
-# PasswordAuthentication. Depending on your PAM configuration,
-# PAM authentication via ChallengeResponseAuthentication may bypass
-# the setting of "PermitRootLogin without-password".
-# If you just want the PAM account and session checks to run without
-# PAM authentication, then enable this but set PasswordAuthentication
-# and ChallengeResponseAuthentication to 'no'.
-UsePAM yes
-
-#AllowAgentForwarding yes
-#AllowTcpForwarding yes
-#GatewayPorts no
-#X11Forwarding no
-#X11DisplayOffset 10
-#X11UseLocalhost yes
-#PermitTTY yes
-#PrintMotd yes
-#PrintLastLog yes
-#TCPKeepAlive yes
-#UseLogin no
-#PermitUserEnvironment no
-Compression no
-ClientAliveInterval 15
-ClientAliveCountMax 4
-#UseDNS no
-#PidFile /var/run/sshd.pid
-#MaxStartups 10:30:100
-#PermitTunnel no
-#ChrootDirectory none
-#VersionAddendum none
-
-# no default banner path
-#Banner none
-
-# override default of no subsystems
-Subsystem sftp /usr/libexec/sftp-server
-
-# Example of overriding settings on a per-user basis
-#Match User anoncvs
-#   X11Forwarding no
-#   AllowTcpForwarding no
-#   PermitTTY no
-#   ForceCommand cvs server
diff --git a/recipes-connectivity/openssh/openssh_%.bbappend b/recipes-connectivity/openssh/openssh_%.bbappend
index 7719d3b..99c51bf 100644
--- a/recipes-connectivity/openssh/openssh_%.bbappend
+++ b/recipes-connectivity/openssh/openssh_%.bbappend
@@ -1 +1,15 @@
 require ${@bb.utils.contains('DISTRO_FEATURES', 'selinux', '${BPN}_selinux.inc', '', d)}
+
+# if pam feature is enabled in the distro then take sshd from the pam directory.
+FILESEXTRAPATHS_prepend := "${@bb.utils.contains('DISTRO_FEATURES', 'pam', '${THISDIR}/files/pam:', '', d)}"
+
+do_install_append(){
+
+    if [ "${@bb.utils.filter('DISTRO_FEATURES', 'pam', d)}" ]; then
+        # Make sure UsePAM entry is in the sshd_config file.
+        # If entry not present then append it.
+        grep -q 'UsePAM' "${D}/etc/ssh/sshd_config" && \
+        sed -i 's/.*UsePAM.*/UsePAM yes/' "${D}/etc/ssh/sshd_config" || \
+        echo 'UsePAM yes' >> "${D}/etc/ssh/sshd_config"
+    fi
+}
--
2.17.1
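A quick way to sanity-check the result after a build (the path below is illustrative and depends on TMPDIR, MACHINE and the openssh version in use) is to inspect the packaged config in the recipe's image directory:

grep -n 'UsePAM' tmp/work/*/openssh/*/image/etc/ssh/sshd_config

With the pam distro feature enabled this should report a single 'UsePAM yes' line, either rewritten or appended by the do_install_append above; without pam, the stock sshd_config from the openssh recipe is left untouched.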
|
|