[PATCH yocto-autobuilder-helper 6/6] scripts: run-docs-build: factor out yocto-docs tags and branches building
Quentin Schulz
From: Quentin Schulz <quentin.schulz@...>
Except for the patching, which is specific to tags, and the stripping of
the yocto- tag prefix, the logic is identical, so let's merge both loops
together.

Cc: Quentin Schulz <foss+yocto@...>
Signed-off-by: Quentin Schulz <quentin.schulz@...>
---
 scripts/run-docs-build | 36 ++++++++++++------------------------
 1 file changed, 12 insertions(+), 24 deletions(-)

diff --git a/scripts/run-docs-build b/scripts/run-docs-build
index ab5b6db..ceda213 100755
--- a/scripts/run-docs-build
+++ b/scripts/run-docs-build
@@ -71,7 +71,8 @@ cd $ypdocs

 # Again, keeping even the no longer supported releases (see above comment)
 first_sphinx_commit=01dd5af7954e24552aca022917669b27bb0541ed
-for branch in dunfell transition $(git branch --remote --contains "$first_sphinx_commit" --format '%(refname:lstrip=3)'); do
+first_dunfell_sphinx_commit=c25fe058b88b893b0d146f3ed27320b47cdec236
+for branch in dunfell transition $(git branch --remote --contains "$first_sphinx_commit" --format '%(refname:lstrip=3)') $(git tag --contains "$first_sphinx_commit" --contains "$first_dunfell_sphinx_commit" 'yocto-*'); do
     if [ "$branch" = "HEAD" ]; then
         continue
     fi
@@ -82,12 +83,21 @@ for branch in dunfell transition $(git branch --remote --contains "$first_sphinx
         continue
     fi

-    echo Building $branch branch
+    echo Building $branch
     git checkout $branch
+
+    if [ -e "${scriptdir}/docs-build-patches/${branch}/" ]; then
+        echo Adding patch for $branch
+        git am "${scriptdir}/docs-build-patches/${branch}/"000*
+    fi
+
     git checkout master releases.rst
     make clean
     make publish

+    # Strip yocto- from tag names
+    branch=$(echo "$branch" | sed 's/yocto-//')
+
     if [ "$branch" = "master-next" ]; then
         branch="next"
         mkdir $outputdir/$branch
@@ -101,28 +111,6 @@ for branch in dunfell transition $(git branch --remote --contains "$first_sphinx
     git reset --hard
 done

-# Yocto Project releases/tags
-first_dunfell_sphinx_commit=c25fe058b88b893b0d146f3ed27320b47cdec236
-
-cd $ypdocs
-for tag in $(git tag --contains "$first_sphinx_commit" --contains "$first_dunfell_sphinx_commit" 'yocto-*'); do
-    echo Processing $tag
-    cd $ypdocs
-    git checkout $tag
-    if [ -e "${scriptdir}/docs-build-patches/${tag}/" ]; then
-        echo Adding patch for $tag
-        git am "${scriptdir}/docs-build-patches/${tag}/"000*
-    fi
-    git checkout master releases.rst
-    make clean
-    make publish
-    version=$(echo $tag | cut -c7-)
-    mkdir $outputdir/$version
-    cp -r ./_build/final/* $outputdir/$version
-    git reset --hard
-    echo Finished processing $tag
-done
-
 # get current release (e.g. most recent tag), and add a 'current' link
 tag=$(git tag --list 'yocto-*' | sort --version-sort | tail -1 | cut -c7-)
 echo Linking to $tag as current
--
2.35.1
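[For anyone wanting to see what the merged loop iterates over without running a full docs build, here is a standalone sketch. Variable names follow run-docs-build; the checkout path is an assumption, and the OR semantics of the repeated --contains flags are per the commit message above.]

#!/bin/sh
# Sketch: print the refs the merged loop would build (no building, no patching).
ypdocs=/path/to/yocto-docs        # assumed checkout location
first_sphinx_commit=01dd5af7954e24552aca022917669b27bb0541ed
first_dunfell_sphinx_commit=c25fe058b88b893b0d146f3ed27320b47cdec236

cd "$ypdocs" || exit 1
for branch in dunfell transition \
        $(git branch --remote --contains "$first_sphinx_commit" --format '%(refname:lstrip=3)') \
        $(git tag --contains "$first_sphinx_commit" --contains "$first_dunfell_sphinx_commit" 'yocto-*'); do
    # origin/HEAD shows up as a plain "HEAD" entry; skip it like the script does
    [ "$branch" = "HEAD" ] && continue
    echo "would build: $branch"
done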
[PATCH yocto-autobuilder-helper 5/6] scripts: run-docs-build: simplify sphinx-buildable yocto-docs tag list fetching
Quentin Schulz
From: Quentin Schulz <quentin.schulz@...>
The commit that introduced Sphinx support in yocto-docs is
01dd5af7954e24552aca022917669b27bb0541ed. Any tag containing this commit
can be built with Sphinx.

Dunfell tags don't all have Sphinx support. However, all tags containing
c25fe058b88b893b0d146f3ed27320b47cdec236, the commit that introduced it
there, can be built with Sphinx too.

Therefore, let's just list all tags that contain either of those two
commits instead of using the complex series of pipes and shell commands.

Cc: Quentin Schulz <foss+yocto@...>
Signed-off-by: Quentin Schulz <quentin.schulz@...>
---
 scripts/run-docs-build | 36 +++++++++++++++++-------------------
 1 file changed, 17 insertions(+), 19 deletions(-)

diff --git a/scripts/run-docs-build b/scripts/run-docs-build
index 1656975..ab5b6db 100755
--- a/scripts/run-docs-build
+++ b/scripts/run-docs-build
@@ -102,27 +102,25 @@ for branch in dunfell transition $(git branch --remote --contains "$first_sphinx
 done

 # Yocto Project releases/tags
-v_sphinx='yocto-3.1.5' #This and newer versions have Sphinx docs.
+first_dunfell_sphinx_commit=c25fe058b88b893b0d146f3ed27320b47cdec236
+
 cd $ypdocs
-for tag in $(git tag --list 'yocto-*'); do
-    first=$(printf '%s\n%s' $tag $v_sphinx | sort --version-sort | head -n1)
-    if [ "$first" = "$v_sphinx" ]; then
-        echo Processing $tag
-        cd $ypdocs
-        git checkout $tag
-        if [ -e "${scriptdir}/docs-build-patches/${tag}/" ]; then
-            echo Adding patch for $tag
-            git am "${scriptdir}/docs-build-patches/${tag}/"000*
-        fi
-        git checkout master releases.rst
-        make clean
-        make publish
-        version=$(echo $tag | cut -c7-)
-        mkdir $outputdir/$version
-        cp -r ./_build/final/* $outputdir/$version
-        git reset --hard
-        echo Finished processing $tag
+for tag in $(git tag --contains "$first_sphinx_commit" --contains "$first_dunfell_sphinx_commit" 'yocto-*'); do
+    echo Processing $tag
+    cd $ypdocs
+    git checkout $tag
+    if [ -e "${scriptdir}/docs-build-patches/${tag}/" ]; then
+        echo Adding patch for $tag
+        git am "${scriptdir}/docs-build-patches/${tag}/"000*
     fi
+    git checkout master releases.rst
+    make clean
+    make publish
+    version=$(echo $tag | cut -c7-)
+    mkdir $outputdir/$version
+    cp -r ./_build/final/* $outputdir/$version
+    git reset --hard
+    echo Finished processing $tag
 done

 # get current release (e.g. most recent tag), and add a 'current' link
--
2.35.1
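[For comparison, here are the old per-tag version check and the new single invocation side by side, as a sketch; both are taken from the diff, and the output naturally depends on the repository state.]

# Old approach: version-compare every yocto-* tag against yocto-3.1.5.
v_sphinx='yocto-3.1.5'
for tag in $(git tag --list 'yocto-*'); do
    first=$(printf '%s\n%s' "$tag" "$v_sphinx" | sort --version-sort | head -n1)
    [ "$first" = "$v_sphinx" ] && echo "buildable: $tag"
done

# New approach: one git call; commit ancestry decides, not version strings.
git tag --contains 01dd5af7954e24552aca022917669b27bb0541ed \
        --contains c25fe058b88b893b0d146f3ed27320b47cdec236 'yocto-*'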
[PATCH yocto-autobuilder-helper 4/6] scripts: run-docs-build: automatically build new yocto-docs branches
Quentin Schulz
From: Quentin Schulz <quentin.schulz@...>
Since commit 01dd5af7954e24552aca022917669b27bb0541ed, all later
releases of yocto-docs can be built with Sphinx. Instead of manually
updating this list, let's have git return the list of remote branches
that contain the commit.

The dunfell branch was initially released without Sphinx support but was
later patched, hence it is listed explicitly.

Cc: Quentin Schulz <foss+yocto@...>
Signed-off-by: Quentin Schulz <quentin.schulz@...>
---
 scripts/run-docs-build | 13 ++++++++++++-
 1 file changed, 12 insertions(+), 1 deletion(-)

diff --git a/scripts/run-docs-build b/scripts/run-docs-build
index 0055b19..1656975 100755
--- a/scripts/run-docs-build
+++ b/scripts/run-docs-build
@@ -70,7 +70,18 @@ rsync -irlp --checksum --ignore-times --delete bitbake docs@...
 cd $ypdocs

 # Again, keeping even the no longer supported releases (see above comment)
-for branch in dunfell gatesgarth hardknott honister master master-next transition; do
+first_sphinx_commit=01dd5af7954e24552aca022917669b27bb0541ed
+for branch in dunfell transition $(git branch --remote --contains "$first_sphinx_commit" --format '%(refname:lstrip=3)'); do
+    if [ "$branch" = "HEAD" ]; then
+        continue
+    fi
+
+    # Do not build <release>-next branches as they are development branches only
+    # Do build master-next branch though!
+    if echo "$branch" | grep -v "master-next" | grep -q -E "-next$"; then
+        continue
+    fi
+
     echo Building $branch branch
     git checkout $branch
     git checkout master releases.rst
--
2.35.1
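[The two filters in the new loop can be exercised in isolation; a small sketch with illustrative branch names (kirkstone is hypothetical here):]

# %(refname:lstrip=3) drops "refs/remotes/origin/", so e.g.
# refs/remotes/origin/kirkstone is printed as just "kirkstone".
git branch --remote --contains 01dd5af7954e24552aca022917669b27bb0541ed \
    --format '%(refname:lstrip=3)'

# The -next filter: skip <release>-next development branches, keep master-next.
for branch in hardknott-next master-next honister; do
    if echo "$branch" | grep -v "master-next" | grep -q -E "-next$"; then
        echo "skipped: $branch"
    else
        echo "kept: $branch"
    fi
done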
[PATCH yocto-autobuilder-helper 3/6] scripts: run-docs-build: factor out all yocto-docs branches building
Quentin Schulz
From: Quentin Schulz <quentin.schulz@...>
master, master-next and transition only differ from the other branches
by their output directory name. Let's share all the common logic and
only check whether the branch is master, master-next or transition,
adjusting the output directory in those cases.

Cc: Quentin Schulz <foss+yocto@...>
Signed-off-by: Quentin Schulz <quentin.schulz@...>
---
 scripts/run-docs-build | 35 +++++++++++------------------------
 1 file changed, 11 insertions(+), 24 deletions(-)

diff --git a/scripts/run-docs-build b/scripts/run-docs-build
index d8d77c7..0055b19 100755
--- a/scripts/run-docs-build
+++ b/scripts/run-docs-build
@@ -68,37 +68,24 @@ cd $outputdir
 rsync -irlp --checksum --ignore-times --delete bitbake docs@...:docs/

 cd $ypdocs
-echo Building master branch
-git checkout master
-make clean
-make publish
-cp -r ./_build/final/* $outputdir

-cd $ypdocs
-echo Building transition branch
-git checkout transition
-make clean
-make publish
-cp -r ./_build/final/* $outputdir/
-
-cd $ypdocs
-echo Building master-next branch
-git checkout master-next
-make clean
-make publish
-mkdir $outputdir/next
-cp -r ./_build/final/* $outputdir/next
-
-# stable branches
 # Again, keeping even the no longer supported releases (see above comment)
-for branch in dunfell gatesgarth hardknott honister; do
-    cd $ypdocs
+for branch in dunfell gatesgarth hardknott honister master master-next transition; do
     echo Building $branch branch
     git checkout $branch
     git checkout master releases.rst
     make clean
     make publish
-    mkdir $outputdir/$branch
+
+    if [ "$branch" = "master-next" ]; then
+        branch="next"
+        mkdir $outputdir/$branch
+    elif [ "$branch" = "master" ] || [ "$branch" = "transition" ]; then
+        branch=""
+    else
+        mkdir $outputdir/$branch
+    fi
+
     cp -r ./_build/final/* $outputdir/$branch
     git reset --hard
 done
--
2.35.1
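[Restated as a sketch, this is the destination mapping that replaces the three hand-written blocks; an equivalent case form, not code from the patch, with $outputdir as in the script:]

# Where each branch's docs end up after this change:
for branch in dunfell gatesgarth hardknott honister master master-next transition; do
    case "$branch" in
        master|transition) dest="$outputdir/" ;;          # top level
        master-next)       dest="$outputdir/next" ;;      # renamed to "next"
        *)                 dest="$outputdir/$branch" ;;   # per-branch directory
    esac
    echo "$branch -> $dest"
done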
[PATCH yocto-autobuilder-helper 2/6] scripts: run-docs-build: automatically build new Bitbake branches
Quentin Schulz
From: Quentin Schulz <quentin.schulz@...>
Since commit 84ccba0f4aff91528f764523fe1205a354c889ed, the docs of all
later releases can be built with Sphinx. Instead of manually updating
this list, let's have git return the list of remote branches that
contain this commit.

The 1.46 branch was initially released without Sphinx support but was
later patched, hence it is listed explicitly.

Cc: Quentin Schulz <foss+yocto@...>
Signed-off-by: Quentin Schulz <quentin.schulz@...>
---
 scripts/run-docs-build | 7 ++++++-
 1 file changed, 6 insertions(+), 1 deletion(-)

diff --git a/scripts/run-docs-build b/scripts/run-docs-build
index f7b5f97..d8d77c7 100755
--- a/scripts/run-docs-build
+++ b/scripts/run-docs-build
@@ -37,7 +37,12 @@ mkdir $outputdir/bitbake
 # https://lists.yoctoproject.org/g/docs/message/2193
 # We copy the releases.rst file from master so that all versions of the docs
 # see the latest releases.
-for branch in 1.46 1.48 1.50 1.52 master master-next; do
+first_sphinx_commit=84ccba0f4aff91528f764523fe1205a354c889ed
+for branch in 1.46 $(git branch --remote --contains "$first_sphinx_commit" --format '%(refname:lstrip=3)'); do
+    if [ "$branch" = "HEAD" ]; then
+        continue
+    fi
+
     echo Building bitbake $branch branch
     git checkout $branch
     git checkout master releases.rst
--
2.35.1
[PATCH yocto-autobuilder-helper 1/6] scripts: run-docs-build: factor out all bitbake branches building
Quentin Schulz
From: Quentin Schulz <quentin.schulz@...>
master and master-next only differ from the other branches by their
output directory name. Let's share all the common logic and only check
whether the branch is master or master-next, adjusting the output
directory in those cases.

Cc: Quentin Schulz <foss+yocto@...>
Signed-off-by: Quentin Schulz <quentin.schulz@...>
---
 scripts/run-docs-build | 26 +++++++++++---------------
 1 file changed, 11 insertions(+), 15 deletions(-)

diff --git a/scripts/run-docs-build b/scripts/run-docs-build
index b9b331b..f7b5f97 100755
--- a/scripts/run-docs-build
+++ b/scripts/run-docs-build
@@ -30,33 +30,29 @@ echo Extracing old content from archive
 tar -xJf $docbookarchive

 cd $bbdocs
-echo Building bitbake master branch
-git checkout master
-make clean
-make publish
 mkdir $outputdir/bitbake
-cp -r ./_build/final/* $outputdir/bitbake
-git checkout master-next
-echo Building bitbake master-next branch
-make clean
-make publish
-mkdir $outputdir/bitbake/next
-cp -r ./_build/final/* $outputdir/bitbake/next
-
-# stable branches
+
 # A decision was made to keep updating all the Sphinx generated docs for the moment,
 # even the ones corresponding to no longer supported releases
 # https://lists.yoctoproject.org/g/docs/message/2193
 # We copy the releases.rst file from master so that all versions of the docs
 # see the latest releases.
-for branch in 1.46 1.48 1.50 1.52; do
+for branch in 1.46 1.48 1.50 1.52 master master-next; do
     echo Building bitbake $branch branch
     git checkout $branch
     git checkout master releases.rst
     make clean
     make publish
-    mkdir $outputdir/bitbake/$branch
+
+    if [ "$branch" = "master-next" ]; then
+        branch="next"
+        mkdir $outputdir/bitbake/$branch
+    elif [ "$branch" = "master" ]; then
+        branch=""
+    else
+        mkdir $outputdir/bitbake/$branch
+    fi
     cp -r ./_build/final/* $outputdir/bitbake/$branch
     git reset --hard
 done

--
2.35.1
Re: [meta-security][PATCH] python3-privacyidea_3.6.2: remove more py3 that got dropped
Richard Purdie
On Fri, 2022-03-18 at 09:30 -0700, Armin Kuster wrote:
> Signed-off-by: Armin Kuster <akuster808@...>

I'd hold off on that, these just moved to core?

Cheers,

Richard
[meta-security][PATCH] python3-privacyidea_3.6.2: remove more py3 that got dropped
Signed-off-by: Armin Kuster <akuster808@...>
---
 recipes-security/mfa/python3-privacyidea_3.6.2.bb | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/recipes-security/mfa/python3-privacyidea_3.6.2.bb b/recipes-security/mfa/python3-privacyidea_3.6.2.bb
index 40f6d15..3a09b80 100644
--- a/recipes-security/mfa/python3-privacyidea_3.6.2.bb
+++ b/recipes-security/mfa/python3-privacyidea_3.6.2.bb
@@ -24,7 +24,7 @@ FILES:${PN} += " ${prefix}/etc/privacyidea/* ${datadir}/lib/privacyidea/*"
 RDEPENDS:${PN} += " bash perl freeradius-mysql freeradius-utils"
 RDEPENDS:${PN} += "python3 python3-alembic python3-babel python3-bcrypt"
-RDEPENDS:${PN} += "python3-beautifulsoup4 python3-cbor2 python3-certifi python3-cffi python3-chardet"
+RDEPENDS:${PN} += "python3-beautifulsoup4 python3-cbor2 python3-certifi python3-cffi"
 RDEPENDS:${PN} += "python3-click python3-configobj python3-croniter python3-cryptography python3-defusedxml"
 RDEPENDS:${PN} += "python3-ecdsa python3-flask python3-flask-babel python3-flask-migrate"
 RDEPENDS:${PN} += "python3-flask-script python3-flask-sqlalchemy python3-flask-versioned"
@@ -33,6 +33,6 @@ RDEPENDS:${PN} += "python3-itsdangerous python3-jinja2 python3-ldap python3-lxml
 RDEPENDS:${PN} += "python3-markupsafe python3-netaddr python3-oauth2client python3-passlib python3-pillow"
 RDEPENDS:${PN} += "python3-pyasn1 python3-pyasn1-modules python3-pycparser python3-pyjwt python3-pymysql"
 RDEPENDS:${PN} += "python3-pyopenssl python3-pyrad python3-dateutil python3-editor python3-gnupg"
-RDEPENDS:${PN} += "python3-pytz python3-pyyaml python3-qrcode python3-redis python3-requests python3-rsa"
-RDEPENDS:${PN} += "python3-six python3-smpplib python3-soupsieve python3-soupsieve "
+RDEPENDS:${PN} += "python3-pytz python3-pyyaml python3-qrcode python3-redis python3-rsa"
+RDEPENDS:${PN} += "python3-six python3-soupsieve python3-soupsieve "
 RDEPENDS:${PN} += "python3-sqlalchemy python3-sqlsoup python3-urllib3 python3-werkzeug"
--
2.25.1
Re: QA notification for completed autobuilder build (yocto-3.1.15.rc1)
Teoh, Jay Shen
Hi All,
This is the full report for yocto-3.1.15.rc1:
https://git.yoctoproject.org/cgit/cgit.cgi/yocto-testresults-contrib/tree/?h=intel-yocto-testresults

======= Summary ========
No high milestone defects.
No new issue found.

Thanks,
Jay
Multiconfig dependency
Hello guys!
Could you please help me with a multiconfig setup? I have a "default" configuration with systemd, and an "initramfs" configuration with BusyBox and other settings.

I use the following targets/recipes with the initramfs configuration:

1. core-image-rootfs - packs the core-image-minimal ext4 image into a Debian package;
2. initramfs-flasher-image - image that contains core-image-rootfs.

Default configuration:

1. core-image-minimal - main rootfs;
2. flasher - packs the initramfs-flasher-image squashfs image into a Debian package;
3. app-flasher - special application that embeds the squashfs file from the flasher package.

Everything works fine if I do a clean build. If I change something for core-image-minimal (like IMAGE_INSTALL) and run:

bitbake app-flasher

it rebuilds core-image-minimal only; core-image-rootfs, initramfs-flasher-image, flasher and app-flasher are not updated.

Here is the multiconfig dependency in core-image-rootfs.bb:

do_install[mcdepends] = "mc:initramfs::core-image-minimal:do_image_complete"
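[For reference, a minimal multiconfig skeleton looks roughly like the sketch below. The "initramfs" configuration name comes from the question; the file contents and target names are illustrative assumptions, not a verified fix for the rebuild problem.]

# Sketch: enable an "initramfs" multiconfig alongside the default one.
# Multiconfig files live in build/conf/multiconfig/<name>.conf.
mkdir -p build/conf/multiconfig

# In local.conf, declare the extra configuration:
cat >> build/conf/local.conf <<'EOF'
BBMULTICONFIG = "initramfs"
EOF

# Settings specific to the initramfs configuration (contents illustrative):
cat > build/conf/multiconfig/initramfs.conf <<'EOF'
TMPDIR = "${TOPDIR}/tmp-initramfs"
EOF

# Targets in that configuration are then built with the mc: prefix:
bitbake mc:initramfs:initramfs-flasher-image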
do_image_wic error with custom machine
Goubaux, Nicolas
Hello,
I would like to implement two custom machines based on genericx86_64, but I have an issue with do_image_wic:

Couldn't get bitbake variable from /home/vagrant/poky/build/tmp/work/machine1-poky-linux/project1-image-sms/1.0-r0/rootfs/var/dolby.env.
File /home/vagrant/poky/build/tmp/work/machine1-poky-linux/project1-image-sms/1.0-r0/rootfs/var/dolby.env doesn't exist.

Can you help me fix it?

---------- meta-product1/machine/machine1.conf ----------

#@TYPE: Machine
#@NAME: Product1 Board
#@DESCRIPTION: Machine configuration for the Product1

DEFAULTTUNE ?= "core2-64"
TUNE_PKGARCH_tune-core2-64 = "core2-64"
#PACKAGE_ARCH = "x86_64"
#PACKAGE_ARCHS_append = " genericx86_64"

require conf/machine/include/tune-core2.inc
require conf/machine/include/genericx86-common.inc

KMACHINE_machine1 = "x86_64"
PREFERRED_PROVIDER_virtual/kernel = "linux-yocto"
PREFERRED_VERSION_linux-yocto = "5.10%"

SERIAL_CONSOLES_CHECK = "ttyS0"

#For runqemu
QB_SYSTEM_NAME = "qemu-system-x86_64"

WKS_FILE ?= "${MACHINE}.wks"

IMAGE_INSTALL_append = " packagegroup-sms-apps packagegroup-sms-tools packagegroup-sms-lib packagegroup-sms-dev"

hostname:pn-base-files = "product1-sms"

---------- Log ----------

+ BUILDDIR=/home/vagrant/poky/build PSEUDO_UNLOAD=1 wic create /home/vagrant/poky/build/../../layers/meta-project1-sms/scripts/lib/wic/canned-wks/project1-sms-x86_64.wks --vars /home/vagrant/poky/build/tmp/sysroots/project1-sms-x86_64/imgdata/ -e project1-image-sms -o /home/vagrant/poky/build/tmp/work/machine1-poky-linux/project1-image-sms/1.0-r0/build-wic/ -w /home/vagrant/poky/build/tmp/work/machine1-poky-linux/project1-image-sms/1.0-r0/tmp-wic
INFO: Creating image(s)...

Couldn't get bitbake variable from /home/vagrant/poky/build/tmp/work/machine1-poky-linux/project1-image-sms/1.0-r0/rootfs/var/dolby.env.
File /home/vagrant/poky/build/tmp/work/machine1-poky-linux/project1-image-sms/1.0-r0/rootfs/var/dolby.env doesn't exist.
Traceback (most recent call last):
  File "/home/vagrant/poky/scripts/wic", line 542, in <module>
    sys.exit(main(sys.argv[1:]))
  File "/home/vagrant/poky/scripts/wic", line 537, in main
    return hlp.invoke_subcommand(args, parser, hlp.wic_help_usage, subcommands)
  File "/home/vagrant/poky/scripts/lib/wic/help.py", line 83, in invoke_subcommand
    subcmd[0](args, usage)
  File "/home/vagrant/poky/scripts/wic", line 219, in wic_create_subcommand
    engine.wic_create(wks_file, rootfs_dir, bootimg_dir, kernel_dir,
  File "/home/vagrant/poky/scripts/lib/wic/engine.py", line 190, in wic_create
    plugin.do_create()
  File "/home/vagrant/poky/scripts/lib/wic/plugins/imager/direct.py", line 97, in do_create
    self.create()
  File "/home/vagrant/poky/scripts/lib/wic/plugins/imager/direct.py", line 181, in create
    self._image.prepare(self)
  File "/home/vagrant/poky/scripts/lib/wic/plugins/imager/direct.py", line 356, in prepare
    part.prepare(imager, imager.workdir, imager.oe_builddir,
  File "/home/vagrant/poky/scripts/lib/wic/partition.py", line 182, in prepare
    plugin.do_prepare_partition(self, srcparams_dict, creator,
  File "/home/vagrant/poky/scripts/lib/wic/plugins/source/rootfs.py", line 96, in do_prepare_partition
    part.rootfs_dir = cls.__get_rootfs_dir(rootfs_dir)
  File "/home/vagrant/poky/scripts/lib/wic/plugins/source/rootfs.py", line 57, in __get_rootfs_dir
    if not os.path.isdir(image_rootfs_dir):
  File "/home/vagrant/poky/build/tmp/work/machine1-poky-linux/project1-image-sms/1.0-r0/recipe-sysroot-native/usr/lib/python3.9/genericpath.py", line 42, in isdir
    st = os.stat(s)
TypeError: stat: path should be string, bytes, os.PathLike or integer, not NoneType
+ bb_sh_exit_handler
+ ret=1
+ [ 1 != 0 ]
+ echo WARNING: exit code 1 from a shell command.
WARNING: exit code 1 from a shell command.
+ exit 1
ERROR: project1-image-sms-1.0-r0 do_image_wic: ExecutionError('/home/vagrant/poky/build/tmp/work/machine1-poky-linux/project1-image-sms/1.0-r0/temp/run.do_image_wic.4026945', 1, None, None)
ERROR: Logfile of failure stored in: /home/vagrant/poky/build/tmp/work/machine1-poky-linux/project1-image-sms/1.0-r0/temp/log.do_image_wic.4026945
ERROR: Task (/home/vagrant/poky/build/../../layers/meta-project1-sms/recipes-core/images/project1-image-sms.bb:do_image_wic) failed with exit code '1'

— Nicolas G.
Re: Minutes: Yocto Project Weekly Triage Meeting 3/17/2022
On 2022-03-17 11:28, Trevor Gamblin wrote:
Done.

../Randy

--
# Randy MacLeod
# Wind River Linux
Re: Minutes: Yocto Project Weekly Triage Meeting 3/17/2022
Stephen Jolley
AR completed.
Stephen
From: Trevor Gamblin <trevor.gamblin@...>
Sent: Thursday, March 17, 2022 8:28 AM
To: Yocto-mailing-list <yocto@...>
Cc: sjolley.yp.pm@...; Richard Purdie <richard.purdie@...>; alexandre.belloni@...; luca.ceresoli@...; MacLeod, Randy <Randy.MacLeod@...>; Wold, Saul <saul.wold@...>; tim.orling@...; daiane.angolini@...; Ross Burton <ross@...>; Steve Sakoman <steve@...>
Subject: Minutes: Yocto Project Weekly Triage Meeting 3/17/2022
Wiki: https://wiki.yoctoproject.org/wiki/Bug_Triage

Attendees: Alexandre, Daiane, Luca, Randy, Richard, Ross, Saul, Stephen, Steve, Tim, Trevor

ARs:
- Randy to move remaining Medium+ M3s to M1 (and move to newcomer bugs category, where appropriate)
- Stephen to create an issue for Michael to run the milestone naming script (3.6 to 4.1 and 3.99 to 4.99)
- Everyone to review assigned Old Milestone M3 bugs and move to later milestones

Notes:
- ~43% of AB workers have been switched to SSDs. Failure rate appears lower, but still TBD. More coming soon!

Medium+ 3.5 Unassigned Enhancements/Bugs: 68 (Last week 73)
Medium+ 3.6 Unassigned Enhancements/Bugs: 10 (Last week 2)
Medium+ 3.99 Unassigned Enhancements/Bugs: 38 (Last week 38)

AB Bugs: 72 (Last week 71)
Minutes: Yocto Project Weekly Triage Meeting 3/17/2022
Trevor Gamblin
Wiki: https://wiki.yoctoproject.org/wiki/Bug_Triage

Attendees: Alexandre, Daiane, Luca, Randy, Richard, Ross, Saul, Stephen, Steve, Tim, Trevor

ARs:
- Randy to move remaining Medium+ M3s to M1 (and move to newcomer bugs category, where appropriate)
- Stephen to create an issue for Michael to run the milestone naming script (3.6 to 4.1 and 3.99 to 4.99)
- Everyone to review assigned Old Milestone M3 bugs and move to later milestones

Notes:
- ~43% of AB workers have been switched to SSDs. Failure rate appears lower, but still TBD. More coming soon!

Medium+ 3.5 Unassigned Enhancements/Bugs: 68 (Last week 73)
Medium+ 3.6 Unassigned Enhancements/Bugs: 10 (Last week 2)
Medium+ 3.99 Unassigned Enhancements/Bugs: 38 (Last week 38)

AB Bugs: 72 (Last week 71)
Re: runqemu - ERROR - Failed to run qemu: qemu-system-aarch64: Virtio VGA not available.
Edgar Mobile
Ah, that's interesting information.
The dmesg log gives the impression that virgl starts correctly and in the normal shell the examples work flawlessly. The problems start once I tell weston to use the ivi-shell instead of the desktop shell.
From: Alexander Kanavin <alex.kanavin@...>
Sent: Thursday, March 17, 2022 1:51 PM
To: Edgar Mobile <heideggm@...>
Cc: yocto@... <yocto@...>
Subject: Re: [yocto] runqemu - ERROR - Failed to run qemu: qemu-system-aarch64: Virtio VGA not available.

There is no hardware acceleration with bochs at all, if you want it,
you need to make virtio/virgl driver work.

Alex

On Thu, 17 Mar 2022 at 14:02, Edgar Mobile <heideggm@...> wrote:
>
> Do you know if bochs driver is available and active for yocto 3.4 or 3.5?
>
> ________________________________
> From: Alexander Kanavin <alex.kanavin@...>
> Sent: Thursday, March 17, 2022 11:26 AM
> To: Edgar Mobile <heideggm@...>
> Cc: yocto@... <yocto@...>
> Subject: Re: [yocto] runqemu - ERROR - Failed to run qemu: qemu-system-aarch64: Virtio VGA not available.
>
> As I told you, we do not support or test this combination. Which means
> that figuring out what the error messages mean and how to fix them is
> on you - patches welcome.
>
> Alex
>
> On Thu, 17 Mar 2022 at 11:41, Edgar Mobile <heideggm@...> wrote:
> >
> > I tried that first and it was horribly slow. That's why I try hardware acceleration now.
> >
> > Do you _know_ it doesn't work? If yes, why?
> >
> > ________________________________
> > From: Alexander Kanavin <alex.kanavin@...>
> > Sent: Thursday, March 17, 2022 10:33 AM
> > To: Edgar Mobile <heideggm@...>
> > Cc: yocto@... <yocto@...>
> > Subject: Re: [yocto] runqemu - ERROR - Failed to run qemu: qemu-system-aarch64: Virtio VGA not available.
> >
> > If you want an aarch guest on x86, then drop the gl option from
> > runqemu. This will fall back to software rendering.
> >
> > Alex
> >
> > On Thu, 17 Mar 2022 at 10:33, Edgar Mobile <heideggm@...> wrote:
> > >
> > > Sorry, but I need an Aarch64 guest.
> > >
> > > Ok, using a newer qemu I now encounter the following problem:
> > >
> > > root@qemuarm64:/usr/bin# XDG_RUNTIME_DIR=/run/user/0 ./eglinfo
> > > EGL client extensions string:
> > > EGL_EXT_client_extensions EGL_EXT_device_base
> > > EGL_EXT_device_enumeration EGL_EXT_device_query EGL_EXT_platform_base
> > > EGL_KHR_client_get_all_proc_addresses EGL_KHR_debug
> > > EGL_EXT_platform_device EGL_EXT_platform_wayland
> > > EGL_KHR_platform_wayland EGL_EXT_platform_x11 EGL_KHR_platform_x11
> > > EGL_MESA_platform_xcb EGL_MESA_platform_gbm EGL_KHR_platform_gbm
> > > EGL_MESA_platform_surfaceless
> > >
> > > GBM platform:
> > > pci id for fd 3: 1234:1111, driver (null)
> > > MESA-LOADER: failed to open bochs-drm: /usr/lib/dri/bochs-drm_dri.so: cannot open shared object file: No such file or directory (search paths /usr/lib/dri)
> > > failed to load driver: bochs-drm
> > > ...
> > >
> > > What is this bochs-drm_dri.so and does Yocto / the Mesa in Yocto provide it?
> > >
> > > ________________________________
> > > From: Alexander Kanavin <alex.kanavin@...>
> > > Sent: Wednesday, March 16, 2022 2:51 PM
> > > To: Edgar Mobile <heideggm@...>
> > > Cc: yocto@... <yocto@...>
> > > Subject: Re: [yocto] runqemu - ERROR - Failed to run qemu: qemu-system-aarch64: Virtio VGA not available.
> > >
> > > This configuration is not tested. If you want accelerated gl, build
> > > for the qemux86-64 target.
> > >
> > > Alex
> > >
> > > On Wed, 16 Mar 2022 at 12:46, Edgar Mobile <heideggm@...> wrote:
> > > >
> > > > Greetings,
> > > >
> > > > I tried to run an Aarch64 Yocto with qemu on amd 64 Host. For that purpose, I built core-image-weston from Hardknott following the manual
> > > >
> > > > https://www.mail-archive.com/yocto@.../msg07306.html
> > > >
> > > > I then try to run
> > > >
> > > > runqemu sdl gl
> > > >
> > > > But it always aborts with
> > > >
> > > > runqemu - ERROR - Failed to run qemu: qemu-system-aarch64: Virtio VGA not available.
> > > >
> > > > What can I do?
> > > >
> > > > Regards
Re: runqemu - ERROR - Failed to run qemu: qemu-system-aarch64: Virtio VGA not available.
Alexander Kanavin
There is no hardware acceleration with bochs at all, if you want it,
you need to make virtio/virgl driver work.

Alex

On Thu, 17 Mar 2022 at 14:02, Edgar Mobile <heideggm@...> wrote:
Re: runqemu - ERROR - Failed to run qemu: qemu-system-aarch64: Virtio VGA not available.
Edgar Mobile
Do you know if bochs driver is available and active for yocto 3.4 or 3.5?
From: Alexander Kanavin <alex.kanavin@...>
Sent: Thursday, March 17, 2022 11:26 AM
To: Edgar Mobile <heideggm@...>
Cc: yocto@... <yocto@...>
Subject: Re: [yocto] runqemu - ERROR - Failed to run qemu: qemu-system-aarch64: Virtio VGA not available.

As I told you, we do not support or test this combination. Which means
that figuring out what the error messages mean and how to fix them is
on you - patches welcome.

Alex

On Thu, 17 Mar 2022 at 11:41, Edgar Mobile <heideggm@...> wrote:
>
> I tried that first and it was horribly slow. That's why I try hardware acceleration now.
>
> Do you _know_ it doesn't work? If yes, why?
>
> ________________________________
> From: Alexander Kanavin <alex.kanavin@...>
> Sent: Thursday, March 17, 2022 10:33 AM
> To: Edgar Mobile <heideggm@...>
> Cc: yocto@... <yocto@...>
> Subject: Re: [yocto] runqemu - ERROR - Failed to run qemu: qemu-system-aarch64: Virtio VGA not available.
>
> If you want an aarch guest on x86, then drop the gl option from
> runqemu. This will fall back to software rendering.
>
> Alex
>
> On Thu, 17 Mar 2022 at 10:33, Edgar Mobile <heideggm@...> wrote:
> >
> > Sorry, but I need an Aarch64 guest.
> >
> > Ok, using a newer qemu I now encounter the following problem:
> >
> > root@qemuarm64:/usr/bin# XDG_RUNTIME_DIR=/run/user/0 ./eglinfo
> > EGL client extensions string:
> > EGL_EXT_client_extensions EGL_EXT_device_base
> > EGL_EXT_device_enumeration EGL_EXT_device_query EGL_EXT_platform_base
> > EGL_KHR_client_get_all_proc_addresses EGL_KHR_debug
> > EGL_EXT_platform_device EGL_EXT_platform_wayland
> > EGL_KHR_platform_wayland EGL_EXT_platform_x11 EGL_KHR_platform_x11
> > EGL_MESA_platform_xcb EGL_MESA_platform_gbm EGL_KHR_platform_gbm
> > EGL_MESA_platform_surfaceless
> >
> > GBM platform:
> > pci id for fd 3: 1234:1111, driver (null)
> > MESA-LOADER: failed to open bochs-drm: /usr/lib/dri/bochs-drm_dri.so: cannot open shared object file: No such file or directory (search paths /usr/lib/dri)
> > failed to load driver: bochs-drm
> > ...
> >
> > What is this bochs-drm_dri.so and does Yocto / the Mesa in Yocto provide it?
> >
> > ________________________________
> > From: Alexander Kanavin <alex.kanavin@...>
> > Sent: Wednesday, March 16, 2022 2:51 PM
> > To: Edgar Mobile <heideggm@...>
> > Cc: yocto@... <yocto@...>
> > Subject: Re: [yocto] runqemu - ERROR - Failed to run qemu: qemu-system-aarch64: Virtio VGA not available.
> >
> > This configuration is not tested. If you want accelerated gl, build
> > for the qemux86-64 target.
> >
> > Alex
> >
> > On Wed, 16 Mar 2022 at 12:46, Edgar Mobile <heideggm@...> wrote:
> > >
> > > Greetings,
> > >
> > > I tried to run an Aarch64 Yocto with qemu on amd 64 Host. For that purpose, I built core-image-weston from Hardknott following the manual
> > >
> > > https://www.mail-archive.com/yocto@.../msg07306.html
> > >
> > > I then try to run
> > >
> > > runqemu sdl gl
> > >
> > > But it always aborts with
> > >
> > > runqemu - ERROR - Failed to run qemu: qemu-system-aarch64: Virtio VGA not available.
> > >
> > > What can I do?
> > >
> > > Regards
Re: runqemu - ERROR - Failed to run qemu: qemu-system-aarch64: Virtio VGA not available.
Alexander Kanavin
As I told you, we do not support or test this combination. Which means
that figuring out what the error messages mean and how to fix them is
on you - patches welcome.

Alex

On Thu, 17 Mar 2022 at 11:41, Edgar Mobile <heideggm@...> wrote:
Re: runqemu - ERROR - Failed to run qemu: qemu-system-aarch64: Virtio VGA not available.
Edgar Mobile
I tried that first and it was horribly slow. That's why I try hardware acceleration now.
Do you _know_ it doesn't work? If yes, why?
From: Alexander Kanavin <alex.kanavin@...>
Sent: Thursday, March 17, 2022 10:33 AM
To: Edgar Mobile <heideggm@...>
Cc: yocto@... <yocto@...>
Subject: Re: [yocto] runqemu - ERROR - Failed to run qemu: qemu-system-aarch64: Virtio VGA not available.

If you want an aarch guest on x86, then drop the gl option from
runqemu. This will fall back to software rendering.

Alex

On Thu, 17 Mar 2022 at 10:33, Edgar Mobile <heideggm@...> wrote:
>
> Sorry, but I need an Aarch64 guest.
>
> Ok, using a newer qemu I now encounter the following problem:
>
> root@qemuarm64:/usr/bin# XDG_RUNTIME_DIR=/run/user/0 ./eglinfo
> EGL client extensions string:
> EGL_EXT_client_extensions EGL_EXT_device_base
> EGL_EXT_device_enumeration EGL_EXT_device_query EGL_EXT_platform_base
> EGL_KHR_client_get_all_proc_addresses EGL_KHR_debug
> EGL_EXT_platform_device EGL_EXT_platform_wayland
> EGL_KHR_platform_wayland EGL_EXT_platform_x11 EGL_KHR_platform_x11
> EGL_MESA_platform_xcb EGL_MESA_platform_gbm EGL_KHR_platform_gbm
> EGL_MESA_platform_surfaceless
>
> GBM platform:
> pci id for fd 3: 1234:1111, driver (null)
> MESA-LOADER: failed to open bochs-drm: /usr/lib/dri/bochs-drm_dri.so: cannot open shared object file: No such file or directory (search paths /usr/lib/dri)
> failed to load driver: bochs-drm
> ...
>
> What is this bochs-drm_dri.so and does Yocto / the Mesa in Yocto provide it?
>
> ________________________________
> From: Alexander Kanavin <alex.kanavin@...>
> Sent: Wednesday, March 16, 2022 2:51 PM
> To: Edgar Mobile <heideggm@...>
> Cc: yocto@... <yocto@...>
> Subject: Re: [yocto] runqemu - ERROR - Failed to run qemu: qemu-system-aarch64: Virtio VGA not available.
>
> This configuration is not tested. If you want accelerated gl, build
> for the qemux86-64 target.
>
> Alex
>
> On Wed, 16 Mar 2022 at 12:46, Edgar Mobile <heideggm@...> wrote:
> >
> > Greetings,
> >
> > I tried to run an Aarch64 Yocto with qemu on amd 64 Host. For that purpose, I built core-image-weston from Hardknott following the manual
> >
> > https://www.mail-archive.com/yocto@.../msg07306.html
> >
> > I then try to run
> >
> > runqemu sdl gl
> >
> > But it always aborts with
> >
> > runqemu - ERROR - Failed to run qemu: qemu-system-aarch64: Virtio VGA not available.
> >
> > What can I do?
> >
> > Regards
Re: runqemu - ERROR - Failed to run qemu: qemu-system-aarch64: Virtio VGA not available.
Alexander Kanavin
If you want an aarch guest on x86, then drop the gl option from
runqemu. This will fall back to software rendering.

Alex

On Thu, 17 Mar 2022 at 10:33, Edgar Mobile <heideggm@...> wrote: