We can add the --use-uuid line to the /boot entry if you really think it should be mounted on boot, but we shouldn't use it on the others and cause wic to generate a bad fstab. There are examples of other boards that don't mount /boot by default (raspi for sure, and I think bbb too).
Could the solution be as simple as this?

From b8ba56d84fbac53901e5b7ca122498320e51fbf4 Mon Sep 17 00:00:00 2001
From: MarkusVolk <f_l_k@...>
Date: Sat, 25 Sep 2021 09:21:15 +0200
Subject: [PATCH] wic:direct.py: improve filter for fstab update

Signed-off-by: MarkusVolk <f_l_k@...>
---
 scripts/lib/wic/plugins/imager/direct.py | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/scripts/lib/wic/plugins/imager/direct.py b/scripts/lib/wic/plugins/imager/direct.py
index 9d10ec01d0..15fa47356f 100644
--- a/scripts/lib/wic/plugins/imager/direct.py
+++ b/scripts/lib/wic/plugins/imager/direct.py
@@ -117,7 +117,7 @@ class DirectPlugin(ImagerPlugin):
         updated = False
         for part in self.parts:
             if not part.realnum or not part.mountpoint \
-                    or part.mountpoint == "/":
+                    or part.mountpoint == "/" or not part.mountpoint.startswith('/'):
                 continue

             if part.use_uuid:
--
2.25.1

With this patch wic only adds the /boot mountpoint; the invalid entries get filtered out. We would then only need to set --use-uuid for /boot to keep the system from crashing if 'no-fstab-update' isn't explicitly given as an argument.
Thanks for the patch and the SoB line. I'm going to apply this patch, but I'm going to amend the commit message to capture some of the conversation we've had. There's a chance we'll want to know "why" at some point in the future ;-)
Thanks for applying :) It would be cool if wic had something like an 'exclude-from-fstab-update' option. That would make the 'fstab-update' much more useful.
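For reference, a minimal .wks sketch of what that could look like, with --use-uuid only on /boot (the disk name, labels and alignment here are assumptions for illustration, not taken from the thread):

part /boot --source bootimg-partition --ondisk mmcblk0 --fstype=vfat --label boot --active --align 4096 --use-uuid
part / --source rootfs --ondisk mmcblk0 --fstype=ext4 --label root --align 4096

With that, and without --no-fstab-update, wic should emit a single PARTUUID-based /boot entry into the generated fstab and leave the other partitions alone.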
Hi, there are different possibilities available to you with all the great tooling you get with the Yocto Project; I think you have made a good choice.
We are starting a new project using Yocto to build a custom Linux image which matches our needs.
We are new to Yocto and still trying to figure out the best way to work with it.
In particular, whether it is best to cross-compile or to build on a VM running an image of the target.
Some background:
Our target system is x86_64, and we are all working on x86_64 computers obviously.
For now, we don't yet have a physical target system, so we are running the image generated by Yocto in VirtualBox or VMware.
For practical reasons, since not all developers use the same OS (Windows, macOS), we decided to do all development work on a Linux VM (Debian) so everyone has the same system.
For now, we are cross-compiling applications using the Yocto SDK in that Debian VM and copying them to the Yocto VM to run them.
Even though copying/deploying and running them could somehow be automated, since we are developing on a Linux VM anyway, I thought it could be best to build a Yocto image (maybe as an additional "dev" image based on the existing one) which contains all the tools we need (gcc, cmake, etc.).
This way, we could execute the binaries (in particular the unit tests) locally.
In particular, for the unit tests we run at build time, it sounds easier to run them locally than to deploy and run them remotely.
Any thoughts about this?
Inherently the Yocto Project is a cross-compiling infrastructure, so a lot of the commonly used workflows will be built around cross-compiling. However, you can also leverage it in the way you described, where you build a development VM using the Yocto Project itself which includes all the tools your developers would need, and use that as the build env + dev/test env; see core-image-sato-sdk. However, this will be a more or less static env, which means devs won't be able to install packages like they might be doing with a Debian VM; you will have to either rebuild the VM or publish your own package feeds. But if you expect this to be a static env, then this might turn out to be OK. The advantage is that you will use the same tools that your final target will use, you get the ease of native development, and folks not familiar with Yocto can be effective as well. However, this is not a common workflow among Yocto Project users, so community support might be scarce.
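As a rough sketch of that first option, assuming a genericx86-64 machine and a VMDK disk image (both assumptions; pick whatever image type your hypervisor prefers), local.conf could carry:

MACHINE = "genericx86-64"
IMAGE_FSTYPES_append = " wic.vmdk"

Then "bitbake core-image-sato-sdk" produces a .wic.vmdk under tmp/deploy/images/genericx86-64/ that can be imported into VirtualBox or VMware.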
The other option could be that you do cross builds on your Debian VM, use qemux86-64 as the target, and run your tests using the ptest framework, so you will be running your target VM in qemu on top of your Debian build host, which itself runs on Windows/macOS or bare-metal Linux. There might be some quirks to using qemu inside a VM, but I think it should work out well. This also means that in the future, when you target real hardware (I assume that's what you want eventually), not much changes: you add another MACHINE and the workflow remains pretty much the same. But this would require your devs to learn a bit of yocto-fu and cross-compilation workflows.
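To make that second option concrete, a minimal ptest setup in local.conf might look roughly like this (standard ptest configuration, not something specific from this thread):

MACHINE = "qemux86-64"
DISTRO_FEATURES_append = " ptest"
EXTRA_IMAGE_FEATURES += "ptest-pkgs"

Then "bitbake core-image-minimal" followed by "runqemu qemux86-64" boots the image, and running "ptest-runner" on the target executes all installed test suites.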
In advance, thanks a lot for your help.
Best regards,
Arnaud
5.13 has been removed from core, and we've moved the default
support to 5.14, so we can drop our bbappend.
Signed-off-by: Bruce Ashfield <bruce.ashfield@...>
---
.../linux/linux-yocto_5.13.bbappend | 23 -------------------
1 file changed, 23 deletions(-)
delete mode 100644 meta-yocto-bsp/recipes-kernel/linux/linux-yocto_5.13.bbappend
diff --git a/meta-yocto-bsp/recipes-kernel/linux/linux-yocto_5.13.bbappend b/meta-yocto-bsp/recipes-kernel/linux/linux-yocto_5.13.bbappend
deleted file mode 100644
index daf5fd2cd6..0000000000
--- a/meta-yocto-bsp/recipes-kernel/linux/linux-yocto_5.13.bbappend
+++ /dev/null
@@ -1,23 +0,0 @@
-KBRANCH:genericx86 = "v5.13/standard/base"
-KBRANCH:genericx86-64 = "v5.13/standard/base"
-KBRANCH:edgerouter = "v5.13/standard/edgerouter"
-KBRANCH:beaglebone-yocto = "v5.13/standard/beaglebone"
-
-KMACHINE:genericx86 ?= "common-pc"
-KMACHINE:genericx86-64 ?= "common-pc-64"
-KMACHINE:beaglebone-yocto ?= "beaglebone"
-
-SRCREV_machine:genericx86 ?= "7280c93f5599946db3add473eeb05b34c364938d"
-SRCREV_machine:genericx86-64 ?= "7280c93f5599946db3add473eeb05b34c364938d"
-SRCREV_machine:edgerouter ?= "a832a0390e96c4f014d7b2bf9f161ac9477140f7"
-SRCREV_machine:beaglebone-yocto ?= "dbdc921374c057a75b2df92302124994e241ca51"
-
-COMPATIBLE_MACHINE:genericx86 = "genericx86"
-COMPATIBLE_MACHINE:genericx86-64 = "genericx86-64"
-COMPATIBLE_MACHINE:edgerouter = "edgerouter"
-COMPATIBLE_MACHINE:beaglebone-yocto = "beaglebone-yocto"
-
-LINUX_VERSION:genericx86 = "5.13.15"
-LINUX_VERSION:genericx86-64 = "5.13.15"
-LINUX_VERSION:edgerouter = "5.13.15"
-LINUX_VERSION:beaglebone-yocto = "5.13.15"
--
2.19.1
Please see my comments inline
On 24/09/2021 14:10, Monsees, Steven C (US) via lists.yoctoproject.org wrote:
The one solution I found says: Add LICENSE_PATH += "${LAYERDIR}/custom-licenses" under conf/layer.conf; this does not resolve this warning.

I am a bit confused, but I can try to show you what I typically do.
This is a new item being added to our Yocto build.
The Data Direct vendor does not submit their code to Yocto because they sell their code.
We are adding code to Yocto that has a private license and we are attempting to have Yocto accept the license; is this the proper way to handle this?
In my custom meta-my-layer I add to layer.conf:
#-->
LICENSE_PATH += " ${LAYERDIR}/custom-licenses"
CUSTOM_COMMON_LICENSE_DIR := '${@os.path.normpath("${LAYERDIR}/custom-licenses")}'
BB_HASHBASE_WHITELIST_append = " CUSTOM_COMMON_LICENSE_DIR"
#<--
Underneath the custom-licenses dir in this meta-my-layer I put the custom "hello-license" file.
Can you tell me the proper way to add a custom license to a recipe in yocto?

Once you have done something like the above, you can add the license to the recipe you use to build the component from your supplier.
example_0.1.bb:
LICENSE = "hello-license"
LIC_FILES_CHKSUM = "file://${CUSTOM_COMMON_LICENSE_DIR}/hello-license;beginline=5;endline=12;md5=36e6988a930e054886e6af19372edb07"
If you want to get fancy, since it does not seem to be an open source license, you can mark it also as:
LICENSE_FLAGS = "commercial" in the recipe
but then you need to whitelist it, e.g. in your local.conf, to be able to bitbake it:
# whitelist example recipe, which is under a commercial license
LICENSE_FLAGS_WHITELIST = "commercial_example"
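Putting the pieces together, a minimal sketch (meta-my-layer, custom-licenses, hello-license and example are the placeholder names used above; the checksum must match your actual license text):

conf/layer.conf:
LICENSE_PATH += " ${LAYERDIR}/custom-licenses"
CUSTOM_COMMON_LICENSE_DIR := '${@os.path.normpath("${LAYERDIR}/custom-licenses")}'
BB_HASHBASE_WHITELIST_append = " CUSTOM_COMMON_LICENSE_DIR"

example_0.1.bb:
LICENSE = "hello-license"
LICENSE_FLAGS = "commercial"
LIC_FILES_CHKSUM = "file://${CUSTOM_COMMON_LICENSE_DIR}/hello-license;md5=<md5 of your license file>"

local.conf:
LICENSE_FLAGS_WHITELIST = "commercial_example"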
Thanks,
Steve

Hope this helps,
Regards,
Robert
Signed-off-by: MarkusVolk <f_l_k@...>
---
conf/machine/include/rockchip-wic.inc | 3 +++
1 file changed, 3 insertions(+)
diff --git a/conf/machine/include/rockchip-wic.inc b/conf/machine/include/rockchip-wic.inc
index 15010a0..30b0d57 100644
--- a/conf/machine/include/rockchip-wic.inc
+++ b/conf/machine/include/rockchip-wic.inc
@@ -26,3 +26,6 @@ WICVARS:append = " \
SPL_BINARY \
UBOOT_SUFFIX \
"
+
+# Do not update fstab file while creating wic images
+WIC_CREATE_EXTRA_ARGS ?= "--no-fstab-update"
Thanks for the patch and the SoB line. I'm going to apply this patch, but I'm going to amend the commit message to capture some of the conversation we've had. There's a chance we'll want to know "why" at some point in the future ;-)
Hello:

I am running zeus 3.0.4…

A vendor has supplied us with a generic license.txt file, which we were able to add to the acexpci recipe we use to build in their package.
The license provided to us by the vendor is not part of the generic licenses list that yocto recognizes.

We get a warning though which says:

WARNING: aiox-defaultfs-1.0-r0 do_rootfs: The license listed DataDeviceCorporation was not in the licenses collected for recipe acexpci

Though the warning occurs, I can see the license.txt being saved inside the rootfs and under tmp/deploy/licenses/acexpci.
I've been trying to get rid of this warning when the image builds, but I can't seem to find anything in the manuals or online.

The one solution I found says: Add LICENSE_PATH += "${LAYERDIR}/custom-licenses" under conf/layer.conf; this does not resolve this warning.

This is a new item being added to our Yocto build.
The Data Direct vendor does not submit their code to Yocto because they sell their code.
We are adding code to Yocto that has a private license and we are attempting to have Yocto accept the license; is this the proper way to handle this?

Can you tell me the proper way to add a custom license to a recipe in yocto?

Thanks,
Steve
The CIL compiler in SELinux 3.2 has a use-after-free in cil_reset_classpermission
(called from cil_reset_classperms_set and cil_reset_classperms_list).
Reference:
https://nvd.nist.gov/vuln/detail/CVE-2021-36086
Patch from:
https://github.com/SELinuxProject/selinux/commit/c49a8ea09501ad66e799ea41b8154b6770fec2c8
Signed-off-by: Yi Zhao <yi.zhao@...>
---
.../selinux/libsepol/CVE-2021-36086.patch | 46 +++++++++++++++++++
recipes-security/selinux/libsepol_3.2.bb | 3 +-
2 files changed, 48 insertions(+), 1 deletion(-)
create mode 100644 recipes-security/selinux/libsepol/CVE-2021-36086.patch
diff --git a/recipes-security/selinux/libsepol/CVE-2021-36086.patch b/recipes-security/selinux/libsepol/CVE-2021-36086.patch
new file mode 100644
index 0000000..7a2d616
--- /dev/null
+++ b/recipes-security/selinux/libsepol/CVE-2021-36086.patch
@@ -0,0 +1,46 @@
+From 49f9aa2a460fc95f04c99b44f4dd0d22e2f0e5ee Mon Sep 17 00:00:00 2001
+From: James Carter <jwcart2@...>
+Date: Thu, 8 Apr 2021 13:32:06 -0400
+Subject: [PATCH] libsepol/cil: cil_reset_classperms_set() should not reset
+ classpermission
+
+In struct cil_classperms_set, the set field is a pointer to a
+struct cil_classpermission which is looked up in the symbol table.
+Since the cil_classperms_set does not create the cil_classpermission,
+it should not reset it.
+
+Set the set field to NULL instead of resetting the classpermission
+that it points to.
+
+Signed-off-by: James Carter <jwcart2@...>
+
+Upstream-Status: Backport
+[https://github.com/SELinuxProject/selinux/commit/c49a8ea09501ad66e799ea41b8154b6770fec2c8]
+
+CVE: CVE-2021-36086
+
+Signed-off-by: Yi Zhao <yi.zhao@...>
+---
+ cil/src/cil_reset_ast.c | 6 +++++-
+ 1 file changed, 5 insertions(+), 1 deletion(-)
+
+diff --git a/cil/src/cil_reset_ast.c b/cil/src/cil_reset_ast.c
+index 89f91e5..1d9ca70 100644
+--- a/cil/src/cil_reset_ast.c
++++ b/cil/src/cil_reset_ast.c
+@@ -59,7 +59,11 @@ static void cil_reset_classpermission(struct cil_classpermission *cp)
+
+ static void cil_reset_classperms_set(struct cil_classperms_set *cp_set)
+ {
+- cil_reset_classpermission(cp_set->set);
++ if (cp_set == NULL) {
++ return;
++ }
++
++ cp_set->set = NULL;
+ }
+
+ static inline void cil_reset_classperms_list(struct cil_list *cp_list)
+--
+2.17.1
+
diff --git a/recipes-security/selinux/libsepol_3.2.bb b/recipes-security/selinux/libsepol_3.2.bb
index ef5de1e..192f1b3 100644
--- a/recipes-security/selinux/libsepol_3.2.bb
+++ b/recipes-security/selinux/libsepol_3.2.bb
@@ -10,7 +10,8 @@ LIC_FILES_CHKSUM = "file://${S}/COPYING;md5=a6f89e2100d9b6cdffcea4f398e37343"
require selinux_common.inc
SRC_URI += "file://CVE-2021-36084.patch \
- file://CVE-2021-36085.patch "
+ file://CVE-2021-36085.patch \
+ file://CVE-2021-36086.patch "
inherit lib_package
--
2.25.1
An eSDK would be enough to do everything; however, I would demand the entire development system if I were to start a project, but that is my opinion.
I don't know Node-RED well, but using devtool add you should be able to create or manage a recipe for any Node application.
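For instance, something roughly like this (a sketch only; the npm URL and version are assumptions, and you may still need to fix up dependencies in the generated recipe by hand):

devtool add nodered "npm://registry.npmjs.org;package=node-red;version=2.0.6"
devtool build nodered
devtool deploy-target nodered root@192.168.7.2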
Happy hacking!
--
Marco Cavallini | KOAN sas
Bergamo - Italia
embedded software engineering
https://KoanSoftware.com
In your opinion, isn't it enough to ask for the eSDK? For instance, if I want to add Node-RED, would it be difficult to build it with devtool? I've seen that it doesn't resolve dependencies automatically.
Thanks.
usually nowadays every honest hardware manufacturer provides all the sources of the BSP and the development system to their customers.
Try asking your supplier for them.
--
Marco Cavallini | KOAN sas
Bergamo - Italia
embedded software engineering
https://KoanSoftware.com
The following poky commit breaks the zephyr-qemuboot implementation of adding build dependencies:
http://git.yoctoproject.org/cgit/cgit.cgi/poky/commit/?id=282d596b8cc81d650b6d20c6131fdc236bad2c20
ERROR: Error for meta-zephyr/recipes-kernel/zephyr-kernel/zephyr-helloworld.bb:
do_bootconf_write[depends], dependency qemu-helper-native:do_addto_recipe_sysroot:do_addto_recipe_sysroot in
' qemu-helper-native:do_addto_recipe_sysroot:do_addto_recipe_sysroot qemu-helper-native:do_addto_recipe_sysroot:do_populate_sysroot' does not contain exactly one ':' character.
Task 'depends' should be specified in the form 'packagename:task'
ERROR: Command execution failed: Exited with 1
Signed-off-by: Naveen Saini <naveen.kumar.saini@...>
---
classes/zephyr-qemuboot.bbclass | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/classes/zephyr-qemuboot.bbclass b/classes/zephyr-qemuboot.bbclass
index c268e9e..b45e6f6 100644
--- a/classes/zephyr-qemuboot.bbclass
+++ b/classes/zephyr-qemuboot.bbclass
@@ -48,7 +48,7 @@ python () {
for dep in (d.getVar('EXTRA_IMAGEDEPENDS') or "").split():
# Make sure we only add it for qemu
if 'qemu-helper-native' in dep:
- deps += " %s:%s" % (dep, task)
+ deps += " qemu-helper-native:%s" % (task)
return deps
d.appendVarFlag('do_bootconf_write', 'depends', extraimage_getdepends('do_addto_recipe_sysroot'))
d.appendVarFlag('do_bootconf_write', 'depends', extraimage_getdepends('do_populate_sysroot'))
--
2.17.1
https://github.com/zephyrproject-rtos/zephyr/issues/35707
Commits included:
2d6322d74a demand_paging: eviction/nru: fix incorrect dirty bit return val
25771e6928 drivers: clock_control: stm32: enable PWR clock unconditionally
92e36185e8 [Backport v2.6-branch] Microchip: XEC GPIO driver interrupt enable part 2
68d33e3834 libc/minimal: locate the memory pool for malloc() to .bss
7f3abab9bf net: tcp: accept [FIN, PSH, ACK] in TCP_FIN_WAIT_2 state
533dcaf374 lib/os/cbprintf_nano.c: avoid sign extension on unsigned formats
ea55ebfa74 tests: schedule_api: use stack array extern macro
95bb8841b8 tests: mem_protect: fix warning on uninitialized variable
1f8c53dfaf tests: kernel/common: avoid using compiler builtin popcount
7bb7454a00 kernel: use proper macro to declare extern interrupt stacks
25fd176014 kernel: add macros to allow declaring extern stack arrays
e1cde092ac kernel: move Z_KERNEL_STACK_LEN higher in thread_stack.h
244049bd71 x86: type cast to uint8_t* for bit ops
5dae0c1bf0 kernel: ignore array bound warnings for generated syscall funcs
5666e4d525 cmake: force GCC to emit DWARF version 4
91a78866ca Bluetooth: Controller: Fix advertising after connections from same peer
0afddb2341 x86/cache: fix issues in arch dcache flush function
9bcf9b6a53 json: fix parsing first array-array element
2595cce714 cmake: oneApi: add oneApi support on windows.
18d314e750 cmake: oneApi: add oneApi support on windows
c8755e0b46 (tag: v2.6.1-rc1) tests/benchmarks: add dynamic memory allocation measurement
a4d35f0a3e doc: 2.6.1 release notes
7094aaee55 release: Bump release to 2.6.1-rc1
585c03a0b6 drivers/clock_control: stm32: Fix macro to get HCLK freq
cacb0a4e59 Bluetooth: L2CAP: Fix missing net_buf_unref()
78ab750540 timer: hpet: convert register access to functions
d9df404d47 timer: hpet: don't force TIMER_READS_ITS_FREQUENCY_AT_RUNTIME
8e80955511 timer: hpet: allow overriding MIN_DELAY
99dc33faaf timer: hpet: extract Counter Clock Period into a macro
02fbe652a5 logging: fs: fix leak of opened directories in check_log_file_exist()
80b406d784 x86: acpi: limit search on where EBDA can be
Signed-off-by: Naveen Saini <naveen.kumar.saini@...>
---
...ephyr-kernel-src-2.6.0.inc => zephyr-kernel-src-2.6.1.inc} | 4 ++--
recipes-kernel/zephyr-kernel/zephyr-kernel-src.inc | 2 +-
2 files changed, 3 insertions(+), 3 deletions(-)
rename recipes-kernel/zephyr-kernel/{zephyr-kernel-src-2.6.0.inc => zephyr-kernel-src-2.6.1.inc} (90%)
diff --git a/recipes-kernel/zephyr-kernel/zephyr-kernel-src-2.6.0.inc b/recipes-kernel/zephyr-kernel/zephyr-kernel-src-2.6.1.inc
similarity index 90%
rename from recipes-kernel/zephyr-kernel/zephyr-kernel-src-2.6.0.inc
rename to recipes-kernel/zephyr-kernel/zephyr-kernel-src-2.6.1.inc
index 63665bf..109242e 100644
--- a/recipes-kernel/zephyr-kernel/zephyr-kernel-src-2.6.0.inc
+++ b/recipes-kernel/zephyr-kernel/zephyr-kernel-src-2.6.1.inc
@@ -1,5 +1,5 @@
SRCREV_FORMAT = "default_cmsis"
-SRCREV_default = "837ab4a915f7802a6fb02a27e4b024e287ac93c2"
+SRCREV_default = "2d6322d74aaac838ead46bfcba0db619cff4b534"
SRCREV_cmsis = "c3bd2094f92d574377f7af2aec147ae181aa5f8e"
SRCREV_nordic = "574493fe29c79140df4827ab5d4a23df79d03681"
SRCREV_stm32 = "f8ff8d25aa0a9e65948040c7b47ec67f3fa300df"
@@ -10,7 +10,7 @@ SRCREV_tinycrypt = "3e9a49d2672ec01435ffbf0d788db6d95ef28de0"
SRCREV_mbedtls = "5765cb7f75a9973ae9232d438e361a9d7bbc49e7"
ZEPHYR_BRANCH = "v2.6-branch"
-PV = "2.6.0+git${SRCPV}"
+PV = "2.6.1+git${SRCPV}"
SRC_URI:append = " file://0001-cmake-add-yocto-toolchain.patch \
file://0001-x86-fix-efi-binary-generation-issue-in-cross-compila.patch \
diff --git a/recipes-kernel/zephyr-kernel/zephyr-kernel-src.inc b/recipes-kernel/zephyr-kernel/zephyr-kernel-src.inc
index abe755d..458ff1e 100644
--- a/recipes-kernel/zephyr-kernel/zephyr-kernel-src.inc
+++ b/recipes-kernel/zephyr-kernel/zephyr-kernel-src.inc
@@ -22,5 +22,5 @@ SRC_URI = "\
S = "${WORKDIR}/git"
# Default to a stable version
-PREFERRED_VERSION_zephyr-kernel ??= "2.6.0"
+PREFERRED_VERSION_zephyr-kernel ??= "2.6.1"
include zephyr-kernel-src-${PREFERRED_VERSION_zephyr-kernel}.inc
--
2.17.1
so you can see if it's always failing to compile at the same file,
or at least you can get one file where it fails; then you can use the
preprocessed file to build it in a loop and see if you can get it to
fail more.
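Roughly like this, as a sketch (the compiler invocation and paths are placeholders; the real flags can be taken from the kernel's saved .cmd file for that object or from a V=1 build):

aarch64-oe-linux-gcc <original flags> -E net/mac80211/led.c -o led.i
for i in $(seq 1 1000); do
    aarch64-oe-linux-gcc <original flags> -c led.i -o /dev/null || { echo "ICE on iteration $i"; break; }
done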
On Thu, Sep 23, 2021 at 8:16 AM Rasmus Villemoes via
lists.yoctoproject.org
<rasmus.villemoes=prevas.dk@...> wrote:
I've recently started getting an internal compiler error when building
an aarch64 kernel. It only happens once in a while, and re-running the
task usually just succeeds, so I don't know how to reproduce or trigger
this at will.
Two examples:
===
CC [M] drivers/gpu/drm/nouveau/nvkm/subdev/fb/gm20b.o
CC [M] drivers/net/ethernet/mellanox/mlx5/core/rl.o
CC [M] drivers/gpu/drm/nouveau/nvkm/subdev/fb/gp100.o
*** stack smashing detected ***: <unknown> terminated
In file included from .../kernel-source/arch/arm64/include/asm/atomic.h:15,
from .../kernel-source/include/linux/atomic.h:7,
from
.../kernel-source/include/asm-generic/bitops/atomic.h:5,
from .../kernel-source/arch/arm64/include/asm/bitops.h:26,
from .../kernel-source/include/linux/bitops.h:29,
from .../kernel-source/include/linux/kernel.h:12,
from .../kernel-source/include/linux/uio.h:8,
from .../kernel-source/include/linux/socket.h:8,
from .../kernel-source/include/uapi/linux/if.h:25,
from .../kernel-source/net/mac80211/led.c:7:
.../kernel-source/include/net/inet_sock.h: In function 'inet_sk_state_load':
.../kernel-source/arch/arm64/include/asm/barrier.h:114:8: internal
compiler error: Aborted
114 | union { __unqual_scalar_typeof(*p) __val; char __c[1]; } __u; \
| ^
.../kernel-source/include/asm-generic/barrier.h:142:29: note: in
expansion of macro '__smp_load_acquire'
142 | #define smp_load_acquire(p) __smp_load_acquire(p)
| ^~~~~~~~~~~~~~~~~~
.../kernel-source/include/net/inet_sock.h:312:9: note: in expansion of
macro 'smp_load_acquire'
312 | return smp_load_acquire(&sk->sk_state);
| ^~~~~~~~~~~~~~~~
Please submit a full bug report,
with preprocessed source if appropriate.
See <https://gcc.gnu.org/bugs/> for instructions.
.../kernel-source/scripts/Makefile.build:279: recipe for target
'net/mac80211/led.o' failed
make[3]: *** [net/mac80211/led.o] Error 1
make[3]: *** Waiting for unfinished jobs....
CC [M] drivers/net/ethernet/mellanox/mlx5/core/lag.o
===
CC [M] drivers/gpu/drm/nouveau/nvkm/nvfw/ls.o
CC [M] drivers/gpu/drm/drm_modeset_helper.o
CC [M] drivers/gpu/drm/drm_scdc_helper.o
*** stack smashing detected ***: <unknown> terminated
In file included from
.../kernel-source/include/linux/regulator/consumer.h:35,
from
.../kernel-source/drivers/gpu/drm/nouveau/include/nvif/os.h:27,
from
.../kernel-source/drivers/gpu/drm/nouveau/include/nvkm/core/os.h:4,
from
.../kernel-source/drivers/gpu/drm/nouveau/include/nvkm/core/oclass.h:3,
from
.../kernel-source/drivers/gpu/drm/nouveau/include/nvkm/core/device.h:4,
from
.../kernel-source/drivers/gpu/drm/nouveau/include/nvkm/core/subdev.h:4,
from
.../kernel-source/drivers/gpu/drm/nouveau/nvkm/nvfw/ls.c:22:
.../kernel-source/include/linux/suspend.h:364:36: internal compiler
error: Aborted
364 | extern void mark_free_pages(struct zone *zone);
| ^~~~
Please submit a full bug report,
with preprocessed source if appropriate.
See <https://gcc.gnu.org/bugs/> for instructions.
.../kernel-source/scripts/Makefile.build:280: recipe for target
'drivers/gpu/drm/nouveau/nvkm/nvfw/ls.o' failed
make[5]: *** [drivers/gpu/drm/nouveau/nvkm/nvfw/ls.o] Error 1
make[5]: *** Waiting for unfinished jobs....
CC [M] drivers/gpu/drm/drm_gem_framebuffer_helper.o
===
This is with hardknott, aarch64-oe-linux-gcc (GCC) 10.2.0, building
5.10.* kernels (5.10.45 and 5.10.65 in the cases above IIRC). The build
is visiting drivers/gpu/drm/ in both cases, but in the former case it's
not actually a TU in there that fails, but one in net/, so I'm not even
sure it it has to do with something peculiar to the drivers/gpu/drm/
modules.
Has anyone seen something like this, or any ideas for figuring out
what's going on?
Rasmus
IMAGE_INSTALL += "gcov gcov-symlinks"
Hi,
Does anybody know how to enable gcov support for the target in Yocto?
Thanks,
Lijun