
Re: Understanding kernel patching in linux-yocto

Bruce Ashfield
 

On Wed, May 12, 2021 at 10:07 AM Yann Dirson
<yann.dirson@blade-group.com> wrote:

Thanks for those clarifications!

Some additional questions below

Le mer. 12 mai 2021 à 15:19, Bruce Ashfield <bruce.ashfield@gmail.com> a écrit :

On Wed, May 12, 2021 at 7:14 AM Yann Dirson <yann.dirson@blade-group.com> wrote:

I am currently working on a kmeta BSP for the rockchip-based NanoPI M4
[1], and I'm wondering how I should be providing kernel patches, as
just adding "patch" directives in the .scc does not get them applied
unless the particular .scc gets included in KERNEL_FEATURES (see [2]).

From an old thread [3] I understand that the patches from the standard
kmeta snippets are already applied to the tree, and that to get the
patches from my BSP I'd need to reference it explicitly in SRC_URI
(along with using "nopatch" in the right places to prevent the
already-applied patches from being applied twice).

I have the feeling that I'm lacking the rationale behind this, and
would need to understand this better to make things right in this BSP.
Especially:
- at first sight, having the patches both applied to linux-yocto and
referenced in yocto-kernel-cache just to be skipped on parsing looks
like both information duplication and parsing of unused lines
At least some of this is mentioned in the advanced section of the
kernel-dev manual, but I can summarize/reword things here, and
I'm also doing a presentation related to this at the Yocto Summit at
the end of this month.

The big thing to remember is that the configuration and changes
you see in that repository are not only for Yocto purposes. The
concepts and structure pre-date when they were first brought in
to generate reference kernels over 10 years ago (the implementation
has changed, but the concepts are still the same). To this day,
there are still cases where they are used with just a kernel tree and
a cross toolchain.

With that in mind, the meta-data is used for many different things:

- It organizes patches / features and their configuration into
reusable blocks, while documenting the changes
that we have applied to a tree
- It makes those patches and configuration blocks available to
other kernel trees (for whatever reason).
- It configures the tree during the build process, reusing both
configuration only and patch + configuration blocks
- It is used to generate a history-clean tree from scratch for
each new supported kernel, which is what I do when creating
new linux-yocto-dev references and the new <version>/standard/*
branches in linux-yocto.
I'd think (and I take your further remarks about workflow as confirming
this) that when upgrading the kernel the best tool would be git-rebase.
Then, regenerating the linux-yocto branches would only be akin to a
check that the metadata is in sync with the new tree you rebased?
The best of anything is a matter of opinion. I heavily use git-rebase and
sure, you could use it to do something similar here. But the result is
the same. There's still heavy use of quilt in kernel circles. Workflows
don't change easily, and as long as they work for the maintainer, they
tend to stay put. Asking someone to change their workflow rarely goes
over well.


If that conclusion is correct, wouldn't it be possible to avoid using the
linux-yocto branches directly, and let all the patches be applied at
do_patch time ? That would be much more similar to the standard
package workflow (and thus lower the barrier for approaching the
kernel packages).
That's something we did in the past, and sure, you can do anything.
But patching hundreds of changes at build time means constant
failures .. again, I've been there and done that. We use similar patches
in many different contexts and optional stackings. You simply cannot
maintain them and stay sane by whacking patches onto the SRC_URI.
The last impression you want when someone builds your kernel is that
they can't even get past the patch phase. So that's a hard no, to how
the reference kernels are maintained (and that hard no has been around
for 11 years now).

Also, we maintain contributed reference BSPs in that same tree, that
are yanking in SDKs from vendors, etc.; they run to thousands of
patches. So you need the tree and the BSP branches to support that.



So why not just drop all the patches in the SRC_URI ? Been there,
done that. It fails spectacularly when you are managing queues of
hundreds of potentially conflicting patches (rt, yaffs, aufs, ... etc, etc)
and then attempting to constantly merge -stable and other kernel
trees into the repository. git is the tool for managing that, not stacks
of patches. You spend your entire life fixing patch errors and refreshing
fuzz (again, been there, done that).

So why not just keep a history and constantly merge new versions
into it ? Been there, done that. You end up with an absolute garbage
history of octopus merges and changes that are completely hidden,
non-obvious and useless for collaborating with other kernel projects.
Try merging a new kernel version into those same big features, it's
nearly impossible and you have a franken-kernel that you end up trying
to support and fix yourself. All the bugs are yours and yours alone.

So that's why there's a repository that tracks the patches and the
configuration and is used for multiple purposes. Keeping the patches
and config blocks separate would just lead to even more errors as
I update one and forget the other, etc, etc. There have been various
incarnations of the tools that also did different things with the patches,
and they weren't skipped, but detected as applied or not on-the-fly,
so there are other historical reasons for the structure as well.

- kernel-yocto.bbclass does its own generic job of locating a proper
BSP using the KMACHINE/KTYPE/KARCH tags in the BSP; it looks like
specifying a specific BSP file would just defeat this: how should I
deal with this case where I'm providing both "standard" and "tiny"
KTYPEs?
I'm not quite following the question here, so I can try to answer badly
and you can clarify based on my terrible answer.
The answer is indeed quite useful for a question that may not be that clear :)

The tools can locate your "bsp entry point" / "bsp definition" in
your layer, either provided by something on the SRC_URI or something
in a kmeta repository (also specified on the SRC_URI), since
both of those are added to the search paths the tools check. Those
are just .scc files with a KMACHINE/KTYPE that match, and
as you could guess from the first term I used, they are the entry
point into building the configuration queue.

That's where you start inheriting the base configuration(s) and including
feature blocks, etc. Those definitions are exactly the same as the
internal ones in the kernel-cache repository. By default, that located
BSP definition is excluded from inheriting patches .. because as you
noted, it would start trying to re-apply changes to the tree. It is there
to get the configuration blocks, patches come in via other feature
blocks or directly on the SRC_URI.

So in your case, just provide the two .scc files with the proper
defines so they can be located, and you'll get the proper branch
located in the tree and the base configurations picked up for those
kernel types. You'd supply your BSP-specific config by making
a common file and including it in both definitions, and patches via
the KERNEL_FEATURES variable or by specifying them directly on
the SRC_URI (via .patch or via a different .scc file).
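As a rough sketch, the two entry points described above could look like the following; every file, machine, and feature name here is an illustrative guess, not something taken from this thread:

```
# nanopi-m4-standard.scc (hypothetical name and content)
define KMACHINE nanopi-m4
define KTYPE standard
define KARCH arm64

# inherit the base "standard" kernel type from the kernel-cache
include ktypes/standard/standard.scc

# shared BSP configuration, included by both entry points
include nanopi-m4-common.scc
```

with a matching nanopi-m4-tiny.scc that includes ktypes/tiny/tiny.scc instead, so both KTYPEs resolve through the same common config file.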
That's what I was experimenting with at the same time, and something like
this does indeed produce the expected output:

KERNEL_FEATURES_append = " bsp/rockchip/nanopi-m4-${LINUX_KERNEL_TYPE}.scc"

However, it seems confusing, as that .scc is precisely the one that's
already selected and used for the .cfg: it really looks like we're
overriding the default "bsp entry point" with a value that's already
the default, but with a different result.
Yes, that's one way that we've structured things as the tools evolved,
to balance external BSP definitions being able to pull in the base
configuration but not patches. There are two runs of the tools: one
looks for patches (and excludes that bsp entry point) and one that
builds the config.queue (and uses the entry point). That's the balance
of the multi-use nature of the configuration blocks. I could bury
something deeper in the tools to hide a bit of that, but it would break
use cases, and time has shown that it is brittle.


So my gut feeling ATM is that everything would be much clearer if
specifying the default entry point had the same effect as letting
the default be used, i.e. having patches applied in both cases.
The variable KMETA_EXTERNAL_BSPS was created as a knob to
allow an external definition to be used for both patches AND configuration.
But that is for fully external BSPs that do not include the base kernel
meta-data, since once you turn that on, you are getting all the patches
and all the configuration .. and would have the patches applied twice.
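In recipe terms that knob is just a variable; a minimal sketch in a bbappend, assuming (as the description above implies) that any non-empty value enables it:

```
# linux-yocto_%.bbappend (sketch)
# Treat the externally supplied BSP definition as the source of both
# patches AND configuration. Only for BSPs that do NOT include the
# base kernel meta-data, otherwise patches get applied twice.
KMETA_EXTERNAL_BSPS = "1"
```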

Bruce


Bruce


[1] https://lists.yoctoproject.org/g/yocto/message/53454
[2] https://lists.yoctoproject.org/g/yocto/message/53452
[3] https://lists.yoctoproject.org/g/yocto/topic/61340326

Best regards,
--
Yann Dirson <yann@blade-group.com>
Blade / Shadow -- http://shadow.tech


--
- Thou shalt not follow the NULL pointer, for chaos and madness await
thee at its end
- "Use the force Harry" - Gandalf, Star Trek II





Re: [bitbake-devel] Git Fetcher Branch Default

Martin Jansa
 

On Wed, May 12, 2021 at 05:51:24AM -0700, Chuck Wolber wrote:
I got a fetcher failure on go-systemd today, which puzzled me. That
recipe has not changed in ages, and the SRCPV hash is clearly visible
in the repository.

After looking at it closer, it seems that github.com/coreos/go-systemd
has changed its default branch from master to main about six days ago.
This appears to break a fundamental assumption on the part of the
fetcher when looking for SRCREV and SRCPV hashes.

Looking at lib/bb/fetch2/git.py makes it evident that this is the
case. I can trivially fix this with a .bbappend, but it seems to me
that the fundamental default branch assumption needs to be updated.

Has anyone discussed adding main to the list of default branches to
try? If not, I may be able to come up with a patch, but the code does
process default branches as a list, so I will need to think a bit on
the best way to approach this. Any guidance would be appreciated.
I agree it's a bit annoying that some projects have chosen to rename
their existing branches instead of just adopting "main" as the default
branch for new projects only (as the defaults changed e.g. on GitHub).

I've already added explicit branch=main in 10+ recipes in various layers
for dunfell and newer branches (ostree in meta-oe just today or e.g.
https://github.com/ros/meta-ros/pull/846 yesterday).

Luckily it's easy to fix with local bbappend, so even people using
unsupported release (zeus and older) can do so or finally upgrade to
some supported release which might have this fix already.

It was also discussed briefly in #yocto yesterday:

22:38 < zeddii> RP: did we come up with a policy on anything we can do
with the fetcher to make "main" be a fallback default to master. Or was
the end decision to just add explicit branch statements everywhere ?
23:21 < RP> zeddii: I'm open to patch proposals or we just set the
branch on urls
23:23 < RP> zeddii: being explicit is perhaps the better option than
magic in the fetcher

And I agree with RP, especially because fetcher magic won't get
backported to the old bitbake used in zeus and older anyway, so it
won't help there, and backporting a fetcher change locally is riskier
than backporting a simple SRC_URI change for the individual recipes
that need it.
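The "simple SRC_URI change" amounts to adding an explicit branch parameter to the git URL; a sketch for the go-systemd case mentioned above (the bbappend name and the protocol parameter are assumptions, not quoted from the thread):

```
# go-systemd_%.bbappend (sketch): pin the renamed default branch
SRC_URI = "git://github.com/coreos/go-systemd.git;protocol=https;branch=main"
```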

Regards,


Re: [bitbake-devel] Git Fetcher Branch Default

Konrad Weihmann
 



On 12 May 2021 14:51, Chuck Wolber <chuckwolber@...> wrote:

I got a fetcher failure on go-systemd today, which puzzled me. That
recipe has not changed in ages, and the SRCPV hash is clearly visible
in the repository.

After looking at it closer, it seems that github.com/coreos/go-systemd
has changed its default branch from master to main about six days ago.
This appears to break a fundamental assumption on the part of the
fetcher when looking for SRCREV and SRCPV hashes.

Looking at lib/bb/fetch2/git.py makes it evident that this is the
case. I can trivially fix this with a .bbappend, but it seems to me
that the fundamental default branch assumption needs to be updated.

IIRC there has been the same discussion already, last year or the year before... and the agreement was to leave it as it is. IMHO most of the projects in use still go with master instead of main, so a patch like yours would create a lot of work for the majority of the recipes.
And as the branch attribute for the git fetcher is already available (and widely used for exactly these cases), I personally don't see the need for such a patch, unless it provides full backward compatibility.



Has anyone discussed adding main to the list of default branches to
try? If not, I may be able to come up with a patch, but the code does
process default branches as a list, so I will need to think a bit on
the best way to approach this. Any guidance would be appreciated.

..Ch:W..


--
"Perfection must be reached by degrees; she requires the slow hand of
time." - Voltaire







Git Fetcher Branch Default

Chuck Wolber
 

I got a fetcher failure on go-systemd today, which puzzled me. That
recipe has not changed in ages, and the SRCPV hash is clearly visible
in the repository.

After looking at it closer, it seems that github.com/coreos/go-systemd
has changed its default branch from master to main about six days ago.
This appears to break a fundamental assumption on the part of the
fetcher when looking for SRCREV and SRCPV hashes.

Looking at lib/bb/fetch2/git.py makes it evident that this is the
case. I can trivially fix this with a .bbappend, but it seems to me
that the fundamental default branch assumption needs to be updated.

Has anyone discussed adding main to the list of default branches to
try? If not, I may be able to come up with a patch, but the code does
process default branches as a list, so I will need to think a bit on
the best way to approach this. Any guidance would be appreciated.
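One possible shape for such a list-based fallback, sketched here as standalone Python rather than bitbake's actual fetcher code (the function name and behavior are purely illustrative):

```python
# Hypothetical sketch, NOT bitbake's real fetcher logic: given the refs
# advertised by a remote (as parsed from `git ls-remote`), try each
# candidate default branch in order and return the first match.
def resolve_default_branch(remote_refs, candidates=("master", "main")):
    for branch in candidates:
        ref = "refs/heads/" + branch
        if ref in remote_refs:
            return branch, remote_refs[ref]
    return None

# A repository that renamed master -> main still resolves:
refs = {"refs/heads/main": "2d78953", "refs/tags/v22": "9065401"}
print(resolve_default_branch(refs))  # ('main', '2d78953')
```

The ordering of the candidate list would decide which branch wins when a repository has both, which is exactly the kind of policy question raised above.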

..Ch:W..


--
"Perfection must be reached by degrees; she requires the slow hand of
time." - Voltaire


Yocto Zeus : facing error regarding hostapd #zeus

rohit jadhav
 

Facing the following issue:
ERROR: core-image-minimal-1.0-r0 do_rootfs: Postinstall scriptlets of ['hostapd'] have failed. If the intention is to defer them to first boot,
then please place them into pkg_postinst_ontarget_${PN} ().
Deferring to first boot via 'exit 1' is no longer supported.
Details of the failure are in /home/tel/imx_yocto_bsp_Zeus/Yocto_setup/build_imx6ull/tmp/work/imx6ull14x14evk-poky-linux-gnueabi/core-image-minimal/1.0-r0/temp/log.do_rootfs.
ERROR: Logfile of failure stored in: /home/tel/imx_yocto_bsp_Zeus/Yocto_setup/build_imx6ull/tmp/work/imx6ull14x14evk-poky-linux-gnueabi/core-image-minimal/1.0-r0/temp/log.do_rootfs.31340
ERROR: Task (/home/tel/imx_yocto_bsp_Zeus/Yocto_setup/sources/poky/meta/recipes-core/images/core-image-minimal.bb:do_rootfs) failed with exit code '1'

Please guide me if anyone has any idea how to resolve this.

Thanks in advance.
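As the error text itself suggests, the usual fix is to move the failing scriptlet into a first-boot postinstall; a sketch of the shape in a bbappend for the failing package (the bbappend name and body are placeholders, not the actual hostapd scriptlet):

```
# hostapd_%.bbappend (sketch): defer the failing scriptlet to first boot
pkg_postinst_ontarget_${PN}() {
    # commands that require the running target system go here;
    # they execute on first boot instead of at rootfs creation time
    :
}
```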



Re: [qa-build-notification] QA notification for completed autobuilder build (yocto-3.2.4.rc1)

Sangeeta Jain
 

Hello All,

This is the full report for yocto-3.2.4.rc1:
https://git.yoctoproject.org/cgit/cgit.cgi/yocto-testresults-contrib/tree/?h=intel-yocto-testresults

======= Summary ========
No high milestone defects.

One new issue found:

Bug 14392 - [QA 3.2.4 RC1] failure in ptest : ptestresult.lttng-tools.tools


======= Bugs ========
https://bugzilla.yoctoproject.org/show_bug.cgi?id=14392

Thanks,
Sangeeta

-----Original Message-----
From: qa-build-notification@lists.yoctoproject.org <qa-build-
notification@lists.yoctoproject.org> On Behalf Of Pokybuild User
Sent: Friday, 7 May, 2021 12:06 AM
To: yocto@lists.yoctoproject.org
Cc: qa-build-notification@lists.yoctoproject.org
Subject: [qa-build-notification] QA notification for completed autobuilder build
(yocto-3.2.4.rc1)


A build flagged for QA (yocto-3.2.4.rc1) was completed on the autobuilder and is
available at:


https://autobuilder.yocto.io/pub/releases/yocto-3.2.4.rc1


Build hash information:

bitbake: e05d79a6ed92c9ce17b90fd5fb6186898a7b3bf8
meta-arm: 39bc4076b2d9a662111beaa0621ee9c1e37f56ea
meta-gplv2: 6e8e969590a22a729db1ff342de57f2fd5d02d43
meta-intel: c325d3e2eab9952dc175a38f31b78fecdcdd0fcc
meta-kernel: 4b288396eff43fe9b1a233aed1ce9b48329a2eb6
meta-mingw: 352d8b0aa3c7bbd5060a4cc2ebe7c0e964de4879
oecore: d47b7cdc3508343349f3bb3eacb2dc3227dd94d2
poky: 60c8482769f38a4db6f38d525405c887794511a9



This is an automated message from the Yocto Project Autobuilder
Git: git://git.yoctoproject.org/yocto-autobuilder2
Email: richard.purdie@linuxfoundation.org







Re: Recipe Grep'ing

Chuck Wolber
 

On Sat, May 8, 2021 at 6:25 PM Robert Joslyn
<robert.joslyn@redrectangle.org> wrote:

There is the oe-stylize.py script that attempts to format recipes
according to the style guide:
https://git.openembedded.org/meta-openembedded/tree/contrib/oe-stylize.py

Last time I played with it, I was a bit disappointed with some of the
changes it makes, some of which differ from what devtool does.
When I need to introduce new developers to bitbake, I'd love to be able
to hand them oe-stylize or something similar and just tell them to run
it before committing to make sure everything is formatted consistently.

I've had updating oe-stylize.py on my TODO list for a while, but more
important things always come up.
Given what I have seen so far, I am wondering if that is the right
place to start.

To summarize:

There appears to be general agreement that the idea is a good one, but a large
patch wall is considered rather objectionable by at least Bruce Ashfield.

Khem Raj brought up a good point that a big change like the one I am proposing
needs some sort of tooling to make sure we do not regress.

And the above reply from Robert Joslyn has me wondering if there is already
something in place to do the linting Khem Raj is referring to, or if
oe-stylize.py
would form the basis for such a tool if we added it to an automated system.
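
As a toy illustration of the kind of automated check being discussed
(this is not oe-stylize.py or any existing OE tool, and the variable
ordering shown is only an example), a linter could verify that recipe
variables appear in the style guide's order:

```python
def check_order(recipe_text, expected=("SUMMARY", "LICENSE", "SRC_URI")):
    """Return True if the expected variables appear in order (toy check)."""
    positions = {}
    for lineno, line in enumerate(recipe_text.splitlines()):
        for var in expected:
            # Record only the first assignment of each variable
            if line.startswith(var) and var not in positions:
                positions[var] = lineno
    seen = [v for v in expected if v in positions]
    # In-order iff sorting by line number changes nothing
    return seen == sorted(seen, key=lambda v: positions[v])

good = 'SUMMARY = "x"\nLICENSE = "MIT"\nSRC_URI = "file://a"\n'
bad = 'SRC_URI = "file://a"\nLICENSE = "MIT"\n'
print(check_order(good), check_order(bad))  # True False
```

A real tool would of course need to handle overrides, continuations and
comments, but hooking even a check like this into CI is the kind of
regression guard being asked for.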

So taking a step back, does it make sense to update the guidance on the
styleguide (https://www.openembedded.org/wiki/Styleguide) page first?

If so, I would be happy to make the updates. I requested an account,
but I got an
error - "Error sending mail: Unknown error in PHP's mail() function."

..Ch:W..

--
"Perfection must be reached by degrees; she requires the slow hand of
time." - Voltaire


[meta-zephyr][PATCH 3/3] intel-x86-32.conf: add common MACHINE for x86 (32-bit) BOARDS

Naveen Saini
 

Users need to specify the board via ZEPHYR_BOARD in local.conf:
ZEPHYR_BOARD = "minnowboard"

By default it is set to the MinnowBoard Max ('minnowboard').

Currently 32-bit supported boards:
* up_squared_32
* minnowboard

Ref:
https://docs.zephyrproject.org/latest/boards/x86/index.html

Signed-off-by: Naveen Saini <naveen.kumar.saini@intel.com>
---
conf/machine/include/tune-core2-common.inc | 6 ++++++
conf/machine/intel-x86-32.conf | 12 ++++++++++++
2 files changed, 18 insertions(+)
create mode 100644 conf/machine/include/tune-core2-common.inc
create mode 100644 conf/machine/intel-x86-32.conf

diff --git a/conf/machine/include/tune-core2-common.inc b/conf/machine/include/tune-core2-common.inc
new file mode 100644
index 0000000..012f078
--- /dev/null
+++ b/conf/machine/include/tune-core2-common.inc
@@ -0,0 +1,6 @@
+DEFAULTTUNE ?= "core2-32"
+require conf/machine/include/tune-core2.inc
+require conf/machine/include/x86-base.inc
+
+# Add x86 to MACHINEOVERRIDES
+MACHINEOVERRIDES =. "x86:"
diff --git a/conf/machine/intel-x86-32.conf b/conf/machine/intel-x86-32.conf
new file mode 100644
index 0000000..06f6da5
--- /dev/null
+++ b/conf/machine/intel-x86-32.conf
@@ -0,0 +1,12 @@
+#@TYPE: Machine
+#@NAME: intel-x86-32
+#@DESCRIPTION: common MACHINE for 32-bit x86 boards. User must set ${ZEPHYR_BOARD}. By default it is set to the 'minnowboard' board.
+
+require conf/machine/include/tune-core2-common.inc
+
+ARCH_intel-x86-32 = "x86"
+
+# Supported Boards:
+# ZEPHYR_BOARD ?= "minnowboard"
+# ZEPHYR_BOARD ?= "up_squared_32"
+ZEPHYR_BOARD ?= "minnowboard"
--
2.17.1


[meta-zephyr][PATCH 2/3] intel-x86-64.conf: add common MACHINE for x86 (64-bit) BOARDS

Naveen Saini
 

Users need to specify the board via ZEPHYR_BOARD in local.conf:
ZEPHYR_BOARD = "ehl_crb"

By default it is set to the Elkhart Lake CRB ('ehl_crb').

Currently 64-bit supported boards:
* up_squared
* ehl_crb_sbl
* ehl_crb

Ref:
https://docs.zephyrproject.org/latest/boards/x86/index.html

Signed-off-by: Naveen Saini <naveen.kumar.saini@intel.com>
---
conf/machine/include/tune-corei7-common.inc | 3 +++
conf/machine/intel-x86-64.conf | 12 ++++++++++++
2 files changed, 15 insertions(+)
create mode 100644 conf/machine/intel-x86-64.conf

diff --git a/conf/machine/include/tune-corei7-common.inc b/conf/machine/include/tune-corei7-common.inc
index 7ad9516..509d190 100644
--- a/conf/machine/include/tune-corei7-common.inc
+++ b/conf/machine/include/tune-corei7-common.inc
@@ -1,3 +1,6 @@
DEFAULTTUNE ?= "corei7-64"
require conf/machine/include/tune-corei7.inc
require conf/machine/include/x86-base.inc
+
+# Add x86 to MACHINEOVERRIDES
+MACHINEOVERRIDES =. "x86:"
diff --git a/conf/machine/intel-x86-64.conf b/conf/machine/intel-x86-64.conf
new file mode 100644
index 0000000..15e3ad8
--- /dev/null
+++ b/conf/machine/intel-x86-64.conf
@@ -0,0 +1,12 @@
+#@TYPE: Machine
+#@NAME: intel-x86-64
+#@DESCRIPTION: common MACHINE for 64-bit x86 boards. User must set ${ZEPHYR_BOARD}. By default it is set to the 'ehl_crb' board.
+
+require conf/machine/include/tune-corei7-common.inc
+
+ARCH_intel-x86-64 = "x86"
+
+# Supported Boards:
+# ZEPHYR_BOARD ?= "up_squared"
+# ZEPHYR_BOARD ?= "ehl_crb_sbl"
+ZEPHYR_BOARD ?= "ehl_crb"
--
2.17.1


[meta-zephyr][PATCH 0/3] Fix efi generation and add x86 MACHINE confs (cover letter)

Naveen Saini
 

(1) zephyr-kernel-src: fix efi generation failure for x86 boards

With zephyr v2.5.0, EFI binary generation support has been added for x86 boards (64-bit mode).
To achieve this, a Python tool [1] has been added to convert the Zephyr ELF file
into an EFI application. Unfortunately, this does not currently work in the Yocto cross-compilation environment.
This patch fixes the issue and allows zephyr.efi to be built for the ehl_crb and up_squared boards.


(2)
Instead of creating a machine configuration for each supported board, I
would like to have common machine configurations: one for 64-bit
(intel-x86-64.conf) and one for 32-bit (intel-x86-32.conf).

Users need to specify the board via ZEPHYR_BOARD in local.conf based on
the targeted board, e.g.
ZEPHYR_BOARD = "ehl_crb"

64-bit supported boards:
* up_squared
* ehl_crb_sbl
* ehl_crb (default)

32-bit supported boards:
* up_squared_32
* minnowboard (default)



Naveen Saini (3):
zephyr-kernel-src: fix efi generation failure for x86 boards
intel-x86-64.conf: add common MACHINE for x86 (64-bit) BOARDS
intel-x86-32.conf: add common MACHINE for x86 (32-bit) BOARDS

conf/machine/include/tune-core2-common.inc | 6 ++
conf/machine/include/tune-corei7-common.inc | 3 +
conf/machine/intel-x86-32.conf | 12 +++
conf/machine/intel-x86-64.conf | 12 +++
...ry-generation-issue-in-cross-compila.patch | 80 +++++++++++++++++++
.../zephyr-kernel/zephyr-kernel-src.inc | 1 +
6 files changed, 114 insertions(+)
create mode 100644 conf/machine/include/tune-core2-common.inc
create mode 100644 conf/machine/intel-x86-32.conf
create mode 100644 conf/machine/intel-x86-64.conf
create mode 100644 recipes-kernel/zephyr-kernel/files/0001-x86-fix-efi-binary-generation-issue-in-cross-compila.patch

--
2.17.1


[meta-zephyr][PATCH 1/3] zephyr-kernel-src: fix efi generation failure for x86 boards

Naveen Saini
 

With zephyr v2.5.0, EFI binary support has been added for x86 boards (64-bit mode).

To achieve this, a Python tool [1] has been added to convert the Zephyr ELF file
into an EFI application. But currently this does not work in the Yocto
cross-compilation environment.
This patch fixes the issue and allows zephyr.efi to be built.

Ref:
[1]https://github.com/zephyrproject-rtos/zephyr/commit/928d31125f0b4eb28fe1cf3f3ad02b0ae071d7fd

Signed-off-by: Naveen Saini <naveen.kumar.saini@intel.com>
---
...ry-generation-issue-in-cross-compila.patch | 80 +++++++++++++++++++
.../zephyr-kernel/zephyr-kernel-src.inc | 1 +
2 files changed, 81 insertions(+)
create mode 100644 recipes-kernel/zephyr-kernel/files/0001-x86-fix-efi-binary-generation-issue-in-cross-compila.patch

diff --git a/recipes-kernel/zephyr-kernel/files/0001-x86-fix-efi-binary-generation-issue-in-cross-compila.patch b/recipes-kernel/zephyr-kernel/files/0001-x86-fix-efi-binary-generation-issue-in-cross-compila.patch
new file mode 100644
index 0000000..fd6fc6b
--- /dev/null
+++ b/recipes-kernel/zephyr-kernel/files/0001-x86-fix-efi-binary-generation-issue-in-cross-compila.patch
@@ -0,0 +1,80 @@
+From cfde3b1018c3151b6cc1fbe3e9e163d0aaf16954 Mon Sep 17 00:00:00 2001
+From: Naveen Saini <naveen.kumar.saini@intel.com>
+Date: Tue, 11 May 2021 13:46:39 +0800
+Subject: [PATCH] x86: fix efi binary generation issue in cross compilation env
+
+Set root directory for headers.
+
+Upstream-Status: Inappropriate [Cross-compilation specific]
+
+Signed-off-by: Naveen Saini <naveen.kumar.saini@intel.com>
+---
+ arch/x86/zefi/zefi.py | 6 +++++-
+ boards/x86/ehl_crb/CMakeLists.txt | 1 +
+ boards/x86/qemu_x86/CMakeLists.txt | 1 +
+ boards/x86/up_squared/CMakeLists.txt | 1 +
+ 4 files changed, 8 insertions(+), 1 deletion(-)
+
+diff --git a/arch/x86/zefi/zefi.py b/arch/x86/zefi/zefi.py
+index d3514391a8..b9eccbfa10 100755
+--- a/arch/x86/zefi/zefi.py
++++ b/arch/x86/zefi/zefi.py
+@@ -106,7 +106,10 @@ def build_elf(elf_file):
+ # + We need pic to enforce that the linker adds no relocations
+ # + UEFI can take interrupts on our stack, so no red zone
+ # + UEFI API assumes 16-bit wchar_t
+- cmd = [args.compiler, "-shared", "-Wall", "-Werror", "-I.",
++
++ # Pass --sysroot path for cross compilation
++ sysrootarg = "--sysroot=" + args.sysroot
++ cmd = [args.compiler, "-shared", "-Wall", "-Werror", "-I.", sysrootarg,
+ "-fno-stack-protector", "-fpic", "-mno-red-zone", "-fshort-wchar",
+ "-Wl,-nostdlib", "-T", ldscript, "-o", "zefi.elf", cfile]
+ verbose(" ".join(cmd))
+@@ -145,6 +148,7 @@ def parse_args():
+ parser.add_argument("-o", "--objcopy", required=True, help="objcopy to be used")
+ parser.add_argument("-f", "--elf-file", required=True, help="Input file")
+ parser.add_argument("-v", "--verbose", action="store_true", help="Verbose output")
++ parser.add_argument("-s", "--sysroot", required=True, help="Cross compilation --sysroot=path")
+
+ return parser.parse_args()
+
+diff --git a/boards/x86/ehl_crb/CMakeLists.txt b/boards/x86/ehl_crb/CMakeLists.txt
+index 0d572eff30..6a228107dc 100644
+--- a/boards/x86/ehl_crb/CMakeLists.txt
++++ b/boards/x86/ehl_crb/CMakeLists.txt
+@@ -5,6 +5,7 @@ set_property(GLOBAL APPEND PROPERTY extra_post_build_commands
+ -c ${CMAKE_C_COMPILER}
+ -o ${CMAKE_OBJCOPY}
+ -f ${PROJECT_BINARY_DIR}/${CONFIG_KERNEL_BIN_NAME}.elf
++ -s ${SYSROOT_DIR}
+ $<$<BOOL:${CMAKE_VERBOSE_MAKEFILE}>:--verbose>
+ WORKING_DIRECTORY ${PROJECT_BINARY_DIR}
+ )
+diff --git a/boards/x86/qemu_x86/CMakeLists.txt b/boards/x86/qemu_x86/CMakeLists.txt
+index 1131a5c7ce..489f17192b 100644
+--- a/boards/x86/qemu_x86/CMakeLists.txt
++++ b/boards/x86/qemu_x86/CMakeLists.txt
+@@ -4,6 +4,7 @@ set_property(GLOBAL APPEND PROPERTY extra_post_build_commands
+ -c ${CMAKE_C_COMPILER}
+ -o ${CMAKE_OBJCOPY}
+ -f ${PROJECT_BINARY_DIR}/${CONFIG_KERNEL_BIN_NAME}.elf
++ -s ${SYSROOT_DIR}
+ $<$<BOOL:${CMAKE_VERBOSE_MAKEFILE}>:--verbose>
+ WORKING_DIRECTORY ${PROJECT_BINARY_DIR}
+ )
+diff --git a/boards/x86/up_squared/CMakeLists.txt b/boards/x86/up_squared/CMakeLists.txt
+index 0eaa9753fc..2e8ce7cfbc 100644
+--- a/boards/x86/up_squared/CMakeLists.txt
++++ b/boards/x86/up_squared/CMakeLists.txt
+@@ -5,6 +5,7 @@ set_property(GLOBAL APPEND PROPERTY extra_post_build_commands
+ -c ${CMAKE_C_COMPILER}
+ -o ${CMAKE_OBJCOPY}
+ -f ${PROJECT_BINARY_DIR}/${CONFIG_KERNEL_BIN_NAME}.elf
++ -s ${SYSROOT_DIR}
+ $<$<BOOL:${CMAKE_VERBOSE_MAKEFILE}>:--verbose>
+ WORKING_DIRECTORY ${PROJECT_BINARY_DIR}
+ )
+--
+2.17.1
+
diff --git a/recipes-kernel/zephyr-kernel/zephyr-kernel-src.inc b/recipes-kernel/zephyr-kernel/zephyr-kernel-src.inc
index 8c987bb..8d5f176 100644
--- a/recipes-kernel/zephyr-kernel/zephyr-kernel-src.inc
+++ b/recipes-kernel/zephyr-kernel/zephyr-kernel-src.inc
@@ -21,5 +21,6 @@ SRC_URI = "\
git://github.com/zephyrproject-rtos/libmetal.git;protocol=https;destsuffix=git/modules/hal/libmetal;name=libmetal \
git://github.com/zephyrproject-rtos/tinycrypt.git;protocol=https;destsuffix=git/modules/crypto/tinycrypt;name=tinycrypt \
file://0001-cmake-add-yocto-toolchain.patch \
+ file://0001-x86-fix-efi-binary-generation-issue-in-cross-compila.patch \
"
S = "${WORKDIR}/git"
--
2.17.1


Re: [zeus] python3-dlib #yocto #zeus #python

Bel Hadj Salem Talel
 

Thanks for the suggestion, but they are using the C++ API as well. I already created a working recipe for the C++ API of dlib; the only thing still needed is compiling the Python API.


Re: [zeus] python3-dlib #yocto #zeus #python

Khem Raj
 

On Tue, May 11, 2021 at 1:00 PM Bel Hadj Salem Talel <bhstalel@gmail.com> wrote:

Hi All,

Did anyone manage to create a recipe for python dlib from the official site https://github.com/davisking/dlib ?
They provide C++ and Python API, (CMakeLists + setup.py). All recipes found for dlib are inheriting cmake for C++.
But when inheriting setuptools3 error occurs.

Did you look into http://layers.openembedded.org/layerindex/recipe/135534/ ?


Thanks,
Talel


[zeus] python3-dlib #yocto #zeus #python

Bel Hadj Salem Talel
 

Hi All,

Did anyone manage to create a recipe for python dlib from the official site https://github.com/davisking/dlib ?
They provide C++ and Python API, (CMakeLists + setup.py). All recipes found for dlib are inheriting cmake for C++. 
But when inheriting setuptools3 error occurs.

Thanks,
Talel


Re: Improving NPM recipe build speed

Alessandro Tagliapietra
 

Hi Nicolas,

Thank you for the advice, that would work for files outside the npm packages!

Hopefully there's a way to improve npm install speed too. I was hoping there was a way to just run npm install instead of going through the Yocto npm process (which takes 30 minutes vs. the < 1 minute that npm takes).

--
Alessandro Tagliapietra


On Mon, May 10, 2021 at 2:18 AM Nicolas Jeker <n.jeker@...> wrote:
On Mon, 2021-04-26 at 16:29 -0700, Alessandro Tagliapietra wrote:
> Hi everyone,

Hi Alessandro,

> I'm making an image that includes the node-red recipe from meta-iot-
> cloud.
> The whole process takes about 30+ minutes for that recipe alone (most
> of the time spent in do_configure).
> Now I want to override the recipe systemd service file and create a
> nodered user. Every time I change my bbappend file I have to wait 30+
> minutes to have the result even for a small systemd file change.
>
> Is it possible to speed up the process somehow?
>

I never worked with node-red in yocto, so I can't speak specifically
for that, but I encountered similar situations before. Here is what I
usually do when I need to change a file in a recipe that takes a really
long time to compile or triggers a rebuild of a ton of other recipes.

This only works for files that don't need to be compiled, like
configuration files, systemd service files, udev rules etc. I usually
replace the file in the rootfs directly on the device (or boot from NFS
and edit the file in the NFS export). For example if I need to change a
systemd service file, I change the file on my host, copy it with scp to
the device and check if everything is working as expected. When I'm
finished, I reintegrate my edits with a bbappend file and check again
if it works.
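
The final reintegration step can be sketched as a bbappend along these
lines (the recipe and file names are hypothetical, and the old-style
_append override syntax matches the releases discussed in this thread):

```
# node-red_%.bbappend (hypothetical)
FILESEXTRAPATHS_prepend := "${THISDIR}/files:"

SRC_URI += "file://node-red.service"

do_install_append() {
    install -m 0644 ${WORKDIR}/node-red.service \
        ${D}${systemd_system_unitdir}/node-red.service
}
```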

> Thanks in advance


[meta-rockchip][PATCH] trusted-firmware-a: Fix rk3399 build with gcc11

Khem Raj
 

Signed-off-by: Khem Raj <raj.khem@gmail.com>
Cc: Ross Burton <ross.burton@arm.com>
---
.../files/0001-Fix-build-with-gcc-11.patch | 34 ++++++++++++++++++
.../0001-dram-Fix-build-with-gcc-11.patch | 34 ++++++++++++++++++
...-Use-compatible-.asciz-asm-directive.patch | 31 ++++++++++++++++
...rk-already-defined-functions-as-weak.patch | 35 +++++++++++++++++++
.../trusted-firmware-a_%.bbappend | 4 +++
5 files changed, 138 insertions(+)
create mode 100644 recipes-bsp/trusted-firmware-a/files/0001-Fix-build-with-gcc-11.patch
create mode 100644 recipes-bsp/trusted-firmware-a/files/0001-dram-Fix-build-with-gcc-11.patch
create mode 100644 recipes-bsp/trusted-firmware-a/files/0001-plat_macros.S-Use-compatible-.asciz-asm-directive.patch
create mode 100644 recipes-bsp/trusted-firmware-a/files/0001-pmu-Do-not-mark-already-defined-functions-as-weak.patch

diff --git a/recipes-bsp/trusted-firmware-a/files/0001-Fix-build-with-gcc-11.patch b/recipes-bsp/trusted-firmware-a/files/0001-Fix-build-with-gcc-11.patch
new file mode 100644
index 0000000..7956717
--- /dev/null
+++ b/recipes-bsp/trusted-firmware-a/files/0001-Fix-build-with-gcc-11.patch
@@ -0,0 +1,34 @@
+From d4c60a312271e000e8339f0b47a302c325313758 Mon Sep 17 00:00:00 2001
+From: Khem Raj <raj.khem@gmail.com>
+Date: Tue, 11 May 2021 11:46:30 -0700
+Subject: [PATCH] Fix build with gcc 11
+
+Fixes
+plat/rockchip/rk3399/drivers/dram/dram.c:13:22: error: ignoring attribute 'section (".pmusram.data")' because it conflicts with previous 'section (".sram.data")' [-Werror=attributes]
+
+See [1]
+
+[1] https://developer.trustedfirmware.org/T925
+
+Upstream-Status: Pending
+Signed-off-by: Khem Raj <raj.khem@gmail.com>
+---
+ plat/rockchip/rk3399/drivers/dram/dram.h | 2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+diff --git a/plat/rockchip/rk3399/drivers/dram/dram.h b/plat/rockchip/rk3399/drivers/dram/dram.h
+index 0eb12cf29..5572b1612 100644
+--- a/plat/rockchip/rk3399/drivers/dram/dram.h
++++ b/plat/rockchip/rk3399/drivers/dram/dram.h
+@@ -149,7 +149,7 @@ struct rk3399_sdram_params {
+ uint32_t rx_cal_dqs[2][4];
+ };
+
+-extern __sramdata struct rk3399_sdram_params sdram_config;
++extern struct rk3399_sdram_params sdram_config;
+
+ void dram_init(void);
+
+--
+2.31.1
+
diff --git a/recipes-bsp/trusted-firmware-a/files/0001-dram-Fix-build-with-gcc-11.patch b/recipes-bsp/trusted-firmware-a/files/0001-dram-Fix-build-with-gcc-11.patch
new file mode 100644
index 0000000..14defed
--- /dev/null
+++ b/recipes-bsp/trusted-firmware-a/files/0001-dram-Fix-build-with-gcc-11.patch
@@ -0,0 +1,34 @@
+From a09a1de53aba422249a8376b0d95024200021317 Mon Sep 17 00:00:00 2001
+From: Khem Raj <raj.khem@gmail.com>
+Date: Tue, 11 May 2021 11:55:31 -0700
+Subject: [PATCH] dram: Fix build with gcc 11
+
+This is a redundant self-assignment which the compiler warns about.
+
+Fixes
+
+plat/rockchip/rk3399/drivers/dram/dram_spec_timing.c:781:11: error: explicitly assigning value of variable of type 'uint32_t' (aka 'unsigned int') to itself [-Werror,-Wself-assign]
+ twr_tmp = twr_tmp;
+ ~~~~~~~ ^ ~~~~~~~
+
+Signed-off-by: Khem Raj <raj.khem@gmail.com>
+---
+ plat/rockchip/rk3399/drivers/dram/dram_spec_timing.c | 2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+diff --git a/plat/rockchip/rk3399/drivers/dram/dram_spec_timing.c b/plat/rockchip/rk3399/drivers/dram/dram_spec_timing.c
+index 3cdb7a296..76bc5ee96 100644
+--- a/plat/rockchip/rk3399/drivers/dram/dram_spec_timing.c
++++ b/plat/rockchip/rk3399/drivers/dram/dram_spec_timing.c
+@@ -778,7 +778,7 @@ static void lpddr3_get_parameter(struct timing_related_config *timing_config,
+ else if (twr_tmp <= 8)
+ twr_tmp = 8;
+ else if (twr_tmp <= 12)
+- twr_tmp = twr_tmp;
++ ; /* do nothing */
+ else if (twr_tmp <= 14)
+ twr_tmp = 14;
+ else
+--
+2.31.1
+
diff --git a/recipes-bsp/trusted-firmware-a/files/0001-plat_macros.S-Use-compatible-.asciz-asm-directive.patch b/recipes-bsp/trusted-firmware-a/files/0001-plat_macros.S-Use-compatible-.asciz-asm-directive.patch
new file mode 100644
index 0000000..8807fca
--- /dev/null
+++ b/recipes-bsp/trusted-firmware-a/files/0001-plat_macros.S-Use-compatible-.asciz-asm-directive.patch
@@ -0,0 +1,31 @@
+From 5f78ce7eb9ab6bf5af682a715a9264d2a5ee7666 Mon Sep 17 00:00:00 2001
+From: Khem Raj <raj.khem@gmail.com>
+Date: Tue, 11 May 2021 12:06:34 -0700
+Subject: [PATCH] plat_macros.S: Use compatible .asciz asm directive
+
+clang's assembler does not accept two strings passed to .asciz, so make
+it a single string, which works on clang as well.
+
+Upstream-Status: Pending
+Signed-off-by: Khem Raj <raj.khem@gmail.com>
+---
+ plat/rockchip/common/include/plat_macros.S | 3 +--
+ 1 file changed, 1 insertion(+), 2 deletions(-)
+
+diff --git a/plat/rockchip/common/include/plat_macros.S b/plat/rockchip/common/include/plat_macros.S
+index 691beeb44..c07be9ca9 100644
+--- a/plat/rockchip/common/include/plat_macros.S
++++ b/plat/rockchip/common/include/plat_macros.S
+@@ -23,8 +23,7 @@ icc_regs:
+
+ /* Registers common to both GICv2 and GICv3 */
+ gicd_pend_reg:
+- .asciz "gicd_ispendr regs (Offsets 0x200 - 0x278)\n" \
+- " Offset:\t\t\tvalue\n"
++ .asciz "gicd_ispendr regs (Offsets 0x200 - 0x278)\n Offset:\t\t\tvalue\n"
+ newline:
+ .asciz "\n"
+ spacer:
+--
+2.31.1
+
diff --git a/recipes-bsp/trusted-firmware-a/files/0001-pmu-Do-not-mark-already-defined-functions-as-weak.patch b/recipes-bsp/trusted-firmware-a/files/0001-pmu-Do-not-mark-already-defined-functions-as-weak.patch
new file mode 100644
index 0000000..bd4d2b5
--- /dev/null
+++ b/recipes-bsp/trusted-firmware-a/files/0001-pmu-Do-not-mark-already-defined-functions-as-weak.patch
@@ -0,0 +1,35 @@
+From 9d963cd69faf94bdcb80624132fd10392f57875b Mon Sep 17 00:00:00 2001
+From: Khem Raj <raj.khem@gmail.com>
+Date: Tue, 11 May 2021 12:11:51 -0700
+Subject: [PATCH] pmu: Do not mark already defined functions as weak
+
+These functions are already defined as static functions in the same header.
+Fixes
+
+| plat/rockchip/common/drivers/pmu/pmu_com.h:35:14: error: weak identifier 'pmu_power_domain_ctr' never declared [-Werror]
+| #pragma weak pmu_power_domain_ctr
+| ^
+| plat/rockchip/common/drivers/pmu/pmu_com.h:36:14: error: weak identifier 'check_cpu_wfie' never declared [-Werror]
+| #pragma weak check_cpu_wfie
+| ^
+
+Upstream-Status: Pending
+Signed-off-by: Khem Raj <raj.khem@gmail.com>
+---
+ plat/rockchip/common/drivers/pmu/pmu_com.h | 2 --
+ 1 file changed, 2 deletions(-)
+
+diff --git a/plat/rockchip/common/drivers/pmu/pmu_com.h b/plat/rockchip/common/drivers/pmu/pmu_com.h
+index 5359f73b4..3f9ce7df9 100644
+--- a/plat/rockchip/common/drivers/pmu/pmu_com.h
++++ b/plat/rockchip/common/drivers/pmu/pmu_com.h
+@@ -32,8 +32,6 @@ enum pmu_pd_state {
+ };
+
+ #pragma weak plat_ic_get_pending_interrupt_id
+-#pragma weak pmu_power_domain_ctr
+-#pragma weak check_cpu_wfie
+
+ static inline uint32_t pmu_power_domain_st(uint32_t pd)
+ {
+--
+2.31.1
+
diff --git a/recipes-bsp/trusted-firmware-a/trusted-firmware-a_%.bbappend b/recipes-bsp/trusted-firmware-a/trusted-firmware-a_%.bbappend
index 1942c17..c90673e 100644
--- a/recipes-bsp/trusted-firmware-a/trusted-firmware-a_%.bbappend
+++ b/recipes-bsp/trusted-firmware-a/trusted-firmware-a_%.bbappend
@@ -8,4 +8,8 @@ COMPATIBLE_MACHINE_append_rk3328 = "|rk3328"
FILESEXTRAPATHS_prepend := "${THISDIR}/files:"
SRC_URI += "\
file://serial-console-baudrate.patch \
+ file://0001-Fix-build-with-gcc-11.patch \
+ file://0001-dram-Fix-build-with-gcc-11.patch \
+ file://0001-plat_macros.S-Use-compatible-.asciz-asm-directive.patch \
+ file://0001-pmu-Do-not-mark-already-defined-functions-as-weak.patch \
"
--
2.31.1


Re: [zeus] python3-numpy: No module named 'numpy.core._multiarray_umath' #yocto #zeus #python

Konrad Weihmann
 

On 11.05.21 18:14, Bel Hadj Salem Talel wrote:
Hi All,
I integrated python3-numpy in my image and when trying to import it I get this error: (python3 version: 3.7.7)
------
root@menzu-media:~# python3
Python 3.7.7 (default, Apr 22 2021, 09:42:29)
[GCC 9.2.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import cv2
OpenCV bindings requires "numpy" package.
Install it via command:
    pip install numpy
Traceback (most recent call last):
  File "/usr/lib/python3.7/site-packages/numpy/core/__init__.py", line 17, in <module>
    from . import multiarray
  File "/usr/lib/python3.7/site-packages/numpy/core/multiarray.py", line 14, in <module>
    from . import overrides
  File "/usr/lib/python3.7/site-packages/numpy/core/overrides.py", line 7, in <module>
    from numpy.core._multiarray_umath import (
ModuleNotFoundError: No module named 'numpy.core._multiarray_umath'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/lib/python3.7/site-packages/cv2/__init__.py", line 8, in <module>
    import numpy
  File "/usr/lib/python3.7/site-packages/numpy/__init__.py", line 142, in <module>
    from . import core
  File "/usr/lib/python3.7/site-packages/numpy/core/__init__.py", line 47, in <module>
    raise ImportError(msg)
ImportError:
IMPORTANT: PLEASE READ THIS FOR ADVICE ON HOW TO SOLVE THIS ISSUE!
Importing the numpy c-extensions failed.
- Try uninstalling and reinstalling numpy.
- If you have already done that, then:
  1. Check that you expected to use Python3.7 from "/usr/bin/python3",
     and that you have no directories in your PATH or PYTHONPATH that can
     interfere with the Python and numpy version "1.17.0" you're trying to use.
  2. If (1) looks fine, you can open a new issue at
     https://github.com/numpy/numpy/issues.  Please include details on:
     - how you installed Python
     - how you installed numpy
     - your operating system
     - whether or not you have multiple versions of Python installed
     - if you built from source, your compiler versions and ideally a build log
- If you're working with a numpy git repository, try `git clean -xdf`
  (removes all files not under version control) and rebuild numpy.
Note: this error has many possible causes, so please don't comment on
an existing issue about this - open a new one instead.
Original error was: No module named 'numpy.core._multiarray_umath'
----------
Did anyone encounter this issue?
According to [1], yes. You might want to consider backporting one of the more recent numpy versions from [2], as you are using a fairly outdated one.

[1] https://stackoverflow.com/questions/54665842/when-importing-tensorflow-i-get-the-following-error-no-module-named-numpy-cor
[2] http://layers.openembedded.org/layerindex/recipe/51338/

The numpy GitHub issue tracker also contains at least one issue of this sort.
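
If a newer numpy recipe is backported into a local layer, it can be
selected with a preferred-version entry in local.conf or the distro
configuration (the version string below is only illustrative):

```
# local.conf (version string is illustrative)
PREFERRED_VERSION_python3-numpy = "1.20.%"
```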

Thanks,
Talel
