Cross compiler which runs on the target architecture


Stefan Herbrechtsmeier
 

Hi Peter and Richard,

is there a follow-up to this old topic (thread) regarding a cross compiler which runs on the target architecture?
https://www.yoctoproject.org/pipermail/yocto/2014-December/022782.html

Kind regards
  Stefan


Khem Raj
 

On 12/9/20 4:38 AM, Stefan Herbrechtsmeier wrote:
Hi Peter and Richard,
is there a follow-up to this old topic (thread) regarding a cross compiler which runs on the target architecture?
https://www.yoctoproject.org/pipermail/yocto/2014-December/022782.html
I don't think there were further discussions. However, I think the cross-canadian approach is perhaps a step in the right direction. We would need to enhance it to be able to build multiple cross-canadian tuples instead of the single one we build today, which is based on SDK_MACHINE and TARGET_ARCH.

Kind regards
  Stefan


Stefan Herbrechtsmeier
 

Hi Khem,

On 09.12.20 at 19:23, Khem Raj wrote:
On 12/9/20 4:38 AM, Stefan Herbrechtsmeier wrote:
Hi Peter and Richard,

is there a follow-up to this old topic (thread) regarding a cross compiler which runs on the target architecture?
https://www.yoctoproject.org/pipermail/yocto/2014-December/022782.html
I don't think there were further discussions. However, I think the cross-canadian approach is perhaps a step in the right direction. We would need to enhance it to be able to build multiple cross-canadian tuples instead of the single one we build today, which is based on SDK_MACHINE and TARGET_ARCH.
The cross-canadian class changes the HOST_ variables and not the TARGET_ variables. I need to change the TARGET_ variables but keep the HOST_ variables. It looks like this isn't really intended, because HOST_PREFIX uses TARGET_PREFIX, for example.
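For reference, the coupling looks roughly like this (paraphrased from OE-core's bitbake.conf and cross-canadian.bbclass; exact contents vary by release):

    # bitbake.conf: the host-side tool prefix is defined via the target triplet
    TARGET_PREFIX = "${TARGET_SYS}-"
    HOST_PREFIX = "${TARGET_PREFIX}"

    # cross-canadian.bbclass: retargets the HOST_ side to the SDK machine
    HOST_ARCH = "${SDK_ARCH}"
    HOST_OS = "${SDK_OS}"

With these defaults, redefining the TARGET_ side drags HOST_PREFIX along with it.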

Regards
Stefan


Peter
 

I don't think there were further discussions.
The proof of concept layer is http://layers.openembedded.org/layerindex/branch/master/layer/meta-exotic/ - https://github.com/peteasa/meta-exotic/wiki.  I created a set of cross compilers for the Epiphany processor (http://www.adapteva.com/epiphanyiii/) using this layer and the layer https://github.com/peteasa/meta-epiphany, which builds the specific cross compilers for that processor.  It worked quite well with yocto-1.7.1, so it's quite an old bit of code; however, the ideas could work in more recent versions of Yocto.

In summary, the yocto-1.7.1 code had three variables of interest - BUILD: the system of the build machine; HOST: the system on which the executable will run; TARGET: the system for which the compiler output will be created.  The meta-exotic layer introduces a fourth, EXOTIC_TARGET, which allows us to distinguish the accelerator's system from the build machine, the host system, and the system that the compiled Linux kernel runs on.  I called it EXOTIC because if the compiled Linux kernel and applications need to communicate with a remote accelerator with a different system, then as far as most of the Yocto build is concerned this fourth system is foreign (i.e. exotic) to the embedded product that the Linux kernel and applications are built for.
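In BitBake terms the four systems can be sketched like this (values illustrative; EXOTIC_TARGET_SYS follows the layer's naming idea, though the exact spelling in meta-exotic may differ):

    BUILD_SYS = "x86_64-linux"                # machine running the build
    HOST_SYS = "x86_64-pokysdk-linux"         # machine the binary runs on
    TARGET_SYS = "arm-poky-linux-gnueabi"     # machine the output is built for
    EXOTIC_TARGET_SYS = "epiphany-elf"        # the attached accelerator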

I call it a proof of concept because at the time it seemed to me that the sensible "fix" would be to introduce these four macros into Yocto itself, so that Yocto had four rather than three selectable systems (build machine, host machine, target machine and exotic machine).  In the absence of full Yocto support for these four, I used the Yocto parser rules to update the --target and --host options of the cross compilers to match the appropriate specification.  See https://github.com/peteasa/meta-exotic/wiki/Introducing-EXOTIC-defines for a more detailed discussion with examples.
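One way to express the effect for a compiler that runs on the embedded target and emits accelerator code is the following recipe fragment (a simplified sketch under the naming assumptions above, not how meta-exotic actually does it; EXTRA_OECONF is the standard variable for extra configure options):

    EXTRA_OECONF += "--host=${TARGET_SYS} --target=${EXOTIC_TARGET_SYS}"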

I know that Yocto has moved on a lot since I produced the EXOTIC layer, so if there is a good way to handle the creation of the various cross compilers then that is a good thing.  In my case the Exotic target allowed me to create both code that ran directly on the Epiphany processor (an accelerator attached to the arm-poky-linux-gnueabi embedded system) and a cross compiler environment that allowed me to build code for both the Epiphany accelerator and the arm-poky-linux-gnueabi embedded system from the same SDK running on the SDK machine of my choice.  Use of the four variables allowed me to make gcc / applications for any of the following combinations:

--host=build_machine --target=build_machine
--host=build_machine --target=SDK_machine
--host=arm_target --target=arm_target
--host=SDK_machine --target=arm_target
--host=build_machine --target=Epiphany_accelerator
--host=SDK_machine --target=Epiphany_accelerator
--host=arm_target --target=Epiphany_accelerator

Yes, it is complex, but these are the necessary combinations if you want to create a complete system that supports an embedded product with an accelerator and provides an SDK environment, so that you can build locally and then download the binaries to the embedded systems.
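To make two of these concrete as GCC configure invocations (triplets illustrative; epiphany-elf is the bare-metal triplet used by the upstream Epiphany toolchain):

    # SDK-hosted cross compiler for the arm target (the canadian case):
    ./configure --build=x86_64-linux-gnu \
                --host=x86_64-pokysdk-linux \
                --target=arm-poky-linux-gnueabi

    # Compiler that runs on the arm board and targets the accelerator:
    ./configure --build=x86_64-linux-gnu \
                --host=arm-poky-linux-gnueabi \
                --target=epiphany-elf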

All I can say is it worked for me!


Stefan Herbrechtsmeier
 

Hi Peter,

On 17.12.20 at 19:59, Peter wrote:
I don't think there were further discussions.
The proof of concept layer is http://layers.openembedded.org/layerindex/branch/master/layer/meta-exotic/ - https://github.com/peteasa/meta-exotic/wiki. [...]
All I can say is it worked for me!
Thanks for the detailed explanation of your solution.

The problem I see is that your solution doesn't scale. What happens if your system has a second non-Linux system besides the accelerator, like a power management unit?

Why don't you use multilib? It looks like the only missing part is the combination of host=machine and target=multilib.

Regards
Stefan


Peter
 

Why don't you use multilib?
Once the framework is in place, adding additional compile flags like --enable-multilib is relatively easy; see https://github.com/peteasa/meta-exotic/blob/master/exotic-gc/exotic-gcc-configure-common.inc#L14, where I have actually used multilib. In addition, it appears that this option is on by default - see https://gcc.gnu.org/onlinedocs/libstdc++/manual/configure.html
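As a sketch of what passing that flag amounts to in a recipe fragment (illustrative only, not the exact contents of the linked .inc file):

    EXTRA_OECONF += "--enable-multilib"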
The problem I see is that your solution doesn't scale
Interesting comment.  I would expect that for two or more exotic systems you could create a separate build environment / SDK for each of the exotic systems - not ideal, but it would scale, because each would produce executables that are bundled into a distribution that would be published and loaded in the normal way onto the embedded system.  Don't forget my initial challenge was that I could not easily create a build environment / SDK that would create packages for my exotic processor.  My solution addressed that challenge and re-uses a lot of the code that is provided by Yocto.  My intention was to find out if there was community interest in multi-system building by providing a proof of concept to spark people's interest.  The proposed "specialist recipe" idea, whilst simpler, does not produce the environment that I wanted; I would not have been able to compile simple applications for my accelerator on my target arm processor, for example.