[yocto] [EXTERNAL] Re: Issues adding bare meta toolchain to yocto build

William Mills wmills at ti.com
Thu Oct 24 06:43:06 PDT 2019


Hi Richard,

On 10/24/19 7:01 AM, richard.purdie at linuxfoundation.org wrote:
> On Thu, 2019-10-24 at 10:03 +0000, Westermann, Oliver wrote:
>>> On Wed, 2019-10-23 at 15:34 +0000, Richard Purdie wrote:
>>>> On Wed, 2019-10-23 at 11:23 +0000, Westermann, Oliver wrote:
>>>> Hey,
>>>> [...]
>>>> Any suggestions on what I'm doing wrong or how to debug this
>>>> further?
>>> Sounds like the sysroot filtering code doesn't know about this
>>> directory and therefore doesn't pass it through to the recipe
>>> sysroot?
>>
>> Sorry to ask dumb questions, but what do you mean by "this
>> directory"?
>> The toolchain directory created by the TI recipe? Shouldn't that be
>> handled by FILES_${PN}?
> 
> If it's a target recipe, FILES makes sense.
> 
> If it's a native recipe, there are no packages and therefore FILES
> doesn't make sense.
> 
>>> The recipe you link to is for an on target compiler, not one in the
>>> sysroot.
>>
>> Again, this might be a stupid misunderstanding on my side: the
>> recipe I linked extracts a precompiled toolchain that only runs on
>> x86_64 systems and enables a "native" package. From my current
>> understanding, a native package is meant to be used on the build
>> host, which is what I'm intending to do here.
> 
> You are correct, that recipe disables the target version and appears to
> provide a native binary. I can't actually see how it can work though.
> Can you confirm that recipe does work?
> 

Denys can you confirm or add context?

>> I managed to successfully add a native recipe for the NXP code
>> signing tool (which is only provided as a precompiled binary as
>> well), and it works as expected.
> 
> If you install the binaries into ${bindir} they will. If you place them
> somewhere else which the system doesn't know about, they probably
> won't.
> 
> There are ways to make alternative locations work but I don't see any
> of that in the above recipe.
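For anyone following along, I believe the ${bindir} case Richard
describes would look roughly like the sketch below. The recipe and
file names here are made up for illustration; the key point is only
that the prebuilt binary lands in ${bindir}, which the sysroot
staging code already knows about:

```bitbake
# Hypothetical sketch: wrapping a prebuilt host binary in a -native recipe.
SUMMARY = "Prebuilt host tool (sketch only)"
LICENSE = "CLOSED"

# Made-up file name for illustration
SRC_URI = "file://mytool"

inherit native

do_install() {
    # Installing into ${bindir} means the recipe-specific sysroot
    # machinery picks the file up without any extra handling.
    install -d ${D}${bindir}
    install -m 0755 ${WORKDIR}/mytool ${D}${bindir}/mytool
}
```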
> 
>>> I'm actually a little surprised you can't use our standard cross
>>> compiler assuming this M4 core is on an ARM chip? It should be a
>>> case
>>> of passing the right options to the compiler to target the M4?
>>
>> I'm totally open to any info on how to do this! I googled around
>> for recipes or examples that build a non-Linux binary using Yocto,
>> but I couldn't really find anything, nor any documentation. Maybe
>> my search terms (along the lines of "build bare-metal ARM binary
>> using Yocto") were off target. Can you point me at documentation,
>> examples, search terms..?

I started looking at this thread because I had the same questions.  Is
it possible to make a recipe depend on another version of GCC and
rerun the whole GCC build with a different configuration?

Or does this need multiconfig?

This is the question I was trying to ask the other day.
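In case it helps frame the question, my rough understanding of the
multiconfig route is something like the following. I have not tried
this, and every name here (the config name, machine, and TCLIBC
value) is hypothetical:

```conf
# local.conf (sketch): enable an extra build configuration
BBMULTICONFIG = "m4"

# conf/multiconfig/m4.conf (sketch): hypothetical M4 settings
MACHINE = "my-m4-machine"
TCLIBC = "baremetal"
```

A recipe would then be built against that configuration with
something like `bitbake mc:m4:my-firmware` (older releases used the
longer `multiconfig:` prefix instead of `mc:`).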

> 
> I'm saying I don't think you need a second toolchain. I think you could
> use your existing arm toolchain with the right compiler options.
> 
> See https://stackoverflow.com/questions/28514507/what-makes-bare-metal-tool-chains-special
> 
> So you'd just add the right flags to the compiler, something like:
> 
> XXX-gcc -mcpu=cortex-m4 -march=armv7e-m -ffreestanding -nostdlib -nostdinc
> 
> i.e. tell it which processor to target and not to use standard
> libraries/includes.
> 
> There isn't anything that special about a baremetal compiler except it
> sets some different default flags and is missing the library support.
> 
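So concretely, if I follow, the proposal is just something like the
command below. The cross-compiler name and the source/linker-script
file names are placeholders I made up; -mthumb is added because the
M profile only supports Thumb code:

```shell
# Sketch: build Cortex-M4 firmware with the existing OE arm toolchain,
# targeting the M4 and skipping the Linux headers and libraries.
arm-poky-linux-gnueabi-gcc \
    -mcpu=cortex-m4 -mthumb -march=armv7e-m \
    -ffreestanding -nostdlib -nostdinc \
    -T m4.ld -o firmware.elf startup.S main.c
```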

Really??

Let's start with the fact that the ARM binary toolchain has been
tested with M4 cores.

Then understand that you will need to supply your own GCC compiler
helpers and all the standard C functions, even the ones that port well
like strlen and memcpy.  (Because everyone should embed their own
unoptimized memcpy in their projects.)

This works for the kernel and U-Boot, but they were designed for this
and have many eyes on their versions of memcpy etc.  They are also
running on the same CPU core as the Linux user space.

Given this, I can believe this would work for Clang/LLVM if M4 support
was enabled at build time.  I am not convinced for GCC.

If the above really works, then why does GCC insist on being compiled
again after the C library has been compiled?  I have asked and never
got an answer that I understood.  I have been told that it looks at
the C library headers and changes what it builds into the compiler.
Does all that magic go away with just -nostdinc and -nostdlib?

I know this works for the kernel, but targeting an RTOS or bare metal
on a very different core would make me nervous.  That is especially
true if the M4 would need to link against libraries that were built
outside of OE.

I know I am pushing back hard but I am also hoping you will convince me
I am wrong.

Thanks,
Bill

