[meta-virtualization] LXD anyone?

Bruce Ashfield bruce.ashfield at gmail.com
Fri Apr 21 11:03:25 PDT 2017


On Fri, Apr 21, 2017 at 12:32 PM, Massimiliano Malisan <
max.malisan.uk at gmail.com> wrote:

> Hi,
>
> well, it's a bit complicated: being a newbie to both Yocto and go-lang
> when I started all this, I wasn't aware of the introduction of go in the
> openembedded-core master (I am using morty). As I couldn't quickly sort
> out how to set up the recipe with the meta-virtualization go compiler, I
> switched the whole meta to use meta-golang (quite an easy task), which
> seemed to me much easier to use and directly produced a build tree
> structure similar to the one I got from "go get" of the LXD source
> (which I used as a reference to understand all the package's
> dependencies and go build structure).
>

I'm not so concerned about the variant of the go compiler used for the
build; as long as you found something that worked, it is something we can
merge.

But yes, anything we merge now should work with oe-core's go variant, since
that
is what we are standardizing on.
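
For reference, a minimal sketch of what the recipe header could look like
against oe-core's go class (recipe name, SRCREV, and checksum below are
placeholders, and the class/variable names assume oe-core's go.bbclass):

```bitbake
# Hypothetical sketch only -- revision and license checksum must be filled in
SUMMARY = "Container hypervisor based on LXC"
HOMEPAGE = "https://linuxcontainers.org/lxd"
LICENSE = "Apache-2.0"
LIC_FILES_CHKSUM = "file://src/${GO_IMPORT}/COPYING;md5=<fill-in>"

GO_IMPORT = "github.com/lxc/lxd"
SRC_URI = "git://${GO_IMPORT};protocol=https"
SRCREV = "${AUTOREV}"

# oe-core's go class sets up the GOPATH layout and the cross toolchain
inherit go
```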


>
> Besides this:
> * I created a bunch of go recipes for all the dependencies (following the
> example of docker's recipe dependencies)
>

These can be submitted to meta-virt, but packaging too many support
packages isn't necessarily the right thing, since they can conflict with
other go packages that need the same dependency at a different version.

The technique I've been using to deal with this lately is to put multiple
upstream repos in the SRC_URI and let the fetcher pull them into the source
directory, making them available to the build.

e.g. look at what I did for libnetwork in docker; you can do something
similar per-dependency and use destsuffix= to get them into the right place
(e.g. destsuffix=git/src/github.com/openSUSE/umoci).
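
To make that concrete, a hypothetical sketch (the dependency repo and
revisions below are illustrative, not LXD's actual dependency set):

```bitbake
# Hypothetical: fetch a Go dependency alongside the main repo and place it
# where the GOPATH-style build tree expects it, via destsuffix=
SRC_URI = "git://github.com/lxc/lxd;protocol=https;name=lxd \
           git://github.com/gorilla/websocket;protocol=https;name=websocket;destsuffix=git/src/github.com/gorilla/websocket \
          "
SRCREV_lxd = "${AUTOREV}"
SRCREV_websocket = "${AUTOREV}"
```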


> * I added and/or updated a few other recipes (some are added/updated from
> meta-cube)
>

These can be sent here, or as a pull request on the OverC github pages. If
you send them here, make sure to have [meta-cube] in the subject; or go
right for the github pull request.


> * I added/changed some system configuration files
>

We'd have to look at those closely; system-level config is something that
either needs to be controlled by packageconfigs or is a distro policy. So
they may not be something that you need to send at all.


> * I added some kernel modules - but without a real knowledge of which are
> strictly needed: I just checked those needed by docker (using the checking
> script from coreos docker's repo --
> https://github.com/coreos/docker/blob/master/contrib/check-config.sh)
> and created corresponding
> kernel scc and cfg files (I also updated the docker's ones but this is a
> collateral and I don't know if it's needed).
>

That would be fine, following what we do for docker makes sense.
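
As a sketch, the fragments could be modeled on docker's (the option list
below is illustrative; the authoritative list is whatever check-config.sh
reports against your target kernel):

```
# lxd.scc -- hypothetical fragment, modeled on the docker one
define KFEATURE_DESCRIPTION "Enable features required to run LXD containers"
define KFEATURE_COMPATIBILITY board

kconf non-hardware lxd.cfg
```

```
# lxd.cfg -- illustrative subset of container-related options
CONFIG_NAMESPACES=y
CONFIG_UTS_NS=y
CONFIG_IPC_NS=y
CONFIG_PID_NS=y
CONFIG_NET_NS=y
CONFIG_CGROUPS=y
CONFIG_CGROUP_DEVICE=y
CONFIG_VETH=y
CONFIG_BRIDGE=y
CONFIG_MACVLAN=y
```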


>
> Now that I've got a bit more insight into the overall process, I may try
> to update the recipes to use the new go compiler structure (but for the
> moment I still need to keep it working for morty... and I wouldn't like
> to keep two different versions of the recipes).
>
> By the way, LXD now seems to be working on my test image; there are still
> a few things I have to do manually after the installation (which should
> be automated within the build process but I haven't been able to, yet)
> and a few "side" things missing:
> * I could not build criu (as per previous message to the list) which is
> indicated as a tool to be used together with LXD on the build instructions
>

Unless you are doing container migration or snapshots, you can ignore it,
and even make it an optional dependency.
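
In the recipe that could be expressed as an off-by-default PACKAGECONFIG
whose fourth field adds the runtime dependency (a sketch, assuming a criu
recipe is available in your layer mix):

```bitbake
# Hypothetical: criu only becomes an RDEPENDS when the packageconfig is
# enabled (the fourth PACKAGECONFIG field is the runtime dependency list)
PACKAGECONFIG ??= ""
PACKAGECONFIG[criu] = ",,,criu"
```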


> * I have not been able to use it with apparmor (I didn't manage to build
> it from the meta-security)
>

This would be optional anyway; apparmor is a distro-type config, and
building it into (or out of) LXD would be a packageconfig or distro
feature.
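
A sketch of what that could look like, gating the default on a distro
feature (the packageconfig name is illustrative; LXD's actual build
switches would need checking):

```bitbake
# Hypothetical: follow DISTRO_FEATURES by default; build-depend on apparmor
# only when the packageconfig is enabled
PACKAGECONFIG ??= "${@bb.utils.contains('DISTRO_FEATURES', 'apparmor', 'apparmor', '', d)}"
PACKAGECONFIG[apparmor] = ",,apparmor"
```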


> * In his blog, LXD main developer suggests to use ZFS as ideal container
> fs but it's currently missing from Yocto...
>

I wouldn't worry about this either; we can't dictate the FS that people
use, so the recipes need to be generic. That's up to the distro or image
assembler.


>
> If all this seems reasonable (and if you give me some links where to read
> how to upload or send the recipes and patches I did) I will be happy to
> contribute..
>
>
See above. You can send them via git send-email or github pull requests.
Instructions on how to send the patches should be in the README of
meta-virt.
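
For the git send-email route, the workflow is roughly as follows (a
sketch; the list address is taken from this thread's footer, and the exact
subject prefix should follow the meta-virt README):

```shell
# Sketch of the patch-submission workflow; uses a throwaway repo so the
# format-patch step is self-contained. Recipe name/commit are illustrative.
set -e
cd "$(mktemp -d)"
git init -q .
git config user.email you@example.com
git config user.name "Your Name"
echo demo > lxd_git.bb && git add lxd_git.bb
git commit -qm "lxd: add initial recipe"

# Prefix the subject so the patch is routed to the right layer/list;
# git wraps the prefix in brackets: [meta-virtualization][PATCH]
git format-patch -1 --subject-prefix="meta-virtualization][PATCH"
grep "Subject:" 0001-*.patch

# Then send it (requires send-email to be configured; shown for
# illustration only):
# git send-email --to=meta-virtualization@yoctoproject.org 0001-*.patch
```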

Cheers,

Bruce


> Cheers,
>     Max
>
>
> On 20 April 2017 at 16:10, Bruce Ashfield <bruce.ashfield at gmail.com>
> wrote:
>
>>
>>
>> On Thu, Apr 20, 2017 at 10:00 AM, Massimiliano Malisan <
>> max.malisan.uk at gmail.com> wrote:
>>
>>> Hi all,
>>>
>>> is anyone currently working on creating a recipe for LXD?
>>>
>>
>> We started creating one about a year ago (and it was discussed, but I
>> can't find a reference now) ... but that effort never made it into the
>> tree and has bit-rotted since.
>>
>> So if you have something underway, and are looking for input, feel free
>> to post RFC patches and we can see what's working (or not!).
>>
>> Cheers,
>>
>> Bruce
>>
>>
>>>
>>> I have been working on one over the last few days but I still have
>>> some issues getting it to run properly; being quite new to Yocto I am
>>> not sure if they depend on something in my recipe or some other missing
>>> dependency or configuration, so any other voice is more than welcome.
>>>
>>> Cheers,
>>>     Max
>>>
>>> --
>>> _______________________________________________
>>> meta-virtualization mailing list
>>> meta-virtualization at yoctoproject.org
>>> https://lists.yoctoproject.org/listinfo/meta-virtualization
>>>
>>>
>>
>>
>> --
>> "Thou shalt not follow the NULL pointer, for chaos and madness await thee
>> at its end"
>>
>
>


-- 
"Thou shalt not follow the NULL pointer, for chaos and madness await thee
at its end"