[meta-virtualization] LXD anyone?

Bruce Ashfield bruce.ashfield at gmail.com
Tue Apr 25 05:55:33 PDT 2017


On Tue, Apr 25, 2017 at 3:56 AM, Massimiliano Malisan <
max.malisan.uk at gmail.com> wrote:

>
>
> On 21 April 2017 at 19:03, Bruce Ashfield <bruce.ashfield at gmail.com>
> wrote:
>
>>
>>
>> On Fri, Apr 21, 2017 at 12:32 PM, Massimiliano Malisan <
>> max.malisan.uk at gmail.com> wrote:
>>
>>> Hi,
>>>
>>> well, it's a bit complicated: being a newbie to both Yocto and go-lang
>>> when I started all this, I wasn't aware of the introduction of go in the
>>> openembedded-core meta master (I am using morty). As I couldn't quickly
>>> sort out how to set up the recipe with the meta-virtualization go
>>> compiler, I switched the whole layer to use meta-golang (quite an easy
>>> task), which seemed much easier to use and directly produced a build
>>> tree structure similar to the one I got from "go get" of the LXD source
>>> (which I used as a reference to understand all the package's
>>> dependencies and the go build structure).
>>>
>>
>> I'm not so concerned about the variant of the go compiler used to build;
>> as long as you found something that worked, it is something that we can
>> merge.
>>
>> But yes, anything we merge now should work with oe-core's go variant,
>> since that is what we are standardizing on.
>>
>
> Is the current/future standard the version in openembedded-core/meta or
> the one addressed in the meta-virtualization README
> (git://github.com/errordeveloper/oe-meta-go.git -- maybe superseded by
> github.com/mem/oe-meta-go)?
>

oe-core/meta is where standardization is happening.


>
> As I see some work to be done here (I have to switch all my repos to the
> master branch to do everything in a coherent environment), would it be
> fine if I submit the morty version now and the master-compliant one in
> the near future?
>
>
>>
>>> Besides this:
>>> * I created a bunch of go recipes for all the dependencies (following
>>> the example of docker's recipe dependencies)
>>>
>>
>> These can be submitted to meta-virt, but packaging too many support
>> packages isn't necessarily the right thing, since they can conflict with
>> other go packages that need the same dependency at a different version.
>>
>> The technique that I've been using to deal with this lately is to put
>> multiple upstream repos on the SRC_URI and allow the fetcher to pull
>> them into the source directory, making them available to the build.
>>
>> i.e. look at what I did for libnetwork in docker; you can do something
>> similar per-dependency and use destsuffix= to get them in the right
>> place (i.e. destsuffix=git/src/github.com/openSUSE/umoci)
>>
>
> I see the point; since I had started from the morty versions I missed
> this approach. I just have to check that everything builds this way, as
> some dependencies are taken from gopkg.in, which is a kind of versioned
> git repository... (but then, maybe, those can be kept as standalone
> recipes)
>

Some runtime dependencies and other support applications do make sense as
separate packages, but when something is being pulled in simply to process
command-line args, it really doesn't make sense as a separate package. It
just needs to be around for the build and is then removed.
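To make that concrete, here is a rough sketch of such a recipe fragment;
the repo URL, names and revisions below are placeholders, not the actual
docker/libnetwork recipe contents:

```bitbake
# Sketch only: fetch a Go dependency alongside the main source so it
# lands at the import path the build expects, instead of packaging it
# as a separate recipe. All URLs and SRCREVs here are placeholders.
SRC_URI = "git://github.com/example/mainproject.git;name=main \
           git://github.com/openSUSE/umoci.git;name=umoci;destsuffix=git/src/github.com/openSUSE/umoci \
          "
SRCREV_main = "${AUTOREV}"
SRCREV_umoci = "${AUTOREV}"
S = "${WORKDIR}/git"
```

The destsuffix= parameter is what places each dependency inside the main
checkout's src/ tree so the go toolchain can resolve the import path at
build time.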


>
>
>
>>> * I added and/or updated a few other recipes (some are added+updated
>>> from meta-cube)
>>>
>>
>> These can be sent here, or sent as a pull request on the OverC github
>> pages; if you send them here, make sure to have [meta-cube] in the
>> subject. Or go right for the github pull request.
>>
>
> I think they are more appropriate here, if only to avoid dragging in
> another (almost unrelated) layer.
>

Sounds good. We can take them on a case-by-case basis; i.e. I'd rather not
move recipes between layers, but update them where they currently are. But
broadcasting the work to this list definitely works.


>
>
>>> * I added/changed some system configuration files
>>
>> We'd have to look at those closely; system-level config is something
>> that either needs to be controlled by packageconfigs, or is a distro
>> policy. So they may not be something that you need to send at all.
>>
>
> Some are the various init config files (I tested the systemd one and,
> quickly, the sysvinit one), and the most important is dnsmasq.conf,
> which locally (due to a meta-virt bbappend) conflicts with an LXD
> dnsmasq setting taken (again) from the Ubuntu install package
>

Everyone (people and packages) loves to poke at dnsmasq. Mark Asselstine
sent changes to meta-oe that enabled the use of the dnsmasq.d configuration
directory, versus everything needing to modify dnsmasq.conf directly.

Regardless, we'd have to look at those changes carefully, since there's no
one right configuration for dnsmasq.

For init system files, they shouldn't be an issue.
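As an illustration of that dnsmasq.d approach, a package can drop a
fragment into the configuration directory instead of touching
dnsmasq.conf itself; the file name and settings below are hypothetical,
not LXD's actual defaults:

```conf
# /etc/dnsmasq.d/lxd.conf -- hypothetical drop-in fragment; it only
# takes effect if dnsmasq.conf enables conf-dir=/etc/dnsmasq.d
interface=lxdbr0      # only serve the LXD bridge
bind-interfaces       # bind the listed interfaces, not the wildcard
except-interface=lo   # never answer on loopback
```

Each package then owns its own fragment, so two packages can configure
dnsmasq without patching the same file.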


>
>
>>
>>> * I added some kernel modules - but without real knowledge of which
>>> are strictly needed: I just checked those needed by docker (using the
>>> check script from the coreos docker repo --
>>> https://github.com/coreos/docker/blob/master/contrib/check-config.sh)
>>> and created corresponding kernel scc and cfg files (I also updated the
>>> docker ones, but this is collateral and I don't know if it's needed).
>>>
>>
>> That would be fine, following what we do for docker makes sense.
>>
>
> My only concern is that I may have added some unnecessary kernel
> configurations here
>

When I see the patch, I can comment further.
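For reference, a kernel fragment pair in the style used for docker would
look roughly like this; the option list is an illustrative subset, and
the real one should come from the check-config.sh output:

```conf
# lxd.scc -- hypothetical feature description, mirroring docker.scc
define KFEATURE_DESCRIPTION "Enable options required for LXD containers"
kconf non-hardware lxd.cfg

# lxd.cfg -- illustrative subset of namespace/cgroup options
CONFIG_NAMESPACES=y
CONFIG_UTS_NS=y
CONFIG_IPC_NS=y
CONFIG_PID_NS=y
CONFIG_NET_NS=y
CONFIG_USER_NS=y
CONFIG_CGROUPS=y
CONFIG_VETH=y
```

The scc file ties the cfg fragment into the kernel metadata so it can be
pulled in by KERNEL_FEATURES from the recipe.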


>
>
>>
>>> Now that I have a bit more insight into the overall process, I may try
>>> to update the recipes to use the new go compiler structure (but for
>>> the moment I still need to keep them working for morty... and I
>>> wouldn't like to keep two different versions of the recipes).
>>>
>>> By the way, LXD now seems to be working on my test image; there are
>>> still a few things I have to do manually after the installation (which
>>> should be automated within the build process, but I haven't managed
>>> that yet) and a few "side" things missing:
>>> * I could not build criu (as per my previous message to the list),
>>> which the build instructions indicate as a tool to be used together
>>> with LXD
>>>
>>
>> Unless you are doing container migration or snapshots, you can ignore
>> it, and even make it an optional dependency.
>>
>>
>>> * I have not been able to use it with apparmor (I didn't manage to
>>> build it from meta-security)
>>>
>>
>> this would be optional anyway; apparmor is a distro-type config, and
>> building it into, or out of, LXD would be a packageconfig or distro
>> feature.
>>
>>
>>> * In his blog, the LXD main developer suggests using ZFS as the ideal
>>> container fs, but it's currently missing from Yocto...
>>>
>>
>> I wouldn't worry about this either; we can't dictate the FS that people
>> use, so the recipes need to be generic, and that's up to the distro or
>> image assembler.
>>
>
> Yes, I labelled the above as "side" missing parts because I know they
> are not strictly necessary, but I wanted to clarify that at the moment
> this package is not as complete as an Ubuntu user may expect it to be.
>
> If it's ok that I start with the morty packages, I will submit the pull
> requests in the next few days and look at the master recipes after that.
>

Sounds ok to me.
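For the apparmor case, the packageconfig idea would look something like
the sketch below; the configure flags are placeholders, since LXD's
build may not accept them in this form:

```bitbake
# Hypothetical: enable apparmor support only when the distro asks for it.
PACKAGECONFIG ??= "${@bb.utils.filter('DISTRO_FEATURES', 'apparmor', d)}"
PACKAGECONFIG[apparmor] = "--enable-apparmor,--disable-apparmor,apparmor"
```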

Bruce


>
> Cheers,
>     Max
>
>


-- 
"Thou shalt not follow the NULL pointer, for chaos and madness await thee
at its end"