- I like this a lot. The idea I had at some point was to let each stage be responsible for the setup, so the image file would be the only file in the working tree, and all the mount options / partition table would be passed in the options to each individual stage. I see that the benefit of your approach is that we would not need to implement the same thing repeatedly, and I also sort of see that this belongs on the "host" (with the kernel), not in the build-root. Out of curiosity, are there other differences between the two approaches that I'm missing?
- I like the granularity more than anything. It'll be useful for letting us expose options to the user. Maybe not everything, but having the option to give the user that kind of power is good. I have nothing against the devices/mounts being available as inputs (sources) types. It seems it will simplify things, so I'm for it.
The introduction of the new (version 2) image format resulted in the generalisation of the pipeline data structure. One side effect of this was the removal of the specialised assemblers, which are now normal stages. The `tar` and the `oci-archive` assemblers have so far been ported to the new manifest format. Missing are the `raw` and, most importantly, the `qemu` assembler (producing the `qcow2` images). The main reason that did not yet happen is that the new format enables breaking up the `qemu` assembler into different stages. This would be very desirable because currently the assembler does a lot of things and is not easily maintainable anymore:

1. Create the empty image file (`truncate`)
2. Write the partition table (`sfdisk`)
3. Create the file systems (`mkfs` on a loop device with start, size)
4. Mount the file systems (`mount` on loop devices with start, size)
5. Copy the file system tree into the image (`cp`)
6. Install the boot loader (`grub2` or `zipl`)
7. Convert the raw image to `qcow2` or other "cloud" formats such as `vmdk` (`qemu-img`)
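Roughly, the manual equivalent of these steps looks like the following sketch (device names, paths and sizes are illustrative only; the actual assembler maps individual partitions via loop devices with explicit start/size rather than using `--partscan`):

```sh
truncate --size 4G disk.img                     # 1. create the empty image file
sfdisk disk.img < layout.sfdisk                 # 2. write the partition table
losetup --find --show --partscan disk.img       # map partitions -> /dev/loop0p1, ...
mkfs.ext4 /dev/loop0p1                          # 3. create the file systems
mkdir -p /mnt/image
mount /dev/loop0p1 /mnt/image                   # 4. mount the file systems
cp -a tree/. /mnt/image/                        # 5. copy the file system tree
grub2-install --boot-directory=/mnt/image/boot /dev/loop0   # 6. install the boot loader
qemu-img convert -O qcow2 disk.img disk.qcow2   # 7. convert to qcow2 (or vmdk, ...)
```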
The straightforward way would be to create a stage for each of the aforementioned steps. Alternatively, some of those steps could be combined: one `image` stage could combine steps 3. to 6., which all need the various partitions to be mapped to a loop device and mounted. The downside is that this `image` stage would also be quite big and would certainly grow in case we add `LUKS2` and `LVM2` support.

If split up into separate stages, the question is how stages request devices, which are managed by the host (since we do not mount `/dev` into the container). Currently only loopback devices are provided to the stages, in an on-demand fashion via an API endpoint provided by `LoopServer`, but `LUKS2` and `LVM2` will also require similar host interactions. An alternative to additional API endpoints for those could be to adopt a more declarative model, as has been done for sources and pipeline inputs via the new `inputs` section in stages. A sketch of how such a model could look follows.

Let's assume an encrypted LVM disk layout:
Partition scheme:
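An illustrative sketch (names and sizes are made up): a plain `/boot` partition plus a LUKS2 container holding an LVM2 volume group:

```
disk.img
├─ partition 1:  /boot                         (ext4)
└─ partition 2:  LUKS2 container
   └─ LVM2 physical volume, volume group "vg0"
      └─ logical volume "root":  /             (ext4)
```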
Which would have the following `/etc/fstab` entry:
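For the layout sketched above, that could be something like (values illustrative):

```
/dev/mapper/vg0-root  /      ext4  defaults  0 0
UUID=...              /boot  ext4  defaults  0 0
```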
The pipeline that creates this could look like:
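A sketch of a single stage in this model, assuming hypothetical `org.osbuild.loopback`, `org.osbuild.luks2` and `org.osbuild.lvm2.lv` device types and an `org.osbuild.ext4` mount type (all type names, options, and start/size values are illustrative):

```json
{
  "type": "org.osbuild.copy",
  "devices": {
    "boot": {
      "type": "org.osbuild.loopback",
      "options": { "filename": "disk.img", "start": 2048, "size": 1048576 }
    },
    "pv": {
      "type": "org.osbuild.loopback",
      "options": { "filename": "disk.img", "start": 1050624, "size": 7337984 }
    },
    "luks": {
      "type": "org.osbuild.luks2",
      "parent": "pv",
      "options": { "passphrase": "osbuild" }
    },
    "root": {
      "type": "org.osbuild.lvm2.lv",
      "parent": "luks",
      "options": { "volume": "root" }
    }
  },
  "mounts": [
    { "name": "root", "type": "org.osbuild.ext4", "source": "root", "target": "/" },
    { "name": "boot", "type": "org.osbuild.ext4", "source": "boot", "target": "/boot" }
  ]
}
```

Each device entry can stack on an optional `parent`, and mounts reference devices by name, mirroring how `inputs` reference other pipelines today.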
In this model, `osbuild` (the host process) would, very much like with inputs, set up the devices and mounts listed in `devices` and `mounts` before creating the container, and then provide the devices and the mount tree, together with some JSON describing the devices and the mounts, to the stage. The individual "devices" (and probably "mounts") would be binaries, very much like `stages`, `inputs` and `sources` currently are.
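Inside the container, the stage could then receive, alongside its regular options, some JSON describing what was set up — hypothetically something like:

```json
{
  "devices": {
    "boot": { "path": "/dev/loop0" },
    "pv":   { "path": "/dev/loop1" },
    "luks": { "path": "/dev/mapper/luks" },
    "root": { "path": "/dev/mapper/vg0-root" }
  },
  "mounts": {
    "root": { "path": "/run/osbuild/mounts" },
    "boot": { "path": "/run/osbuild/mounts/boot" }
  }
}
```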
WIP branch: `gicmo/raw_images_v2`