I work with both the rump kernel/rumprun unikernel stack and Solo5. As part of my work at Unikernel Systems (now Docker) I ported Mirage to run on rumprun and, together with Dan Williams at IBM, am currently completing a Mirage/Solo5 port.
A few points that may help you decide on which direction(s) to take for the HaLVM:
You are correct in your analysis that rumprun provides more functionality, including drivers for running directly on bare metal without a hypervisor. However, this comes with a size cost (a couple of megabytes) and a complexity cost in integration, especially when it comes to combining the rump kernel network interface drivers with a non-rump-kernel TCP stack (which I presume you would want to do for the HaLVM).
Regarding Solo5: it is much simpler, but it also does much less. Some of this is because the code is still very young and evolving; some is by design. The idea behind Solo5 is to provide the minimal base needed to run unikernels such as Mirage on top of virtio-based hypervisors, and also on the experimental ukvm hypervisor that the team at IBM Research is working on.
A notable consequence of this is that, unlike rumprun, Solo5 does not attempt to provide a POSIX environment or a full libc, nor does it provide a scheduler (Mirage already has its own). Also, Solo5 does not aim to target bare metal (i.e. running without a hypervisor). The benefit of this is reduced complexity and a much smaller codebase (around 110 kB).
You can take a look at the minimalist C glue layer required to run the OCaml runtime on Solo5 at https://github.com/mato/ocaml-freestanding, and at the work-in-progress Solo5 platform bindings for Mirage at https://github.com/djwillia/mirage-platform/tree/solo5.
I’d be interested to know exactly which interfaces you need from the base layer to run the HaLVM. It may be possible to add this functionality to Solo5, or to build a glue layer analogous to ocaml-freestanding for the HaLVM.
Hope this helps,