They call it Parker because it’s almost, but not quite, the right thing.
I know that Square you’re talking about!
Docker has little overhead; wouldn’t this require running the entire kernel multiple times and take up more RAM?
Also, dynamically allocating the RAM seems more efficient than having to assign each kernel a portion at boot.
If this works out, it’s likely something that container engines would take advantage of as well. It may take more resources to do (we’ll have to see), but adding kernel isolation would make for a much stronger sandbox. Containers are just a collection of other isolation tools like this anyway.
gVisor already exists for environments like this, where the extra security at the cost of some performance is welcome. But having support for passing processes an isolated, hardened kernel from the primary running Linux kernel would probably make a lot of that performance gap disappear.
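The “collection of isolation tools” point above is easy to see from /proc: every Linux process already belongs to a set of kernel namespaces, and container runtimes mostly hand a process fresh copies of those (plus cgroups and seccomp). A minimal Python sketch, Linux only:

```python
import os

# Each symlink under /proc/self/ns names one kernel namespace this
# process currently belongs to (pid, net, mnt, uts, user, ...).
# Container runtimes largely work by giving a process fresh copies
# of these, combined with cgroups and seccomp filters.
NS_DIR = "/proc/self/ns"
for name in sorted(os.listdir(NS_DIR)):
    target = os.readlink(os.path.join(NS_DIR, name))
    print(f"{name:10s} -> {target}")
```

Run it inside and outside a container and the inode numbers in the symlink targets differ, which is the whole trick.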
I’m also thinking it could do wonders for compatibility, since you could bundle abandonware apps with an older kernel, or ship new apps that require features from the latest kernel to places that wouldn’t normally have those capabilities.
I remember partitioned systems being a big thing in like the ’90s and ’00s, since those were the days you would pour $$$$ into large systems. But I thought the “cattle not pets” movement did away with that? Are we back to the days of “big iron”?
And the wheel of reincarnation forever keeps turning.
What do you think all those cattle run on?
Just big ass servers with tons of cores and ram.
I figured it was cattle all the way down. Even if they’re big. Especially when you have thousands of them.
Though maybe these setups can be scripted/automated to be easy to replicate and reproduce?
In essence, yes. For example, VMware ESXi hosts can be managed from a single image, with customizations made at the cluster level. Give me PXE and I can provision you n hosts in about the same time as one host.
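The “n hosts in about the same time as one” claim is mostly about templating: with PXE, every host pulls a boot script, so scripted provisioning reduces to a render loop over one template. A rough Python sketch; the hostnames, URLs, and `render_configs` helper are made up for illustration, not any real deploy tool:

```python
# Hypothetical sketch: stamp out one iPXE-style boot config per host
# from a single template, so provisioning N hosts is the same work as 1.
TEMPLATE = """#!ipxe
dhcp
kernel http://deploy.example.internal/vmlinuz hostname={host}
initrd http://deploy.example.internal/initrd.img
boot
"""

def render_configs(hosts):
    """Return a {hostname: boot-script} mapping for every host."""
    return {h: TEMPLATE.format(host=h) for h in hosts}

# Three hosts or three hundred: same template, same loop.
configs = render_configs([f"esxi{i:02d}" for i in range(1, 4)])
for host, script in configs.items():
    print(f"--- {host} ---")
    print(script)
```

Cluster-level customization then lives in the template (or its variables), not in per-host hand edits.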
Constant back and forth: moving things closer increases efficiency; moving them apart increases resiliency.
So we are constantly shuffling between the two for different workloads to optimize for the given thing.
That said, I see this as an extension to the cattle idea, making even the kernel a thing to be raised and culled on demand. This matters a lot more with heavy workloads like HPC and AI, where a process can be measured in days or weeks and stable uptime is paramount, vs. the stateless work k8s is intended for (I say intended because you can k8s all the things now, but it needs extensions to handle the new lifecycles).
If we’re going to this amount of trouble, wouldn’t it be better to replace the monolithic kernel with a microkernel and servers that provide the same APIs for Linux apps? Maybe even seL4 which has its behaviour formally verified. That way the microkernel can spin up arbitrary instances of whatever services are needed most.
I always thought that Minix was a superior architecture to be honest.
How is this better than a hypervisor OS running multiple VMs?
I imagine there are some overhead savings, but I don’t know what. I guess with a classic hypervisor there are still calls going through the host kernel, whereas with this they’d go straight to the hardware without special passthrough features?
Saving on some overhead, because the hypervisor is skipped. Things like disk IO to physical disks can be more efficient using multikernel (with direct access to HW) than VMs (which have to virtualize at least some components of HW access).
With the proposed “Kernel Hand Over”, it might be possible to send processes to another kernel entirely. This would allow booting a completely new kernel, moving your existing processes and resources over, then shutting down the old kernel, effectively updating with zero downtime.
It will definitely take some time for any enterprises to transition over (if they have a use for this), and consumers will likely not see much use in this technology.
I recently heard this great phrase:
“A VM makes an OS believe that it has the machine to itself; a container makes a process believe that it has the OS to itself.”
This would be somewhere between that, where each container could believe it has the OS to itself, but with different kernels.
More transparent hardware sharing, less overhead by not needing to virtualize hardware.
And they said k8s was overengineered!
I mean isn’t this just Xen revisited? I don’t understand why this is necessary.
GTFO, you’re the brainrot ai slop hosting TikTok company.
Code is code. If it’s good Free code, I’ll use it. I also don’t like Microsoft and Facebook but I run their kernel code too.
Why should I trust them with this multi-kernel thingy if they let the dumpster fire that is TikTok exist? And they’re probably trying to embrace-extend-extinguish Linux, just like Microsoft and Apple with their WSL and Containers.app respectively.
Because it’s Free and reviewed by kernel maintainers, what do you mean?
the only brainrot here is your own