> So as you said, a bug in the system would be a kernel exploit. How would this be any different from an exploit in today's kernels?
Because now you've moved a massive amount of software (namely, the engine implementation, which includes a dynamic compiler, memory management, a runtime system, etc.--850,000 lines of code in the case of V8) into the kernel, and you've eschewed the simplest of hardware mechanisms (which have been very carefully designed and tested for 50 years, plus formally verified and proved correct by hardware designers) for a very complex set of software checks that are part of a rapidly changing software system that has had dozens upon dozens of security bugs.
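To make "software checks" concrete, here's a minimal sketch of the kind of bounds check a WASM engine must perform (or compile in) on every single linear-memory access. The `Memory` and `load_u32` names are illustrative, not any real engine's API:

```rust
// Sketch of a software memory-safety check. In ring 0 there is no MMU
// behind this check: if the engine ever skips or miscompiles it, the
// load reads kernel memory. Names here are illustrative only.
struct Memory {
    bytes: Vec<u8>,
}

impl Memory {
    /// Bounds-checked 32-bit little-endian load from linear memory.
    fn load_u32(&self, addr: usize) -> Result<u32, &'static str> {
        let end = addr.checked_add(4).ok_or("address overflow")?;
        if end > self.bytes.len() {
            return Err("out-of-bounds access trapped");
        }
        let mut buf = [0u8; 4];
        buf.copy_from_slice(&self.bytes[addr..end]);
        Ok(u32::from_le_bytes(buf))
    }
}

fn main() {
    let mem = Memory { bytes: vec![0; 65536] }; // one 64 KiB WASM page
    assert!(mem.load_u32(0).is_ok());
    assert!(mem.load_u32(65533).is_err()); // would read past the page
}
```

Hardware page tables give you this guarantee for free on every access; in software, every such check is one JIT bug away from failing.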
The whole point of defense in depth is to add additional layers of security. E.g., in a browser, if the software checks fail in userspace (ring 3), the sandboxing of system calls still doesn't allow a rogue process to even access the filesystem or make arbitrary kernel calls. Then, on top of that, hardware address translation means that a compromised process cannot attack other processes. If it's all in one giant address space in ring 0, a single vulnerability compromises the entire system.
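As a sketch of that syscall-sandboxing layer, assuming Linux and the `libc` crate as a dependency: a process can enter strict seccomp mode, after which even fully compromised code can only read, write, and exit:

```rust
// Sketch of the syscall-sandboxing layer, assuming Linux + the `libc`
// crate. After SECCOMP_MODE_STRICT, the kernel permits only read(2),
// write(2), _exit(2), and sigreturn(2); anything else is SIGKILL.
fn main() {
    unsafe {
        let rc = libc::prctl(libc::PR_SET_SECCOMP, libc::SECCOMP_MODE_STRICT);
        assert_eq!(rc, 0, "failed to enter strict seccomp mode");

        // write(2) is still permitted...
        let msg = b"sandboxed: even a rogue process can't open files now\n";
        libc::write(1, msg.as_ptr() as *const libc::c_void, msg.len());

        // ...but we must exit via the plain exit syscall: glibc's exit()
        // uses exit_group(2), which strict mode does not allow.
        libc::syscall(libc::SYS_exit, 0);
    }
}
```

That layer holds even when the process itself is fully owned, which is exactly the property a single ring-0 address space gives up.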
> My point is that while running user programs in ring 3 protects the system from bugs and exploits trashing it, it does not protect from bugs in the kernel trashing the system.
The whole point is to reduce the TCB (trusted computing base). Bugs in the kernel are rarer because it's smaller, tested more thoroughly, has a clearer and simpler contract, changes more slowly, and is written by a smaller set of experts than, e.g., random userspace software.
> I have just woken up and feel I am doing a terrible job of conveying this.
No worries. Here's some background that might be useful for the discussion: https://en.wikipedia.org/wiki/Trusted_computing_base
In general, you want to minimize the trusted computing base (i.e., the code running in ring 0), and you don't typically want to put a Turing machine inside it!
I don't think there is a massive amount of software required. The module linked above is just over 2k lines and compiles to around 250 KB--clearly not all of the things you mentioned. All you need is an implementation of the WASM state machine. Keep in mind this is WASM, not JavaScript, and not asm.js. It's an entirely new platform-independent specification for a bytecode.
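For a feel of what "the WASM state machine" means at its core: WASM is a stack-based bytecode, so the heart of an interpreter is a loop over opcodes manipulating a value stack. This is a toy sketch; the `Op` enum and its shape are illustrative, not the actual specification:

```rust
// Toy sketch of a WASM-style stack machine. Illustrative only.
enum Op {
    I32Const(i32),
    I32Add,
    End,
}

/// Interpret a sequence of ops over a value stack; returns the top of
/// the stack, or None if the program underflows it.
fn run(code: &[Op]) -> Option<i32> {
    let mut stack: Vec<i32> = Vec::new();
    for op in code {
        match op {
            Op::I32Const(v) => stack.push(*v),
            Op::I32Add => {
                let b = stack.pop()?;
                let a = stack.pop()?;
                stack.push(a.wrapping_add(b));
            }
            Op::End => break,
        }
    }
    stack.pop()
}

fn main() {
    // (i32.const 40) (i32.const 2) i32.add  =>  42
    let code = [Op::I32Const(40), Op::I32Const(2), Op::I32Add, Op::End];
    assert_eq!(run(&code), Some(42));
}
```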
I would wager that an implementation in Rust is far safer than all of the code that goes into today's Linux kernel to run non-WASM binaries.
You linked me to the TCB; the entire point of the OP's link to Nebulet and Cervus is that with this new strategy, an entirely new security model and way to think about and run untrusted code has come about. So linking to old ideas and papers explaining how computer security works today can only show how things are done now. When the talk about running in ring 0 comes up, they are challenging those very ideas, and doing a good job of showing how it can safely be done.
Barring bugs -- yes, bugs can happen in both -- the old way and the new ideas being tossed around with WASM are each able to provide a level of security. One relies on hardware that can't be easily changed (microcode, a new CPU...), is hard to audit, and in some cases is impossible to audit.
The new notion of compile-time checking with WASM allows for a clean approach to ensuring bad programs don't crash the system. Because it relies on code rather than hardware, it can be updated and audited.
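As a hypothetical sketch of that compile-time checking (same toy `Op` shape as the interpreter sketch above, not the real spec): a validator can walk the bytecode before anything runs and reject programs whose stack usage is unsound:

```rust
// Toy illustration of load-time validation: prove stack safety before
// running a single instruction. The `Op` enum is illustrative only.
enum Op {
    I32Const(i32),
    I32Add,
    End,
}

/// Reject any program whose value stack could underflow at runtime.
fn validate(code: &[Op]) -> Result<(), &'static str> {
    let mut depth: usize = 0;
    for op in code {
        match op {
            Op::I32Const(_) => depth += 1,
            Op::I32Add => {
                if depth < 2 {
                    return Err("i32.add needs two operands on the stack");
                }
                depth -= 1; // pops two operands, pushes one result
            }
            Op::End => break,
        }
    }
    Ok(())
}

fn main() {
    // One constant, then add: provably unsound, rejected before it runs.
    let bad = [Op::I32Const(1), Op::I32Add, Op::End];
    assert!(validate(&bad).is_err());
}
```

Because the check is just code, a flaw found in it can be patched and re-audited, which is the contrast being drawn with baked-in hardware behavior.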
I am not arguing that somebody's fun Show HN toy is going to be better than 50 years of progress. But I am arguing that a few good years of investment can jump us past those 50 years into a new age of computing, not bogged down by legacy CPU architectures.