The success of VSCode plugins, microservices, containers, and out-of-process VSTs has proven that on modern hardware people favour stability and improved security over in-process plugins.
.NET also dropped their version of SecurityManager when they did the Core rewrite.
All of those plugins have shit latency. This model is not suitable for games, at all. The plugins need to be able to render their own graphics, which happens at 60~120fps. Also, have you ever tried running ~200 JVMs on the same machine?
> Also have you ever tried running ~200 JVMs on the same machine?
This is one of my pet peeves with the "garbage collection" model of memory management: it does not play well with other processes on the same machine, especially when those other processes are also using garbage collection.
With manual memory management (and also with reference counting), whenever an object is no longer being used, its memory will be immediately released (that is, the memory use of a process is always at the minimum it needs, modulo some memory allocator overhead). With garbage collection, it will be left around as garbage, and its memory will only be released once the process decides that there's too much garbage; but that decision does not take into account that other processes (and even the kernel for its page cache) might have a better use for that memory.
This works fine when there's a single process using most of the memory on the machine, and its garbage collection limits have been tuned to leave enough for the kernel to use for its caches (I have seen in practice what happens when you give too much memory to the JVM, leaving too little for the kernel caches); but once you have more than a couple processes using garbage collection, they'll start to fight over the memory, unless you carefully tune their garbage collection limits.
It would be really great if there were some kernel API which allowed multiple processes (and the kernel caches) to coordinate their garbage collection cycles, so that multiple garbage collectors (and in-process caches) would cooperate instead of fighting each other for memory, but AFAIK such an API does not exist (the closest I know of is MADV_FREE, which is good for caches, but does not help with garbage collection).
Contrary to popular belief, if memory strain is an issue with GC, it is even worse with allocation schemes that cannot cope with fragmentation, or that have to keep going down into the OS for memory management.
Optimizations to avoid fragmentation, lock contention, or stop-the-world domino effects in reference-counting algorithms eventually end up being a poor implementation of a proper GC.
Finally, just because a language has a GC doesn't mean it can't also offer language features for manual memory management and reference counting if one feels like it.
While Java failed to build on the lessons of Eiffel, Oberon, and Modula-3, others did, like D, Nim, and C#.
I'm building this JEP for automatic heap sizing right now to address this when using ZGC: https://openjdk.org/jeps/8329758
I did in fact run exactly 200 JVMs, running a heterogeneous set of applications, and it ran totally fine. By totally fine I mean that the machine got rather starved of CPU and the programs ran slowly due to having 12x more JVMs than cores, but they could all share the memory equally without blowing up anyway. I think it's looking rather promising.
> With manual memory management (and also with reference counting), whenever an object is no longer being used, its memory will be immediately released
Well, this is a fundamental space vs time tradeoff — reclaiming memory takes time, usually on the very same thread that would be doing useful work we care about. This is especially prominent with reference counting, which is the slowest of them all.
Allocators can make reclamation cheap/free, but not every usage pattern fits nicely, and in other cases you are fighting fragmentation.
> Well, this is a fundamental space vs time tradeoff — reclaiming memory takes time, usually on the very same thread that would be doing useful work we care about.
Precisely. Which is fine if you don't have to share that space with anyone else; the example which started this sub-thread ("running ~200 JVMs on the same machine") is one in which that tradeoff goes badly.
But it wouldn't be as much of an issue if the JVMs could coordinate between themselves (and with other processes on the same machine), so that whenever one JVM (or other things like the kernel itself) felt too much memory pressure, the other JVMs could clean some garbage and release it back to the common pool of the operating system.
It might even be a problem without garbage collection - Linux might be a big culprit here with its tendency to overcommit memory. Some signal that says "try to free some memory" would be welcome - I believe macOS has something like that.
> Which games are you shipping in Java that depend on Security Manager's existence?
None, because it didn't pan out like I described above. There's no sense continuing to develop something using a technology that is due to be removed. The project was abandoned.
This was for a third-party Old School RuneScape client which supports client side Java plugins. The current approach is to manually vet each plugin and its update before it is made available for users to install.
> Most games are deployed as services nowadays, they have a full network between their rendering and game logic.
Networked games do not communicate at 60~120fps. That's just not how it works; writing efficient netcode with client-side prediction is important for a reason.
> Yes, that is what Cloud Services do all the time across the globe in Kubernetes.
Yeah, on the servers they pay several grand a month for. Not on end-user craptops, which is where games obviously run.