> Microsoft should rate-limit the CreateFile() API

It's hard to take the idea seriously, but in the interest of honing my steel-manning skills:
You'd only want to rate-limit writes to existing files, since reads and creates are harmless. But you'd have to rate-limit deletes too, because otherwise ransomware could simply read a file, create an encrypted copy, and delete the original to bypass the throttle. You might be tempted to exempt certain .exes, but that opens you up to process hijacking (a technique already used for LPE and UAC bypass). You might also want to exempt certain folders, but then ransomware could find those, create a symlink inside one, and use the symlinked paths to bypass the limit. The limit would need to be machine-wide, since ransomware could otherwise run multiple processes, use ephemeral processes, and so on. And you'd have to rate-limit the NT variant of the call, not the Win32 wrapper, since otherwise malware could abuse WSL to bypass the limit - and speaking of which, are you going to rate-limit WSL2 writes to Linux files?
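To make the delete point concrete, here's a minimal sketch of how the same end state as an in-place overwrite is reached without a single write to an existing file (the paths and function name are hypothetical, and CopyFileW stands in for the read-transform-write step a real encryptor would do):

    #include <windows.h>

    // Create an encrypted *copy*, then delete the original. A throttle keyed
    // only on writes to existing files never fires on either operation.
    BOOL replace_without_overwriting(LPCWSTR original, LPCWSTR encryptedCopy)
    {
        if (!CopyFileW(original, encryptedCopy, TRUE))   // TRUE: fail if the copy already exists
            return FALSE;
        return DeleteFileW(original);                    // unless deletes are throttled too, the limit is bypassed
    }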
The reduced I/O would ruin the user experience (unless the limits were so high as to be ineffective). It'd be a cat-and-mouse game, too - ransomware would get smarter about picking valuable files to encrypt first. Encrypting a single database journal is worth way more than a bunch of .lnks, after all. And app developers would be incentivized to consolidate files to improve their app's perf, which reduces the efficacy of the safeguard.
This sort of proposal is exactly the kind of myopic child-proofing that can rot the foundation of a system.
---
A friend of mine worked on (published?) a tool that puts canary files in various places on the system. When one of them is overwritten or removed, you know what's up. That seems more reliable than letting ransomware trickle through the system instead of rushing through, clearer than a warning about some software touching all your files, and much less invasive to other software.
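A minimal sketch of the canary idea, assuming made-up decoy paths and a crude polling loop (the actual tool presumably hooks file-change events rather than polling, and does something smarter than printing):

    #include <windows.h>
    #include <stdio.h>

    // Decoy files planted where ransomware is likely to sweep. Paths are invented.
    static LPCWSTR canaries[] = {
        L"C:\\Users\\Public\\Documents\\passwords-backup.xlsx",
        L"C:\\Users\\Public\\Pictures\\family-photos.zip",
    };
    #define NCANARY (sizeof canaries / sizeof canaries[0])

    int wmain(void)
    {
        WIN32_FILE_ATTRIBUTE_DATA baseline[NCANARY] = {0};
        for (size_t i = 0; i < NCANARY; i++)
            GetFileAttributesExW(canaries[i], GetFileExInfoStandard, &baseline[i]);

        for (;;) {
            Sleep(5000);  // poll every few seconds
            for (size_t i = 0; i < NCANARY; i++) {
                WIN32_FILE_ATTRIBUTE_DATA now;
                if (!GetFileAttributesExW(canaries[i], GetFileExInfoStandard, &now) ||
                    CompareFileTime(&now.ftLastWriteTime, &baseline[i].ftLastWriteTime) != 0) {
                    // Canary rewritten or deleted: something is mass-touching files.
                    wprintf(L"ALERT: canary %ls was modified or removed\n", canaries[i]);
                    // A real tool would page someone, cut SMB sessions, kill the writer, etc.
                }
            }
        }
    }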
The article also kinda glosses over that in business cases the criminals typically have domain admin. The whitelisting feature that the author proposes for backup software is going to make the limit ineffective.
I realize you meant the above as a joke, but a configurable throttle on CreateFile would probably only be part of a Pro SKU. Ransomware is a much bigger threat in a business setting than for personal devices
But don't worry, Windows S2 has you covered! Now that you are in our walled garden, we don't limit things like that at all here; it's 110% safe and trustworthy and never ever ever gets compromised. You won't be able to install anything outside of our app store again, but who cares!
This article is pretty short and doesn't seem very well thought through. What should CreateFile do when the rate is reached? Hang? Error out? The author admits this is a can of worms, so implying it could be done "tomorrow" seems like clickbait. This would surely break many applications, and testing all of them against this change wouldn't be possible in a reasonable amount of time.
Nice concept, but it doesn't work the way they think it will. Contention, tracking rates for individual programs versus the machine as a whole, etc. will make this a nightmare to implement.
We already have an equivalent today for remote API calls, and it by no means solves the problem of bad actors. Plus, this will seriously piss off users when installations, unpacking compressed archives, etc. become incredibly slow.
If processes have a rate limit, why don't I just create more processes to bypass the limit? Why don't I add some mutation code to my virus, making it polymorphic, so it appears to the OS as separate, unrelated processes?
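To illustrate the first point: a per-process quota degenerates into quota times N as soon as the work is split across N children. A sketch, assuming a hypothetical worker.exe that takes one slice of the file list (this is ordinary CreateProcessW usage, nothing exotic):

    #include <windows.h>
    #include <wchar.h>

    // Launch n children; each stays under a per-process limit on its own slice.
    void spawn_workers(int n)
    {
        for (int i = 0; i < n; i++) {
            wchar_t cmd[128];
            swprintf(cmd, 128, L"worker.exe --slice %d --of %d", i, n);  // hypothetical worker

            STARTUPINFOW si = { sizeof si };
            PROCESS_INFORMATION pi;
            if (CreateProcessW(NULL, cmd, NULL, NULL, FALSE, 0, NULL, NULL, &si, &pi)) {
                CloseHandle(pi.hThread);    // fire and forget
                CloseHandle(pi.hProcess);
            }
        }
    }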
Now we also need to rate-limit how often a process can start another process or open another application, via every API or mechanism that could accomplish that. And if you think limiting CreateFile() was a breaking change...
Right off the top of my head, why not use the Task Scheduler in Windows to run a theoretical SpawnSeparateProcess.exe once every second?
Don’t forget that processes can inject code into other processes on the same session, so you can hijack other “innocent” processes to further bypass this rate limit.
My initial thought: figure out what constitutes a high-value file, then target those. It is probably safe to say that virtually everyone is swamped in so many low-value files (web browser caches and the like) that any rate limit loose enough not to affect legitimate use of the computer would still let ransomware encrypt all of the user's actual data. As an example, there are currently 195 open files in my home directory. (Okay, this is Linux, but I suspect Windows is similar.) It is quite easy to trim my home directory down to about 2000 potential high-value files with simple criteria I came up with off the top of my head, based on directory names or file types, without ever opening a file. Someone who knew more about the behavior of users and software could trim it further.
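A sketch of what such criteria might look like; the extension list and directory names are illustrative guesses, not anyone's actual heuristics:

    #include <stdbool.h>
    #include <string.h>

    // Classify a path as potentially high value from its location and extension
    // alone, without ever opening the file.
    static const char *valuable_ext[] = { ".docx", ".xlsx", ".pdf", ".jpg", ".psd", ".sqlite" };
    static const char *noise_dirs[]   = { "/.cache/", "/node_modules/", "/.git/objects/" };

    bool looks_high_value(const char *path)
    {
        for (size_t i = 0; i < sizeof noise_dirs / sizeof *noise_dirs; i++)
            if (strstr(path, noise_dirs[i]))
                return false;                       // browser caches, build junk, etc.

        const char *dot = strrchr(path, '.');
        if (!dot)
            return false;
        for (size_t i = 0; i < sizeof valuable_ext / sizeof *valuable_ext; i++)
            if (strcmp(dot, valuable_ext[i]) == 0)
                return true;                        // documents, photos, databases
        return false;
    }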
Not only is the level of protection questionable, but the possibility of breaking existing software is real, and file access on Windows already seems slow to begin with.
Malware might adapt, but “low and slow” isn’t viable for their business model. They need the “shock and awe” of everything being encrypted all at once.
A low-and-slow attack would need to transparently encrypt files over a long period and then pull out the key material all at once. Something like Windows EFS could probably be leveraged for that kind of attack, but stock EFS would show up in the UI. A malicious EFS replacement hiding in the filesystem filter stack would definitely do it.
That would be a ton more work and would probably be easier to detect (and have tons of compatibility issues, I’m sure).
For several years I’ve been meaning to write some code to use ETW on a file server to profile calls to CreateFile, associate them to an SMB client, and ultimately blackhole connectivity for that client if an anomalous “velocity” of calls is reached.
It would take some significant baseline measurement to determine thresholds. Determining what’s “normal” for a client would be a fun exercise itself.
I’ve never done the PoC work to see if it’s feasible to even do this. It touches APIs I’m not familiar with (ETW looks like a dark and twisty maze) so it wasn’t something I could quickly knock together.
Maybe somebody else could run with this. I was going to do this as Free software but there’s probably money to be made with it, too.
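The ETW plumbing is the hard part; the per-client "velocity" check itself is small. A sketch of just that piece, assuming CreateFile events are already arriving from some ETW consumer, and with the window size and threshold as made-up placeholders for whatever baseline measurement suggests:

    #include <stdbool.h>
    #include <stdint.h>
    #include <string.h>

    #define WINDOW_SECONDS 10
    #define MAX_CREATES    2000     // placeholder; would come from baseline measurement
    #define MAX_CLIENTS    1024

    struct client_stats {
        char     client[64];        // e.g. SMB client address
        uint64_t window_start;      // seconds since an arbitrary epoch
        uint32_t creates_in_window;
    };

    static struct client_stats table[MAX_CLIENTS];

    // Called once per observed CreateFile event; returns true if the client
    // has exceeded the threshold and should be blackholed.
    bool record_create(const char *client, uint64_t now_seconds)
    {
        struct client_stats *s = NULL;
        for (int i = 0; i < MAX_CLIENTS; i++) {
            if (table[i].client[0] == '\0' || strcmp(table[i].client, client) == 0) {
                s = &table[i];
                break;
            }
        }
        if (!s)
            return false;                      // table full; a real tool would evict old entries

        if (s->client[0] == '\0') {            // first time we've seen this client
            strncpy(s->client, client, sizeof s->client - 1);
            s->window_start = now_seconds;
        }
        if (now_seconds - s->window_start >= WINDOW_SECONDS) {
            s->window_start = now_seconds;     // start a fresh window
            s->creates_in_window = 0;
        }
        return ++s->creates_in_window > MAX_CREATES;
    }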
Software on Microsoft Windows uses an application programming interface (API) called "CreateFile" to access files. Somewhat confusingly, CreateFile not only creates files but is also the primary way to open them. Microsoft should rate-limit the CreateFile() API.
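For anyone who hasn't used the API: the confusion comes from the creation-disposition argument, which decides whether anything is actually created. A minimal illustration (the path and wrapper name are hypothetical):

    #include <windows.h>

    // Despite the name, CreateFileW is also how you *open* a file.
    HANDLE open_for_reading(LPCWSTR path)
    {
        return CreateFileW(path,
                           GENERIC_READ,          // read access only
                           FILE_SHARE_READ,
                           NULL,                  // default security
                           OPEN_EXISTING,         // open only; CREATE_NEW / CREATE_ALWAYS would create
                           FILE_ATTRIBUTE_NORMAL,
                           NULL);
    }

    // Usage: HANDLE h = open_for_reading(L"C:\\Users\\me\\report.docx");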
Going with a scorched-earth policy on everything & everyone, because it could slow down a chance ransomware event for a few, doesn't sound like a particularly bright idea.
Started off thinking the idea was stupid because it would break everything, but the ability to manually trust something makes it work. Most applications would come signed by Microsoft (a la MacOS Gatekeeper), and the ones that aren't signed trigger a pop-up asking the user if it's okay that they're accessing so many files.
That means you can both make your non-signed apps work, _and_ be alerted about the ones that aren't legit.
What organization with even a moderate semblance of a security team allows running unsigned code if they can help it?
None. Windows Defender makes it difficult to run unsigned code in Windows out-of-the-box. It already exists and doesn’t stop all ransomware, or at least is not used properly.
Also, it’s getting even stronger in recent updates. It’s called “Smart App Control.”
I've never worked anywhere that _doesn't_ allow their engineers to run unsigned code. It's good practice for non-techs, but if your engineers need to be babysat like that, get better engineers.
Ransomware will work around this limit easily by spawning more processes, injecting itself into other valid processes, etc. A system-wide limit will just slow overall I/O and make everyone super angry. This suggestion is utter bullshit. Ransomware does not play fair.
The author seems to be under the misimpression that CreateFile() is only called when the user opens a file. "I opened a file in Photoshop, that called CreateFile() once, right?"
Have fun waiting about 1.9 hours for gimp to start up, let alone doing something useful with the software :)
For those who aren't familiar (like the author :P) and are wondering wtf Gimp opens seven thousand files for, here's a random sample:
$ strace -e trace=file gimp |& shuf | head
openat(AT_FDCWD, "/usr/share/gimp/2.0/tool-presets/Paint/Bristles.gtp", O_RDONLY) = 12
access("/usr/share/themes/Adwaita/gtk-2.0/assets/scrollbar-horz-slider.png", F_OK) = 0
lstat("/usr/share/gimp/2.0/patterns/Food/java.pat", {st_mode=S_IFREG|0644, st_size=12317, ...}) = 0
openat(AT_FDCWD, "/usr/share/gimp/2.0/icons/Symbolic/scalable/apps/gimp-tool-perspective-clone.svg", O_RDONLY) = 12
access("/etc/fonts/~/.fonts.conf.d", R_OK) = -1 ENOENT (No such file or directory)
lstat("/usr/share/gimp/2.0/icons/Symbolic/scalable/apps/gimp-reset.svg", {st_mode=S_IFREG|0644, st_size=7762, ...}) = 0
openat(AT_FDCWD, "/usr/lib/x86_64-linux-gnu/libwebp.so.6", O_RDONLY|O_CLOEXEC) = 4
openat(AT_FDCWD, "/usr/share/gimp/2.0/patterns/Sky/starfield.pat", O_RDONLY) = 12
openat(AT_FDCWD, "/usr/share/gimp/2.0/patterns/Legacy/pastel.pat", O_RDONLY) = 12
stat("/usr/share/gimp/2.0/icons/Symbolic/32x32/stock/form", 0x7ffcc15bcea0) = -1 ENOENT (No such file or directory)
So: a combination of the tools built into gimp, the icon to use for the window, font files, button icons, you name it.
(To be fair, not all of these are "read this file" calls. A lot of these are "does this file even exist" ($PATH searches) and such. But I think the point about one per second still stands. How many still-relevant documents do you have? A few thousand? Any rate limit that is usable is going to allow encrypting that before you open your next document tomorrow morning and notice the problem. Heck, at one per second you're going to be screwed after a night of encrypting - eight hours is almost 29,000 files!)
He accounts for this with signed applications, and user overrides for non-signed applications. Just like how MacOS allows signed applications to run, and (with more difficulty than necessary) allows non-signed ones to run if you approve them.
> There will need to be a way to exempt programs (like compilers and backup tools), and maybe that needs to be issued globally, which means a process for software creators to get a special certificate