The pro-swap stance has never made sense to me because it feels like a logical loop.
There’s a common rule of thumb that says you should have swap space equal to some multiple of your RAM.
For instance, if I have 8 GB of RAM, people recommend adding 8 GB of swap. But since I like having plenty of memory, I install 16 GB of RAM instead—and yet, people still tell me to use swap. Why? At that point, I already have the same total memory as those with 8 GB of RAM and 8 GB of swap combined.
Then, if I upgrade to 24 GB of RAM, the advice doesn’t change—they still insist on enabling swap. I could install an absurd amount of RAM, and people would still tell me to set up swap space.
It seems that for some, using swap has become dogma. I just don’t see the reasoning. Memory is limited either way; whether it’s RAM or RAM + swap, the total available space is what really matters. So why insist on swap for its own sake?
You're mashing together two groups. One claims having swap is good actually. The other claims you need N times ram for swap. They're not the same group.
> Memory is limited either way; whether it’s RAM or RAM + swap
For two reasons: usage spikes and actually having more usable memory. There's lots of unused pages on a typical system. You get free ram for the price of cheap storage, so why wouldn't you?
The proper rule of thumb is to make the swap large enough to keep all inactive anonymous pages after the workload has stabilized, but not too large to cause swap thrashing and a delayed OOM kill if a fast memory leak happens.
That's not useful as a rule of thumb, since you can't know the size of "all inactive anonymous pages" without doing extensive runtime analysis of the system under consideration. That's pretty much the opposite of what a rule of thumb is for.
You are right, it is not a rule of thumb, and you can't determine optimal swap size right away. But you don't need "extensive runtime analysis". Start with a small swap - a few hundred megabytes (assuming the system has GBs of RAM). Check its utilization periodically. If it is full, add a few hundred megabytes more. That's all.
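The grow-as-needed approach above can be reduced to a tiny decision rule. A sketch (the 256 MB step and 90% "full" threshold below are arbitrary illustrative choices, not recommendations from the thread):

```python
def swap_resize_advice(swap_total_mb: int, swap_free_mb: int,
                       step_mb: int = 256, full_threshold: float = 0.9) -> str:
    """If swap is nearly full, suggest growing it by another increment;
    otherwise leave it alone."""
    used = swap_total_mb - swap_free_mb
    if swap_total_mb and used / swap_total_mb >= full_threshold:
        return f"swap {used}/{swap_total_mb} MB used: add ~{step_mb} MB more"
    return f"swap {used}/{swap_total_mb} MB used: size is fine"

print(swap_resize_advice(512, 20))   # nearly full -> grow
print(swap_resize_advice(512, 400))  # mostly free -> leave it
```

On a real system the two inputs would come from `SwapTotal` and `SwapFree` in /proc/meminfo; here they are passed in directly so the rule itself is easy to see.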
It's not like it's easy to shuffle partitions around. Swap files are a pain, so you need to reserve space at the end of the table. By the time you need to increase swap the previous partition is going to be full.
Better overcommit right away and live with the feeling you're wasting space.
Yeah, until you need to hibernate to one. I understand that calculating file offsets is not rocket science, but still, all the dancing required is not exactly trivial and feels a bit fragile.
Exactly the opposite. Don't use swap partitions; use swap files, even multiple if necessary. Never allocate too much swap space. It is better to get an OOM kill earlier than to wait on an unresponsive system.
Swap partition is set and forget. Can be detected by label automatically, never fails.
Swap file means fallocating, setting extended attributes (like `nocow`), finding file offset and writing it to kernel params, and other gotchas, like btrfs not allowing snapshotting a subvolume with an active swap file.
Technically it's preferable, won't argue with that.
> There’s a common rule of thumb that says you should have swap space equal to some multiple of your RAM.
That rule came about when RAM was measured in a couple of MB rather than GB, and hasn't made sense for a long time in most circumstances (if you are paging out a few GB of stuff on spinning drives, your system is likely to be stalling so hard due to disk thrashing that you hit the power switch, and on SSDs you are not-so-slowly killing them due to the excess writing).
That doesn't mean it isn't still a good idea to have a little allocated just in case. And as RAM prices soar while IO throughput rises and latency falls, we may see larger swap/RAM ratios become useful again: RAM sizes are constrained, but working sets aren't getting any smaller.
In a theoretical ideal computer, which our actual designs are leaky-abstraction-laden implementations of, things are the other way around: all the online storage is your active memory and RAM is just the first level of cache. That ideal hasn't historically ended up being what we have because the disparities in speed & latency between other online storage and RAM have been so high (several orders of magnitude), fast RAM has been volatile, and hardware & software designs are not stable & correct enough, so regular complete state resets are necessary.
> Why? At that point, I already have the same total memory as those with 8 GB of RAM and 8 GB of swap combined.
Because your need for fast immediate storage has increased, so 8-quick-8-slow is no longer sufficient. You are right that this doesn't mean 16-quick-16-slow is sensible, and 128-quick-128-slow would be ridiculous. But no swap at all doesn't make sense either: on your machine imbued with silly amounts of RAM, are you really going to miss a few GB of space allocated just in case? When it could be the difference between slower operation for a short while and some thing(s) getting OOM-killed?
Swap is not a replacement for RAM. It is not just slow; it is very, very slow. Even SSDs are 10^3 times slower than RAM at random access with small 4K blocks. Swap is for allocated but unused memory. If the system tries to use swap as active memory, it is going to become unresponsive very quickly: 0.1% memory excess causes a 2x degradation, 1% a 10x degradation, 10% a 100x degradation.
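Those degradation figures can be sanity-checked with a toy model: assume some fraction of memory accesses spill into swap that is ~1000x slower at random access, and compute the average access cost relative to all-RAM. (The 1000x ratio and the uniform-random-access assumption are the simplifications from the comment above; real systems batch and read ahead, so this is only a back-of-envelope bound.)

```python
def slowdown(spill: float, latency_ratio: float = 1000.0) -> float:
    """Average memory-access cost relative to all-RAM when a fraction
    `spill` of accesses hits swap that is `latency_ratio` times slower."""
    return (1.0 - spill) + spill * latency_ratio

for frac in (0.001, 0.01, 0.1):
    print(f"{frac:.1%} of accesses in swap -> {slowdown(frac):.0f}x slower")
```

With a 1000x latency ratio this gives roughly 2x, 11x, and 101x for 0.1%, 1%, and 10% spill, matching the comment's 2x/10x/100x figures.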
What is allocated but unused memory? That sounds like memory that will be used in the near future, and we are scheduling an annoying disk load for when it is needed.
You are of course highlighting the problem that virtual addressing was intended to abstract away memory resource usage, but it provides poor facilities for power users to finely prioritize memory usage.
The example of this is game consoles, which didn't have this layer. Game writers had to reserve parts of RAM for specific uses.
You can't do this easily in Linux afaik, because it is forcing the model upon you.
Unused or Inactive memory is memory that hasn't been accessed recently. The kernel maintains LRU (least recently used) lists for most of its memory pages. The kernel memory management works on the assumption that the least recently used pages are least likely to be accessed soon. Under memory pressure, when the kernel needs to free some memory pages, it swaps out pages at the tail of the inactive anonymous LRU.
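On Linux, the sizes of those LRU lists are visible in /proc/meminfo (the `Active(anon)` / `Inactive(anon)` fields). A small parser sketch; the sample values below are made up for illustration:

```python
import re

# A trimmed /proc/meminfo-style sample (values in kB, purely illustrative).
SAMPLE = """\
MemTotal:       16303180 kB
MemFree:         1204368 kB
Active(anon):    5120000 kB
Inactive(anon):   812344 kB
Active(file):    4096000 kB
Inactive(file):  3500000 kB
SwapTotal:       2097148 kB
SwapFree:        1800000 kB
"""

def meminfo_kb(text: str) -> dict:
    """Parse /proc/meminfo-style text into {field: kB}."""
    return {m.group(1): int(m.group(2))
            for m in re.finditer(r"^(\S+):\s+(\d+) kB", text, re.M)}

info = meminfo_kb(SAMPLE)
# Inactive(anon) is roughly the pool the kernel would swap out first.
print("Inactive(anon):", info["Inactive(anon)"] // 1024, "MiB")
```

On a live system you would feed it `open("/proc/meminfo").read()` instead of the sample string.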
Cgroup limits and OOM scores allow you to prioritize memory usage per process and per process group. The madvise(2) syscall allows you to prioritize memory usage within a process.
There is too much focus in this discussion about low memory situations. You want to avoid those as much as possible. Set reasonable ulimit for your applications.
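One way to set such a limit from code rather than the shell is RLIMIT_AS. A sketch that caps a child interpreter's address space at a hypothetical 512 MB and watches a 1 GB allocation fail; the behavior assumes Linux, where RLIMIT_AS is actually enforced (on some other platforms the allocation may simply succeed):

```python
import subprocess, sys

# Child script: cap its own address space, then try to over-allocate.
# The 512 MB limit and 1 GB allocation are arbitrary illustrative numbers.
CHILD = """
import resource
resource.setrlimit(resource.RLIMIT_AS, (512 * 1024**2, 512 * 1024**2))
try:
    buf = bytearray(1024**3)  # 1 GB, exceeds the limit
    print("allocated")
except MemoryError:
    print("MemoryError: allocation denied by RLIMIT_AS")
"""

out = subprocess.run([sys.executable, "-c", CHILD],
                     capture_output=True, text=True)
print(out.stdout.strip())
```

This is the programmatic equivalent of `ulimit -v` in a shell; cgroup limits (e.g. systemd's MemoryMax=) are the more modern way to do the same per service.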
The reason you want swap is because everything in the Linux (and all of UNIX really) is written with virtual memory in mind. Everything from applications to schedulers will have that use case in mind. That's the short answer.
Memory is expensive and storage is cheap. Even if you have 16 GB RAM in your box, and perhaps especially then, you will have some unused pages. Paging out those and utilizing more memory to buffer I/O will give you higher performance under most normal circumstances. So having a little bit of swap should help performance.
It's true that if you always have free RAM, you don't need swap. But most people don't have that, because free RAM can always be used as disk cache. Even if you are just web browsing, the browser is writing stuff fetched from the internet to disk in the hope it won't change, and the OS will be keeping all of that in RAM until no more will fit.
Once the system has used all the RAM it has available for disk cache, it has a choice if it has swap: it can write modified RAM to swap and use the space it freed for disk cache. There is invariably some RAM where that tradeoff works - RAM used by login programs and other servers that haven't been accessed in hours. Assuming the system is tuned well, that is all that goes to swap. The freed RAM is then used for disk cache, and your system runs faster - merely because you added swap.
There is no penalty for giving a system too much swap (apart from disk space), as the OS will just use it until the tradeoff no longer makes sense. If your system is running slowly because swap is overused, the fix isn't removing swap (if you did, your system may die from lack of RAM); it's to add RAM until swap usage goes down.
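How eagerly the kernel makes that tradeoff is governed by knobs like `vm.swappiness`. A small Linux-oriented sketch for inspecting it; it falls back to the documented kernel default of 60 where /proc/sys is unavailable:

```python
from pathlib import Path

def read_sysctl(name: str, default: int) -> int:
    """Read an integer sysctl via /proc/sys; fall back off-Linux."""
    path = Path("/proc/sys") / name.replace(".", "/")
    try:
        return int(path.read_text().strip())
    except (OSError, ValueError):
        return default

# 60 is the documented kernel default for vm.swappiness.
swappiness = read_sysctl("vm.swappiness", 60)
print("vm.swappiness =", swappiness)
```

Higher values bias reclaim toward swapping anonymous pages out (keeping more disk cache); lower values bias it toward dropping cache first.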
So, the swap recipe is: give your system so much swap that you are sure it exceeds the size of stuff that's running but not used. 4 GB is probably fine for a desktop. Monitor it occasionally, particularly if your system slows down. If swap usage ever goes above 1 GB, you probably need to add RAM.
On servers, swap can be used to handle a DDoS from malicious logins. I've seen thousands of ssh attempts happen at once in an attempt to break in. Eventually the system will notice and firewall the IPs doing it. If you don't have swap, those logins will kill the system unless you have huge amounts of RAM that isn't normally used. With swap it slows to a crawl, but then recovers when the firewall kicks in. So both provisioning swap and having loads of RAM prevent a DDoS from killing your system, but this is in a VM, one costs me far more per month than the other, and I'm trying to fix a problem that happens very rarely.
> There is no penalty for giving a system too much swap (apart from disk space)
There is a huge penalty for having too much swap - swap thrashing. When the active working set exceeds physical memory, performance degrades so much that the system becomes unresponsive instead of triggering OOM.
> Monitor it occasionally, particularly if your system slows down.
Swap doesn't slow down the system. Either it improves performance by freeing unused memory, or the system becomes completely unresponsive when you run out of memory. Gradual performance degradation never happens.
> give your system so much swap you are sure it exceeds the size of stuff that's running but not used. 4Gb is probably fine for a desktop.
Don't do this. Unless hibernation is used, you only need a few hundred megabytes of free swap space.
> There is a huge penalty for having too much swap - swap thrashing.
Thrashing is the penalty for using too much swap. I was saying there is no penalty for having a lot of swap available, but unused.
Although thrashing is not something you want happening, if your system is thrashing with swap, the alternative without having it available is the OOM killer laying waste to the system. Out of those two choices, I prefer the system running slowly.
> Gradual performance degradation never happens.
Where on earth did you get that from? It's wrong most of the time. The subject was very well researched in the late 1960s and 1970s. If load ramps up gradually, you get a gradual slowdown until the working set is badly exceeded, then it falls off a cliff. This is a modern example, but there are lots of papers from that era showing the usual gradual response, followed by falling off a cliff: https://yeet.cx/r/ayNHrp5oL0. A seminal paper on the subject: https://dl.acm.org/doi/pdf/10.1145/362342.362356
The underlying driver for that behaviour is the disk system being overwhelmed. Say you have 100 web workers that spend a fair chunk of their time waiting for networked database requests. If they all fit in memory, the response is as fast as it can be. Once swapping starts, latency increases gradually as more and more workers are swapped in and out while they wait for clients and the database. Eventually the increasing swapping hits the disk's IOPS limit, active memory is swapped out, and performance crashes.
The only reason I can think the gradual slowdown is not obvious to you is that modern SSDs are so fast the initial degradation is not noticeable to a desktop user.
> Don't do this. Unless hibernation is used, you only need a few hundred megabytes of free swap space.
As you seem to recognise, having lots of swap on hand and unused, even if it's terabytes of it, does not affect performance. The question then becomes: what would you prefer to happen in those rare times when swap usage exceeds the optimal few hundred megabytes? Your options are to have your desktop app randomly killed by the OOM killer and perhaps lose your work, or the system slows to a crawl and you take corrective action like closing the offending app. When that happens, it seems it's popular to blame the swap system for slowing the system down because they temporarily exceeded the capacity of their computer.
> Thrashing is the penality for using too much swap. I was saying there is no penality for having a lot of swap available, but unused.
Unless you overprovision memory on a machine or have carefully set cgroup limits for all workloads, you are going to have a memory leak and your large unused swap is going to be used, leading to swap thrashing.
> the OOM killer laying waste to the system. Out of those two choices I prefer the system running slowly.
In a swap thrashing event, the system isn't just running slowly but totally unresponsive, with an unknown chance of recovery. The majority of people prefer OOM killer to an unresponsive system. That's why we got OOM killer in the first place.
> If load ramps up gradually you get a gradual slowdown until the working set is badly exceeded, then it falls off a cliff.
Random access latency difference between RAM and SSD is 10^3. When the active working set spills out into swap, a linear increase in swap utilization leads to exponential performance degradation. Assuming random access, simple math gives that 0.1% excess causes a 2x degradation, 1% a 10x degradation, 10% a 100x degradation.
WTF is this graph supposed to demonstrate? Some workload went from 0% to 100% of swap utilization in 30 seconds and got OOM-killed. This is not going to happen with a large swap.
> Once swapping starts latency increases gradually as more and more workers are swapped in and out while they wait for clients and the database
In practice, you never see constant or gradually increasing swap I/O in such systems. You either see zero swap I/O with occasional spikes due to incoming traffic or total I/O saturation from swap thrashing.
> Your options are get your desktop app randomly killed by the OOM killer and perhaps lose your work, or the system slows to a crawl and you take corrective action like closing the offending app.
You seem to be unaware that swap thrashing events are frequently unrecoverable, especially with a large swap. It is better to have a typical culprit like Chrome OOM-killed than to press the reset button and risk filesystem corruption.
> Unless you overprovision memory on a machine or have carefully set cgroup limits for all workloads, you are going to have a memory leak and your large unused swap is going to be used, leading to swap thrashing.
You seem to be very certain about that inevitable memory leak. I guess people can make their own judgements about how inevitable they are. I can't say I've seen a lot of them myself.
But the next bit is total rubbish. A memory leak does not lead to thrashing. By definition, if you have a leak, the memory isn't used, so it goes to swap and stays there. It doesn't thrash. What actually happens if the leak continues is that swap eventually fills up, and then the OOM killer comes out to play. Fortunately, it will likely kill the process that is leaking memory.
I've used this behaviour to find which process had a slow leak (it had to be running for months). This has only happened once in decades, mind you - these leaks aren't that common. You allocate a lot of swap, and gradually it is filled by the process that has the leak. Because swap is so large, once the leaking process fills it, it stands out like dog's balls because its memory consumption is huge.
You notice all of this because, like all good sysadmins, you monitor swap usage and receive alerts when it gets beyond what is normal. But you have time - the swap is large, the system slows down during peaks but recovers when they are over. It's annoying, but not a huge issue.
> In a swap thrashing event, the system isn't just running slowly but totally unresponsive
Again, you seem to be very certain about this. Which is odd, because I've logged into systems that were thrashing, which means they didn't meet my definition of "totally unresponsive". In fact, I could only log in because the OOM killer had freed some memory. The first couple of times the OOM killer took out sshd and I had to reach for the reset button, but I got lucky one day and could log in. The system was so slow it was unusable for most purposes - but not for the one thing I needed, which was to find out why it had run out of memory. Maybe we have different definitions of "totally", but to me that isn't "totally". In fact, if you catch it before the OOM killer fires up and kills god knows what, these "totally unresponsive systems" are salvageable without a reboot.
> This paper discusses measuring stable working sets and says nothing about performance degradation when your working set increases.
Fair enough. Neither link was good.
> You seem to be unaware that swap thrashing events are frequently unrecoverable, especially with a large swap.
Perhaps some of them are, but for me it wasn't the swapping that did the system in. It is always the OOM killer.
> It is better to have a typical culprit like Chrome OOM-killed than to press the reset button and risk filesystem corruption.
The OOM killer on the other hand leaves the system in some undefined state. Some things are dead. Maybe you got lucky and it was just Chrome that was killed, but maybe your sound, bluetooth, or DNS daemons have gone AWOL and things just behave weirdly. Despite what you say, the reset button won't corrupt modern journaled filesystems as they are pretty well debugged. But applications are a different story. If they get hit by a reset or the OOM killer while they are saving your data and aren't using sqlite as their "fopen()", they can wipe the file you are working on. You don't just lose the changes. The entire document is gone. This has happened to me.
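The usual defense against that whole-document-gone failure mode is the write-temp, fsync, rename pattern: an interrupted save (reset button or OOM kill alike) leaves either the old file or the new one, never a truncated mix. A sketch; the file name is arbitrary:

```python
import os, tempfile

def atomic_save(path: str, data: bytes) -> None:
    """Write to a temp file in the same directory, fsync it, then
    rename over the target. The rename is atomic on POSIX
    filesystems, so the target is never seen half-written."""
    directory = os.path.dirname(os.path.abspath(path))
    fd, tmp = tempfile.mkstemp(dir=directory)
    try:
        with os.fdopen(fd, "wb") as f:
            f.write(data)
            f.flush()
            os.fsync(f.fileno())
        os.replace(tmp, path)
    except BaseException:
        os.unlink(tmp)  # clean up the temp file on any failure
        raise

target = os.path.join(tempfile.gettempdir(), "atomic_demo.txt")
atomic_save(target, b"document contents")
print(open(target, "rb").read().decode())
```

This is the same guarantee "using sqlite as their fopen()" buys, implemented by hand for a single flat file.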
I'd take the system taking a few minutes to respond to my request to kill a misbehaving application over the OOM killer any day.
> You seem to be very certain about that inevitable memory leak.
It is fashionable to disable swap nowadays because everyone has been bitten by a swap thrashing event. Read other comments.
> A memory leak does not lead to thrashing. By definition if you have a leak the memory isn't used, so it goes to swap and stays there.
You assume that leaked memory is inactive and goes to swap. This is not true. Chrome, Gnome, whatever modern Linux desktop apps leak a lot, and it stays in RSS, pushing everything else into swap.
> if the leak continues is swap eventually fills up, and then the OOM killer comes out to play
You assume that the OOM killer comes out to play in time. The larger the swap, the longer it takes for the OOM killer to trigger, if ever, because the kernel OOM-killer is unreliable, so we have a collection of other tools like earlyoom, Facebook oomd and systemd-oomd.
> I've logged into systems that were thrashing
It means that the system wasn't out of memory yet. When it is unresponsive, you won't be able to enter commands into an already open shell. See other comments here for examples.
> The OOM killer on the other hand leaves the system in some undefined state. Some things are dead. Maybe you got lucky and it was just Chrome that was killed, but maybe your sound, bluetooth, or DNS daemons have gone AWOL and things just behave weirdly.
This is not true. By default, the kernel OOM-killer selects one single largest (measured by its RSS+swap) process in the system. By default, systemd, ssh and other socket-activated systemd units are protected from OOM.
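Those per-process protections and priorities are exposed under /proc as `oom_score_adj` (-1000 to +1000). A sketch in which a process volunteers itself as a preferred OOM victim; raising the score needs no privileges on Linux, and the function returns 0 where /proc is unavailable (e.g. non-Linux):

```python
from pathlib import Path

def deprioritize_self(adj: int = 500) -> int:
    """Make this process a likelier OOM-kill target by raising its
    oom_score_adj. Returns the value read back, or 0 if /proc is
    unavailable on this platform."""
    proc_file = Path("/proc/self/oom_score_adj")
    try:
        proc_file.write_text(str(adj))
        return int(proc_file.read_text())
    except OSError:
        return 0

print("oom_score_adj:", deprioritize_self(500))
```

Going the other way, -1000 means "never kill me", which is how systemd protects critical units; lowering the score requires privileges.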
> It is fashionable to disable swap nowadays because everyone has been bitten by a swap thrashing event.
If they disable swap they will get hit by the OOM killer. You seem to prefer it over slowing down. I guess that's a personal preference. However, I think it is misleading to say people are being bitten by a swap thrashing event. The "event" was them running out of RAM. Unpleasant things will happen as a consequence. Blaming thrashing or the OOM killer for the unpleasant things is misleading.
> You assume that leaked memory is inactive and goes to swap. This is not true.
At best, you can say "it's not always true". It's definitely gone to swap in every case I've come across.
> It means that the system wasn't out of memory yet.
Of course it wasn't out of memory. It had lots of swap. That's the whole point of providing that swap - so you can rescue it!
> When it is unresponsive, you won't be able to enter commands into an already open shell.
Again, that's just plain wrong. I have entered commands into a system that is thrashing. It must work eventually if thrashing is the only thing going on, because when the system thrashes, CPU utilization doesn't go to 0. The CPU is just waiting for disk I/O after all, and disk I/O is happening at a furious pace. There's also a finite amount of pending disk I/O. Provided no new work is arriving (time for a cup of coffee?), it will get done, and the thrashing will end.
If the system does die other things have happened. Most likely the OOM killer if they follow your advice, but network timeouts killing ssh and networked shares are also a thing. If you are using Windows or MacOS, the swap file can grow to fill most of free disk space, so you end up with a double whammy.
Which brings me to another observation. In desktop OSes, the default is to provide swap, and lots of it. In Windows, swap will grow to 3 times RAM. This is pretty universal - even Debian will give you twice RAM for small systems. The people who decided on that design choice aren't following some folklore they read in some internet echo chamber. They've used real data. They've observed that when swap starts being used, systems slow down, giving the user some advance warning, and that when thrashing starts, systems can recover rather than die, which gives the user an opportunity to save work. It is the right design tradeoff IMO.
> By default, the kernel OOM-killer selects one single largest (measured by its RSS+swap) process in the system.
Yes, it does. And if it is a single large process hogging memory, you are in luck - the OOM killer will likely do the right thing. But Chrome (and now Firefox) is not a single large process. Worse, if the out-of-memory condition is caused by, say, someone creating zillions of logins, those processes are so small they are the last thing the OOM killer chooses. Shells, daemons, all sorts of critical things go first. "Largest process first" is just a heuristic, one which can be, and in my case has been, wrong. Badly wrong.
An unresponsive system is not a slowdown. You keep ignoring that.
>> You assume that leaked memory is inactive and goes to swap. This is not true.
> At best, you can say "it's not always true".
You skipped my sentence that was specifying the scope when "it's not always true", and now you pretend that I'm making a categorical generalized statement. This is a silly attempt at a "strawman".
>> It means that the system wasn't out of memory yet.
> Of course it wasn't out of memory. It had lots of swap. That's the whole point of providing that swap - so you can rescue it!
Swap is not RAM. When the free RAM is below the low watermark, the kernel switches to direct reclaim and blocks tasks that require free memory pages. Blocking of tasks happens regardless of swap. If you are able to log in and fork a new process, the system is not below the low watermark.
>> When it is unresponsive, you won't be able to enter commands into an already open shell.
> Again that's just plain wrong.
You are in denial.
> Provided no new work is arriving (time for a cup of coffee?) it will get done, and the thrashing will end.
This is false. A system can stay unresponsive much longer than a cup of coffee. There is no guarantee that the thrashing will end in a reasonable time.
> even Debian will give you twice RAM for small systems.
> The people who decided on that design choice aren't following some folklore they read in some internet echo chamber.
That 2x RAM rule is exactly that - old folklore. You can find it in SunOS/AIX/etc manuals or Usenet FAQs from the 80s and early 90s, before Linux existed.
> They've used real data.
You're hallucinating like an LLM. No one did any research or measurements to justify that 2x rule in Linux.
Another factor other commenters haven't mentioned, although the article does bring it up: you may disable swap and you will still get paging behavior regardless, because in a pinch the kernel will reclaim pages that are mmapped to files. Most typically binaries and libraries. Which means the process in question will incur a page read from disk the next time it is scheduled. But of course you're out of memory, so the kernel will need to evict another process's code page to make room, and when that process next schedules... etc.
This has far worse degradation behavior than normal swapping of regular data pages. That at least gives you the breathing space to still schedule processes when under memory pressure, such as whichever OOM killer you favor.
Binaries and libraries are not paged out. Being read-only, they are simply discarded from the memory. And I'll repeat, actively used executable pages are explicitly excluded from reclaim and never discarded.
The reason you're supposed to have swap equal in size to your RAM is so that you can hibernate, not to make things faster. You can easily get away with far less than that because swap is rarely needed.
The “paging space needs to be X*RAM” and “paging space needs to be RAM+Y” predate hibernate being a common thing (even a thing at all), with hibernate being an extra use for that paging space not the reason it is there in the first place. Some OSs have hibernate space allocated separately from paging/swap space.
I do wish there were a way to reserve swap space for hibernation that doesn't contribute to virtual memory. Otherwise, by construction, the hibernation space is not sufficient for the entire virtual memory space, and hibernation will fail when virtual memory is getting full.
this. i don't even want swap for my apps. they allocate too much memory as it is. i'd rather they be killed when the memory runs out or simply be prevented from allocating memory that's not there. the kind of apps that can be safely swapped out are rarely using much memory anyway.
You're implying that people are telling you to set up swap without any reason, when in fact there are good reasons - namely dealing with memory pressure. Maybe you could fit so much RAM into your computer that you never hit pressure - but why would you do that vs allocating a few GB of disk space for swap?
Also, as has been pointed out by another commenter, 8GB of swap for a system with 8GB of physical memory is overkill.
I'm also in the GP's camp; RAM is for volatile data, disk is for data persistence. The first "why would you do that" that needs to be addressed is why volatile data should be written to disk. And "it's just a few % of your disk" is not a sufficient answer to that question.
You can ask your favourite search engine or language fabricator about the differences between RAM and disk storage, they will all tell you the same thing. Frankly, it's kind of astonishing that this needs to be explained on a site like HN.
I have no idea where on those slides it says non-volatile storage should not be used for non-permanent, temporary data.
It does note main differences (speed, latency, permanence). How does that limit what data disk can be used for?
What would one use optane DIMMs for?
Also, if my program requires a huge working set to process the data, why would I spend the effort to implement my own paging to temporary working files instead of allocating a ridiculous amount of memory and letting the OS manage it for me? What is the benefit?
Because of cost - particularly given the current state of the RAM market. In order to have so much memory that you never hit memory spikes, you will deliberately need to buy RAM to never be used.
Note that simply buying more RAM than what you expect to use is not going to help. Going back to my post from earlier, I had a laptop with 8GB of RAM at a time when I would usually only need about 2-4GB of RAM for even relatively heavy usage. However, every once in a while, I would run something that would spike memory usage and make the system unresponsive. While I have much more than 8GB nowadays, I'm not convinced that it's enough to have completely outrun the risk of this sort of behaviour recurring.
how much swap do you have? i have 16GB now, and 16GB ram. i had a machine before with 48GB ram. obviously having more ram and no swap should perform better than the same amount of memory split into ram and swap.