unix – Origin of the rule that swap size should be 2x of the physical memory

In short: the intent was never to fill it; the intent was to always have a contiguous free swap range available for a contiguous memory range.

Today the rule often looks weird. My laptop has only a 256 GB SSD with 32 GB of RAM; following the rule, a quarter of my disk would be swap.

At the time, the situation was different. In 1997, a machine with 8 MB of physical RAM and a 320 MB disk was typical. A 16 MB swap was only 1/20 of the disk.

The ratio changed because RAM has grown very large recently, driven by the demands of the latest Windows. Disks did not grow as much, because everything moved "to the cloud". Besides that, everyone switched to SSDs, for which the NTFS / Windows access patterns were always suboptimal on spinning disks. But an SSD of the same size is still more costly than an HDD, although the gap is closing because SSDs receive much more development investment.

The reason for the 2x was swap fragmentation. Those disks were still HDDs, with significant seek time, so it was essential to write swap in consecutive chunks.

The goal was not to fill it; the goal was that the system could always find a contiguous empty block on disk when writing a contiguous memory range out to swap.

People who essentially did not understand how paging and the block cache work existed even then. I even heard university sysadmins say that "we can solve the memory needs of our local server without forced compromises like swap". I understood only decades later that he was not an expert who knew something I did not; he did not even know what I already knew at the time. Many people held the very naive idea that "swap is slow, so I turn it off". But this happened only among the "Linuxers" (incl. FreeBSD / Solaris etc.), because the Windows guys mostly did not know what swap was at all, let alone how to turn it off.

My impression is also that software behaved a bit differently back then. At the time, if you had 8 MB of RAM, your processes used, say, 14 MB, so you had a minimal block cache and used 8 MB of swap; that was slow but fine. Today such a system would be nearly unusable. My impression is that this is because today's processes are much more likely to regularly touch all of their memory pages, particularly VMs and interpreters (a JVM garbage collector, for example, regularly walks the whole heap of a process).

The "swap == 2x physical RAM" rule came from the need to be able to swap out all of your physical RAM, while always doing so into contiguous empty disk ranges.


Regarding your addition: FreeBSD was special; early FreeBSD had no paging, only segmentation. As far as I remember, it could swap out only whole processes (or segments). Note that the same is true for Win3.x. Obviously the need for contiguous block allocation was far more important in that case, although the 2x remained.

First, it was never a rigid rule. It was always a rule of thumb, although in practice it was very often applied as a rigid rule. The reason is very simple: we had no better idea.

Now the question is why this became the rule. Imagine what happens in a swap space: randomly arriving page ranges to swap out and swap back in. Your primary interest is to always allocate contiguous block ranges for them on disk. Disk is cheap (per byte); memory prices are astronomical.

Sometimes a factory burns down in Taiwan, and for the next two years new computers ship with half the RAM at the same price.

Disk is costly too, but the seriousness of the situation is nowhere near that of RAM.

Back to the 2x. It is a rule of thumb, and the reason behind it is partly psychological and partly technological; but it is the partly psychological decision of people who really understand what is going on in their machine.

Now imagine, play it out in your mind, what a swap algorithm does. You have a queue of tasks: write out ranges, read back ranges. They arrive (from your point of view) randomly. Play this story in your mind. I would happily build a video for it, but I trust you are capable of imagining it.
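Playing it out in code may be easier than in your head. Below is a toy sketch, my own illustration rather than any real kernel's allocator: a first-fit contiguous allocator over a bitmap of swap blocks, with swap-out and swap-in requests arriving at random. The `simulate` function, its parameters, and the block counts are all made up for the illustration; it reports what fraction of swap-out requests found no contiguous hole at a given target utilization.

```python
import random

def simulate(swap_blocks, target_used, ops=2000, max_range=32, seed=42):
    """Toy first-fit contiguous allocator over a bitmap of swap blocks.
    Returns the fraction of swap-out requests that found no contiguous hole."""
    rng = random.Random(seed)
    free = [True] * swap_blocks        # True = free disk block
    allocs = []                        # live swapped-out ranges: (start, length)
    used = 0
    failures = requests = 0

    def first_fit(length):
        # Scan the bitmap for the first run of `length` free blocks.
        run = 0
        for i, is_free in enumerate(free):
            run = run + 1 if is_free else 0
            if run == length:
                return i - length + 1
        return None

    def release(start, length):
        for i in range(start, start + length):
            free[i] = True

    for _ in range(ops):
        if used < target_used:
            # Swap out: a contiguous memory range needs a contiguous disk range.
            length = rng.randint(1, max_range)
            requests += 1
            start = first_fit(length)
            if start is None:
                failures += 1          # fragmented: no hole big enough
                if allocs:             # free something so the run continues
                    s, l = allocs.pop(rng.randrange(len(allocs)))
                    release(s, l)
                    used -= l
            else:
                for i in range(start, start + length):
                    free[i] = False
                allocs.append((start, length))
                used += length
        elif allocs:
            # Swap in: the range returns to RAM, its disk blocks free up.
            s, l = allocs.pop(rng.randrange(len(allocs)))
            release(s, l)
            used -= l
    return failures / requests if requests else 0.0
```

Comparing a run that keeps the swap around half full against one that keeps it nearly full (say `simulate(4096, 2048)` versus `simulate(4096, 3686)`) lets you see how contiguous allocation degrades as utilization grows, which is the whole point of the heuristic.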

Then say what overcommit factor you would recommend for the disk usage. You know nothing about the user, but your little brother comes and asks what he should tell the Linux installer about the swap size.

No, you won't say 1.5x, and not 3x. The background logic behind saying 2x is that you estimate that, at maximal or near-maximal overload, roughly as much data should lie in swap as your physical RAM holds. That comes from the fact that this was roughly the point (at the time) where computers became unusably slow.

Now you want this to be allocatable in contiguous ranges. It is again partly psychological, because anything can happen on the swap partition (or swap file). But you know such cases are rare. The heuristic here is that you want roughly as much free swap space as allocated swap space.

Thus, this was the combination of two other rules of thumb:

  1. At the maximal load you want to handle, swap usage is about the same size as your physical RAM.
  2. To nearly always get contiguous ranges, at most about half of your swap space should be allocated.

Pack these two together: you have 8 MB of RAM, so you have 16 MB of swap.
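The arithmetic of packing the two rules together is trivial, but writing it down makes the structure explicit (a sketch; `recommended_swap` is my own hypothetical helper name):

```python
def recommended_swap(ram_mb):
    """Combine the two rules of thumb above (toy arithmetic):
    1) at peak load, roughly ram_mb worth of pages sit in swap;
    2) to keep allocations contiguous, at most half the swap is in use."""
    expected_swap_usage = ram_mb        # rule 1: swap usage ~= physical RAM
    return 2 * expected_swap_usage      # rule 2: keep utilization <= 50%
```

For the 1997 machine from the text, `recommended_swap(8)` gives the 16 MB swap of the example.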
