> It needs to be less willing to evict recently used buffers even under pressure, more willing to let processes OOM
I've had similar experiences. On Windows, whether through bugs or poor coding, an application that requests far too much memory leaves the whole system unresponsive while the kernel is busy paging.
On Linux I've seen the kernel kill system processes under memory pressure, leading to crashes or an unusable system.
I don't understand why the OS would allow a program to allocate more than available physical memory, at least without asking the user, given the severe consequences.
Overcommit is a very deliberate feature, but its time may have passed. Keep in mind this is all from a time when RAM was so expensive that swapping to spinning disks was a requirement just to run programs that took advantage of a 32-bit address space.
You can tune the overcommit ratio on Linux, but if memory serves (no pun intended) the last time I played with eliminating overcommit, a bunch of programs that liked to allocate big virtual address spaces ceased functioning.
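For reference, the knobs in question are the `vm.overcommit_memory` and `vm.overcommit_ratio` sysctls. A minimal sketch (Linux-specific, assuming a readable `/proc`) that inspects them and demonstrates why overcommit exists, a large allocation that only reserves address space until it's touched:

```python
import mmap
from pathlib import Path

def overcommit_settings() -> tuple[int, int]:
    """Read the Linux overcommit knobs from /proc.
    mode: 0 = heuristic (the default), 1 = always allow, 2 = strict accounting.
    ratio: under strict accounting, commit limit is roughly swap + ratio% of RAM."""
    mode = int(Path("/proc/sys/vm/overcommit_memory").read_text())
    ratio = int(Path("/proc/sys/vm/overcommit_ratio").read_text())
    return mode, ratio

mode, ratio = overcommit_settings()
print(f"vm.overcommit_memory={mode} vm.overcommit_ratio={ratio}")

# Reserve 1 GiB of anonymous memory. Under overcommit this mostly claims
# virtual address space; physical pages are committed only when first touched.
buf = mmap.mmap(-1, 1 << 30)
buf.close()
```

Switching to strict accounting is `sysctl vm.overcommit_memory=2`, which is exactly the experiment that made the address-space-hungry programs above stop working.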
Yeah, I know it was a feature at one point... but at the very least the OS should punish the program doing the overcommitting, rather than bringing the rest of the system down (either by grinding to a halt or by killing important processes).
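Linux does give you a per-process way to aim the punishment: `oom_score_adj` biases the OOM killer toward (or away from) a particular process. A small sketch (Linux-specific), marking the current process as the preferred victim so the kernel takes it out first instead of something important:

```python
from pathlib import Path

def set_oom_score_adj(value: int) -> None:
    """Bias the Linux OOM killer for the current process.
    Range is -1000 (never kill) to 1000 (kill first). Raising the score
    needs no privileges; lowering it below its floor requires CAP_SYS_RESOURCE."""
    Path("/proc/self/oom_score_adj").write_text(str(value))

# Volunteer this process as the first OOM victim under memory pressure.
set_oom_score_adj(1000)
print(Path("/proc/self/oom_score_adj").read_text().strip())
```

Services can do the opposite (negative values, via their init system) so a runaway batch job gets killed before, say, your display server.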