This came up in a separate topic, but it started me wondering what might have happened to reserved memory in recent Linode kernels. I've generally run my Linodes with 2.6.18, but have been doing some testing recently with paravirt kernels, mostly for IPv6 filtering support, and am a little discouraged by some of the numbers I'm seeing: moving to the later kernels significantly reduces available memory.
There's rather significant growth in the amount of memory reserved by the paravirt kernels, not just compared to the latest stable 2.6.18, but even between fairly close recent paravirt releases.
For example, some comparisons of dmesg output (all on Linode 512s):
Code:
2.6.18.8-linode22
Memory: 511908k/532480k available (3989k kernel code, 12360k reserved, 1102k data, 224k init, 0k highmem)
2.6.38.3-linode32
Memory: 509416k/532480k available (5378k kernel code, 22616k reserved, 1570k data, 424k init, 0k highmem)
2.6.39-linode33 #5
Memory: 480264k/4202496k available (5700k kernel code, 43576k reserved, 1666k data, 412k init, 0k highmem)
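For anyone wanting to reproduce the comparison, here's a rough sketch of pulling the reserved figure out of that dmesg line. I'm using the 2.6.39 line above as canned input so the snippet is self-contained; on a live system you'd feed it `dmesg | grep '^Memory:'` instead.

```shell
# Extract the "reserved" figure from a dmesg Memory: line.
# Sample input is the 2.6.39-linode33 line quoted above.
line='Memory: 480264k/4202496k available (5700k kernel code, 43576k reserved, 1666k data, 412k init, 0k highmem)'

# Keep only the parenthesized breakdown, split on commas,
# then print the number attached to "reserved".
reserved=$(echo "$line" | sed 's/.*(\(.*\))/\1/' | tr ',' '\n' | awk '/reserved/ {print $1}')
echo "$reserved"   # 43576k
```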
Now I realize some growth over time is to be expected, and the kernel code/data increases seem reasonable, but the jump in reserved memory (especially between 2.6.38 and 2.6.39) seems excessive. Does anyone know what changed in that jump that could suddenly need almost twice as much reserved memory? I do know that vm.min_free_kbytes was tweaked a few times for 2.6.39 (at least in part leading to build #5), but between my 2.6.18 and 2.6.39 nodes it only changes from 2918K to 4096K, so that's relatively minor.
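For scale, that min_free_kbytes change works out to just over a megabyte:

```shell
# Difference between the two vm.min_free_kbytes values quoted above
# (2918 kB on my 2.6.18 node vs 4096 kB on 2.6.39). On a live system
# the current value is readable via `cat /proc/sys/vm/min_free_kbytes`
# or `sysctl vm.min_free_kbytes`.
delta=$((4096 - 2918))
echo "min_free_kbytes delta: ${delta} kB"   # 1178 kB, i.e. ~1.2 MB
```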
I'm not that familiar with kernel memory management, but from what I've been able to find, I don't think (though I'd love to be corrected) the kernel will use that memory for caching or applications; it only holds it ready for internal structures. So it seems to me that moving from the latest stable kernel to the latest paravirt kernel effectively decreases my working memory on a Linode 512 by ~30MB, or about 6% (about 4 points of which come just from the 2.6.38 to 2.6.39 jump).
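The deltas behind that estimate, computed straight from the reserved figures in the dmesg output above:

```shell
# Reserved-memory deltas between the kernels quoted above (all in kB),
# expressed as a share of a Linode 512's 512 MB.
awk 'BEGIN {
    r18 = 12360;  r38 = 22616;  r39 = 43576   # 2.6.18 / 2.6.38 / 2.6.39 reserved
    total = 512 * 1024                        # Linode 512, in kB
    printf "2.6.18 -> 2.6.39: %d kB (%.1f%% of 512 MB)\n", r39 - r18, 100 * (r39 - r18) / total
    printf "2.6.38 -> 2.6.39: %d kB (%.1f%% of 512 MB)\n", r39 - r38, 100 * (r39 - r38) / total
}'
```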
One thing I found interesting is that only the 2.6.39 log reports a different total memory figure (4202496k rather than 532480k) ... I guess it has different visibility into the host environment. But if the kernel is basing any calculations on that larger figure, perhaps that explains some of the increase?
Can anyone shed some light on what might be going on, and/or whether this increase in reserved memory does in fact have a real impact on usable memory? Any thoughts on how to get some of it back (assuming it is in fact largely being wasted)?
-- David