Committed memory question

So, I was adding some hi-res photos to my Zen Photo installation, and the Linode ground to a halt fairly quickly when creating thumbnails. I discovered that hi-res photos require a lot of memory (even when heavily compressed) when manipulated by ImageMagick.

Anyway, the strange thing was this: the apps' actual memory usage stayed level, but the committed memory went from 333MB to 613MB. Swap usage was over 50MB and climbing and, obviously, things slowed severely. Restarting Apache sorted things out, since the VIRT memory usage for some processes was about 125MB, even though RES was only 90MB.

Here are my questions: Why did swap get thrashed when the commit went high, even though the apps appeared not to use it (according to Munin)? Or was Munin giving me false info, and the apps were indeed using the full 512MB plus swap?

4 Replies

Post your top output. I run a lot of ImageMagick processes, and my memory never goes over 230MB total, and it never swaps…

@hobbes7:

> So, I was adding some hi-res photos to my Zen Photo installation, and the Linode ground to a halt fairly quickly when creating thumbnails. I discovered that hi-res photos require a lot of memory (even when heavily compressed) when manipulated by ImageMagick.

Compressed storage formats are too inefficient for actual processing, so an image program is pretty certain to expand an image for processing, and will sometimes even "waste" some memory in its internal format for efficiency (e.g., using 4 bytes per pixel for alignment rather than 3, even for an RGB image with no alpha channel).
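
To put rough numbers on that (just a sketch of the arithmetic, not any particular library's exact accounting): the decoded size depends only on pixel dimensions, not on how small the file is on disk.

```python
# Decoded in-memory size of an image: width x height x bytes per
# pixel, regardless of the compressed file size. The 4-byte figure
# is the aligned-RGBA case mentioned above.
def decoded_mb(width, height, bytes_per_pixel=4):
    return width * height * bytes_per_pixel / 2 ** 20

# A 10-megapixel photo might be a ~3 MB JPEG on disk, but decoded:
print("%.1f MB" % decoded_mb(3872, 2592))  # ~38.3 MB
```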

Some tools (such as Gimp's tile cache, and I think ImageMagick has some limit options) provide a way to use disk as their own backing storage - at a significant cost in performance, but controlling their impact on the overall system working set. Of course, whether it's swap or application I/O, it all runs into the same disk I/O bottleneck we have here, but it might help contain the impact to the image application.
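
For instance, something along these lines should cap ImageMagick's pixel cache (the `-limit` memory/map/disk resources are real ImageMagick options, though exact behavior varies by version; the filenames here are just placeholders):

```python
import subprocess

# Cap ImageMagick's pixel cache: once the "memory" and "map" limits
# are exhausted, pixels spill to a disk-backed cache - slower, but
# the process can't balloon past the caps.
subprocess.run([
    "convert",
    "-limit", "memory", "64MiB",   # RAM pixel cache ceiling
    "-limit", "map", "128MiB",     # memory-mapped spillover ceiling
    "-limit", "disk", "1GiB",      # disk cache ceiling
    "photo.jpg",                   # hypothetical input file
    "-thumbnail", "200x200",
    "thumb.jpg",
], check=True)
```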

> Anyway, the strange thing was this: the apps' actual memory usage stayed level, but the committed memory went from 333MB to 613MB. Swap usage was over 50MB and climbing and, obviously, things slowed severely. Restarting Apache sorted things out, since the VIRT memory usage for some processes was about 125MB, even though RES was only 90MB.
What reference were you using to judge "actual apps' memory usage" versus the committed memory? Did they have different snapshot cycles? Depending on how the images were being processed, could there have been processes coming and going without being caught in one or both of the stats?

> Here are my questions: Why did swap get thrashed when the commit went high, even though the apps appeared not to use it (according to Munin)? Or was Munin giving me false info, and the apps were indeed using the full 512MB plus swap?
Historically this would be a simpler answer - "committed" memory would be the system working set (actual current requirements) so if higher than physical memory you would have to swap.

I think Linux's representation of committed memory is a little different, in part because it doesn't actually commit memory upon request, but only on use, so there's an implicit unknown factor. As I understand it, the committed stat is more a probabilistic report of how much memory you would need to ensure you can't OOM, but I don't know that it requires all of that memory to be actively in use, or even to have been touched yet.

I think for most practical purposes the difference can be ignored though, when it comes to sizing the needs of a system. And given that you were, in fact, swapping, I think a classical definition is close enough for your purposes. It seems safe to assume your aggregate instantaneous memory usage was exceeding physical memory.
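
If you want to sanity-check that on your Linode, the kernel's own commit accounting is exposed in /proc/meminfo (Committed_AS and CommitLimit are standard Linux fields); a quick sketch:

```python
# Compare total committed address space against physical memory.
# All /proc/meminfo values are reported in kB.
fields = {}
with open("/proc/meminfo") as f:
    for line in f:
        key, rest = line.split(":", 1)
        fields[key] = int(rest.split()[0])

for key in ("MemTotal", "Committed_AS", "CommitLimit"):
    print("%-13s %6d MB" % (key, fields[key] // 1024))
```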

As for not accounting for it with apps, Munin by default (I think) uses a 5-minute cycle, just taking snapshots of process metrics at those points (unlike, say, network I/O, which is an increasing counter it can delta from the prior run). So there's a lot it can miss (including very large spikes in resource usage), especially with short-lived applications. It's just a guess, but perhaps you had a lot of ImageMagick processes starting and stopping as the thumbnails were being built, and on average you were overcommitted, but the instantaneous snapshots taken by Munin couldn't show it. In such a case I don't think I'd call the Munin information "false", just that the metric you were looking for was beyond its capabilities at its sampling rate.

You can see something similar if you get caught in a problem that constantly forks processes. A system can be brought to a crawl, but even attempting to constantly monitor with something like top might not show why, since the processes are being created and destroyed too quickly for the monitoring frequency.
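
A toy sketch of that sampling problem (assuming a Unix `sleep` binary): spawn a burst of short-lived workers while taking one-second snapshots, and nearly all of them slip between the samples:

```python
import subprocess, time

# Spawn 50 workers that each live ~0.05 s, while snapshotting the
# live-child count once per second - the way munin/top sample.
children, snapshots = [], []
next_sample = time.time() + 1.0
for _ in range(50):
    children.append(subprocess.Popen(["sleep", "0.05"]))
    time.sleep(0.1)
    if time.time() >= next_sample:
        snapshots.append(sum(1 for c in children if c.poll() is None))
        next_sample += 1.0
for c in children:
    c.wait()
# 50 processes came and went, but each snapshot sees at most 1 or 2.
print("spawned: 50, snapshots:", snapshots)
```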

– David

Thanks for the replies.
> Historically this would be a simpler answer - "committed" memory would be the system working set (actual current requirements) so if higher than physical memory you would have to swap.
I thought swap would only be used if the VPS actually used the committed memory when over the physical limits. AFAIK, committed memory does not necessarily equal actual memory usage, but is only what would be used if a process used all memory it has reserved.

I've used lower-res images now, and everything is fine. I thought my ZenPhoto was using ImageMagick (it is ticked in the options), but it seems to be using GD to process the images. ZenPhoto gives the following disturbing memory requirements:
> * VGA Image, 640 x 480 pixels => needs ~4.1 MB Memory
> * SVGA Image, 800 x 600 pixels => needs ~4.8 MB Memory
> * 1 MP Image, 1024 x 798 pixels => needs ~6.3 MB Memory
> * 2 MP Image, 1600 x 1200 pixels => needs ~11.7 MB Memory
> * 6 MP Image, 2816 x 2112 pixels => needs ~22.6 MB Memory
> * 8.2 MP Image, 3571 x 2302 pixels => needs ~41.7 MB Memory

I've reduced the images to be 1200 pixels on the longest side, and things are fine.

@hobbes7:

> Thanks for the replies.
> > Historically this would be a simpler answer - "committed" memory would be the system working set (actual current requirements) so if higher than physical memory you would have to swap.
> I thought swap would only be used if the VPS actually used the committed memory when over the physical limits. AFAIK, committed memory does not necessarily equal actual memory usage, but is only what would be used if a process used all memory it has reserved.
One bit of terminology to be clear on - to me, swap being "used" just means data is stored in it, while "swapping" is actually moving data pages back and forth between swap and physical memory. Using swap need not be disastrous to a system, but heavy swapping (i.e., thrashing) is, and of course in the Linode VPS environment, "heavy" has a very low threshold.

In other systems I'm accustomed to, committed memory is any memory given (committed) to a process, which therefore must exist either in swap or physical memory (or occasionally an alternate backing store besides swap - like Windows using the executable file for its pages). More committed than physical memory ensures swap is "used", though it's possible the working set might not be causing actual swapping. You can't commit more memory than your total virtual space.

My understanding with Linux is that the reported committed value is more usage based than assignment based, since Linux will give out more memory than it has to processes, hoping they don't use it all. So the committed value is still tied to actual memory pages touched by a process, and if larger than physical memory swap is still going to be used. As above, it's of course possible to over-commit physical memory, but have a small working set resident size, so while "using" swap space you might not actively be swapping pages in and out.
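
You can watch the reserve-versus-touch distinction directly; here's a small sketch (Linux-specific, using peak RSS as a rough proxy) where reserving a large anonymous mapping barely moves resident size until the pages are actually written:

```python
import mmap, resource

def peak_rss_kb():
    # ru_maxrss is reported in kB on Linux
    return resource.getrusage(resource.RUSAGE_SELF).ru_maxrss

print("before reserve:", peak_rss_kb(), "kB")
buf = mmap.mmap(-1, 256 * 1024 * 1024)         # reserve 256 MiB, untouched
print("after reserve: ", peak_rss_kb(), "kB")  # barely changes
for off in range(0, len(buf), mmap.PAGESIZE):  # touch one byte per page
    buf[off] = 1
print("after touching:", peak_rss_kb(), "kB")  # jumps by ~256 MiB
```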

Of course, the problem also is that Linux may have handed out far more than all virtual memory available, but you won't know it until enough gets touched and the OOM killer kicks in.

In your scenario I think the committed memory was representative of your working set (since it was largely tied up in image processing), so you not only exceeded physical memory - requiring swap usage - but were also actively swapping, since most of that committed memory was in active use.

> I've used lower-res images now, and so everything is fine. I thought my zenphoto is using ImageMagick (it is ticked in the options), but it seems to be using GD to process the images. ZenPhoto gives the following disturbing memory requirements:
Yeah, images just need space … I think you can put some limits on GD memory usage too, through an ImageAPI setting in PHP, but I've never done it myself.
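
As a rough sketch of where numbers like those come from (my assumption - ZenPhoto's actual formula likely adds further overhead, so these won't match its figures exactly), the usual GD rule of thumb is pixels times bytes-per-pixel times a fudge factor for working copies:

```python
# Rule-of-thumb GD memory estimate: pixels x bytes-per-pixel x a
# fudge factor for working copies and allocator overhead. The
# constants here are illustrative, not ZenPhoto's actual formula.
def gd_estimate_mb(width, height, bytes_per_pixel=5, fudge=1.65):
    return width * height * bytes_per_pixel * fudge / 2 ** 20

for w, h in [(640, 480), (1600, 1200), (3571, 2302)]:
    print("%4d x %4d -> ~%.1f MB" % (w, h, gd_estimate_mb(w, h)))
```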

-- David
