Backups cause disk I/O warning email

I get this email regularly:
> …exceeded the notification threshold (1300) for disk io rate by averaging 1329.47 for the last 2 hours

This is what triggered it:

![](http://n4te.com/x/1262-edop.png)

Pretty obvious it's the daily backup:

![](http://n4te.com/x/1263-AzHq.png)

Is the right thing to do to raise the warning threshold? The default is 1000, and I think I already raised it to 1300. I would like to know about excessive IO, but I'd like backups excluded. If I get an email every day, I have to either ignore it and assume it's a backup (bad!) or check it every day (annoying!).

6 Replies

@Nate:
> Is the right thing to do to raise the warning threshold?

Yes.

The Linode Backup Service itself should not create disk IO. If you're implementing your own backup solution, or are running a daily cron task to dump/compress data before the service runs, then this makes sense. Otherwise, you may want to actually find out what's causing the spike.

In any case, the default notifications are set pretty low and in general I agree that raising the threshold is probably the way to go.

P.S. To clarify, Linode's backup service doesn't create disk IO that's metered in the manager. I'm not trying to say that it pulls the data magically, but it shouldn't affect your performance or be visible in your graphs.

Thanks. I raised it to 1400 for now.

I have some cron scripts that run every night, but I just ran them manually and they don't cause a spike. I was just assuming it was the Linode backup service, since it happens every day in the backup time window. Are you sure the Linode backup service won't cause an IO spike?

Edit: actually, it seems the cron jobs do cause a spike; it just took a while for the chart to update. Mystery solved. Maybe I shouldn't optimize a multi-million-row table every night.
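One way to keep the optimization but spread out its IO cost is to move it from a nightly to a weekly cron schedule. A sketch, assuming a MySQL `OPTIMIZE TABLE` job (the database name, table name, and credentials file here are hypothetical):

```shell
# Illustrative crontab entries: run the heavy OPTIMIZE once a week during a
# quiet window instead of every night. Adjust names/paths for your setup.
# m h  dom mon dow  command
0 4 * * 0  mysql --defaults-extra-file=/root/.my.cnf -e "OPTIMIZE TABLE mydb.big_table"
```

Running it weekly still reclaims space and defragments the table, but the IO spike only lands once per week instead of inside every backup window.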

I don't think the Linode backup service will cause an IO spike, because that runs on the host server. Add a cron job that saves the details of running processes every minute, and see what is running at that time.
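A minimal sketch of that kind of per-minute snapshot, run from cron (the log path and script name are illustrative):

```shell
#!/bin/sh
# proc-snapshot.sh: append a timestamped snapshot of the busiest
# processes to a log so you can see what ran during an IO spike.
LOG=/var/log/proc-snapshot.log

echo "=== $(date '+%Y-%m-%d %H:%M:%S') ===" >> "$LOG"
# Top 5 processes by CPU (GNU ps); a rough proxy for what is busy.
ps -eo pid,comm,%cpu,%mem --sort=-%cpu | head -n 6 >> "$LOG"
```

Then a crontab line like `* * * * * /usr/local/bin/proc-snapshot.sh` runs it every minute; when the graph spikes, grep the log for that time window.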

@Nate: Do you have a solution for your nightly cron job now? One problem could be that the next cron job starts before the previous one has finished, which leads to massive CPU and memory usage. A solution could be to split the massive job into multiple smaller jobs, since the database seems to be the bottleneck.

Thanks. I raised the threshold and now run the script less often; it's been OK for a while now.
