You can't protect yourself from a really good attack. Large corporate sites get taken down by DDoS attacks even though they have full-time staff. So don't stress, and be happy.
Now on the other hand... if I were to do this to your site:
Code:
ab -n 1000000 -k http://yoursite.com
is your site going to keep trying to answer 1,000,000 requests on one keep-alive connection from one IP as fast as it can, using all available resources? You can set nginx's limit_req to 20 requests per second so that no legitimate user could possibly hit your cap. Modern browsers only send 2-8 parallel requests for resources, wait until those resources have finished downloading, and then request 2-8 more. So depending on your latency to a particular client, the file sizes, and the transfer speed, a real user probably tops out around 10 requests per second. Unless of course they're a mile from the server requesting 0-byte files.
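In nginx that looks something like the following sketch (the zone name, zone size, and burst value here are just illustrative choices, not required values):
Code:
# in the http block: track clients by IP, cap at 20 requests/second
limit_req_zone $binary_remote_addr zone=perip:10m rate=20r/s;

# in a server or location block
limit_req zone=perip burst=40;
The burst parameter lets a normal page load fire off its handful of parallel resource requests without getting throttled.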
On the other hand, as we mentioned, modern browsers download 2-8 files at a time, and the number has been going up, especially if we use subdomains to speed up page rendering. That means your server is opening 2-8 connections to a client at once. If I do this to your site:
Code:
ab -n 1000000 -k -c 1000 http://yoursite.com
is your server really going to let me hold 1,000 open connections? nginx happens to be really good at this and handles thousands of connections with low resource usage, but it still takes resources. If you're using Apache prefork with mod_php, each connection is going to use far more. Again, ask yourself the question: what is the maximum number of open connections a legitimate user might hold to my site? Then pad it. A limit_conn of 20 or 30 is pretty safe (see https://calomel.org/nginx.html for their explanation of why they set it at 5), and one guy with ab won't be able to totally mess with your site.
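Setting that up is much like limit_req. A minimal sketch (the zone name and size are just examples; note that on nginx releases older than 1.1.8 the zone directive was called limit_zone rather than limit_conn_zone):
Code:
# in the http block: one connection-counting zone keyed by client IP
limit_conn_zone $binary_remote_addr zone=peraddr:10m;

# in your server block: at most 20 simultaneous connections per IP
limit_conn peraddr 20;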
A special case might be a particular page that takes huge resources. For example, your site has a search system that uses MySQL. Searching your whole site through MySQL is pretty taxing, since MySQL is bad at full-text search (see Apache Solr). Resource-wise, that's very different from downloading your CSS and JS files. So you might want a limit_req of 0.333 requests per second, but only on /search: one IP can make one search every 3 seconds, or it gets a 503 page. (Hopefully a nice, customized, apologetic 503 page.) You can do that within nginx location definitions.
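A sketch of that per-location limit, assuming the search page lives at /search (nginx expresses one request per 3 seconds as rate=20r/m, and the custom error page path below is hypothetical):
Code:
# in the http block
limit_req_zone $binary_remote_addr zone=search:1m rate=20r/m;

server {
    # serve a pretty, apologetic page instead of the bare 503
    error_page 503 /errors/503.html;

    location /search {
        # one search per IP every 3 seconds; excess requests get a 503
        limit_req zone=search;
    }
}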
Both of those directives limit traffic coming from a single IP. If you piss off some guy with a botnet of thousands of nodes, and each node hits your server for the maximum number of connections from its own IP, you're still going to be screwed.
Set them up, give them nice generous limits. Get some sleep.