
A 429 Too Many Requests error means you've sent too many requests in a given time period and the server is rate-limiting you. It's a protective mechanism — the server is saying "slow down" to prevent abuse, resource exhaustion, or service degradation.
Rate limiting is a fundamental part of modern web infrastructure. Understanding how it works and how to handle 429 errors properly is essential whether you're consuming an API, running a web scraper, or managing your own server.
A 429 status code means the user (or client) has exceeded the rate limit set by the server. The response usually includes a Retry-After header telling you how long to wait before sending another request:
```shell
curl -i https://api.example.com/data
# HTTP/1.1 429 Too Many Requests
# Retry-After: 60
# X-RateLimit-Limit: 100
# X-RateLimit-Remaining: 0
# X-RateLimit-Reset: 1679616000
```
The rate limit headers tell you:

- `X-RateLimit-Limit` — the maximum number of requests allowed in the current window
- `X-RateLimit-Remaining` — how many requests you have left in that window
- `X-RateLimit-Reset` — when the window resets (typically a Unix timestamp)
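When these headers are present, a client can check them before firing the next request. A minimal sketch, assuming the `X-RateLimit-*` naming convention (header names vary between APIs) and a response already captured into a variable:

```shell
# Assumed convention: X-RateLimit-* headers (names vary by API)
# Headers captured from a previous response (e.g. via `curl -s -D -`)
headers='HTTP/1.1 200 OK
X-RateLimit-Limit: 100
X-RateLimit-Remaining: 3
X-RateLimit-Reset: 1679616000'

# Extract the remaining-request count (case-insensitive header match)
remaining=$(printf '%s\n' "$headers" \
  | awk -F': ' 'tolower($1) == "x-ratelimit-remaining" {print $2}')

if [ "$remaining" -lt 5 ]; then
  echo "Only $remaining requests left in this window - slowing down"
fi
```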
Every major API has rate limits. If you're making too many calls too quickly, you'll get throttled. This is the most common cause of 429 errors.
```shell
# Example: hitting an API in a tight loop will trigger rate limiting
for i in $(seq 1 200); do
  curl -s -o /dev/null -w "%{http_code}\n" https://api.example.com/data
done
# Eventually you'll start seeing 429s
```
Scraping a website too aggressively triggers rate limiting. Most sites will start returning 429s if you're hitting them faster than a reasonable human would browse.
Login endpoints and authentication APIs often have strict rate limits to prevent credential stuffing and brute force attacks. Too many failed login attempts from the same IP will trigger a 429.
Services like Cloudflare, AWS WAF, and Nginx's built-in rate limiting can return 429s when they detect traffic patterns that look like an attack — even if the traffic is legitimate.
If you're the server operator, your rate limiting rules might be too aggressive, blocking legitimate users. A rate limit of 10 requests per minute might be fine for an API but far too low for a website where a single page load triggers multiple asset requests.
If you're behind a NAT, corporate proxy, or shared hosting, many users share the same IP address. The server sees all those requests as coming from one source and may rate-limit the entire group.
The most important rule: when you get a 429, wait the amount of time specified in the Retry-After header before sending another request.
```shell
# Check the Retry-After header
curl -s -D - https://api.example.com/data -o /dev/null | grep -i retry-after
# Retry-After: 30

# Wait 30 seconds, then retry
sleep 30
curl https://api.example.com/data
```
For programmatic access, implement exponential backoff — wait increasingly longer between retries:
```shell
# Simple backoff test: double the delay after every 429
DELAY=1
for i in 1 2 3 4 5; do
  STATUS=$(curl -s -o /dev/null -w "%{http_code}" https://api.example.com/data)
  if [ "$STATUS" = "429" ]; then
    echo "Rate limited. Waiting ${DELAY}s..."
    sleep $DELAY
    DELAY=$((DELAY * 2))
  else
    echo "Success: $STATUS"
    break
  fi
done
```
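The two strategies can be combined: honour `Retry-After` when the server sends it, and fall back to exponential backoff when it doesn't. Here is that decision sketched as a small helper — the policy is a common convention, not mandated by any spec:

```shell
# Returns how long to sleep before the next retry.
# $1 - value of the Retry-After header, or empty if absent
# $2 - retry attempt number, starting at 0
wait_time() {
  retry_after=$1
  attempt=$2
  if [ -n "$retry_after" ]; then
    # The server said exactly how long to wait - respect it
    echo "$retry_after"
  else
    # No hint from the server: back off exponentially (1, 2, 4, 8, ...)
    echo $((1 << attempt))
  fi
}

wait_time "" 3   # no header: exponential backoff gives 8
wait_time 30 3   # header present: 30 wins
```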
If you're scraping or polling, slow down your request rate:
```shell
# Add a delay between requests (read line by line to handle URLs safely)
while read -r url; do
  curl -s "$url" > /dev/null
  sleep 2  # Wait 2 seconds between requests
done < urls.txt
```
If you run Nginx, the `limit_req` module lets you define per-IP limits — and `limit_req_status` makes it return 429 instead of the default 503:

```nginx
# Define a rate limit zone
http {
    # 10 requests per second per IP, with a burst of 20
    limit_req_zone $binary_remote_addr zone=api:10m rate=10r/s;

    server {
        location /api/ {
            limit_req zone=api burst=20 nodelay;
            limit_req_status 429;
            proxy_pass http://localhost:3000;
        }
    }
}
```

Test the configuration and reload:

```shell
sudo nginx -t && sudo systemctl reload nginx
```
On Apache, mod_ratelimit (2.4+) throttles response bandwidth rather than request counts — the `rate-limit` value is in KiB/s:

```apache
<Location /api/>
    # Throttle responses to 400 KiB/s per connection
    SetOutputFilter RATE_LIMIT
    SetEnv rate-limit 400
</Location>
```
For request-level rate limiting on Apache, consider mod_evasive instead:

```shell
sudo apt-get install libapache2-mod-evasive
sudo a2enmod evasive
```

```apache
<IfModule mod_evasive20.c>
    DOSHashTableSize 3097
    # Max requests for the same page per page interval
    DOSPageCount 5
    # Max requests across the whole site per site interval
    DOSSiteCount 50
    # Intervals in seconds
    DOSPageInterval 1
    DOSSiteInterval 1
    # Block offending IPs for 60 seconds
    DOSBlockingPeriod 60
</IfModule>
```
If legitimate users are getting 429s, your limits are too tight. Analyse your traffic to find the right balance:
```shell
# Count requests per IP in the last hour
awk -v d="$(date -d '1 hour ago' '+%d/%b/%Y:%H')" '$4 ~ d {print $1}' /var/log/nginx/access.log | sort | uniq -c | sort -rn | head -20
```
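It also helps to see how many 429s you're actually serving, and to whom. A sketch using inline sample lines in the default combined log format (with a real server, point awk at your access log instead; `$9` is the status-code field in that format):

```shell
# Sample lines in the nginx/Apache "combined" log format
sample_log='203.0.113.7 - - [10/May/2024:10:00:01 +0000] "GET /api/data HTTP/1.1" 429 0 "-" "curl/8.0"
203.0.113.7 - - [10/May/2024:10:00:02 +0000] "GET /api/data HTTP/1.1" 429 0 "-" "curl/8.0"
198.51.100.2 - - [10/May/2024:10:00:03 +0000] "GET / HTTP/1.1" 200 512 "-" "Mozilla/5.0"'

# Count 429 responses per client IP ($9 is the status code)
printf '%s\n' "$sample_log" | awk '$9 == 429 {print $1}' | sort | uniq -c | sort -rn
```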
For known API partners or internal services, bypass rate limiting:
```nginx
geo $rate_limit_key {
    default      $binary_remote_addr;
    10.0.0.0/8   "";   # Internal network - no rate limit
    203.0.113.5  "";   # Trusted partner - no rate limit
}

# Requests with an empty key are not counted against the limit
limit_req_zone $rate_limit_key zone=api:10m rate=10r/s;
```
When your website starts returning 429 errors to real users, it means your rate limiting is either misconfigured or your server is under genuine stress. Either way, your visitors are being blocked from accessing your content. The problem is that 429 errors might only affect some users (those hitting the rate limit) while your site appears fine from your own browser.
Domain Monitor checks your site every minute from multiple locations worldwide. If your server starts rate-limiting Domain Monitor's checks, that's a strong signal your limits are too aggressive or your server is overwhelmed. You'll get an alert via email, SMS, or Slack so you can investigate immediately. Set up downtime alerts for your key pages and API endpoints, and use continuous website monitoring to catch rate-limiting issues before they drive away real users.
| Cause | Fix |
|---|---|
| Exceeded API rate limit | Respect Retry-After, implement backoff |
| Aggressive scraping | Slow down request frequency |
| Brute force protection | Expected behaviour — reduce attempts |
| Server rate limits too strict | Increase limits, whitelist trusted IPs |
| Shared IP triggering limits | Use API keys instead of IP-based limiting |
| DDoS protection false positive | Whitelist legitimate traffic sources |
A 429 is the server protecting itself. If you're the client, slow down and respect the limits. If you're the server operator, make sure your limits are sensible for your traffic patterns and that legitimate users aren't getting caught in the crossfire.