
429 Too Many Requests: What It Means and How to Fix It

A 429 Too Many Requests error means you've sent too many requests in a given time period and the server is rate-limiting you. It's a protective mechanism — the server is saying "slow down" to prevent abuse, resource exhaustion, or service degradation.

Rate limiting is a fundamental part of modern web infrastructure. Understanding how it works and how to handle 429 errors properly is essential whether you're consuming an API, running a web scraper, or managing your own server.

What Does a 429 Too Many Requests Error Mean?

A 429 status code means the client has exceeded the rate limit set by the server. The response often includes a Retry-After header telling you how long to wait before sending another request:

curl -i https://api.example.com/data
# HTTP/1.1 429 Too Many Requests
# Retry-After: 60
# X-RateLimit-Limit: 100
# X-RateLimit-Remaining: 0
# X-RateLimit-Reset: 1679616000

The rate limit headers tell you:

  • X-RateLimit-Limit: The maximum number of requests allowed in the time window.
  • X-RateLimit-Remaining: How many requests you have left.
  • X-RateLimit-Reset: When the rate limit window resets (usually a Unix timestamp).
  • Retry-After: How many seconds to wait before trying again.
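As a quick sketch of putting those headers to use, the snippet below parses a captured 429 response and works out how long to wait. The header names and values mirror the example above; real APIs vary, so adjust the names to whatever yours returns:

```shell
# Sample 429 response headers (as captured above); a real script would
# read these from `curl -s -D -` output instead
HEADERS='HTTP/1.1 429 Too Many Requests
Retry-After: 60
X-RateLimit-Limit: 100
X-RateLimit-Remaining: 0
X-RateLimit-Reset: 1679616000'

# Extract a header value by name (case-insensitive, strips any \r)
get_header() {
    printf '%s\n' "$HEADERS" | grep -i "^$1:" | awk '{print $2}' | tr -d '\r'
}

RETRY=$(get_header Retry-After)
RESET=$(get_header X-RateLimit-Reset)

# Prefer Retry-After; otherwise wait until the reset timestamp
if [ -n "$RETRY" ]; then
    WAIT=$RETRY
else
    WAIT=$(( RESET - $(date +%s) ))
fi
echo "Waiting ${WAIT}s before retrying"
```

Retry-After takes priority because it is the server's explicit instruction; the reset timestamp is a fallback when the header is missing.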

Common Causes of a 429 Error

1. Exceeding API Rate Limits

Every major API has rate limits. If you're making too many calls too quickly, you'll get throttled. This is the most common cause of 429 errors.

# Example: hitting an API in a tight loop will trigger rate limiting
for i in $(seq 1 200); do
    curl -s -o /dev/null -w "%{http_code}\n" https://api.example.com/data
done
# Eventually you'll start seeing 429s

2. Aggressive Web Scraping or Crawling

Scraping a website too aggressively triggers rate limiting. Most sites will start returning 429s if you're hitting them faster than a reasonable human would browse.

3. Brute Force Protection

Login endpoints and authentication APIs often have strict rate limits to prevent credential stuffing and brute force attacks. Too many failed login attempts from the same IP will trigger a 429.

4. DDoS Protection Kicking In

Services like Cloudflare, AWS WAF, and Nginx's built-in rate limiting can return 429s when they detect traffic patterns that look like an attack — even if the traffic is legitimate.

5. Misconfigured Rate Limits on Your Server

If you're the server operator, your rate limiting rules might be too aggressive, blocking legitimate users. A rate limit of 10 requests per minute might be fine for an API but far too low for a website where a single page load triggers multiple asset requests.
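One way to avoid that under nginx is to give pages and assets a much looser zone than the API. A rough sketch (zone names and rates here are illustrative, not recommendations):

```nginx
# Looser limit for pages/assets, tighter for API calls
limit_req_zone $binary_remote_addr zone=site:10m rate=50r/s;
limit_req_zone $binary_remote_addr zone=api:10m  rate=10r/s;

server {
    location /     { limit_req zone=site burst=100 nodelay; }
    location /api/ { limit_req zone=api  burst=20  nodelay; }
}
```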

6. Shared IP Address Issues

If you're behind a NAT, corporate proxy, or shared hosting, many users share the same IP address. The server sees all those requests as coming from one source and may rate-limit the entire group.

How to Fix a 429 Too Many Requests Error

If You're the Client (Receiving 429s)

Respect the Retry-After Header

The most important rule: when you get a 429, wait the amount of time specified in the Retry-After header before sending another request.

# Check the Retry-After header
curl -s -D - https://api.example.com/data -o /dev/null | grep -i retry-after
# Retry-After: 30

# Wait 30 seconds, then retry
sleep 30
curl https://api.example.com/data

Implement Exponential Backoff

For programmatic access, implement exponential backoff — wait increasingly longer between retries:

# Simple backoff test
DELAY=1
for i in 1 2 3 4 5; do
    STATUS=$(curl -s -o /dev/null -w "%{http_code}" https://api.example.com/data)
    if [ "$STATUS" = "429" ]; then
        echo "Rate limited. Waiting ${DELAY}s..."
        sleep $DELAY
        DELAY=$((DELAY * 2))
    else
        echo "Success: $STATUS"
        break
    fi
done
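In practice the two strategies combine well: honour Retry-After when the server sends one, and fall back to exponential backoff when it doesn't. A sketch of that control flow, with simulated responses standing in for real curl calls (api.example.com above is a placeholder):

```shell
# Simulated status:retry-after pairs - swap these for a real call, e.g.
# STATUS=$(curl -s -o /dev/null -w "%{http_code}" "$URL")
RESPONSES="429:2 429: 200:"
DELAY=1
for resp in $RESPONSES; do
    STATUS=${resp%%:*}
    RETRY_AFTER=${resp#*:}
    if [ "$STATUS" != "429" ]; then
        echo "Success: $STATUS"
        break
    fi
    if [ -n "$RETRY_AFTER" ]; then
        WAIT=$RETRY_AFTER        # server told us exactly how long to wait
    else
        WAIT=$DELAY              # no header: fall back to exponential backoff
        DELAY=$((DELAY * 2))
    fi
    echo "Rate limited. Waiting ${WAIT}s..."
    sleep "$WAIT"
done
```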

Reduce Request Frequency

If you're scraping or polling, slow down your request rate:

# Add a delay between requests (while read handles URLs more robustly
# than word-splitting $(cat urls.txt))
while read -r url; do
    curl -s "$url" > /dev/null
    sleep 2  # Wait 2 seconds between requests
done < urls.txt

If You're the Server Operator (Sending 429s)

Configure Rate Limiting in Nginx

# Define a rate limit zone
http {
    # 10 requests per second per IP, with a burst of 20
    limit_req_zone $binary_remote_addr zone=api:10m rate=10r/s;

    server {
        location /api/ {
            # nodelay serves burst requests immediately instead of queueing them
            limit_req zone=api burst=20 nodelay;
            # nginx returns 503 for rejected requests by default; send 429 instead
            limit_req_status 429;

            proxy_pass http://localhost:3000;
        }
    }
}

# Test and apply the configuration
sudo nginx -t && sudo systemctl reload nginx

Configure Rate Limiting in Apache

# Using mod_ratelimit (Apache 2.4+) — note this throttles response
# bandwidth (the rate-limit value is in KiB/s), not the request count
<Location /api/>
    SetOutputFilter RATE_LIMIT
    SetEnv rate-limit 400
</Location>

For more advanced rate limiting on Apache, consider using mod_evasive:

sudo apt-get install libapache2-mod-evasive
sudo a2enmod evasive

# Then configure it (on Debian/Ubuntu: /etc/apache2/mods-available/evasive.conf)
<IfModule mod_evasive20.c>
    DOSHashTableSize 3097
    DOSPageCount 5          # max requests for the same page per interval
    DOSSiteCount 50         # max requests for the whole site per interval
    DOSPageInterval 1       # page interval, in seconds
    DOSSiteInterval 1       # site interval, in seconds
    DOSBlockingPeriod 60    # how long (seconds) to block an offender
</IfModule>

Tune Your Rate Limits

If legitimate users are getting 429s, your limits are too tight. Analyse your traffic to find the right balance:

# Count requests per IP during the previous clock hour
# (GNU date; on macOS/BSD use: date -v-1H '+%d/%b/%Y:%H')
awk -v d="$(date -d '1 hour ago' '+%d/%b/%Y:%H')" '$4 ~ d {print $1}' /var/log/nginx/access.log \
    | sort | uniq -c | sort -rn | head -20

Whitelist Trusted IPs

For known API partners or internal services, bypass rate limiting:

geo $rate_limit_key {
    default $binary_remote_addr;
    10.0.0.0/8 "";     # Internal network - no rate limit
    203.0.113.5 "";    # Trusted partner - no rate limit
}

limit_req_zone $rate_limit_key zone=api:10m rate=10r/s;
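If shared IPs (the NAT/proxy problem described earlier) are tripping your limits, you can also key the zone on something other than the client address. A sketch assuming clients send a hypothetical X-API-Key header, falling back to per-IP limiting when it's absent:

```nginx
map $http_x_api_key $limit_key {
    default $http_x_api_key;      # authenticated clients: limit per key
    ""      $binary_remote_addr;  # anonymous traffic: limit per IP
}

limit_req_zone $limit_key zone=perkey:10m rate=10r/s;
```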

How Domain Monitor Can Help

When your website starts returning 429 errors to real users, it means your rate limiting is either misconfigured or your server is under genuine stress. Either way, your visitors are being blocked from accessing your content. The problem is that 429 errors might only affect some users (those hitting the rate limit) while your site appears fine from your own browser.

Domain Monitor checks your site every minute from multiple locations worldwide. If your server starts rate-limiting Domain Monitor's checks, that's a strong signal your limits are too aggressive or your server is overwhelmed. You'll get an alert via email, SMS, or Slack so you can investigate immediately. Set up downtime alerts for your key pages and API endpoints, and use continuous website monitoring to catch rate-limiting issues before they drive away real users.

Quick Summary

| Cause | Fix |
| --- | --- |
| Exceeded API rate limit | Respect Retry-After, implement backoff |
| Aggressive scraping | Slow down request frequency |
| Brute force protection | Expected behaviour — reduce attempts |
| Server rate limits too strict | Increase limits, whitelist trusted IPs |
| Shared IP triggering limits | Use API keys instead of IP-based limiting |
| DDoS protection false positive | Whitelist legitimate traffic sources |

A 429 is the server protecting itself. If you're the client, slow down and respect the limits. If you're the server operator, make sure your limits are sensible for your traffic patterns and that legitimate users aren't getting caught in the crossfire.
