
An HTTP 200 means the server received the request and returned a response without a protocol-level error. It says nothing about whether the response is correct, complete, or contains real data.
False positives — monitors showing green while users experience failures — are one of the most dangerous monitoring blind spots. They give you confidence in a system that's actually broken.
// Status: 200 OK
{
  "status": "success",
  "data": [],
  "count": 0
}
The endpoint is responding. Your monitor sees 200 and marks it as up. Your users see no data because a database query is failing silently and returning an empty result set rather than an error.
Some APIs (particularly older REST APIs and some GraphQL implementations) return errors inside a 200 response:
// Status: 200 OK — but the application failed
{
  "success": false,
  "error": "Database connection failed",
  "data": null
}
A status-code-only monitor would mark this as healthy.
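A monitor that parses the body catches this class of failure. Here's a minimal sketch, assuming the API signals failure with a top-level "success" flag as in the example above:

```python
import json

def check_response(status_code, body):
    """Treat a 200 as healthy only if the application-level payload agrees."""
    if status_code != 200:
        return False
    payload = json.loads(body)
    # A 200 wrapping {"success": false, ...} is still a failure.
    return payload.get("success") is True

# The error response above fails the check despite its 200 status:
body = '{"success": false, "error": "Database connection failed", "data": null}'
print(check_response(200, body))  # False
```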
Your CDN or application cache is serving a response from 3 hours ago. The endpoint returns 200 with old data. The database behind it has been unavailable for 3 hours. Users see outdated data; your monitor sees 200.
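Stale-but-200 responses can be caught by checking the age of the data itself. A sketch, assuming the API embeds an ISO-8601 `generated_at` timestamp (a hypothetical field — adapt it to whatever freshness marker your responses carry, such as the HTTP `Age` or `Last-Modified` header):

```python
from datetime import datetime, timezone, timedelta

MAX_STALENESS = timedelta(minutes=10)

def is_fresh(payload):
    """Fail the check when the payload is older than MAX_STALENESS."""
    generated = datetime.fromisoformat(payload["generated_at"])
    return datetime.now(timezone.utc) - generated <= MAX_STALENESS

# A cached response from hours ago fails the check even with a 200:
stale = {"generated_at": "2024-01-01T00:00:00+00:00", "data": []}
print(is_fresh(stale))  # False
```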
In a microservices or composite API, one service fails and the aggregating endpoint returns partial data with a 200. The response looks valid but is missing critical fields.
// Status: 200 OK — but user profile data is missing
{
  "user": { "id": 1, "email": "[email protected]" },
  "profile": null,      // Profile service timed out
  "permissions": null   // Permissions service unreachable
}
Configure your monitor to check that the response body contains text that only appears in a successful response:
# Example: check that the API returns actual user data
curl -s https://api.yourdomain.com/health \
  | grep -q '"status":"ok"' && echo "OK" || echo "FAIL"
Most uptime monitors support keyword matching — you specify a string that must be present in the response body for the check to pass. Use something specific to a healthy response, such as "database":"connected" or "status":"operational", rather than just "success", which might appear in error messages too.
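To see why specificity matters, note that a naive substring match passes on a failed response, because the word "success" appears inside "success": false:

```python
error_body = '{"success": false, "error": "Database connection failed"}'

print('success' in error_body)          # True  (false positive)
print('"success": true' in error_body)  # False (specific keyword catches it)
```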
Rather than monitoring your API endpoints directly, build a health check that explicitly tests each dependency and reports status in a structured way:
from flask import Flask, jsonify
import requests

app = Flask(__name__)

# Assumes `db` (a SQLAlchemy session/engine) and `cache` (a Redis-style
# client) are configured elsewhere in the application.

@app.route('/health/deep')
def deep_health():
    results = {}
    overall = 'ok'

    # Database
    try:
        count = db.execute('SELECT COUNT(*) FROM users').scalar()
        results['database'] = {'status': 'ok', 'user_count': count}
    except Exception as e:
        results['database'] = {'status': 'error', 'error': str(e)}
        overall = 'degraded'

    # Cache
    try:
        cache.set('health_check', 'ok', ex=10)
        val = cache.get('health_check')
        results['cache'] = {'status': 'ok' if val else 'error'}
        if not val:
            overall = 'degraded'
    except Exception as e:
        results['cache'] = {'status': 'error', 'error': str(e)}
        overall = 'degraded'

    # External API dependency
    try:
        resp = requests.get('https://api.third-party.com/ping', timeout=3)
        results['third_party_api'] = {
            'status': 'ok' if resp.status_code == 200 else 'error',
            'response_ms': int(resp.elapsed.total_seconds() * 1000)
        }
        if resp.status_code != 200:
            overall = 'degraded'
    except Exception as e:
        results['third_party_api'] = {'status': 'error', 'error': str(e)}
        overall = 'degraded'

    return jsonify({'status': overall, 'checks': results}), \
        200 if overall == 'ok' else 503
This endpoint returns 200 only when everything is genuinely working. It returns 503 when any dependency fails — giving your monitor something real to act on.
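On the monitoring side, the structured body also tells you which dependency failed. A sketch of how a monitor or alerting script might interpret the response (the payload shape mirrors the endpoint's contract above):

```python
import json

def interpret_deep_health(status_code, body):
    """Turn a deep health response into an alert decision.

    200 means every dependency passed; anything else means at least
    one check failed, and the body names the culprits.
    """
    payload = json.loads(body)
    failing = [name for name, check in payload["checks"].items()
               if check["status"] != "ok"]
    return {"alert": status_code != 200, "failing": failing}

body = json.dumps({
    "status": "degraded",
    "checks": {
        "database": {"status": "ok", "user_count": 42},
        "cache": {"status": "error", "error": "connection refused"},
    },
})
print(interpret_deep_health(503, body))
# {'alert': True, 'failing': ['cache']}
```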
A response that takes 8 seconds is functionally broken even if it eventually returns 200. Set response time thresholds alongside status code checks.
Slow responses are often the precursor to complete failures — a database under heavy load takes 5 seconds before it starts refusing connections entirely.
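Combining the two checks is straightforward. A minimal sketch — the 2000 ms threshold is an assumed budget; tune it to your endpoint's normal p95 latency:

```python
def latency_check(status_code, elapsed_ms, threshold_ms=2000):
    """A 200 that arrives too slowly is treated as a failure."""
    if status_code != 200:
        return "FAIL: status"
    if elapsed_ms > threshold_ms:
        return f"FAIL: slow ({elapsed_ms} ms > {threshold_ms} ms)"
    return "OK"

print(latency_check(200, 8000))  # FAIL: slow (8000 ms > 2000 ms)
print(latency_check(200, 150))   # OK
```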
For critical endpoints, validate that the response contains expected fields:
def validate_api_response(response_json):
    required_fields = ['user', 'permissions', 'session']
    for field in required_fields:
        if field not in response_json or response_json[field] is None:
            raise ValueError(f'Missing or null field: {field}')
    return True
Build this validation into your health check or use it in a scheduled synthetic test.
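For example, a synthetic test using this validation flags the partial-data response from earlier even though it arrived with a 200 (the function is repeated here so the snippet runs on its own):

```python
def validate_api_response(response_json):
    required_fields = ['user', 'permissions', 'session']
    for field in required_fields:
        if field not in response_json or response_json[field] is None:
            raise ValueError(f'Missing or null field: {field}')
    return True

# Partial response: the aggregating endpoint returned 200, but two
# downstream services failed and their fields came back null.
partial = {"user": {"id": 1}, "profile": None, "permissions": None}
try:
    validate_api_response(partial)
    print("healthy")
except ValueError as e:
    print(f"synthetic check failed: {e}")
# synthetic check failed: Missing or null field: permissions
```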
False positives are more dangerous than false negatives (alerts that fire when nothing's wrong): a false negative costs a few minutes of wasted attention, while a false positive means nobody is investigating a real outage until users report it.
See why website monitoring misses downtime sometimes for the broader pattern of monitoring gaps, including this one.
Domain Monitor supports keyword matching on responses — your monitor can verify that the response body contains expected content, not just that a 200 was returned. Combined with response time thresholds and multi-location checks, this catches the class of failures that status-code-only monitoring misses. Create a free account.