
Not every team wants to rely solely on a monitoring dashboard. Engineering teams increasingly want to integrate uptime data into their own systems — custom dashboards, CI/CD pipelines, deployment scripts, runbooks, and internal tooling.
Website availability APIs make this possible. Instead of manually checking a monitoring dashboard, you query an API and get structured data about your monitors, current status, uptime percentages, and incident history.
A well-designed monitoring API typically exposes:
- The list of monitors and the current status of each (up, down, degraded)
- Uptime percentages over specified time periods
- Response time data, including percentiles
- Incident history: start times, durations, and resolution details
Your team may have an internal operations dashboard or a NOC (network operations centre) screen that needs to show uptime data alongside other metrics (server CPU, request rates, deployment status). Rather than embedding an iframe from your monitoring tool, you can pull data via API and style it consistently with your internal design system.
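As a sketch of that approach, the backend for such a dashboard might reshape the monitors payload into display rows. The `monitors`, `name`, and `status` fields match the examples used later in this article; the `uptime_percent_30d` field is an assumption for illustration, not a documented schema:

```python
def summarize_monitors(payload):
    """Turn a monitors API payload into rows for an internal dashboard."""
    rows = []
    for m in payload["monitors"]:
        rows.append({
            "name": m["name"],
            "status": m["status"],
            # Format the assumed 30-day uptime field for display
            "uptime_30d": f'{m["uptime_percent_30d"]:.2f}%',
        })
    return rows

# Example payload in the assumed shape
sample = {
    "monitors": [
        {"name": "Marketing site", "status": "up", "uptime_percent_30d": 99.982},
        {"name": "Checkout API", "status": "down", "uptime_percent_30d": 98.510},
    ]
}
rows = summarize_monitors(sample)
```

From here, the rows can be rendered with whatever templating or frontend stack your internal design system already uses.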
Before deploying to production, you may want to verify that all monitored services are healthy. A pre-deployment check using the monitoring API:
#!/bin/bash
# Check all monitors are up before deployment
RESPONSE=$(curl -s -H "Authorization: Bearer $MONITOR_API_KEY" \
  "https://api.domain-monitor.io/v1/monitors")
DEGRADED=$(echo "$RESPONSE" | jq '[.monitors[] | select(.status != "up")] | length')

if [ "$DEGRADED" -gt 0 ]; then
  echo "Deployment blocked: $DEGRADED monitor(s) are not healthy"
  echo "$RESPONSE" | jq -r '.monitors[] | select(.status != "up") | .name'
  exit 1
fi

echo "All monitors healthy — proceeding with deployment"
This prevents deploying into an already-degraded environment, where the deployment might be blamed for an existing issue.
After deploying, verify your monitors return to green within your expected recovery window:
import requests
import time
def wait_for_monitors_healthy(api_key, max_wait_seconds=300):
    headers = {"Authorization": f"Bearer {api_key}"}
    start = time.time()
    while time.time() - start < max_wait_seconds:
        response = requests.get(
            "https://api.domain-monitor.io/v1/monitors",
            headers=headers
        )
        monitors = response.json()["monitors"]
        unhealthy = [m for m in monitors if m["status"] != "up"]
        if not unhealthy:
            print("All monitors healthy")
            return True
        print(f"Waiting... {len(unhealthy)} monitors not yet healthy")
        time.sleep(30)
    print("Timeout: monitors did not recover within expected window")
    return False
If you offer SLAs to customers, you need uptime data to generate SLA reports. The monitoring API provides uptime percentages over specified time periods:
const response = await fetch(
'https://api.domain-monitor.io/v1/monitors/my-monitor/uptime?period=30d',
{ headers: { 'Authorization': `Bearer ${API_KEY}` } }
);
const data = await response.json();
const uptimePercent = data.uptime_percent;
const totalDowntimeMinutes = data.downtime_minutes;
// Generate SLA report
if (uptimePercent < 99.9) {
// Breach occurred — trigger SLA credit calculation
calculateSLACredit(totalDowntimeMinutes);
}
When your alerting system fires (PagerDuty, Opsgenie, etc.), the on-call engineer needs context immediately. Use the monitoring API to pull current status into your runbook or incident management tool:
import requests

def get_incident_context(monitor_id):
    response = requests.get(
        f"https://api.domain-monitor.io/v1/monitors/{monitor_id}/incidents/latest",
        headers={"Authorization": f"Bearer {API_KEY}"}
    )
    incident = response.json()
    return {
        "started_at": incident["started_at"],
        "duration_minutes": incident["duration_minutes"],
        "last_status_code": incident["last_status_code"],
        "affected_locations": incident["affected_locations"]
    }
If you want to build a simple uptime checker for internal use rather than using a commercial API, here is a minimal implementation:
import requests
import time
from datetime import datetime
def check_url(url, timeout=10):
    try:
        start = time.time()
        response = requests.get(url, timeout=timeout, allow_redirects=True)
        elapsed = (time.time() - start) * 1000  # ms
        return {
            "url": url,
            "status": "up",
            "status_code": response.status_code,
            "response_time_ms": round(elapsed),
            "checked_at": datetime.utcnow().isoformat()
        }
    except requests.exceptions.Timeout:
        return {"url": url, "status": "down", "reason": "timeout"}
    except requests.exceptions.ConnectionError:
        return {"url": url, "status": "down", "reason": "connection_error"}
    except Exception as e:
        return {"url": url, "status": "down", "reason": str(e)}

# Check multiple URLs
urls = [
    "https://example.com",
    "https://api.example.com/health",
    "https://checkout.example.com"
]
results = [check_url(url) for url in urls]
for r in results:
    print(r)
The limitation of a self-built checker is that it runs from a single location and cannot distinguish between your site being down globally versus being unreachable from one network location. Commercial monitoring APIs run from multiple geographic locations, which is critical for accurate availability determination.
Programmatic access to response time data lets you build performance trending over time:
# Get p95 response time for the last 7 days
curl -H "Authorization: Bearer $API_KEY" \
"https://api.domain-monitor.io/v1/monitors/my-monitor/response-times?period=7d&percentile=95"
Use this data to:
- Track performance trends over weeks or months, not just current status
- Spot gradual degradation before it becomes an outage
- Verify that a deployment or infrastructure change actually improved response times
- Set alert thresholds grounded in historical percentiles rather than guesswork
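As an illustration of the trending idea, a regression check can be a simple ratio against a baseline p95. The 1.25 threshold below is an arbitrary example, not a recommendation:

```python
def p95_regressed(baseline_ms, current_ms, ratio=1.25):
    """Flag a regression when the current p95 exceeds the baseline by `ratio`.

    baseline_ms: p95 response time from a reference period (e.g. last week)
    current_ms:  p95 response time from the period under review
    """
    return current_ms > baseline_ms * ratio
```

Feed it the percentile values returned by the response-times endpoint for two different periods and alert, or fail a pipeline step, when it returns True.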
Most monitoring tools offer both webhooks and polling APIs. Choose based on your use case:
Webhooks are push-based — the monitoring service calls your endpoint when an incident starts or resolves. Best for:
- Real-time alerting and incident-triggered automation
- Updating status pages or chat channels the moment state changes
- Avoiding constant requests when you only care about transitions
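The consumer side of a webhook can be sketched as a small routing function. The event types and payload fields below are assumptions for illustration, not a documented schema:

```python
import json

def route_webhook(body: str):
    """Decide what to do with a hypothetical incident webhook payload."""
    event = json.loads(body)
    if event.get("type") == "incident.started":
        return ("page_oncall", event["monitor"])
    if event.get("type") == "incident.resolved":
        return ("resolve_incident", event["monitor"])
    # Unknown event types are ignored rather than treated as errors
    return ("ignore", None)
```

In practice this function would sit behind an HTTP endpoint in your web framework of choice, with the webhook's signature verified before the body is trusted.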
Polling APIs are pull-based — your system requests data on a schedule. Best for:
- Dashboards and scheduled reports
- SLA calculations over fixed periods
- Environments that cannot expose a publicly reachable endpoint to receive webhooks
Availability API data complements your internal metrics stack. Feed monitoring data into:
- Time-series databases such as Prometheus or InfluxDB
- Visualization tools such as Grafana, alongside CPU, memory, and request-rate metrics
- A data warehouse for long-term reliability reporting
Many monitoring tools provide official integrations or Prometheus exporters for this purpose.
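If no official exporter fits, the Prometheus text exposition format is simple enough to emit directly. A minimal sketch — the `monitor_up` metric name is our own choice, not a standard:

```python
def to_prometheus_text(monitors):
    """Render monitor status in the Prometheus text exposition format."""
    lines = [
        "# HELP monitor_up 1 if the monitor is up, 0 otherwise",
        "# TYPE monitor_up gauge",
    ]
    for m in monitors:
        value = 1 if m["status"] == "up" else 0
        # One gauge sample per monitor, labelled by name
        lines.append(f'monitor_up{{name="{m["name"]}"}} {value}')
    return "\n".join(lines) + "\n"

output = to_prometheus_text([
    {"name": "web", "status": "up"},
    {"name": "api", "status": "down"},
])
```

Serve this text from an HTTP endpoint and point a Prometheus scrape job at it; the `prometheus_client` library does the same thing with less manual string handling.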
See website monitoring vs application monitoring for how external availability data fits alongside internal observability metrics.
Access uptime data programmatically via the Domain Monitor API — integrate availability monitoring into your dashboards and CI/CD pipelines.