
Website Availability APIs: Checking Uptime Programmatically

Not every team wants to rely solely on a monitoring dashboard. Engineering teams increasingly want to integrate uptime data into their own systems — custom dashboards, CI/CD pipelines, deployment scripts, runbooks, and internal tooling.

Website availability APIs make this possible. Instead of manually checking a monitoring dashboard, you query an API and get structured data about your monitors, current status, uptime percentages, and incident history.

What Website Availability APIs Provide

A well-designed monitoring API exposes:

  • Monitor list — all configured monitors with their current status
  • Current status — up, down, or degraded, with last check timestamp
  • Uptime statistics — availability percentage over configurable time periods (24h, 7d, 30d, 90d)
  • Response time data — average, p95, p99 response times by period
  • Incident history — list of downtime incidents with start/end times and duration
  • SSL certificate status — expiry date and certificate validity
  • Domain expiry — days until domain registration expires
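To make this concrete, here is a sketch of what a monitor-list response might contain and how a consumer filters it. The field names are illustrative, not any particular vendor's schema:

```python
# Illustrative shape of a monitor-list API response.
# Field names here are hypothetical, not a specific vendor's schema.
sample_response = {
    "monitors": [
        {
            "name": "marketing-site",
            "status": "up",
            "last_checked_at": "2024-05-01T12:00:00Z",
            "uptime_percent_30d": 99.98,
            "avg_response_time_ms": 210,
        },
        {
            "name": "checkout",
            "status": "down",
            "last_checked_at": "2024-05-01T12:00:30Z",
            "uptime_percent_30d": 99.72,
            "avg_response_time_ms": 480,
        },
    ]
}

# A consumer typically filters on status before acting.
down = [m["name"] for m in sample_response["monitors"] if m["status"] != "up"]
print(down)  # ['checkout']
```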

Use Cases for Availability APIs

Custom Internal Dashboards

Your team may have an internal operations dashboard or a NOC (network operations centre) screen that needs to show uptime data alongside other metrics (server CPU, request rates, deployment status). Rather than embedding an iframe from your monitoring tool, you can pull data via API and style it consistently with your internal design system.
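As a sketch of the dashboard case, the function below turns monitor payloads into plain-text rows, listing unhealthy monitors first. The input shape (name, status, uptime_percent) is an assumption about a typical API response:

```python
def format_dashboard_rows(monitors):
    # Sort so that non-"up" monitors appear first on the screen.
    # Input field names are illustrative, not a specific vendor's schema.
    ordered = sorted(monitors, key=lambda m: m["status"] == "up")
    return [
        f"{m['name']:<20} {m['status']:<8} {m['uptime_percent']:.2f}%"
        for m in ordered
    ]

rows = format_dashboard_rows([
    {"name": "api", "status": "up", "uptime_percent": 99.95},
    {"name": "checkout", "status": "down", "uptime_percent": 98.10},
])
for row in rows:
    print(row)
```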

CI/CD Pipeline Integration

Before deploying to production, you may want to verify that all monitored services are healthy. A pre-deployment check using the monitoring API:

#!/bin/bash
# Check all monitors are up before deployment
set -euo pipefail

RESPONSE=$(curl -s -H "Authorization: Bearer $MONITOR_API_KEY" \
  "https://api.domain-monitor.io/v1/monitors")

UNHEALTHY=$(echo "$RESPONSE" | jq '[.monitors[] | select(.status != "up")] | length')

if [ "$UNHEALTHY" -gt 0 ]; then
  echo "Deployment blocked: $UNHEALTHY monitor(s) are not healthy"
  echo "$RESPONSE" | jq -r '.monitors[] | select(.status != "up") | .name'
  exit 1
fi

echo "All monitors healthy — proceeding with deployment"

This prevents deploying into an already-degraded environment, where the deployment might be blamed for an existing issue.

Post-Deployment Verification

After deploying, verify your monitors return to green within your expected recovery window:

import requests
import time

def wait_for_monitors_healthy(api_key, max_wait_seconds=300):
    headers = {"Authorization": f"Bearer {api_key}"}
    start = time.time()

    while time.time() - start < max_wait_seconds:
        response = requests.get(
            "https://api.domain-monitor.io/v1/monitors",
            headers=headers,
            timeout=10
        )
        monitors = response.json()["monitors"]
        unhealthy = [m for m in monitors if m["status"] != "up"]

        if not unhealthy:
            print("All monitors healthy")
            return True

        print(f"Waiting... {len(unhealthy)} monitors not yet healthy")
        time.sleep(30)

    print("Timeout: monitors did not recover within expected window")
    return False

SLA Reporting and Customer Notifications

If you offer SLAs to customers, you need uptime data to generate SLA reports. The monitoring API provides uptime percentages over specified time periods:

const response = await fetch(
  'https://api.domain-monitor.io/v1/monitors/my-monitor/uptime?period=30d',
  { headers: { 'Authorization': `Bearer ${API_KEY}` } }
);
const data = await response.json();

const uptimePercent = data.uptime_percent;
const totalDowntimeMinutes = data.downtime_minutes;

// Generate SLA report
if (uptimePercent < 99.9) {
  // Breach occurred — trigger SLA credit calculation
  calculateSLACredit(totalDowntimeMinutes);
}
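The calculateSLACredit call above is left to the reader; a minimal Python sketch of the idea is shown below. The tier boundaries and credit percentages are purely illustrative, since every contract defines its own:

```python
def sla_credit_percent(uptime_percent):
    # Map monthly uptime to a service-credit percentage.
    # Tier boundaries below are illustrative; real SLAs define their own.
    if uptime_percent >= 99.9:
        return 0       # SLA met, no credit
    if uptime_percent >= 99.0:
        return 10      # minor breach
    if uptime_percent >= 95.0:
        return 25      # major breach
    return 100         # severe breach

print(sla_credit_percent(99.95))  # 0
print(sla_credit_percent(99.5))   # 10
```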

Alerting Integration and Runbooks

When your alerting system fires (PagerDuty, Opsgenie, etc.), the on-call engineer needs context immediately. Use the monitoring API to pull current status into your runbook or incident management tool:

def get_incident_context(monitor_id):
    response = requests.get(
        f"https://api.domain-monitor.io/v1/monitors/{monitor_id}/incidents/latest",
        headers={"Authorization": f"Bearer {API_KEY}"},
        timeout=10
    )
    incident = response.json()
    return {
        "started_at": incident["started_at"],
        "duration_minutes": incident["duration_minutes"],
        "last_status_code": incident["last_status_code"],
        "affected_locations": incident["affected_locations"]
    }

Building Your Own Uptime Checker

If you want to build a simple uptime checker for internal use rather than using a commercial API, here is a minimal implementation:

import requests
import time
from datetime import datetime, timezone

def check_url(url, timeout=10):
    try:
        start = time.time()
        response = requests.get(url, timeout=timeout, allow_redirects=True)
        elapsed = (time.time() - start) * 1000  # ms
        return {
            "url": url,
            "status": "up",
            "status_code": response.status_code,
            "response_time_ms": round(elapsed),
            "checked_at": datetime.now(timezone.utc).isoformat()
        }
    except requests.exceptions.Timeout:
        return {"url": url, "status": "down", "reason": "timeout"}
    except requests.exceptions.ConnectionError:
        return {"url": url, "status": "down", "reason": "connection_error"}
    except Exception as e:
        return {"url": url, "status": "down", "reason": str(e)}

# Check multiple URLs
urls = [
    "https://example.com",
    "https://api.example.com/health",
    "https://checkout.example.com"
]

results = [check_url(url) for url in urls]
for r in results:
    print(r)

The limitation of a self-built checker is that it runs from a single location and cannot distinguish between your site being down globally versus being unreachable from one network location. Commercial monitoring APIs run from multiple geographic locations, which is critical for accurate availability determination.
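That multi-location logic can be sketched in a few lines: declare a monitor down only when failures are reported from a quorum of locations, so a single flaky network path does not trigger a false alarm. The result shape used here is an assumption:

```python
def aggregate_status(location_results, quorum=0.5):
    # `location_results` maps location name -> True (check passed) / False.
    # Declare "down" only when more than `quorum` of locations report failure.
    if not location_results:
        return "unknown"
    failures = sum(1 for ok in location_results.values() if not ok)
    return "down" if failures / len(location_results) > quorum else "up"

# One flaky network path should not trigger a global "down".
print(aggregate_status({"lon": True, "nyc": True, "syd": False}))   # up
print(aggregate_status({"lon": False, "nyc": False, "syd": False})) # down
```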

Monitoring API Response Time Data

Programmatic access to response time data lets you build performance trending over time:

# Get p95 response time for the last 7 days
curl -H "Authorization: Bearer $API_KEY" \
  "https://api.domain-monitor.io/v1/monitors/my-monitor/response-times?period=7d&percentile=95"

Use this data to:

  • Plot performance trends in Grafana or similar tools
  • Alert when p95 response time exceeds your SLO threshold
  • Correlate performance degradation with deployment events
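The SLO check in the second bullet can be expressed as a small function. This uses the nearest-rank method for p95, and the 800 ms threshold is an illustrative SLO, not a recommendation:

```python
import math

def p95_breaches_slo(samples_ms, slo_ms=800):
    # Nearest-rank p95: sort, then take the ceil(0.95 * n)-th value.
    if not samples_ms:
        return False
    ordered = sorted(samples_ms)
    idx = max(0, math.ceil(0.95 * len(ordered)) - 1)
    return ordered[idx] > slo_ms

print(p95_breaches_slo([100] * 90 + [900] * 10))  # True: p95 is 900 ms
print(p95_breaches_slo([100] * 100))              # False: p95 is 100 ms
```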

Webhooks vs Polling

Most monitoring tools offer both webhooks and polling APIs. Choose based on your use case:

Webhooks are push-based — the monitoring service calls your endpoint when an incident starts or resolves. Best for:

  • Real-time incident notifications
  • Triggering automated responses (restart a service, scale up, page on-call)
  • Updating an internal status system

Polling APIs are pull-based — your system requests data on a schedule. Best for:

  • Dashboard data refresh (polling every 60 seconds)
  • Batch report generation (daily SLA reports)
  • CI/CD checks (run once before/after deployment)
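A webhook consumer is mostly a dispatch on the event type. The sketch below shows the shape of that logic; the payload fields (event, monitor) are assumptions, so check your provider's webhook documentation for the real schema:

```python
def route_webhook(payload):
    # Payload fields here are illustrative; real providers document
    # their own webhook schema.
    event = payload.get("event")
    monitor = payload.get("monitor", "unknown")
    if event == "incident.started":
        return f"page on-call: {monitor} is down"
    if event == "incident.resolved":
        return f"notify channel: {monitor} recovered"
    return "ignore"

print(route_webhook({"event": "incident.started", "monitor": "checkout"}))
# page on-call: checkout is down
```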

Integrating Monitoring Data with Observability Tools

Availability API data complements your internal metrics stack. Feed monitoring data into:

  • Grafana — create uptime dashboards alongside server metrics
  • Datadog — correlate availability data with APM traces
  • Prometheus — expose uptime as a metric and alert on it

Many monitoring tools provide official integrations or Prometheus exporters for this purpose.
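For the Prometheus case, one lightweight approach is a small exporter that polls the availability API and rewrites the result in the Prometheus text exposition format. The metric name below is illustrative:

```python
def to_prometheus_text(monitors):
    # Render each monitor's status as a 0/1 gauge in Prometheus text format.
    # The metric name "site_up" is illustrative, not a standard.
    lines = ["# TYPE site_up gauge"]
    for m in monitors:
        up = 1 if m["status"] == "up" else 0
        lines.append(f'site_up{{monitor="{m["name"]}"}} {up}')
    return "\n".join(lines)

print(to_prometheus_text([
    {"name": "api", "status": "up"},
    {"name": "checkout", "status": "down"},
]))
```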

See website monitoring vs application monitoring for how external availability data fits alongside internal observability metrics.


Access uptime data programmatically via the Domain Monitor API — integrate availability monitoring into your dashboards and CI/CD pipelines.
