[Image: Uptime report dashboard showing availability percentage, response time graphs and incident history]

How to Interpret Uptime Reports: Understanding Your Monitoring Data

An uptime monitoring tool produces a lot of data. Knowing what the numbers mean — and what actions they suggest — is the difference between monitoring that drives improvement and monitoring that just sits in a dashboard.

The Uptime Percentage

The headline metric in any uptime report is the availability percentage — what fraction of checks over the reporting period returned a successful response.

Calculating Uptime Percentage

Uptime % = (Successful checks / Total checks) × 100

With 1-minute checks (and 2-failure confirmation before an incident is declared), a 1-hour outage results in approximately 60 failed checks out of the ~43,200 checks in a 30-day month (60 × 24 × 30):

(43,140 / 43,200) × 100 = 99.86%
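The calculation above is simple enough to sketch directly. A minimal helper, assuming you have the total and failed check counts from your report export:

```python
def uptime_percentage(total_checks: int, failed_checks: int) -> float:
    """Availability as the percentage of checks that succeeded."""
    if total_checks == 0:
        raise ValueError("no checks recorded for the period")
    return (total_checks - failed_checks) / total_checks * 100

# A 1-hour outage with 1-minute checks over a 30-day month:
print(round(uptime_percentage(43_200, 60), 2))  # 99.86
```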

What Uptime Percentages Mean

| Uptime % | Monthly Downtime | Annual Downtime |
|----------|------------------|-----------------|
| 99%      | ~7.2 hours       | ~3.65 days      |
| 99.5%    | ~3.6 hours       | ~1.83 days      |
| 99.9%    | ~43 minutes      | ~8.75 hours     |
| 99.95%   | ~22 minutes      | ~4.4 hours      |
| 99.99%   | ~4.3 minutes     | ~52 minutes     |

99.9% ("three nines") is the standard target for most production web applications. Achieving it allows no more than ~43 minutes of downtime per month.

The error budget concept formalises this: if your SLO is 99.9%, your error budget is the 0.1% — about 43 minutes of allowed downtime per month.
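The error budget follows directly from the SLO and the length of the period. A quick sketch (the function name is illustrative):

```python
def error_budget_minutes(slo: float, days: int = 30) -> float:
    """Allowed downtime per period for a given SLO, e.g. slo=0.999."""
    return (1 - slo) * days * 24 * 60

print(round(error_budget_minutes(0.999)))  # 43
```

The same function reproduces the table above: `error_budget_minutes(0.9999)` gives ~4.3 minutes per month.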

Response Time Data

Beyond availability, uptime reports contain response time data for each check. This is often more insightful than the availability percentage for ongoing performance management.

Key Response Time Metrics

Average response time: The mean across all checks. Useful as a baseline, but can be misleading if there are occasional spikes.

P50 (median): The midpoint — 50% of requests were faster than this. More representative than average for normal performance.

P95: 95% of requests were faster than this value. A practical near-worst-case figure — one request in twenty is slower.

P99: 99% of requests were faster than this. Captures the tail: the slowest 1% of requests exceed it.

For most web applications, targeting:

  • P50 < 300ms
  • P95 < 1000ms
  • P99 < 3000ms

...provides a good user experience.

Response time data over time tells stories:

Gradual increase over weeks: Growing database without index maintenance; accumulating technical debt; insufficient hardware for growing traffic

Spike at a specific time: Correlated with a deployment — investigate that change

Consistent slowness from one monitoring location: Possible CDN or routing issue affecting that region (see multi-location monitoring)

Spikes at regular intervals: Cron job running and consuming resources; scheduled tasks competing with user traffic

Reading the Incident Log

Every downtime event should appear in your report as an incident:

  • Start time — exactly when the first failed check occurred
  • End time — when monitoring confirmed recovery
  • Duration — total downtime
  • Type — HTTP failure, SSL error, timeout
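Given an exported incident log with those fields, aggregate figures like total downtime fall out easily. A sketch, assuming a hypothetical export format with ISO-style timestamps:

```python
from datetime import datetime

# Hypothetical exported incidents: (start, end, type)
incidents = [
    ("2024-05-03T02:14:00", "2024-05-03T02:19:00", "timeout"),
    ("2024-05-17T09:00:00", "2024-05-17T09:43:00", "HTTP 503"),
]

def total_downtime_minutes(log: list[tuple[str, str, str]]) -> float:
    """Sum incident durations in minutes."""
    fmt = "%Y-%m-%dT%H:%M:%S"
    return sum(
        (datetime.strptime(end, fmt) - datetime.strptime(start, fmt)).total_seconds() / 60
        for start, end, _type in log
    )

print(total_downtime_minutes(incidents))  # 48.0
```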

The incident log is valuable for:

Root cause analysis: What was happening at that start time? Recent deployments? Infrastructure changes? Third-party outages?

Pattern recognition: Are incidents happening at the same time of day? After deployments? When a specific team member is making changes?

Post-mortem input: The precise timestamps from your monitor are the factual timeline foundation for post-incident reports.

SSL Certificate Status

A well-structured uptime report includes SSL certificate monitoring alongside HTTP availability:

  • Certificate validity — is it currently valid?
  • Days until expiry — how long until renewal is needed?
  • Certificate details — issuer, subject, validity period

An SSL certificate showing 14 days remaining requires immediate action. A certificate at 60 days gives you plenty of time.
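The days-remaining figure can be verified independently of your monitoring tool. A sketch using only the standard library (`check_host` needs network access; the `notAfter` format shown is the one Python's `ssl` module returns):

```python
import socket
import ssl
from datetime import datetime, timezone

def days_until_expiry(not_after: str) -> int:
    """Days remaining, given a certificate's notAfter field,
    e.g. 'Jun  1 12:00:00 2030 GMT'."""
    expiry = datetime.strptime(not_after, "%b %d %H:%M:%S %Y %Z")
    expiry = expiry.replace(tzinfo=timezone.utc)
    return (expiry - datetime.now(timezone.utc)).days

def check_host(hostname: str, port: int = 443) -> int:
    """Fetch the live certificate and return days until it expires."""
    ctx = ssl.create_default_context()
    with socket.create_connection((hostname, port), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=hostname) as tls:
            return days_until_expiry(tls.getpeercert()["notAfter"])
```

Usage: `check_host("example.com")` — a value below your alert threshold (say, 14) means renewal should happen now.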

The SSL certificate monitoring guide covers what advance warning thresholds to configure.

Domain Expiry Status

Similarly, reports should include domain registration status:

  • Current expiry date
  • Days until expiry
  • Alert thresholds

A domain approaching expiry in the next 30 days deserves immediate action.

Comparing Periods

The most useful uptime analysis compares periods:

Month over month: Is uptime improving or degrading? Are incidents becoming more or less frequent?

Before and after changes: Did uptime improve after infrastructure changes? Did a particular deployment cause a degradation?

By monitoring location: Does one region consistently have higher response times? This suggests a geographic or CDN issue.

Setting Uptime SLAs with Report Data

If you need to define or report against an SLA (Service Level Agreement), use your monitoring reports as the source of truth.

A typical SLA reporting process:

  1. Export monthly uptime report
  2. Calculate availability percentage for the period
  3. Identify any incidents that affected the SLA measurement
  4. Note any excluded maintenance windows
  5. Deliver report to stakeholders
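Steps 2–4 reduce to simple arithmetic once the export is in hand. A sketch of one common convention — checks that fall inside agreed maintenance windows are excluded from both sides of the calculation (conventions vary; check your SLA wording):

```python
def monthly_sla_report(total_checks: int, failed_checks: int,
                       maintenance_checks: int = 0) -> float:
    """SLA availability %, excluding checks made during agreed
    maintenance windows. Assumes failures occurred outside maintenance."""
    measured = total_checks - maintenance_checks
    return (measured - failed_checks) / measured * 100

# 43,200 checks, 60 failures, 120 checks inside a maintenance window:
print(round(monthly_sla_report(43_200, 60, 120), 3))  # 99.861
```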

Most monitoring tools can export this data automatically. Setting up automated monthly reports means the data reaches stakeholders without manual effort.

When Uptime Looks Good But Problems Persist

An important caveat: 100% uptime doesn't mean everything is fine. Your monitors check specific URLs at specific intervals — they don't check everything.

If users are reporting issues but your monitors show green:

  • Are you monitoring the right URL? (Check the full user journey, not just the homepage)
  • Is content verification enabled? (A 200 response with an error page counts as up without content checking)
  • Is the issue intermittent? (Occurring between check intervals)
  • Is it affecting only certain users or regions your monitors don't check?

Uptime reports are a starting point, not the complete picture. Synthetic monitoring and real user monitoring provide complementary perspectives.


Get detailed uptime reports with incident history and response time data at Domain Monitor.

