[Figure: Network latency diagram showing round-trip time between user and server with performance impact metrics]

What Is Network Latency and How Does It Affect Website Performance?

Every time a user's browser makes a request to your server, that request travels across a network — potentially thousands of miles through physical cables, routers, and data centres. The time this journey takes is network latency, and it's one of the most significant factors in how fast (or slow) your website feels.

Defining Network Latency

Network latency is the time it takes for data to travel from one point to another across a network. It's typically measured as round-trip time (RTT) — the time for a request to reach a destination and for the response to return.

Latency is measured in milliseconds (ms):

  • < 20ms — excellent (local or nearby server)
  • 20-100ms — good (same continent, typical cloud hosting)
  • 100-300ms — noticeable (intercontinental)
  • > 300ms — significantly degraded user experience

To check the latency to a server:

ping yourdomain.com

The RTT values in the ping output show round-trip latency.
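You can also estimate latency programmatically by timing a TCP connection. A minimal Python sketch (the function name is illustrative; a TCP handshake completes in roughly one round trip, so connect time approximates RTT):

```python
import socket
import time

def tcp_connect_rtt_ms(host: str, port: int = 443, timeout: float = 5.0) -> float:
    """Time a TCP handshake to host:port, in milliseconds.

    The handshake takes about one round trip, so this approximates
    network RTT (plus a little connection-setup overhead).
    """
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=timeout):
        pass  # connection established; close immediately
    return (time.perf_counter() - start) * 1000
```

Unlike ping, which uses ICMP, this measures a TCP connection — useful where ICMP is filtered.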

What Causes Network Latency

Physical Distance

Light travels through fibre optic cables at roughly 200,000 km/s. A request from London to a server in New York (~5,500 km) has a minimum one-way latency of ~27ms from physics alone — roughly 55ms round trip — and in practice it's higher due to routing.
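This floor can be worked out on the back of an envelope. A sketch in Python (the constant and function name are illustrative):

```python
FIBRE_SPEED_KM_PER_S = 200_000  # light in fibre: roughly 2/3 the speed of light in a vacuum

def min_one_way_latency_ms(distance_km: float) -> float:
    """Lower bound on one-way propagation delay imposed by physics alone."""
    return distance_km / FIBRE_SPEED_KM_PER_S * 1000

one_way = min_one_way_latency_ms(5_500)  # London -> New York: 27.5 ms
round_trip = 2 * one_way                 # 55 ms RTT, before any routing overhead
```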

This is why CDNs (Content Delivery Networks) exist: by serving content from servers geographically close to users, CDNs dramatically reduce latency for static assets.

Number of Network Hops

Data doesn't travel directly between your browser and the destination server — it passes through many intermediate routers. Each hop adds a small amount of latency. Traceroute shows these hops:

traceroute yourdomain.com   # macOS/Linux
tracert yourdomain.com      # Windows

Network Congestion

When network links are heavily utilised, packets queue up and wait — adding queuing latency. This is more variable than physical latency and harder to control.

Server Processing Time

Strictly speaking, server processing time isn't "network" latency, but users experience it as part of the total request time. A slow database query or expensive server-side computation adds to time to first byte (TTFB) — which feels like latency to users.

TLS Handshake Overhead

HTTPS requires a TLS handshake before data transfer begins. This handshake takes one to two additional round trips (two with TLS 1.2, one with TLS 1.3), meaning an HTTPS connection to a high-latency server takes noticeably longer to establish than plain HTTP.
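The setup cost is easy to quantify. A simplified model in Python (illustrative names; it ignores TCP Fast Open, QUIC, and TLS session resumption):

```python
def connection_setup_ms(rtt_ms: float, tls_round_trips: int) -> float:
    """Time spent before the first HTTP request byte can be sent:
    one round trip for the TCP handshake, plus the TLS handshake round trips."""
    return rtt_ms * (1 + tls_round_trips)

# On a 100 ms RTT link:
tls12 = connection_setup_ms(100, tls_round_trips=2)  # 300 ms (TLS 1.2)
tls13 = connection_setup_ms(100, tls_round_trips=1)  # 200 ms (TLS 1.3)
```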

With HTTP/2 and TLS 1.3, the handshake is more efficient — another reason to ensure your server supports modern protocols.

How Latency Affects User Experience

Latency's impact on UX is well-studied:

  • 100ms additional latency reduces conversion rates by ~1% (Amazon research)
  • 1 second delay reduces customer satisfaction by 16% (Kissmetrics research)
  • 3 seconds load time: 40% of users abandon (Google data)

Critically, latency compounds. A page with 50 resources — each requiring its own request — multiplies latency 50 times in a worst-case scenario (though HTTP/2 multiplexing reduces this significantly).
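A deliberately crude model makes the compounding visible — assuming resources are fetched in rounds limited by how many requests can run in parallel (function name and parallelism figures are illustrative):

```python
import math

def fetch_rounds_ms(rtt_ms: float, n_resources: int, max_parallel: int) -> float:
    """Total round-trip cost if resources are fetched in batches of max_parallel.
    Ignores bandwidth, server processing time, and request prioritisation."""
    rounds = math.ceil(n_resources / max_parallel)
    return rounds * rtt_ms

# 50 resources at 100 ms RTT:
http11 = fetch_rounds_ms(100, 50, max_parallel=6)   # ~6 connections per origin: 900 ms
http2  = fetch_rounds_ms(100, 50, max_parallel=50)  # multiplexed on one connection: 100 ms
```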

Latency vs. Bandwidth

A common confusion: more bandwidth doesn't reduce latency.

  • Bandwidth determines how much data can be transferred per second (affects large file downloads)
  • Latency determines how long each round trip takes (affects responsiveness, interactive performance)

For most websites, latency has a larger impact than bandwidth on perceived performance. A high-bandwidth connection with high latency still feels slow for interactive applications.
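The distinction shows up clearly in a simple fetch-time model (illustrative function; it ignores TCP slow start and handshake overhead):

```python
def fetch_time_ms(rtt_ms: float, size_kb: float, bandwidth_mbps: float) -> float:
    """One round trip of latency plus pure transfer time for the payload."""
    transfer_ms = (size_kb * 8) / (bandwidth_mbps * 1000) * 1000  # kilobits / (kilobits per second)
    return rtt_ms + transfer_ms

# A 50 KB asset:
fast_pipe_far_server  = fetch_time_ms(rtt_ms=150, size_kb=50, bandwidth_mbps=100)  # 154 ms
slow_pipe_near_server = fetch_time_ms(rtt_ms=20,  size_kb=50, bandwidth_mbps=5)    # 100 ms
```

For small assets, the nearby server on a slow connection wins: latency dominates, not bandwidth.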

Measuring Latency in Uptime Monitoring

Your uptime monitoring tool measures response time — the total time from sending the HTTP request to receiving the complete response. This includes network latency + TLS handshake + server processing + data transfer.

Response time trends in your monitoring data reveal:

  • Rising response times — potential server performance degradation
  • Latency spikes — network issues, CDN problems
  • Geographic variation — different response times from different monitoring locations (see multi-location monitoring)

Setting a response time threshold in your uptime monitoring — for example, alerting if response time exceeds 5 seconds — catches performance degradation before it causes a complete outage.
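The alerting logic itself can be as simple as requiring several consecutive slow checks, which avoids paging on a single transient spike. A hypothetical sketch (names and defaults are illustrative, not any particular tool's actual behaviour):

```python
def should_alert(response_times_ms: list[float],
                 threshold_ms: float = 5000,
                 consecutive: int = 3) -> bool:
    """Alert only when the last `consecutive` checks all exceeded the threshold."""
    if len(response_times_ms) < consecutive:
        return False
    return all(t > threshold_ms for t in response_times_ms[-consecutive:])
```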

Reducing Network Latency

Use a CDN

A Content Delivery Network serves static assets (images, CSS, JavaScript) from edge nodes close to users. CloudFront, Cloudflare, and Fastly all reduce latency for users by minimising geographic distance.

Choose the Right Server Region

Host your application in a region close to your primary user base. For a European-focused business, hosting in Frankfurt or Dublin will give European users much lower latency than hosting in US East.

Enable HTTP/2

HTTP/2 multiplexes multiple requests over a single connection, reducing the overhead of multiple round trips. Enable it at your web server (Nginx monitoring covers this).

Implement Caching

Server-side caching (Redis, Varnish) reduces server processing time, effectively lowering the server's contribution to total response time.

Use DNS with Low TTL During Migrations

DNS resolution adds latency to a user's first request. Ensure your DNS provider has globally distributed name servers, understand how DNS TTL affects resolution time, and lower the TTL ahead of migrations so stale records don't keep sending users to the old server.

Monitoring for Latency Issues

External uptime monitoring gives you response time data from multiple global locations. When response times from one location spike while others remain normal, it suggests a regional network issue (CDN problem, routing change) rather than a server issue.

Domain Monitor captures response time on every check, allowing you to track trends, set response time alerts, and correlate performance changes with deployments or infrastructure changes.


Track response times and detect latency issues with Domain Monitor.

