
Supabase has become the go-to backend-as-a-service for many developers — a managed Postgres database, authentication, storage, realtime, and edge functions all in one platform. That convenience comes with a dependency trade-off: if Supabase has an issue, your application breaks.
Monitoring a Supabase-backed application means monitoring both the application layer and the Supabase dependency layer. Here's how to do it properly.
Supabase is a managed platform with its own infrastructure. Issues that affect your application include:
Postgres database unavailability — If the database is unreachable (planned maintenance, infrastructure issue, connection limits hit), queries fail and your application returns errors.
Auth service issues — If Supabase Auth has a problem, users can't log in, tokens can't be verified, and authenticated requests fail — even if the database itself is fine.
API gateway degradation — Supabase's REST API (PostgREST) and the JavaScript client both route through the API layer. Slowdowns here affect every database query.
Connection pool exhaustion — Supabase's connection limits depend on your plan. Applications with many concurrent connections can exhaust the pool, causing new connections to fail.
Free tier pauses — Supabase pauses projects on the free tier after a period of inactivity. The first request after a pause has a significant cold start delay.
Supabase publishes a status page at status.supabase.com. Subscribe to incident notifications so you're alerted when Supabase has a platform-level issue.
This tells you about known Supabase incidents but doesn't tell you when your specific project is having issues — connection limit exhaustion, a badly performing query, or a project-specific configuration problem won't appear on the status page.
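The status page can also be polled programmatically. status.supabase.com appears to be hosted on Atlassian Statuspage, which exposes a JSON endpoint at /api/v2/status.json — verify that against your own status page before depending on it. A rough sketch:

```javascript
// Poll the platform status page alongside your own checks. The URL
// below assumes status.supabase.com follows the standard Atlassian
// Statuspage API layout.
const STATUS_URL = 'https://status.supabase.com/api/v2/status.json';

// Statuspage reports an overall indicator: none, minor, major, critical.
function classifyStatus(payload) {
  const indicator = payload?.status?.indicator ?? 'unknown';
  return indicator === 'none' ? 'ok' : indicator;
}

async function checkSupabaseStatus() {
  const res = await fetch(STATUS_URL);
  return classifyStatus(await res.json());
}
```

This only surfaces platform-wide incidents; it complements, rather than replaces, the project-level health checks below.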
The most meaningful health check for a Supabase-backed application tests the actual database connection:
Node.js / Supabase JS client:
import { createClient } from '@supabase/supabase-js';

const supabase = createClient(
  process.env.SUPABASE_URL,
  process.env.SUPABASE_SERVICE_ROLE_KEY
);

app.get('/health', async (req, res) => {
  const checks = {};

  // Test database connectivity
  try {
    const { error } = await supabase
      .from('health_check')
      .select('id')
      .limit(1);
    checks.database = error ? 'error' : 'ok';
    if (error) checks.database_error = error.message;
  } catch (e) {
    checks.database = 'unreachable';
  }

  // Test the auth service. Note that auth.getSession() only reads the
  // locally cached session, so hit the Auth health endpoint instead to
  // actually exercise the service.
  try {
    const authRes = await fetch(`${process.env.SUPABASE_URL}/auth/v1/health`, {
      headers: { apikey: process.env.SUPABASE_SERVICE_ROLE_KEY }
    });
    checks.auth = authRes.ok ? 'ok' : 'error';
  } catch (e) {
    checks.auth = 'unreachable';
  }

  // Ignore the *_error detail entries when deciding overall status
  const allOk = Object.entries(checks)
    .filter(([key]) => !key.endsWith('_error'))
    .every(([, value]) => value === 'ok');

  res.status(allOk ? 200 : 503).json({
    status: allOk ? 'ok' : 'degraded',
    ...checks,
    timestamp: new Date().toISOString()
  });
});
Python:
import os
from flask import Flask, jsonify
from supabase import create_client

app = Flask(__name__)
supabase = create_client(os.getenv('SUPABASE_URL'), os.getenv('SUPABASE_SERVICE_KEY'))

@app.route('/health')
def health():
    checks = {}
    try:
        supabase.table('health_check').select('id').limit(1).execute()
        checks['database'] = 'ok'
    except Exception as e:
        checks['database'] = str(e)
    all_ok = all(v == 'ok' for v in checks.values())
    return jsonify({'status': 'ok' if all_ok else 'degraded', **checks}), 200 if all_ok else 503
Create a minimal health_check table in your Supabase project with a single row that exists solely for this check:
CREATE TABLE health_check (id integer PRIMARY KEY DEFAULT 1);
INSERT INTO health_check VALUES (1);
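One caveat, as a hedged note: the service role key used above bypasses Row Level Security, so the check works against an RLS-enabled table as-is. If your health check ever runs with the anon key instead, the table needs an explicit read policy — a minimal sketch:

```sql
-- Only needed if the health check uses a non-service-role key
ALTER TABLE health_check ENABLE ROW LEVEL SECURITY;
CREATE POLICY health_check_read ON health_check
  FOR SELECT USING (true);
```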
Supabase Postgres has connection limits that vary by plan. Hitting the limit causes "too many connections" errors that break new requests while existing connections keep working.
Monitor active connections:
-- Run this periodically to check connection usage. Count all backends,
-- not just active ones: idle connections also consume slots.
SELECT count(*) AS connections,
       max_conn AS max_connections,
       round(count(*) * 100.0 / max_conn, 1) AS usage_pct
FROM pg_stat_activity,
     (SELECT setting::int AS max_conn
      FROM pg_settings
      WHERE name = 'max_connections') AS limits
GROUP BY max_conn;
If you're using Supabase's connection pooler (PgBouncer), use the pooler connection string in your application for most queries. Reserve direct connections for migrations and admin operations.
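A sketch of that split, assuming two environment variables holding the pooled and direct connection strings from the project dashboard (the variable names here are illustrative, not Supabase-defined):

```javascript
// Route day-to-day queries through the pooler and reserve the direct
// connection for migrations and admin work.
function connectionString(purpose, urls = {
  pooled: process.env.SUPABASE_POOLER_URL,  // PgBouncer, typically port 6543
  direct: process.env.SUPABASE_DIRECT_URL   // direct Postgres, typically port 5432
}) {
  return purpose === 'migration' || purpose === 'admin'
    ? urls.direct
    : urls.pooled;
}
```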
Auth failures should fail gracefully, not crash your application. Handle token verification failures explicitly:
import { NextResponse } from 'next/server';

// Assumes the supabase client created earlier is in scope
export async function middleware(request) {
  const token = request.headers.get('authorization')?.replace('Bearer ', '');
  const { data: { user }, error } = await supabase.auth.getUser(token);
  if (error) {
    // Auth service issue: return 503 instead of crashing
    if (error.message.includes('network') || error.status >= 500) {
      return new Response('Service temporarily unavailable', { status: 503 });
    }
    // Actual auth failure: 401
    return new Response('Unauthorized', { status: 401 });
  }
  return NextResponse.next();
}
This distinction matters for monitoring: 401s are expected (invalid tokens), 503s indicate Supabase auth availability issues.
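The same distinction can be factored into a small helper so the middleware and your metrics pipeline agree on it. A sketch mirroring the logic above; the error shape assumes supabase-js's AuthError with message and status fields:

```javascript
// Map an auth failure to the HTTP status your monitoring should see:
// infrastructure problems (network errors, 5xx from the auth service)
// become 503s, genuine token failures become 401s.
function classifyAuthError(error) {
  if (!error) return 200;
  const message = String(error.message ?? '').toLowerCase();
  if (message.includes('network') || (error.status ?? 0) >= 500) {
    return 503;
  }
  return 401;
}
```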
If your project is on Supabase's free tier, have your health check track response time so a wake-up after a pause is visible rather than mistaken for an outage:
app.get('/health', async (req, res) => {
  const startTime = Date.now();
  try {
    const { error } = await supabase.from('health_check').select('id').limit(1);
    const responseTime = Date.now() - startTime;
    return res.json({
      status: error ? 'error' : 'ok',
      response_time_ms: responseTime,
      // Flag if suspiciously slow (possible wake-up from pause)
      cold_start_suspected: responseTime > 5000
    });
  } catch (e) {
    return res.status(503).json({ status: 'error', error: e.message });
  }
});
A response time over several seconds on the first request after inactivity indicates a wake-up. Set your monitoring tool's timeout high enough to avoid false positive alerts.
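One way to encode that interpretation on the monitoring side (the threshold here is illustrative; Supabase doesn't define one):

```javascript
// Interpret an external probe result in light of free-tier pauses:
// a failure is real downtime, a slow success suggests a wake-up from
// pause, and anything else is healthy.
function classifyProbe({ ok, responseTimeMs }, coldStartThresholdMs = 5000) {
  if (!ok) return 'down';
  return responseTimeMs > coldStartThresholdMs ? 'cold_start_suspected' : 'ok';
}
```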
If your application uses Supabase Realtime for live updates, subscription failures are silent — clients stop receiving updates but don't throw obvious errors.
Include a realtime connectivity check if your application depends on it:
let realtimeConnected = false;

const channel = supabase.channel('health')
  .on('presence', { event: 'sync' }, () => { realtimeConnected = true; })
  .subscribe((status) => {
    realtimeConnected = status === 'SUBSCRIBED';
  });

app.get('/health', (req, res) => {
  res.json({
    status: realtimeConnected ? 'ok' : 'degraded',
    realtime: realtimeConnected ? 'ok' : 'disconnected'
  });
});
Your application's health from a user perspective is what your external monitoring should measure. Even if Supabase itself is fine, your application server could be down — and even if your application server is fine, a Supabase issue could be making every request fail.
Domain Monitor monitors your application endpoint from multiple global locations every minute. Point it at your /health endpoint that actually tests the Supabase connection — a 200 from that endpoint confirms both your application server and your Supabase connectivity are working. A 503 means something in the chain has failed.
Create a free account and set up your monitor before you need it. When Supabase has an incident — or when your own application has a problem — you'll know within a minute.
See how to set up uptime monitoring for a complete monitoring setup, and uptime monitoring best practices for what to monitor and how. Since Supabase runs on PostgreSQL, how to monitor PostgreSQL performance and connection issues covers the database-level signals — connection pool limits, autovacuum lag, and replication health — that are relevant if you're accessing the underlying Postgres directly.