
Claude API Tutorial: Getting Started With the Anthropic API

The Anthropic API gives you programmatic access to Claude's models — letting you build Claude into your own applications, automate workflows, and create AI-powered features for your users.

This tutorial walks you through getting your API key, making your first call, and understanding the key patterns you'll use in real applications.

Prerequisites

  • A free or paid account at anthropic.com
  • Python 3.8+ or Node.js 18+
  • Basic familiarity with making API calls

Step 1: Get Your API Key

  1. Log into your account at console.anthropic.com
  2. Go to API Keys in the sidebar
  3. Click Create Key, give it a name, and copy the key

Store this key securely. Never hardcode it in your source code — use an environment variable:

export ANTHROPIC_API_KEY="sk-ant-..."

Or add it to a .env file (and add .env to your .gitignore).
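However you store the key, it helps to fail fast with a clear message when it is missing rather than letting the SDK raise a less obvious error later. A minimal sketch (the `get_api_key` helper is illustrative, not part of the SDK):

```python
import os

def get_api_key() -> str:
    """Read the Anthropic API key from the environment, failing fast if absent."""
    key = os.environ.get("ANTHROPIC_API_KEY")
    if not key:
        raise RuntimeError(
            "ANTHROPIC_API_KEY is not set — export it or load your .env file first"
        )
    return key
```

If you use a .env file, a loader such as python-dotenv can populate the environment before this check runs.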

Step 2: Install the SDK

Python:

pip install anthropic

Node.js:

npm install @anthropic-ai/sdk

Step 3: Your First API Call

Python:

import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from environment

message = client.messages.create(
    model="claude-sonnet-4-6",
    max_tokens=1024,
    messages=[
        {"role": "user", "content": "Explain what an API is in two sentences."}
    ]
)

print(message.content[0].text)

Node.js:

import Anthropic from '@anthropic-ai/sdk';

const client = new Anthropic(); // reads ANTHROPIC_API_KEY from environment

const message = await client.messages.create({
    model: "claude-sonnet-4-6",
    max_tokens: 1024,
    messages: [
        { role: "user", content: "Explain what an API is in two sentences." }
    ]
});

console.log(message.content[0].text);

Run it and you'll get a response from Claude. That's all it takes to make your first call.

Understanding the Messages Structure

The API uses a messages format — a list of turns in a conversation, each with a role and content.

Roles are either "user" (the human) or "assistant" (Claude). For multi-turn conversations, you pass the full history:

messages = [
    {"role": "user", "content": "What is the capital of France?"},
    {"role": "assistant", "content": "The capital of France is Paris."},
    {"role": "user", "content": "What is the population of that city?"}
]

Claude uses the full conversation history to maintain context across turns.
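In practice you maintain that history yourself, appending each new turn before the next call. A minimal sketch (the `add_turn` helper is an illustrative name, not an SDK function):

```python
def add_turn(history, role, content):
    """Append one turn to the conversation history (roles alternate user/assistant)."""
    history.append({"role": role, "content": content})
    return history

history = []
add_turn(history, "user", "What is the capital of France?")
add_turn(history, "assistant", "The capital of France is Paris.")
add_turn(history, "user", "What is the population of that city?")
# `history` is now ready to pass as the `messages` parameter
```

After each API call, append Claude's reply as an "assistant" turn so the next request carries the full context.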

System Prompts

A system prompt sets the context, persona, or instructions for Claude before the conversation starts. Use it to define the role Claude should play in your application:

message = client.messages.create(
    model="claude-sonnet-4-6",
    max_tokens=1024,
    system="You are a helpful assistant for a web hosting company. Answer questions concisely and focus on practical solutions.",
    messages=[
        {"role": "user", "content": "My website is returning a 502 error. What should I check first?"}
    ]
)

System prompts are the primary way to customise Claude's behaviour for your specific use case.
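One common pattern is to fix the system prompt once and expose a simple question-answering function to the rest of your code. A sketch of that idea (`make_support_bot` is a hypothetical helper, not part of the SDK):

```python
def make_support_bot(client, system_prompt, model="claude-sonnet-4-6"):
    """Return a function that answers one-off questions with a fixed system prompt."""
    def ask(question, max_tokens=1024):
        message = client.messages.create(
            model=model,
            max_tokens=max_tokens,
            system=system_prompt,
            messages=[{"role": "user", "content": question}],
        )
        return message.content[0].text
    return ask

# ask = make_support_bot(client, "You are a helpful assistant for a web hosting company.")
# print(ask("My website is returning a 502 error. What should I check first?"))
```

This keeps the persona in one place, so every call site gets consistent behaviour.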

Choosing a Model

The model parameter selects which Claude model to use:

# Most capable — for complex tasks
model="claude-opus-4-6"

# Balanced — good for most applications
model="claude-sonnet-4-6"

# Fastest and cheapest — for high-volume simple tasks
model="claude-haiku-4-5-20251001"

See Claude Opus vs Sonnet for guidance on which to choose. Start with Sonnet — it handles most tasks well and is cost-effective at scale.

Streaming Responses

For real-time applications where you want to show Claude's response as it's generated (rather than waiting for the full response):

with client.messages.stream(
    model="claude-sonnet-4-6",
    max_tokens=1024,
    messages=[{"role": "user", "content": "Write a short poem about monitoring."}]
) as stream:
    for text in stream.text_stream:
        print(text, end="", flush=True)

For longer responses, streaming significantly improves your application's perceived responsiveness.

Handling Errors

Always handle API errors gracefully in production:

import anthropic
from anthropic import APIConnectionError, RateLimitError, APIStatusError

try:
    message = client.messages.create(
        model="claude-sonnet-4-6",
        max_tokens=1024,
        messages=[{"role": "user", "content": "Hello"}]
    )
except RateLimitError:
    print("Rate limit hit — implement exponential backoff")
except APIConnectionError:
    print("Connection error — check network and retry")
except APIStatusError as e:
    print(f"API error {e.status_code}: {e.message}")

Implement retry logic with exponential backoff for rate limit and connection errors in production applications.
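That retry advice can be sketched as a small generic helper; `call_with_backoff` and its parameters are illustrative names, not part of the SDK:

```python
import time

def call_with_backoff(make_request, retryable_errors, max_retries=5, base_delay=1.0):
    """Call make_request, retrying with exponential backoff on retryable errors."""
    for attempt in range(max_retries):
        try:
            return make_request()
        except retryable_errors:
            if attempt == max_retries - 1:
                raise  # out of retries — surface the error to the caller
            time.sleep(base_delay * (2 ** attempt))  # wait 1s, 2s, 4s, ...
```

With the SDK's exception classes you would pass `(RateLimitError, APIConnectionError)` as `retryable_errors` and wrap your `client.messages.create(...)` call in a lambda or partial. Adding random jitter to the delay is a common refinement to avoid synchronized retries.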

Token Counting and Cost Management

Tokens are the unit of cost. Roughly, one token is about 4 characters of English text. Check the usage field in the response to see how many tokens were consumed:

print(f"Input tokens: {message.usage.input_tokens}")
print(f"Output tokens: {message.usage.output_tokens}")

Monitor your token usage, especially when building features with large system prompts or long conversation histories.
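The usage numbers translate directly into cost. A sketch of the arithmetic, using placeholder per-million-token rates (substitute the current prices from Anthropic's pricing page; `estimate_cost` is an illustrative helper, not an SDK function):

```python
def estimate_cost(input_tokens, output_tokens, input_rate, output_rate):
    """Estimate request cost in USD from token counts and per-million-token rates."""
    return (input_tokens / 1_000_000) * input_rate + (output_tokens / 1_000_000) * output_rate

# Placeholder rates for illustration only — check the pricing page for real values:
cost = estimate_cost(1200, 300, input_rate=3.0, output_rate=15.0)
```

Logging this figure per request makes it easy to spot expensive prompts before they show up on your bill.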

Monitoring Your Claude-Powered Application

Once your application is deployed, it needs monitoring. The Claude API itself is reliable, but your application has its own failure modes — server issues, deployment problems, database errors, or network problems that have nothing to do with the API.

Domain Monitor monitors your application's availability every minute and alerts you immediately when it stops responding. See building apps with the Claude API for production patterns including monitoring setup. Also check our guide on monitoring AI API endpoints for the specific considerations around AI-powered applications.
