Getting Started Guide
Step-by-step guide to using the Anthropic Python SDK.
Installation
pip install anthropic
Optional extras:
pip install anthropic[bedrock] # AWS Bedrock
pip install anthropic[vertex] # Google Vertex AI
pip install anthropic[aiohttp] # Alternative async HTTP
Authentication
Set your API key as an environment variable:
export ANTHROPIC_API_KEY='your-api-key'
Or pass it explicitly:
from anthropic import Anthropic
client = Anthropic(api_key="your-api-key")
Basic Message
from anthropic import Anthropic
client = Anthropic()
message = client.messages.create(
    model="claude-sonnet-4-5-20250929",
    max_tokens=1024,
    messages=[
        {"role": "user", "content": "Hello, Claude!"}
    ]
)
print(message.content[0].text)
System Prompts
Configure Claude's behavior with system prompts:
message = client.messages.create(
    model="claude-sonnet-4-5-20250929",
    max_tokens=1024,
    system="You are a helpful AI assistant specializing in Python programming.",
    messages=[
        {"role": "user", "content": "How do I read a file?"}
    ]
)
Multi-Turn Conversations
Maintain conversation history:
messages = [
    {"role": "user", "content": "My name is Alice."},
    {"role": "assistant", "content": "Hello Alice! Nice to meet you."},
    {"role": "user", "content": "What's my name?"}
]

message = client.messages.create(
    model="claude-sonnet-4-5-20250929",
    max_tokens=1024,
    messages=messages
)
print(message.content[0].text) # "Your name is Alice."
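The alternating history above can be wrapped in a small helper. This is a hypothetical sketch, not part of the SDK; the `Conversation` name is illustrative:

```python
class Conversation:
    """Hypothetical helper that accumulates the alternating
    user/assistant turns the Messages API expects."""

    def __init__(self):
        self.messages = []

    def add_user(self, text):
        self.messages.append({"role": "user", "content": text})

    def add_assistant(self, text):
        self.messages.append({"role": "assistant", "content": text})


convo = Conversation()
convo.add_user("My name is Alice.")
convo.add_assistant("Hello Alice! Nice to meet you.")
convo.add_user("What's my name?")
# convo.messages can now be passed as messages= to client.messages.create
```

After each API call, append the model's reply with `add_assistant` before the next user turn so the history stays complete.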
Streaming Responses
Stream responses for real-time feedback:
with client.messages.stream(
    model="claude-sonnet-4-5-20250929",
    max_tokens=1024,
    messages=[
        {"role": "user", "content": "Write a short story"}
    ]
) as stream:
    for text in stream.text_stream:
        print(text, end="", flush=True)
print()
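If you also need the complete text after streaming finishes, you can accumulate chunks as they arrive. A minimal sketch (the `collect_stream` name is illustrative, not an SDK function):

```python
def collect_stream(text_stream):
    """Print streamed text chunks as they arrive and return the
    full response string once the stream is exhausted."""
    parts = []
    for chunk in text_stream:
        print(chunk, end="", flush=True)
        parts.append(chunk)
    return "".join(parts)
```

In practice `text_stream` would be `stream.text_stream` from the example above.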
Working with Images
Send images to Claude:
import base64

with open("image.jpg", "rb") as f:
    image_data = base64.standard_b64encode(f.read()).decode()

message = client.messages.create(
    model="claude-sonnet-4-5-20250929",
    max_tokens=1024,
    messages=[{
        "role": "user",
        "content": [
            {
                "type": "image",
                "source": {
                    "type": "base64",
                    "media_type": "image/jpeg",
                    "data": image_data
                }
            },
            {"type": "text", "text": "What's in this image?"}
        ]
    }]
)
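Assembling that nested content block by hand each time is error-prone. A small hypothetical helper (`image_block` is not an SDK function) can build it from a filename and raw bytes, guessing the media type from the extension:

```python
import base64
import mimetypes


def image_block(filename, data):
    """Hypothetical helper: build the base64 image content block the
    Messages API expects from a filename and the file's raw bytes."""
    media_type = mimetypes.guess_type(filename)[0] or "application/octet-stream"
    return {
        "type": "image",
        "source": {
            "type": "base64",
            "media_type": media_type,
            "data": base64.standard_b64encode(data).decode(),
        },
    }
```

Usage: `image_block("photo.jpg", open("photo.jpg", "rb").read())` in place of the hand-built dict above.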
Error Handling
Always handle potential errors:
from anthropic import APIError, RateLimitError

try:
    message = client.messages.create(
        model="claude-sonnet-4-5-20250929",
        max_tokens=1024,
        messages=[{"role": "user", "content": "Hello"}]
    )
except RateLimitError as e:
    print(f"Rate limited. Retry after {e.response.headers.get('retry-after')}s")
except APIError as e:
    print(f"API error: {e.message}")
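For transient failures such as rate limits, retrying with exponential backoff is a common pattern. Note that the SDK already retries some errors automatically (see the `max_retries` client option); the sketch below just shows the idea in isolation, with `fn` standing in for any API call:

```python
import random
import time


def with_backoff(fn, max_attempts=3, base_delay=1.0):
    """Illustrative sketch: retry fn with exponential backoff and jitter.
    In real code you would catch RateLimitError rather than Exception."""
    for attempt in range(max_attempts):
        try:
            return fn()
        except Exception:
            if attempt == max_attempts - 1:
                raise  # out of attempts: surface the error
            # Wait base_delay, 2*base_delay, 4*base_delay, ... plus jitter
            time.sleep(base_delay * 2 ** attempt + random.random() * 0.1)
```

Usage: `with_backoff(lambda: client.messages.create(...))`.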
Async Usage
For async applications:
import asyncio
from anthropic import AsyncAnthropic

async def main():
    client = AsyncAnthropic()
    message = await client.messages.create(
        model="claude-sonnet-4-5-20250929",
        max_tokens=1024,
        messages=[{"role": "user", "content": "Hello"}]
    )
    print(message.content[0].text)

asyncio.run(main())
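A common reason to go async is issuing several requests concurrently. A minimal sketch using `asyncio.gather`, where `ask` is any coroutine function (illustratively, a wrapper around `client.messages.create`):

```python
import asyncio


async def ask_all(ask, prompts):
    """Run one request per prompt concurrently and return the
    results in the same order as the prompts."""
    return await asyncio.gather(*(ask(p) for p in prompts))
```

Concurrency trades latency for throughput here; keep your provider's rate limits in mind when choosing how many prompts to launch at once.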
Best Practices
1. Use Context Managers
with Anthropic() as client:
    message = client.messages.create(...)
# Client automatically closed
2. Handle Errors Gracefully
try:
    message = client.messages.create(...)
except APIError as e:
    # Handle error
    ...
3. Use Appropriate Models
- claude-sonnet-4-5-20250929 - Balanced intelligence and speed
- claude-opus-4-5-20250929 - Maximum capability
- claude-3-5-haiku-20241022 - Fast and cost-effective
4. Set Reasonable Timeouts
import httpx

client = Anthropic(
    timeout=httpx.Timeout(60.0)
)
5. Track Token Usage
message = client.messages.create(...)
print(f"Input tokens: {message.usage.input_tokens}")
print(f"Output tokens: {message.usage.output_tokens}")
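To track usage across many calls, a small accumulator helps. This is a hypothetical helper, not part of the SDK; `record` takes the `usage` object from each response:

```python
class UsageTracker:
    """Hypothetical accumulator for token usage across multiple calls."""

    def __init__(self):
        self.input_tokens = 0
        self.output_tokens = 0

    def record(self, usage):
        # usage is message.usage from a Messages API response
        self.input_tokens += usage.input_tokens
        self.output_tokens += usage.output_tokens
```

Call `tracker.record(message.usage)` after each request, then read the running totals when the batch finishes.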
Next Steps
- Multimodal Content - Images, documents, PDFs
- Tool Usage - Function calling
- Streaming Guide - Advanced streaming
- Error Handling - Robust error management
See Also
- Messages API - Complete API reference
- Client Configuration - Advanced configuration