The Critical Distinction
Cloudflare announced Python support. You think: “Finally, real Python on serverless!”
Reality check: Pyodide isn't a drop-in CPython — it's CPython compiled to WebAssembly. The implications:

- No OS calls, no `os` module
- No `pip install` — packages must be pre-bundled
- Single-threaded execution
- HTTP via JavaScript `fetch`, not `requests`
This isn’t a problem — it’s just different. Understanding the constraints makes the difference between “why doesn’t this work” and “I know exactly what to use.”
My use case: Terraform Registry (~2K requests/day) + APT Repository (~500 requests/day). Both run on free tier.
Pyodide bundles a subset of Python's standard library plus roughly 150 packages (numpy, pandas, etc.). But arbitrary PyPI packages won't work.
Pyodide Architecture
The implications:

- No OS — there's no Linux, no system calls, no `os` module as you'd expect
- No pip — you can't `pip install requests`; packages must be pre-bundled
- No threads — single-threaded execution model
- Limited stdlib — not everything is compiled to WASM
The Template
I built a template that handles the boilerplate. Here’s how to start:
```bash
# Clone the template
git clone https://github.com/cloudflare/worker-python-template.git
cd worker-python-template

# Install dependencies
npm install

# Run locally
npx wrangler dev
```

Project Structure
```
cloudflare-worker-python-template/
├── src/
│   └── entry.py           # Your worker code
├── tests/
│   └── test_worker.py     # Unit tests
├── wrangler.toml          # Worker configuration
├── pyproject.toml         # Python tooling
├── requirements.txt       # Pyodide packages
├── .github/
│   └── workflows/
│       └── deploy.yml     # CI/CD pipeline
└── README.md
```

The Entry Point
Every Worker needs an entry point. In Python Workers, it's `on_fetch`:
```python
# src/entry.py (from actual template)
import json
from urllib.parse import urlparse

from js import Response


async def on_fetch(request, env):
    """Handle incoming requests."""
    # Parse the URL path
    path = urlparse(request.url).path

    # Route to handlers
    if path == "/":
        return Response.new("Hello from Python Workers!")
    elif path == "/health":
        return Response.new(
            json.dumps({"status": "ok"}),
            headers={"Content-Type": "application/json"},
        )

    # 404 for everything else
    return Response.new("Not Found", status=404)
```

Simple, familiar, Pythonic.
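As endpoints accumulate, the `if`/`elif` chain can become a handler table. A sketch of that refactor (the names are illustrative, not from the template; handlers return plain tuples instead of `Response` objects so the logic stays runnable outside the Workers runtime):

```python
from urllib.parse import urlparse

# Illustrative handler registry, filled in by the @route decorator below.
ROUTES = {}


def route(path):
    """Register a coroutine as the handler for an exact path."""
    def register(handler):
        ROUTES[path] = handler
        return handler
    return register


@route("/")
async def index(request, env):
    return 200, "Hello from Python Workers!"


@route("/health")
async def health(request, env):
    return 200, '{"status": "ok"}'


async def dispatch(url, request=None, env=None):
    """Look up the handler for the URL path; fall back to a 404."""
    handler = ROUTES.get(urlparse(url).path)
    if handler is None:
        return 404, "Not Found"
    return await handler(request, env)
```

In the real entry point, each `(status, body)` tuple would be wrapped in `Response.new(body, status=status)`.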
Accessing Environment Variables
Just like in Node.js Workers, you access secrets and environment variables via the `env` object:
```python
# src/entry.py
from js import Response, console


async def on_fetch(request, env):
    # String variables from wrangler.toml [vars]
    # (env is a JS object proxy, so use attribute access with a default)
    debug_mode = getattr(env, "DEBUG", "false")

    # Secrets (set via: npx wrangler secret put API_KEY)
    api_key = env.API_KEY

    # Use them
    if debug_mode == "true":
        console.log(f"API Key loaded: {api_key[:4]}...")

    return Response.new(f"API Key: {api_key[:4]}***")
```

Working with JavaScript APIs
This is where Pyodide gets interesting. You can import JavaScript objects directly into Python:
```python
from js import console, fetch, Response, URL


# Use browser/Workers APIs
async def on_fetch(request, env):
    # fetch is available directly
    resp = await fetch("https://api.github.com/users/your-username")
    data = await resp.text()
    return Response.new(data, headers={"Content-Type": "application/json"})
```

The `js` module exposes global JavaScript objects. This is how you do HTTP requests, interact with the Cache API, use WebCrypto, etc.
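One wrinkle: `fetch` options must be a JavaScript object, so Python dicts need converting with `pyodide.ffi.to_js`. A sketch of a JSON POST — the helper below is illustrative, and the `to_js` call site is shown in comments because it only runs inside Pyodide:

```python
import json


def build_fetch_options(payload: dict) -> dict:
    """Options for a JSON POST; convert to a JS object at the call site."""
    return {
        "method": "POST",
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps(payload),
    }


# Inside a worker (Pyodide only):
#   from pyodide.ffi import to_js
#   from js import Object, fetch
#   opts = to_js(build_fetch_options({"ping": True}),
#                dict_converter=Object.fromEntries)
#   resp = await fetch("https://api.example.com/ping", opts)
```

Keeping the option-building in plain Python means it can be unit-tested without the Workers runtime; only the thin `to_js`/`fetch` call needs Pyodide.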
Understanding the Constraints
This is critical. Python Workers aren’t Node.js Workers:
| Aspect | Python Workers | Node.js Workers |
|---|---|---|
| Package Manager | Pyodide bundles only | npm (everything) |
| Cold Start | ~5-10ms | ~1ms |
| Memory | 128 MB | 128 MB |
| CPU Time (Free) | 10ms | 10ms |
| CPU Time (Paid) | 30s | 50ms-30s |
| Filesystem | None | None |
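Given the 10ms CPU cap on the free tier, it's worth watching how long handlers take. A rough illustrative decorator — it measures wall-clock time, which is only a proxy for the CPU time Cloudflare actually bills:

```python
import time
from functools import wraps


def timed(handler):
    """Log rough elapsed wall-clock time per request.

    Wall time is not CPU time (awaiting fetch doesn't burn CPU budget),
    but a consistently slow handler is still a useful warning sign.
    """
    @wraps(handler)
    async def wrapper(request, env):
        start = time.perf_counter()
        try:
            return await handler(request, env)
        finally:
            elapsed_ms = (time.perf_counter() - start) * 1000
            print(f"{handler.__name__} took {elapsed_ms:.2f} ms")
    return wrapper
```

Applied as `@timed` above `on_fetch`, it leaves the handler's return value untouched.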
Available Packages
Pyodide includes ~150+ packages out of the box:
- Standard Library: `json`, `re`, `urllib`, `hashlib`, `base64`, `datetime`
- Data: `numpy`, `pandas`, `scipy`
- Web: limited — use `fetch` from JS instead of `requests`
```python
# This works
import json
import re
import hashlib
from urllib.parse import urlparse

# This does NOT work (not bundled)
# import requests      # ❌
# import httpx         # ❌
# import cryptography  # ❌
```

For HTTP, use the JavaScript `fetch`:
```python
from js import fetch


async def call_api(url):
    resp = await fetch(url)
    return await resp.json()
```

Working Around Missing Packages
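Before reaching for JavaScript, check whether the bundled stdlib already covers the need. For example, simple request signing that might otherwise pull in the `cryptography` package can be done with `hmac` and `hashlib` (an illustrative sketch — the key and message are placeholders):

```python
import hashlib
import hmac


def sign(secret: str, message: str) -> str:
    """HMAC-SHA256 signature as hex — stdlib only, so it works in Pyodide."""
    return hmac.new(secret.encode(), message.encode(), hashlib.sha256).hexdigest()


def verify(secret: str, message: str, signature: str) -> bool:
    # compare_digest avoids leaking information through comparison timing
    return hmac.compare_digest(sign(secret, message), signature)
```

Asymmetric crypto and certificate handling still need WebCrypto, which the next example covers.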
For things like cryptographic operations, use WebCrypto via JavaScript:
```python
from js import crypto, TextEncoder


async def hash_sha256(data: str) -> str:
    """Hash data using WebCrypto."""
    encoder = TextEncoder.new()
    encoded = encoder.encode(data)
    hash_buffer = await crypto.subtle.digest("SHA-256", encoded)
    # hash_buffer is a JS ArrayBuffer proxy; convert it to Python bytes
    return hash_buffer.to_bytes().hex()
```

Configuration (wrangler.toml)
The worker configuration lives in wrangler.toml:
```toml
name = "my-worker"
main = "src/entry.py"
compatibility_date = "2026-04-25"
compatibility_flags = ["python_workers"]  # required for Python Workers

# Environment variables (non-sensitive)
[vars]
ENVIRONMENT = "production"
DEBUG = "false"

# KV Namespace for key-value storage
[[kv_namespaces]]
binding = "CACHE"
id = "abc123def456"

# D1 Database for SQL
[[d1_databases]]
binding = "DB"
database_name = "my-db"
database_id = "def456abc789"

# R2 Bucket for object storage
[[r2_buckets]]
binding = "ASSETS"
bucket_name = "my-assets"

# Deploy to specific environment
[env.staging]
name = "my-worker-staging"

[env.staging.vars]
ENVIRONMENT = "staging"
```

Development Workflow
Local Development
```bash
# Start the dev server
npx wrangler dev

# Test with curl
curl http://localhost:8787/health
# {"status": "ok"}
```

The dev server reloads on file changes. It's fast and works well.
Testing
```python
# tests/test_worker.py (pytest)
import asyncio

from src.entry import on_fetch


class MockEnv:
    DEBUG = "false"
    API_KEY = "test-key"


class MockRequest:
    def __init__(self, url):
        self.url = url


def test_health_endpoint():
    request = MockRequest("http://localhost/health")
    # on_fetch is a coroutine, so drive it to completion
    response = asyncio.run(on_fetch(request, MockEnv()))
    assert response.status == 200


def test_root_endpoint():
    request = MockRequest("http://localhost/")
    response = asyncio.run(on_fetch(request, MockEnv()))
    assert response.status == 200
    assert "Hello" in response.body
```

Run tests:
```bash
pip install pytest
pytest -v
```

Linting
```bash
pip install ruff
ruff check src/ tests/
ruff format src/ tests/
```

The template includes CI that runs both tests and linting.
CI/CD Pipeline
The included GitHub Actions workflow:
```yaml
# .github/workflows/deploy.yml
name: Deploy

on:
  push:
    branches: [main]

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Set up Python
        uses: actions/setup-python@v5
        with:
          python-version: '3.12'

      - name: Lint
        run: |
          pip install ruff
          ruff check src/ tests/

      - name: Test
        run: |
          pip install pytest
          pytest -v

      - name: Deploy
        uses: cloudflare/wrangler-action@v3
        with:
          apiToken: ${{ secrets.CLOUDFLARE_API_TOKEN }}
          accountId: ${{ secrets.CLOUDFLARE_ACCOUNT_ID }}
```

On-Prem Deployment with workerd
Here’s where this gets powerful: you can run the same Python Worker locally using workerd, Cloudflare’s open-source Workers runtime.
Why Run Locally?
- Development — faster iteration than deploy-then-test
- Testing — consistent environment for integration tests
- Air-gapped — run in environments without internet
- Privacy — keep traffic local for sensitive workloads
Docker Setup
```yaml
# docker-compose.yml
services:
  worker:
    image: cloudflare/workerd:latest
    ports:
      - "8787:8787"
    volumes:
      - ./config.workerd:/etc/workerd/config.capnp:ro
    cap_add:
      - SYS_ADMIN
```

The Config
```capnp
# config.capnp — workerd uses Cap'n Proto config, not TOML or JSON
using Workerd = import "/workerd/workerd.capnp";

const config :Workerd.Config = (
  services = [ (name = "my-worker", worker = .myWorker) ],
  sockets = [ (name = "http", address = "*:8787", http = (), service = "my-worker") ],
);

const myWorker :Workerd.Worker = (
  modules = [ (name = "worker", esModule = embed "dist/worker.mjs") ],
  compatibilityDate = "2026-04-25",
  bindings = [
    (name = "ENVIRONMENT", text = "development"),
    (name = "API_KEY", text = "dev-key"),
  ],
);
```

Building for workerd
The trick: Wrangler bundles for Cloudflare's deploy pipeline, but workerd loads a plain ES module referenced from its own config. The template handles this:
```bash
# Build for Cloudflare (default)
npx wrangler deploy

# Build for workerd (local)
npm run build:workerd
```

This outputs a compatible `worker.mjs` for local testing.
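Once workerd is serving on port 8787, a quick stdlib-only smoke test can confirm the health endpoint responds (the `/health` route and port come from the examples above; the script itself is illustrative):

```python
import json
from urllib.request import urlopen


def check_health(base: str = "http://localhost:8787"):
    """Return (HTTP status, parsed JSON body) for the /health endpoint."""
    with urlopen(f"{base}/health") as resp:
        return resp.status, json.loads(resp.read())
```

Running `check_health()` against a healthy instance should give `(200, {"status": "ok"})`, matching the entry point's handler.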
Real-World Usage
I’ve built several production workers using this template:
- terraform-registry — ~2K requests/day, handles module distribution
- apt-repository — ~500 requests/day, serves packages to 10+ machines
- cloudflare-ddns — Updates DNS records based on IP changes
All run on the free tier. All deploy in seconds. All can run locally.
What I Love
- Python syntax — feels like writing regular Python
- Global distribution — edge deployment out of the box
- Zero infra — no servers, no scaling concerns
- On-prem option — workerd for local/air-gapped needs
What Most People Get Wrong
- "Pyodide = CPython" — No OS, no pip, no threads. Use the `js` module for HTTP/WebCrypto.
- "Free tier is unlimited" — There's a 10ms CPU cap; complex Python on the free tier means timeouts.
- "Works locally = works on edge" — The local simulator and the edge runtime can differ subtly. Test against workerd before trusting local results.
When to Use Python Workers
| Use Python Workers | Use Node.js Workers |
|---|---|
| Python expertise | JavaScript expertise |
| Data processing | I/O-heavy |
| Simple logic | Complex async |
| ~150 bundled packages needed | Full npm ecosystem |
Getting Started
If you want to build your own Python Workers:
- Use the Workers Python template
- Write your `on_fetch` handler
- Deploy with `wrangler deploy`
This post covers building workers with Python. In future posts, I’ll dive into specific patterns like handling async operations, using KV/D1/R2 bindings, and testing strategies for Workers.