# The Actor Model
Understanding the theoretical foundation of ring kernels.
## What is the Actor Model?
The Actor Model is a mathematical model for concurrent computation introduced by Carl Hewitt in 1973. It treats "actors" as the fundamental unit of computation.
## Core Principles
- Everything is an Actor: Actors are the basic building blocks
- Actors are Isolated: No shared state between actors
- Communication via Messages: Actors interact only through messages
- Async Processing: Messages are processed asynchronously
## Actor Properties

Each actor has:

```
┌──────────────────────────────────────┐
│                ACTOR                 │
├──────────────────────────────────────┤
│  Mailbox (Queue)                     │ ◄── Messages arrive here
├──────────────────────────────────────┤
│  Behavior (Logic)                    │     Process messages
├──────────────────────────────────────┤
│  State (Private)                     │     Internal state
└──────────────────────────────────────┘
```
When an actor receives a message, it can:
- Send messages to other actors
- Create new actors
- Change its behavior for the next message
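
None of this requires special hardware or a framework. The sketch below is a minimal, framework-free illustration in plain `asyncio` (not the PyDotCompute API): a queue as the mailbox, a local variable as private state, and a coroutine as the behavior loop. The names `counter_actor` and `mailbox` are invented for this example.

```python
import asyncio

# Minimal actor sketch in plain asyncio (illustration only, not the
# PyDotCompute API): a Queue as the mailbox, a local variable as private
# state, and a coroutine as the behavior loop.
async def counter_actor(mailbox: asyncio.Queue) -> None:
    count = 0                            # private state: only this actor touches it
    while True:
        msg = await mailbox.get()        # receive the next message from the mailbox
        if msg == "stop":
            break
        if msg == "increment":
            count += 1                   # no locks needed: a single owner mutates it
            print(f"count is now {count}")  # stand-in for sending a reply message

async def main() -> None:
    mailbox: asyncio.Queue = asyncio.Queue()
    actor = asyncio.create_task(counter_actor(mailbox))  # "create new actors"
    for _ in range(3):
        await mailbox.put("increment")   # interact only by sending messages
    await mailbox.put("stop")
    await actor

asyncio.run(main())
```

The ring kernel examples later on this page have the same shape; the difference is that their behavior loop runs persistently on the GPU.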
## Why Actors for GPU Computing?
### Traditional GPU Model

```
┌─────────────┐      ┌─────────────┐      ┌─────────────┐
│   Launch    │ ───► │   Execute   │ ───► │  Complete   │
│   Kernel    │      │             │      │   Return    │
└─────────────┘      └─────────────┘      └─────────────┘
       │                                         │
       └────────── Repeat for each call ─────────┘
```
Problems:
- Launch overhead every call
- No persistent state
- Synchronous blocking
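
To make these concrete, here is a hedged sketch of the per-call style using CuPy (chosen purely as an illustration; it is not necessarily the backend PyDotCompute uses). Every request pays a host-to-device copy, a kernel launch, and a blocking device-to-host copy, and nothing persists between calls except what the caller keeps alive itself.

```python
import numpy as np
import cupy as cp  # illustration only; not necessarily this project's backend

W = cp.random.rand(1024, 1024, dtype=cp.float32)  # kept alive by the host, not the GPU code

def handle_request(batch: np.ndarray) -> np.ndarray:
    """Traditional per-call style: copy in, launch, copy out, repeat."""
    x = cp.asarray(batch)   # host -> device copy on every call
    y = W @ x               # kernel launch on every call
    return cp.asnumpy(y)    # device -> host copy; blocks until the result is ready

for _ in range(1000):       # launch and transfer overhead is paid 1000 times
    handle_request(np.random.rand(1024, 16).astype(np.float32))
```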
### Actor GPU Model

```
┌───────────────────────────────────────────────┐
│            GPU ACTOR (Persistent)             │
│                                               │
│   ┌─────────────┐                             │
│   │   Receive   │ ◄─── Messages from host     │
│   └──────┬──────┘                             │
│          │                                    │
│   ┌──────▼──────┐                             │
│   │   Process   │      Uses persistent state  │
│   └──────┬──────┘                             │
│          │                                    │
│   ┌──────▼──────┐                             │
│   │    Send     │ ───► Results to host        │
│   └──────┬──────┘                             │
│          │                                    │
│          └───────── Loop                      │
│                                               │
└───────────────────────────────────────────────┘
```
Benefits:
- One-time launch overhead
- Persistent state (models, caches)
- Asynchronous message processing
- Natural fit for streaming
## Actor Model in PyDotCompute
### Ring Kernel as Actor

```python
@ring_kernel(kernel_id="processor")
async def processor(ctx):
    # State: private to this actor
    model = load_model()
    cache = {}

    # Behavior: message processing loop
    while not ctx.should_terminate:
        # Mailbox: receive messages
        msg = await ctx.receive()

        # Process and respond
        result = model.predict(msg.data)
        await ctx.send(Response(result=result))
```
### Message Passing

```python
# Producer sends message (fire-and-forget)
await runtime.send("processor", Request(data=x))

# Consumer receives response (async)
response = await runtime.receive("processor")
```
### Isolation

```python
# Each actor has private state
@ring_kernel(kernel_id="counter_a")
async def counter_a(ctx):
    count = 0  # Private to counter_a
    ...

@ring_kernel(kernel_id="counter_b")
async def counter_b(ctx):
    count = 0  # Private to counter_b, independent of counter_a
    ...
```
## Comparison with Other Models
### Threads
| Aspect | Threads | Actors |
|---|---|---|
| Communication | Shared memory | Messages |
| Synchronization | Locks, mutexes | Message ordering |
| State | Shared | Private |
| Deadlocks | Possible | Avoided by design |
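
For contrast, here is a hedged sketch of the thread-style approach in plain Python (not PyDotCompute): the counter is shared mutable state, so every access must be guarded by a lock, which is exactly the machinery the actor version of this counter (see Concurrency Safety below) does without.

```python
import threading

# Thread-style counter: shared mutable state plus a lock to guard it.
count = 0
lock = threading.Lock()

def increment(n: int) -> None:
    global count
    for _ in range(n):
        with lock:          # forget this lock and you get a race condition
            count += 1

threads = [threading.Thread(target=increment, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(count)  # 40000, but only because every access is serialized by the lock
```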
### CSP (Go channels)
| Aspect | CSP | Actors |
|---|---|---|
| Identity | Anonymous | Named |
| Channels | Shared | Private mailbox |
| Blocking | Synchronous | Asynchronous |
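
As a rough illustration of the identity difference, again in plain `asyncio` rather than Go or PyDotCompute: in the CSP style the anonymous, shareable channel is the meeting point, whereas the actor style addresses a named recipient, as in `runtime.send("processor", ...)` above.

```python
import asyncio

# CSP style: producer and consumer know only the channel, not each other.
async def producer(ch: asyncio.Queue) -> None:
    for i in range(3):
        await ch.put(i)
    await ch.put(None)  # sentinel: no more items

async def consumer(ch: asyncio.Queue) -> None:
    while (item := await ch.get()) is not None:
        print("got", item)

async def main() -> None:
    ch: asyncio.Queue = asyncio.Queue(maxsize=1)  # small buffer, near-synchronous handoff
    await asyncio.gather(producer(ch), consumer(ch))

asyncio.run(main())
```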
### Traditional GPU
| Aspect | Traditional | Ring Kernel |
|---|---|---|
| Lifetime | Per-call | Persistent |
| State | None | Persistent |
| Communication | Memory copy | Messages |
| Latency | High (launch) | Low (running) |
## Benefits of the Actor Model
### 1. Concurrency Safety
No shared mutable state means no race conditions:
```python
# No locks needed!
@ring_kernel(kernel_id="safe_counter")
async def safe_counter(ctx):
    count = 0  # Only this actor touches this

    while not ctx.should_terminate:
        msg = await ctx.receive()
        if msg.action == "increment":
            count += 1  # No race condition possible
        await ctx.send(CountResponse(count=count))
```
### 2. Scalability
Add more actors for more parallelism:
```python
# Scale horizontally
for i in range(num_workers):
    await runtime.launch(f"worker_{i}", worker_fn)
    await runtime.activate(f"worker_{i}")
```
### 3. Fault Isolation
Actor crashes don't affect others:
```python
# worker_a crashes, but worker_b continues
@ring_kernel(kernel_id="worker_a")
async def worker_a(ctx):
    raise Exception("Crash!")  # Only affects worker_a

@ring_kernel(kernel_id="worker_b")
async def worker_b(ctx):
    # Still running fine
    ...
```
### 4. Location Transparency
Actors can run anywhere:
```python
# Same code works locally or distributed
await runtime.send("worker", message)         # Local
await runtime.send("remote_worker", message)  # Could be remote
```
## Design Patterns
### Request-Response
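
One message in, one reply out, as in the Message Passing example above. A hedged sketch that wraps the exchange in a hypothetical `ask` helper (not part of the runtime API):

```python
# Request-response: send one request, wait for exactly one reply.
async def ask(data):  # hypothetical convenience wrapper for this page
    await runtime.send("processor", Request(data=data))
    return await runtime.receive("processor")
```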
### Pipeline
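
Each stage is its own actor, and the output of one stage becomes the input of the next. A hedged sketch in which the host shuttles messages between two hypothetical stage kernels, `stage_a` and `stage_b` (a runtime could also let stages forward messages to each other directly):

```python
# Pipeline: the host forwards each result from stage_a into stage_b.
await runtime.send("stage_a", Request(data=x))
intermediate = await runtime.receive("stage_a")

await runtime.send("stage_b", Request(data=intermediate.result))
final = await runtime.receive("stage_b")
```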
### Fan-Out / Fan-In
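
Work is split across several workers (fan-out) and the partial results are collected back (fan-in). A hedged sketch reusing the `worker_{i}` naming from the Scalability example; `chunks` is a hypothetical list of work items:

```python
import asyncio

# Fan-out: send one chunk to each worker.
for i, chunk in enumerate(chunks):
    await runtime.send(f"worker_{i}", Request(data=chunk))

# Fan-in: gather one response per worker.
responses = await asyncio.gather(
    *(runtime.receive(f"worker_{i}") for i in range(len(chunks)))
)
results = [r.result for r in responses]
```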
### Supervision
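
A supervisor watches its workers and restarts any that fail, complementing the fault isolation shown above. Restart and health-check APIs are runtime-specific, so the sketch below only gestures at the shape of the pattern with a hypothetical host-side `supervise` loop built from the calls shown on this page plus an `asyncio` timeout:

```python
import asyncio

async def supervise(kernel_id: str, kernel_fn) -> None:
    """Hypothetical supervisor loop: ping a worker, relaunch it on timeout."""
    while True:
        try:
            await runtime.send(kernel_id, Request(data=None))              # ping (assumed)
            await asyncio.wait_for(runtime.receive(kernel_id), timeout=5.0)
        except asyncio.TimeoutError:
            # The worker looks dead; start a fresh instance. Whether a crashed
            # kernel can be relaunched under the same id is runtime-specific.
            await runtime.launch(kernel_id, kernel_fn)
            await runtime.activate(kernel_id)
        await asyncio.sleep(1.0)
```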
## Further Reading
- Hewitt, C. "Actor Model of Computation" (2010)
- Agha, G. "Actors: A Model of Concurrent Computation in Distributed Systems" (1986)
- Armstrong, J. "Programming Erlang" (2007)
## Next Steps
- GPU Computing Background: GPU architecture
- Ring Kernels Concept: Implementation details