Symphony arrives with a bold premise: stop supervising coding agents and start managing work. The daemon runs continuously in the background, polling Linear for issues assigned to it and autonomously spawning a coding agent for each ticket. The agent works in isolation on its own git branch, implements the feature or fix, generates proof-of-work artifacts, and opens a pull request for human review. This is not assisted coding; it is delegated coding, with oversight at the review stage rather than the implementation stage.
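To make the poll-and-dispatch loop concrete, here is a minimal GenServer sketch of what such a daemon could look like. Everything here is illustrative rather than Symphony's actual API: the module name, the injectable `fetch_fun`/`spawn_fun` callbacks, and the 30-second default interval are all assumptions.

```elixir
defmodule Symphony.Daemon do
  @moduledoc "Illustrative poll-and-dispatch loop (names and interval are assumed)."
  use GenServer

  def start_link(opts), do: GenServer.start_link(__MODULE__, Map.new(opts))

  @impl true
  def init(state) do
    schedule_poll(state)
    {:ok, state}
  end

  @impl true
  def handle_info(:poll, %{fetch_fun: fetch, spawn_fun: start_agent} = state) do
    # One coding agent per Linear issue currently assigned to the daemon
    for issue <- fetch.(), do: start_agent.(issue)
    schedule_poll(state)
    {:noreply, state}
  end

  defp schedule_poll(state) do
    interval = Map.get(state, :interval, :timer.seconds(30))
    Process.send_after(self(), :poll, interval)
  end
end
```

Injecting the fetch and spawn functions keeps the sketch self-contained; a real implementation would call the Linear API and start a supervised agent process instead.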
The choice of Elixir and OTP is Symphony's most thoughtful architectural decision. Erlang's BEAM VM was designed for telecom systems requiring nine-nines availability, and OTP supervision trees bring those guarantees to agent orchestration. When a coding agent crashes (and they will crash), the supervisor automatically restarts it without affecting other running agents. The lightweight process model means spawning hundreds of agents carries negligible memory overhead, and hot code reloading allows orchestration logic to be updated without stopping active sessions.
Setting up Symphony requires comfort with the Elixir ecosystem. You need Erlang, Elixir, and Mix installed, plus a Linear API key and OpenAI API credentials. Configuration is straightforward once the prerequisites are in place, but developers outside the BEAM ecosystem face a steep learning curve just to get started. The Docker Compose option simplifies deployment but limits customization.
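As a rough illustration of the credential wiring, a standard Elixir approach is to read both keys from the environment in `config/runtime.exs`. The application and key names below are guesses for illustration; the project's own documentation defines the real ones.

```elixir
# config/runtime.exs — illustrative key names, not Symphony's actual schema
import Config

config :symphony,
  # fetch_env!/1 fails fast at boot if a credential is missing
  linear_api_key: System.fetch_env!("LINEAR_API_KEY"),
  openai_api_key: System.fetch_env!("OPENAI_API_KEY")
```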
Agent isolation is thorough. Each spawned agent operates in a dedicated working directory with its own git branch, environment variables, and context window. There is no shared state between agents — the daemon manages lifecycle transitions independently for each ticket. This prevents the contamination issues that plague shared-workspace approaches but limits opportunities for agents to collaborate on related tickets.
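The per-ticket isolation could be sketched like this: each agent gets its own clone in a dedicated directory and cuts its own branch before doing any work, so no filesystem or git state is shared between tickets. The `Symphony.Workspace` module and the `symphony/<ticket>` branch naming are hypothetical.

```elixir
defmodule Symphony.Workspace do
  # Hypothetical sketch of per-ticket isolation: a dedicated working
  # directory and a dedicated branch for every spawned agent.
  def prepare(repo_url, ticket_id) do
    dir = Path.join(System.tmp_dir!(), "symphony-#{ticket_id}")
    {_, 0} = System.cmd("git", ["clone", repo_url, dir])
    {_, 0} = System.cmd("git", ["checkout", "-b", "symphony/#{ticket_id}"], cd: dir)
    {:ok, dir}
  end
end
```

A hardened version would also scope environment variables per agent and clean up the directory when the ticket's lifecycle ends.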
The developer experience during operation is surprisingly hands-off. Once configured, Symphony checks Linear periodically, picks up new issues, and starts working. The operator's role shifts to reviewing pull requests rather than writing code. The proof-of-work system ensures PRs include context about what was done and why, making review efficient. This workflow genuinely changes the development cadence for teams with large backlogs.
Performance depends heavily on the underlying LLM and issue complexity. Simple bug fixes and feature additions complete in minutes. Complex multi-file changes can take longer as the agent iterates through implementation attempts. The OTP scheduler ensures fair resource distribution across concurrent agents, but heavy LLM API usage can become expensive when multiple agents are active simultaneously.
Integration limitations are Symphony's biggest weakness today. Linear is the only supported issue tracker — no GitHub Issues, Jira, Asana, or Shortcut support. No autonomous CI failure remediation means failed builds require manual intervention or a separate automation layer. No code review comment routing means review feedback must be manually communicated back to the agent.
The pricing model is indirect: Symphony itself is free and open-source under Apache 2.0, but running it requires OpenAI API credits for the coding agents. Costs scale with the number and complexity of issues being processed. For teams already paying for OpenAI API access, the marginal cost of running Symphony is the additional token usage — which can be significant with multiple concurrent agents.
The explicit prototype label and recommendation to build your own hardened version are refreshingly honest. OpenAI is sharing the architecture rather than the product, and SPEC.md is a detailed, well-written specification that serves as a blueprint for custom implementations. This approach respects that production requirements vary wildly across organizations.
Symphony is best understood as OpenAI's answer to a specific question: what does autonomous software development look like when built on proven distributed systems principles? The answer is elegant — OTP supervision for reliability, one-agent-per-issue for simplicity, proof-of-work for accountability. Teams with Elixir expertise and willingness to customize will find a powerful foundation here. Teams wanting turnkey autonomous coding should look at more mature alternatives.