TL;DR: When running coding agents in parallel, using separate dev containers enables you to safely execute code without needing to worry about security issues. However, sandboxed coding agents waste minutes downloading dependencies before they can start working. Sculptor can use cached Docker images customized for your project to cut task startup times from minutes to seconds. You get speed, safety (agents never touch your machine), and reproducibility (every agent starts from the same clean state). This post shows you how to set it up with dev containers.
You spin up a coding agent, give it a task, and wait. Three minutes for pip install. By the time it’s ready, you’ve moved on.
Most coding agents run directly in your local environment or use worktrees that share your system’s Python, packages, and configuration. Either way, you’re waiting for setup and risking conflicts. Plus, agents can access anything on your machine.
Sculptor runs every agent in its own Docker container for true isolation. But containers have a startup cost of their own: building the environment and downloading packages.
To get around this limitation, Sculptor supports dev containers, which let you install your dependencies into a Docker image once so every new agent starts ready to code. In our internal tests with Sculptor, agent setup time dropped from minutes to seconds. Here’s how to set up dev containers in your Sculptor project for coding agents that start 10x faster (or even more!).
How Sculptor’s Docker containers work
When you spin up a new Sculptor agent, here’s what happens behind the scenes:
Your repository gets cloned into a fresh Docker container built from your dev container spec. The agent starts working in this isolated environment, with full access to run commands, execute tests, and modify code—all safely contained.
Because every agent runs in its own container, they can work in parallel without stepping on each other’s toes. Want to kick off a refactor while another agent runs your test suite? Go ahead. The containers keep them completely separate.
And when you’re ready to see an agent’s work, Sculptor’s Pairing Mode syncs the container state directly to your local IDE. You get instant access to test changes, review code, and iterate—without the agent ever touching your machine directly.
Setting up your dev container for success
Here’s where things get powerful: you control exactly what’s in those containers using the dev container spec. Set your agents up for success by giving them immediate access to your code’s dependencies, and they’ll spend less time on setup and more time on real work.
The key is caching setup work in your Dockerfile. When you pre-install dependencies at build time, agents don’t waste cycles downloading packages or setting up environments—they start ready to run tests, measure coverage, and write code from the first prompt.
Example project: Python, uv, and HuggingFace for speech recognition
Here’s a demo Sculptor project with a speech recognition CLI as a practical example of how to set up a Python environment with uv that gives your agents a head start.
You can clone our project and point Sculptor at it, then ask it to run the unit tests:
```shell
git clone https://github.com/imbue-ai/speech_recognition_demo.git
```
devcontainer.json can be surprisingly simple; this is enough:
```json
{
  "name": "Speech Recognition Demo",
  "build": {
    "dockerfile": "Dockerfile",
    "context": ".."
  }
}
```
(Our devcontainer.json is just a bit more complex — Sculptor will ignore the parts it doesn’t understand.)
The real work happens in your Dockerfile, where you pre-warm the environment. Here’s the key section:
```dockerfile
# ============================================================================
# PYTHON & ML MODELS - Pre-warm the UV cache and download Hugging Face models
# ============================================================================
# Pre-create uv virtual environment and install dependencies.
# Install dependencies first (without the project itself) to maximize layer caching.
# See: https://docs.astral.sh/uv/guides/integration/docker/#caching
COPY --chown=dev:dev pyproject.toml uv.lock /tmp/deps/
ENV UV_PROJECT_ENVIRONMENT=/home/dev/venv
RUN cd /tmp/deps && \
    uv sync --locked --no-install-project && \
    echo "Python virtual environment created successfully at $UV_PROJECT_ENVIRONMENT"

# Pre-download Whisper-tiny model (small, fast model for demos)
RUN cd /tmp/deps && \
    uv run python -c "from transformers import pipeline; pipeline('automatic-speech-recognition', model='openai/whisper-tiny')" && \
    echo "Whisper-tiny model downloaded successfully"
```
This approach follows Docker best practices: install dependencies first (without your project code) to maximize layer caching. When your code changes, Docker only needs to rebuild the final layers—not re-download every package and HuggingFace model.
The best part is that we didn’t actually write this Dockerfile ourselves. Instead, we asked Sculptor to do it for us! Sometimes it helps to point Sculptor at an example to follow — feel free to paste this URL into Sculptor for that purpose.
What this enables
- Agents get to real work faster: No waiting while pip install downloads half of PyPI. Your agent starts with dependencies ready and can immediately run tests or analyze code.
- Container snapshots are smaller: Pre-installed dependencies get baked into the base image, so Sculptor’s container snapshots only need to capture your code changes, not gigabytes of packages.
- Consistency across agents: Every agent gets the exact same starting environment. No more “works on my machine” moments—if one agent can run your tests, they all can.
- Ecosystem compatibility: VS Code and GitHub also support the dev container spec. For example, you can use this link to open a web-based IDE inside our demo project; once GitHub has spun up the container, you’ll be able to run the unit tests from your browser, complete with VS Code’s debugger and breakpoint support.
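Because the spec is an open standard, you can also exercise the very same configuration outside Sculptor with the reference Dev Container CLI. A minimal sketch (assuming Docker and Node.js are installed locally, and that the demo’s tests run under pytest):

```shell
# Install the reference Dev Container CLI (requires Node.js)
npm install -g @devcontainers/cli

# Build and start a container from the spec in .devcontainer/
devcontainer up --workspace-folder .

# Run a command inside the running container
# (uv run pytest is an assumption about the demo's test runner)
devcontainer exec --workspace-folder . uv run pytest
```

This is a handy way to confirm that the environment your agents see matches what VS Code or Codespaces would build from the same files.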
Best practices and tips
Iterating on your dev container
Here’s an important caveat: when a Sculptor agent modifies your dev container configuration or Dockerfile, it’s still running inside a container built from the old configuration. The changes won’t take effect until you start a new agent from a branch that includes those updates.
This means iterating on the dev container configuration can’t happen inside the agentic loop itself. You’ll need to step in directly.
One effective workflow: use Sculptor’s merge workflow or Pairing Mode to pull the agent’s dev container changes onto your local machine. Then build the Docker image yourself and report any errors back to the agent so it can fix them. Once the image builds correctly, starting a fresh agent in Sculptor to test the new environment should be quick, since it can use the cached image.
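That manual rebuild might look like this. The paths assume the demo’s layout (Dockerfile under `.devcontainer/`, build context at the repo root, per the `"context": ".."` in devcontainer.json), and the image tag is an arbitrary placeholder:

```shell
# Rebuild the image from the updated dev container spec
docker build -f .devcontainer/Dockerfile -t sculptor-env-test .

# Optional sanity check before handing the spec back to an agent
# (uv run pytest is an assumption about the project's test runner)
docker run --rm -it sculptor-env-test uv run pytest
```

Any build errors that surface here are exactly what you paste back to the agent so it can iterate on the Dockerfile.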
What this means for your workflow
Sculptor’s container approach changes how you think about coding agents. Instead of treating them like risky experiments you need to supervise constantly, you can give them real autonomy—because the worst they can do is mess up their own container.
Spin up agents freely. Let them explore different approaches in parallel. Test their work when you’re ready, not when you’re worried. The containers keep your machine safe, your workflow smooth, and your agents productive.
That’s the power of proper sandboxing: freedom to experiment, without the fear.
Ready to safely give your agents free rein? Download Sculptor and start running agents in isolated containers today.