Droid-maxxing

/drɔɪd-mæk-sɪŋ/ noun
  1. The practice of building and deploying autonomous AI agents to maximize productivity and automate complex workflows
  2. Pushing AI-assisted development to its limits using Droid CLI

"After droid maxxing my workflow, I shipped three features before lunch."

Why Droids?

Two things sold me on Droids over Claude Code and Codex: better orchestration out of the box, and they're model-agnostic. They're already optimized for developers and come with built-in capabilities such as:

  • Feature Implementation: Build complete features from scratch.
  • Code Refactoring: Optimize existing codebases.
  • Debugging & Testing: Hunt down bugs and write tests.

This is the normal LLM coding workflow.

[Diagram: the agentic loop. User input → Droid CLI → LLM-based task analysis → tools (file system operations: read, write, and search files and directories; shell commands: execute terminal commands safely; API calls: external services and integrations) → tool execution results → LLM response generation → output]
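Under the hood it's a simple loop: the model analyzes the task, requests tool calls, and the results feed back in until it has an answer. Here's a minimal sketch in TypeScript; runModel and runTool are hypothetical helpers, not Droid's actual internals:

  // One iteration: ask the model, run any tools it requests, repeat.
  type ToolCall = { name: string; args: Record<string, unknown> };
  type ModelTurn = { toolCalls: ToolCall[]; text: string };

  declare function runModel(history: string[]): Promise<ModelTurn>; // LLM call
  declare function runTool(call: ToolCall): Promise<string>;        // file ops, shell, APIs

  async function agenticLoop(task: string): Promise<string> {
    const history = [task];
    while (true) {
      const turn = await runModel(history);
      if (turn.toolCalls.length === 0) return turn.text; // no more tools: final output
      for (const call of turn.toolCalls) {
        history.push(await runTool(call)); // tool results feed the next turn
      }
    }
  }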

Skills

Skills are markdown files that tell the Droid how to do something. Think of them as predefined tool calls: you don't have to invoke them manually. Droid looks at your task and pulls in whatever skills seem relevant based on their names and descriptions.
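Here's a minimal sketch of what a skill file can look like. I'm assuming name and description frontmatter, since that's what Droid matches against; the body is just the instructions and values you want copied:

  ---
  name: design-system
  description: Use when building or styling UI. Contains this project's exact colors and component classes.
  ---

  Background: #0a0a0a
  Cards: border border-white/20 p-4 rounded-sm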

[Diagram: skill groups and the tasks that pull them in]

Skill groups:

  • Design System: skills.md, components.md, globals.css
  • Animation: framer-motion.md, transitions.md
  • QA: checklist.md, testing.md

Tasks and the skills they pull in:

  • Build Landing Page: Design Engineer, Animation, QA
  • Fix Responsive Layout Bug: Design Engineer, QA
  • Update Form Styles: Design Engineer, Animation
  • Add Page Transitions: Animation, QA

The skills get reused across different tasks. Build a frontend feature? The Droid pulls in Design Engineer for component structure, Animation for interactions, and QA to verify everything works. Fix a layout bug? It grabs Design Engineer and QA, skipping Animation since it's not needed.

My Workflow

Combine skills and each part of your workflow gets its own specialist. This is exactly how my design engineering agent currently works.

"Build apricing page"Orchestratordelegates toUI specialistsLayout InspectorChecklist:☐ Grid/flex structure correct☐ Spacing uses design tokens☐ Responsive breakpoints☐ Container max-widths☐ Z-index layering☐ Overflow handlingStyle ConsistencyChecklist:☐ Colors from palette only☐ Shadows match system☐ Border radii consistent☐ No magic numbersTypography AuditorChecklist:☐ Font sizes from scale☐ Line heights correct☐ Heading hierarchy☐ Font weights consistentComponent ValidatorChecklist:☐ Props typed correctly☐ Reuses existing components☐ No inline styles☐ Accessible markup☐ Error boundaries☐ Loading states handledVisual QAChecklist:☐ Screenshot matches design☐ Hover/focus states work☐ Animations smooth☐ No visual regressionsAll Checks Pass?yesno → fix issues → re-validateProduction-Ready UI✓ No hallucinated styles✓ Design system compliant✓ Consistent across views✓ Accessible & responsive

Writing Better Skills

The key is reducing ambiguity. Make everything as deterministic as possible. I think of LLMs as repetition machines. They're really good at copying patterns. So give them patterns or hardcoded values to copy.

Yes! Hardcode the actual values.

Don't tell the Droid "follow the design system." Tell it exactly what the design system is. Put the actual hex codes, the actual spacing values, the actual class names right in the skill file.

My globals.css skill doesn't say "use consistent colors." It says something like:

Background: #0a0a0a
Text primary: #ffffff at 90% opacity
Text secondary: #ffffff at 60% opacity  
Borders: #ffffff at 20% opacity
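In the actual file, those land as hardcoded CSS tokens, something like this (the variable names are my own illustration):

  :root {
    --background: #0a0a0a;
    --text-primary: rgb(255 255 255 / 0.9);
    --text-secondary: rgb(255 255 255 / 0.6);
    --border: rgb(255 255 255 / 0.2);
  }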

My component skill doesn't say "match existing patterns." It lists the actual patterns, something like:

Cards: border border-white/20 p-4 rounded-sm
Buttons: px-4 py-2 rounded-md font-medium
Labels: text-xs uppercase tracking-wide text-white/40

When the Droid builds something, it's not guessing what "looks right." It's copying values I've already decided on. No interpretation. No hallucination. Just the exact styles I want, every time.
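For example, a card built from that skill should come out as a straight copy of those class strings. A hypothetical sketch in TypeScript (the component is my own illustration; the classes are verbatim from above):

  // Hypothetical component: every className is copied from the skill file.
  export function StatCard({ label, value }: { label: string; value: string }) {
    return (
      <div className="border border-white/20 p-4 rounded-sm">
        <span className="text-xs uppercase tracking-wide text-white/40">{label}</span>
        <p>{value}</p>
        <button className="px-4 py-2 rounded-md font-medium">View details</button>
      </div>
    );
  }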

I believe this is how you should build a custom agentic system around your daily workflow. It won't matter how good or bad the next model is: your system keeps working the way you want and helps you ship code much faster.

Magic Zone

I've also noticed that somewhere between deterministic and non-deterministic workflows, there are moments where the model comes up with insane output I could never have expected. I call this the Magic Zone.

[Diagram: a spectrum from Deterministic to Ambiguity, with the Magic Zone sitting between them]

Examples

Here's what this workflow looks like in practice.

I first built this component. Then I copied all the CSS values and created a skill.md with those values.

[Embedded component: an "AI Usage" chart with categories Cursor, Droid, ChatGPT, Claude, and v0]

Later I triggered that skill and asked it to generate a scatterplot, then a bar chart. It built both in a single prompt:

[Embedded components: the generated scatterplot and bar chart, reusing the same five categories]

Sure, they're not perfect and need further work. The text labels are cut off, and there's no tooltip. But the creativity of adding the dashed grid lines and maintaining the same color scale is so much better than what you'd get by just prompting "build me a scatterplot" or "build me a bar chart."

This playlist card was another spinoff from the same skill. It took around 12-15 prompts to get it to this state:

[Embedded component: a "December 2025" playlist card with Cover, Playlist, and Tracks views, listing six tracks: untitled (Blaxian), Universal Tongue (Anthony Russo), Natural (Croozer), Fly Away (Girlfriend Wife), A Dream Goes on Forever (Vegyn & John Glacier), and Belmont (Roy Blair)]

Memory

The missing piece in most AI coding workflows is memory. Every new session starts fresh. The agent forgets what it was working on, what decisions were made, and why. Context compaction kicks in and a lot of info gets lost. Summaries of previous sessions don't quite work as expected most of the time. Plus tracking issues in a markdown file is a nightmare.

I built Tracer to solve this. It's a CLI-first issue tracker designed for AI agents. Everything is stored in .trace/issues.jsonl, one line per issue, git-tracked, human-readable.
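Each line is one self-contained JSON object. A hypothetical entry (the field names are illustrative, not Tracer's actual schema):

  {"id":"TRC-42","title":"Add page transitions","status":"open","blocked_by":["TRC-41"],"assignee":"agent-1","comments":[{"author":"agent-2","body":"Design tokens updated, safe to start"}]}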

[Diagram: Sessions 1 through N each do agent work against Tracer's .trace/issues.jsonl; context is restored at the start of each session, work continues, and updates are persisted to git]

The key features that make it work:

  • Dependency tracking: Tasks can block other tasks. tracer ready shows only what's actually ready to work on.
  • Multi-agent coordination: Multiple agents can work on the same project. They communicate via comments and get auto-assigned when starting work.
  • AI-native design: Every command has --json output. Every operation is fast (~5ms).
  • Git is the database: No external server. Clone the repo, you get the entire issue database.

When you start a new session, the agent can run tracer list --status open --json and immediately know what needs to be done. Multiple agents can coordinate through comments. One agent can leave notes for another, track who's working on what, and avoid conflicts. Context preserved. No repetition.
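A session start might look like this; the commands are the ones above, but the output is my own illustration:

  $ tracer ready
  TRC-40  Fix responsive layout bug

  $ tracer list --status open --json
  [{"id":"TRC-40","title":"Fix responsive layout bug","status":"open"}]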

Conclusion

No one knows your workflow better than you do. So actually spend time writing a really good skills.md with all the deterministic values and patterns you can capture.

If you're not sure where to start, experiment with an AI-written skills.md as a first draft and keep refining it to your liking. I'm also working on a site called droids.directory where I'll post pre-made skills. Follow me on Twitter for updates. Happy Droiding!