An AI coding agent is not merely a language model — it is a compound system combining a reasoning model with surrounding infrastructure that enables autonomous action in a software environment.
The foundational equation is:
coding agent = AI model(s) + harness
The model provides language understanding, reasoning, and code generation. The harness provides everything the model needs to act: tools, configuration, memory, verification, and environmental context.
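The decomposition above can be sketched as a data structure. This is a minimal illustrative sketch, not a real framework API: the names `Harness`, `CodingAgent`, and `build_context` are assumptions for this example.

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Harness:
    # Tools the model can invoke: name -> callable (e.g. read_file, run_shell).
    tools: dict[str, Callable[[str], str]]
    config: dict[str, str] = field(default_factory=dict)
    memory: list[str] = field(default_factory=list)  # notes persisted across runs

@dataclass
class CodingAgent:
    model: Callable[[str], str]  # the reasoning model: prompt in, completion out
    harness: Harness

    def build_context(self, task: str) -> str:
        # The harness assembles environmental context around the raw task.
        return f"Task: {task}\nTools: {sorted(self.harness.tools)}\nNotes: {self.harness.memory}"

# A stub model that echoes the first context line, just to show the wiring.
agent = CodingAgent(
    model=lambda prompt: f"plan for: {prompt.splitlines()[0]}",
    harness=Harness(tools={"read_file": lambda path: open(path).read()}),
)
print(agent.model(agent.build_context("fix the failing test")))
```

The point of the shape, not the stubs: the model is a swappable function, while everything around it (tools, config, memory, context assembly) is the harness that practitioners actually build.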
Core Architectural Components
Modern AI agent architecture comprises four subsystems:
- Perception: Captures and processes environmental inputs — file contents, terminal output, test results, web data — transforming them into representations the model can reason over
- Reasoning/Planning: Decomposes complex tasks into subtasks, generates solution candidates, and reflects on outcomes; techniques include Chain-of-Thought, Tree-of-Thought, and ReAct (Reason + Act)
- Memory: Retains knowledge across contexts — short-term memory lives in the context window; long-term memory is stored in files, databases, or vector indices and retrieved as needed
- Execution/Action: Translates model decisions into concrete environment changes — running shell commands, editing files, calling APIs, or spawning sub-processes
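The four subsystems above can be sketched as a skeleton pipeline. Every name here is an assumption for this sketch (a real agent would call the model inside `plan` rather than use a stub):

```python
# Illustrative skeleton of the four subsystems: perception, reasoning,
# memory, and execution. All functions are stubs showing the data flow only.

def perceive(environment: dict[str, str]) -> str:
    """Perception: turn raw environment state into text the model can reason over."""
    return "\n".join(f"{key}: {value}" for key, value in environment.items())

def plan(observation: str) -> list[str]:
    """Reasoning/Planning: decompose the observation into subtasks.
    Stubbed; a real agent would prompt the model here, e.g. ReAct-style."""
    return [f"investigate {line}" for line in observation.splitlines()]

long_term_memory: list[str] = []  # Memory: a store that outlives the context window

def act(step: str) -> str:
    """Execution: translate a decision into an environment change (stubbed)."""
    long_term_memory.append(step)  # record what was done for later retrieval
    return f"done: {step}"

env = {"tests": "2 failing", "lint": "clean"}
results = [act(step) for step in plan(perceive(env))]
print(results)
```

Even in stub form, the interfaces matter: perception narrows the environment to text, planning fans one observation out into steps, and execution is the only stage that mutates state.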
What Distinguishes Agents from Chatbots
The defining capability is tool use in a loop. As Mitchell Hashimoto articulates: “An agent is the industry-adopted term for an LLM that can chat and invoke external behavior in a loop.” Minimum capabilities include reading files, executing programs, and making HTTP requests.
Chatbots respond; agents act. When given a way to verify their own work, agents can detect errors and self-correct without human prompting. This feedback loop — generate, execute, observe, revise — is absent from conversational interfaces.
Historical Lineage
Pre-LLM software agents used deliberative architectures like BDI (Belief-Desire-Intention), where:
- Beliefs represent the agent’s world-state knowledge
- Desires represent goals the agent pursues
- Intentions represent committed plans of action
BDI agents could reason and plan, but required handcrafted knowledge representations and explicit programming of every capability. LLMs collapsed this complexity: the model itself handles natural language understanding, knowledge retrieval, and plan generation — all previously separate subsystems. The harness pattern emerged to give LLMs the grounding in real environments that BDI systems achieved only through hardcoded world models.
Why Architecture Matters
Understanding the model + harness decomposition is essential for Harness-Engineering. The model is largely fixed; the harness is what practitioners engineer. Context-Engineering — how information is structured and delivered to the model — operates as the primary lever within this architecture.
Related Concepts
- Harness-Engineering
- Context-Engineering
- Agent-Harness-Components
- Dual-Agent-Design
- Sub-Agents-Context-Isolation
Sources
- Horthy, Dex (2026). “Skill Issue: Harness Engineering for Coding Agents.” HumanLayer Blog. Retrieved from https://www.humanlayer.dev/blog/skill-issue-harness-engineering-for-coding-agents
  - Source of the core coding agent = AI model(s) + harness equation and harness component taxonomy
- Hashimoto, Mitchell (2026). “My AI Adoption Journey.” mitchellh.com. Retrieved from https://mitchellh.com/writing/my-ai-adoption-journey
  - Practitioner definition of agent as “LLM that can chat and invoke external behavior in a loop”; tool use as the key differentiator from chatbots
- Weng, Lilian (2023). “LLM Powered Autonomous Agents.” Lil’Log. Retrieved from https://lilianweng.github.io/posts/2023-06-23-agent/
  - Widely cited framework establishing the planning, memory, and tool-use triad as the canonical LLM agent architecture
- Zhang, Yizhang, et al. (2025). “Fundamentals of Building Autonomous LLM Agents.” arXiv:2510.09244. Retrieved from https://arxiv.org/html/2510.09244v1
  - Academic survey formalizing the four-subsystem model: perception, reasoning, memory, and execution
- Rao, Anand S. and Michael P. Georgeff (1995). “BDI Agents: From Theory to Practice.” Proceedings of the First International Conference on Multi-Agent Systems (ICMAS). AAAI Press. Retrieved from https://cdn.aaai.org/ICMAS/1995/ICMAS95-042.pdf
  - Foundational paper on BDI agent architecture; establishes the historical lineage of autonomous agent design prior to LLMs
Note
This content was drafted with assistance from AI tools for research, organization, and initial content generation. All final content has been reviewed, fact-checked, and edited by the author to ensure accuracy and alignment with the author’s intentions and perspective.