Overview

The conventional wisdom “Don’t Repeat Yourself” (DRY) faces a fundamental challenge in distributed architectures: sometimes duplication is preferable to coupling. Code reuse in microservices and distributed systems requires balancing contradictory forces—minimizing redundant code while preserving service independence, deployment autonomy, and fault isolation.

This framework presents four primary reuse patterns forming a spectrum from zero coupling (code replication) to runtime coupling (shared services), plus an orthogonal dimension addressing cross-cutting infrastructure concerns through sidecars and service meshes. The critical insight: reuse decisions must prioritize rate of change and coupling tolerance over simplistic DRY adherence, recognizing that distributed architectures trade local optimization (single code copy) for system-level properties (independent deployability, fault tolerance, team autonomy).

The patterns addressed: Code-Replication-Pattern, Shared-Library-Pattern, Shared-Service-Pattern, Sidecar-Pattern, Service-Mesh, and Orthogonal-Coupling.

The Reuse Spectrum

Code Replication: Accepting Duplication for Independence

Code-Replication-Pattern deliberately violates DRY by copying functionality across services rather than extracting it into shared dependencies. This pattern prioritizes deployment independence over code consolidation, enabling services to evolve at different rates without coordination overhead.

Trade-offs:

  • Zero coupling benefit: Services share no compile-time or runtime dependencies, preserving complete autonomy
  • Maintenance cost: Bug fixes and enhancements require updating multiple service codebases independently
  • When duplication is better: Stable utility functions (string formatters, validators) with low change frequency across different bounded contexts
  • When it breaks down: Complex or volatile domain logic where synchronized updates outweigh coupling concerns

Key principle: Duplication across bounded contexts maintains semantic boundaries. Duplication within a bounded context still warrants extraction and DRY adherence.
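
To make the trade-off concrete, here is a minimal sketch of a good replication candidate: a small, stable validator copied verbatim into each service that needs it. The function and service names are illustrative, not from any particular codebase.

```go
// validate.go - copied verbatim into both the orders and billing services.
// Each team owns its copy; a change in one service never forces a redeploy
// of the other. (Hypothetical example; names are illustrative.)
package main

import (
	"fmt"
	"regexp"
)

// postalCodeRe matches five-digit US ZIP codes, optionally with a +4 suffix.
var postalCodeRe = regexp.MustCompile(`^\d{5}(-\d{4})?$`)

// ValidPostalCode is a small, stable utility: a good replication candidate
// because its change frequency is near zero.
func ValidPostalCode(code string) bool {
	return postalCodeRe.MatchString(code)
}

func main() {
	fmt.Println(ValidPostalCode("30301"))      // true
	fmt.Println(ValidPostalCode("30301-1234")) // true
	fmt.Println(ValidPostalCode("3030"))       // false
}
```

The duplication cost here is negligible precisely because the code is tiny and effectively frozen; if the validator grew complex or volatile, the same copies would become a liability.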

Shared Library: Compile-Time Sharing

Shared-Library-Pattern extracts common functionality into versioned artifacts (JAR, DLL, or npm packages) consumed as compile-time dependencies. While this reduces code duplication, it introduces version coupling: library updates can trigger cascading rebuilds and redeployments across dependent services.

Trade-offs:

  • DRY achievement: Single implementation eliminates redundant code maintenance
  • Versioning complexity: Multiple library versions may coexist; semantic versioning discipline required
  • Deployment coupling: Library changes eventually force every consumer to rebuild, retest, and redeploy; even when upgrades are staggered across versions, the coordination burden grows with the number of dependents
  • Appropriate scope: Infrastructure concerns (logging, serialization, monitoring) with slow change rates
  • Problematic scope: Domain logic or models that create temporal coupling and break service autonomy

Critical distinction: Shared libraries work well for stable, cross-cutting technical concerns. They create anti-patterns when sharing business logic across bounded contexts, tightly coupling services that should evolve independently.
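
As a minimal sketch, the snippet below models the kind of stable infrastructure utility that belongs in a shared library. In practice the type would live in its own repository and be consumed as a versioned module; the module path and version in the comment are hypothetical, shown only to illustrate how a consumer pins a release deliberately rather than being force-upgraded.

```go
// A minimal sketch of a shared infrastructure library. In practice this type
// lives in its own repository and is consumed as a versioned dependency, e.g.
//   require example.com/platform/logkit v1.4.2   (hypothetical path/version)
// so each consumer upgrades deliberately instead of being force-redeployed.
package main

import (
	"fmt"
	"time"
)

// Logger is a stable, cross-cutting technical concern: a good
// shared-library candidate because its interface rarely changes.
type Logger struct{ service string }

func New(service string) *Logger { return &Logger{service: service} }

func (l *Logger) Info(msg string) {
	fmt.Printf("%s [%s] INFO %s\n", time.Now().Format(time.RFC3339), l.service, msg)
}

func main() {
	log := New("checkout-service")
	log.Info("service started")
}
```

Contrast this with a shared `Customer` domain model: its interface would change with every business rule revision, dragging all consumers along with it.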

Shared Service: Runtime Sharing

Shared-Service-Pattern extracts functionality into separately deployed services called via network protocols (REST, gRPC, messaging). This preserves deployment independence—services don’t share compiled artifacts—but introduces runtime dependencies with operational complexity.

Trade-offs:

  • Deployment independence maintained: Services deploy separately; no version coordination needed
  • Runtime failure points: Shared service unavailability cascades to all consumers unless circuit breakers are implemented
  • Network latency overhead: Every invocation adds milliseconds compared to in-process library calls
  • Scalability coupling: Shared service must scale proportionally with aggregate consumer load
  • Operational investment: Requires fault tolerance patterns, monitoring, health checks, and high availability architecture
  • Appropriate volatility: High-change-rate functionality benefiting from single deployment point
  • Justified value: Cross-cutting capabilities (authentication, address validation, payment processing) warranting operational overhead

Key insight from Ford et al.: “Shared services carry much more risk than shared libraries” because failures manifest at runtime, affecting all consumers simultaneously. Only extract shared services when functionality is stable enough and valuable enough to justify making it highly available and fault-tolerant.
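
The sketch below illustrates one way a consumer might guard a call to a shared service with a timeout and a crude circuit breaker. The address-validation endpoint is hypothetical, and a real system would reach for a hardened library (e.g., sony/gobreaker) rather than this toy breaker.

```go
// A minimal sketch of calling a shared service behind a timeout and a crude
// circuit breaker, assuming a hypothetical address-validation endpoint.
package main

import (
	"errors"
	"fmt"
	"net/http"
	"time"
)

type breaker struct {
	failures  int
	threshold int
	openUntil time.Time
}

func (b *breaker) call(fn func() error) error {
	if time.Now().Before(b.openUntil) {
		return errors.New("circuit open: failing fast") // protect consumers from cascading waits
	}
	if err := fn(); err != nil {
		b.failures++
		if b.failures >= b.threshold {
			b.openUntil = time.Now().Add(30 * time.Second) // back off before retrying
			b.failures = 0
		}
		return err
	}
	b.failures = 0
	return nil
}

func main() {
	b := &breaker{threshold: 3}
	client := &http.Client{Timeout: 2 * time.Second} // bound the latency a shared service can add
	err := b.call(func() error {
		resp, err := client.Get("http://address-validation.internal/validate?zip=30301") // hypothetical endpoint
		if err != nil {
			return err
		}
		defer resp.Body.Close()
		if resp.StatusCode != http.StatusOK {
			return fmt.Errorf("shared service returned %d", resp.StatusCode)
		}
		return nil
	})
	fmt.Println("result:", err)
}
```

The point is not the specific breaker logic but the operational tax it represents: every consumer of a shared service must carry this kind of defensive machinery, which in-process library calls never need.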

Orthogonal Coupling: A Different Dimension

While code replication, shared libraries, and shared services form a reuse spectrum, Orthogonal-Coupling addresses a fundamentally different concern: cross-cutting infrastructure capabilities that must integrate with domain logic despite conceptual independence.

The Orthogonal Problem

Operational concerns such as logging, monitoring, security, service discovery, and traffic management recur in every microservice, producing scattered code and inconsistent implementations. These concerns are “orthogonal” because they vary along a dimension separate from business logic, yet must intersect it at every service boundary.

Traditional approaches embed cross-cutting concerns directly in application code (logging libraries, security middleware), polluting domain logic with infrastructure noise and creating maintenance burden across polyglot environments.

Sidecar Pattern: Decoupling Operational Concerns

Sidecar-Pattern separates infrastructure from domain by deploying helper containers alongside application containers. The primary container handles business logic; the sidecar manages operational concerns (observability, security, traffic control).

Trade-offs:

  • Clean separation: Domain logic remains focused; platform teams manage operational infrastructure independently
  • Language-agnostic consistency: Polyglot microservices gain uniform capabilities without per-language implementation
  • Resource overhead: Each service instance incurs additional CPU and memory for sidecar containers
  • Deployment complexity: Managing dual-container lifecycle and shared network namespaces
  • Network latency: Additional proxy hop adds milliseconds per request

Appropriate use: Container-based deployments (Kubernetes) with cross-cutting operational needs shared across many services, especially in polyglot environments where implementing features per language creates unsustainable maintenance burden.
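
As an illustration of what a sidecar actually does, the following sketch implements a tiny reverse proxy that could run beside an application container, adding access logging and tracing metadata without any change to application code. The ports and header name are assumptions, not a standard.

```go
// A minimal sketch of a sidecar's job: a reverse proxy co-located with the
// application container that adds observability without touching app code.
// Ports and header names are illustrative assumptions.
package main

import (
	"log"
	"net/http"
	"net/http/httputil"
	"net/url"
	"time"
)

func main() {
	// The application container listens on localhost:8080; the sidecar
	// fronts it on :9090 inside the same network namespace.
	app, _ := url.Parse("http://127.0.0.1:8080")
	proxy := httputil.NewSingleHostReverseProxy(app)

	handler := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		start := time.Now()
		r.Header.Set("X-Request-Start", start.Format(time.RFC3339Nano)) // uniform tracing metadata
		proxy.ServeHTTP(w, r)
		log.Printf("%s %s took %s", r.Method, r.URL.Path, time.Since(start)) // access logging, no app changes
	})

	log.Fatal(http.ListenAndServe(":9090", handler))
}
```

Because the proxy is language-agnostic, the same sidecar serves a Java, Python, or Go application identically, which is exactly the polyglot consistency benefit listed above.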

Service Mesh: Scaling Sidecars

Service-Mesh links sidecars across services into a unified infrastructure layer with centralized control plane and distributed data plane. This provides cluster-wide traffic management, security policies, and observability without touching application code.

Trade-offs:

  • Uniform infrastructure: mTLS encryption, circuit breakers, distributed tracing applied consistently across all services
  • Centralized policy management: Single configuration point for routing, security, resilience patterns
  • Operational complexity: Control plane requires monitoring, upgrades, expertise
  • Resource footprint: 10–100 MB of memory per sidecar across potentially hundreds of service instances
  • Debugging challenges: Traffic flowing through proxies complicates request tracing and troubleshooting

When to adopt: Managing 10+ microservices where operational consistency benefits outweigh infrastructure costs; environments with zero-trust security requirements; or complex traffic management needs such as canary deployments and A/B testing.
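
To ground the traffic-management use case, here is a toy sketch of the weighted routing a mesh data plane performs for a canary deployment. In a real mesh this is declarative control-plane configuration (e.g., a traffic-split rule), not hand-written code; the backend names and the 10% weight are illustrative.

```go
// A toy model of weighted canary routing: send roughly 10% of traffic to v2.
// In a real service mesh this split is declared in control-plane config and
// enforced by every sidecar proxy uniformly. Names and weights are made up.
package main

import (
	"fmt"
	"math/rand"
)

func pickBackend() string {
	if rand.Float64() < 0.10 { // 10% canary weight
		return "orders-v2.internal"
	}
	return "orders-v1.internal"
}

func main() {
	counts := map[string]int{}
	for i := 0; i < 1000; i++ {
		counts[pickBackend()]++
	}
	fmt.Println(counts) // roughly 900 v1 / 100 v2
}
```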

Key distinction: Sidecars and service meshes address orthogonal coupling (infrastructure intersecting domain logic), not domain code reuse. They complement rather than replace the replication-library-service spectrum.

Decision Framework

When to Use Code Replication

  • High domain volatility: Code changes frequently; coordination overhead outweighs duplication costs
  • Different bounded contexts: Syntactically similar code represents diverging domain concepts
  • Small, stable utilities: Simple functions (validators, formatters) unlikely to change
  • Team autonomy priority: Independent deployment cycles valued over code consolidation

When to Use Shared Library

  • Low volatility: Stable infrastructure utilities changing infrequently (quarterly or slower)
  • Homogeneous tech stack: Single-language environments simplifying version management
  • Strong versioning discipline: Teams committed to semantic versioning and compatibility contracts
  • Non-domain concerns: Cross-cutting technical capabilities (serialization, logging, monitoring)

When to Use Shared Service

  • High volatility requiring agility: Frequently changing functionality benefiting from single deployment point
  • Polyglot environments: Multiple languages/frameworks making shared libraries impractical
  • Domain logic consistency: Business rules that must stay synchronized across contexts
  • Operational investment justified: Functionality valuable enough to warrant fault tolerance, scaling, monitoring infrastructure

When to Use Sidecar/Service Mesh

  • Operational not domain concerns: Cross-cutting infrastructure capabilities (security, observability, traffic management)
  • Polyglot consistency needs: Uniform capabilities across multiple languages/frameworks
  • Container-based platforms: Kubernetes or similar with native sidecar co-location support
  • Service mesh threshold: 10+ microservices where centralized control outweighs operational complexity

The Rate of Change Principle

The fundamental determinant of reuse success is rate of change. Frameworks and operating systems succeed as reuse targets because they evolve slowly—quarterly or annual releases. Internal domain capabilities change continuously, making them poor reuse candidates despite syntactic similarities.

As Ford et al. articulate: “Reuse is derived via abstraction but operationalized by slow rate of change.” The lower the change frequency, the more viable shared libraries and services become. High-volatility code amplifies coupling costs, making replication preferable despite maintenance burden.

This explains why:

  • Infrastructure libraries work: Logging frameworks change quarterly
  • Domain shared libraries fail: Business logic changes weekly or daily
  • Shared services succeed selectively: Only for stable, high-value cross-cutting concerns

The architectural heuristic: match reuse pattern to volatility. High volatility → replicate. Low volatility → extract and share.

Common Pitfalls

  • Reusing for reuse’s sake: Extracting shared libraries without considering coupling costs and change frequency
  • Mixing concerns in sidecars: Embedding domain logic in infrastructure sidecars, blurring operational/domain boundaries
  • Over-sharing: Creating distributed monolith through excessive shared services coupling everything together
  • Under-sharing: Pure duplication without evaluating stable cross-cutting concerns benefiting from extraction
  • Ignoring rate of change: Sharing volatile code creates version hell and deployment cascades
  • Premature extraction: Moving code to shared libraries before stability and reuse patterns emerge
  • Service mesh over-engineering: Deploying complex infrastructure for <10 services where simple patterns suffice

Real-World Considerations

Team Structure and Conway’s Law

Service ownership aligns with team boundaries. Shared libraries and services create coordination dependencies between teams—acceptable for platform teams providing infrastructure, problematic when domain teams must synchronize business logic changes. Code replication enables team autonomy by eliminating cross-team coordination for code updates.

Governance and Ownership Models

  • Shared libraries: Require centralized platform teams owning versioning, compatibility, deprecation policies
  • Shared services: Need clear SLA definitions, operational ownership, incident response protocols
  • Sidecars/service mesh: Platform engineering responsibility providing infrastructure to application teams
  • Replicated code: Domain team ownership with potential cross-team best practice sharing (templates, patterns)

Migration Strategies

Moving from a monolith to a distributed architecture: begin with code replication to establish service boundaries, then selectively extract shared services for stable, high-value capabilities. Avoid premature sharing, since duplication provides learning space before committing to coupling. Service meshes typically arrive later, after service proliferation justifies the operational investment.

Polyglot vs. Homogeneous Environments

Polyglot architectures favor code replication and shared services (language-agnostic) over shared libraries (language-specific). Homogeneous stacks can leverage shared libraries more readily. Service meshes shine in polyglot contexts providing uniform capabilities across Java, Python, Go, Node.js services without per-language implementation.

Sources

Primary Sources

  • Ford, Neal; Richards, Mark; Sadalage, Pramod; Dehghani, Zhamak (2022). Software Architecture: The Hard Parts - Modern Trade-Off Analyses for Distributed Architectures. O’Reilly Media. ISBN: 978-1-492-08689-5.

Synthesized Atomic Notes

  • Hunt, Andrew and Thomas, David (1999). The Pragmatic Programmer: From Journeyman to Master. Addison-Wesley. ISBN: 978-0201616224.

    • Original DRY (Don’t Repeat Yourself) principle formulation
  • Evans, Eric (2003). Domain-Driven Design: Tackling Complexity in the Heart of Software. Addison-Wesley. ISBN: 978-0321125217.

  • Newman, Sam (2021). Building Microservices: Designing Fine-Grained Systems (2nd Edition). O’Reilly Media. ISBN: 978-1492034025.

  • Richards, Mark (2016). Microservices AntiPatterns and Pitfalls. O’Reilly Media.

AI Assistance

This content was drafted with assistance from AI tools for research, organization, and initial content generation. All final content has been reviewed, fact-checked, and edited by the author to ensure accuracy and alignment with the author’s intentions and perspective.