Core Idea

The Fallacies of Distributed Computing are eight false assumptions that architects commonly make when designing distributed systems, originally identified by Peter Deutsch and James Gosling at Sun Microsystems. Left unexamined, these implicit assumptions lead to architectural failures when systems move from single-process to distributed environments.

What Are the Fallacies of Distributed Computing?

The Fallacies of Distributed Computing are a set of eight assumptions that developers and architects often unconsciously make when designing distributed systems:

  • These assumptions work fine in monolithic, single-process applications
  • They cause serious problems, however, when applied to systems with network boundaries

The eight fallacies are:

  1. The network is reliable
  2. Latency is zero
  3. Bandwidth is infinite
  4. The network is secure
  5. The topology never changes
  6. There is only one administrator
  7. Transport cost is zero
  8. The network is homogeneous

Each fallacy represents a dimension of distributed computing complexity that must be explicitly addressed in the architecture.

Historical context:

  • First articulated in the 1990s at Sun Microsystems as engineers observed repeated patterns of failure in distributed systems
  • Peter Deutsch's original list contained seven fallacies
  • James Gosling later added the eighth, that the network is homogeneous
  • Despite being identified decades ago, these fallacies remain highly relevant as distributed architectures like microservices and cloud-native systems become increasingly common

Understanding these fallacies is essential for making informed decisions about architectural style:

  • When architects ignore these realities, they create systems that fail unpredictably in production, perform poorly under load, contain security vulnerabilities, or become operationally unmaintainable
  • Each fallacy must be consciously addressed through architectural patterns, operational practices, and infrastructure choices

Why This Matters

The fallacies explain why distributed systems are fundamentally harder than monolithic systems:

  • You cannot simply decompose a working monolith into microservices without adding complexity to handle each of these distributed computing realities
  • Every network call introduces failure modes, latency, security boundaries, and operational complexity that didn’t exist within a single process
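
To make that concrete, here is a minimal Go sketch contrasting the two (the pricing-service URL, endpoint, and plain-text response format are hypothetical): inside one process the lookup is a plain function call, while the same lookup across a network boundary needs a latency budget, connection-failure handling, and response validation.

```go
package main

import (
	"context"
	"errors"
	"fmt"
	"io"
	"net/http"
	"time"
)

// In a single process, looking up a price is just a function call:
// it cannot time out, drop packets, or return a 503.
func localPrice(sku string) int {
	return 1299 // pretend lookup in an in-memory map
}

// The same lookup across a network boundary must handle timeouts,
// connection failures, and unexpected responses from the remote service.
func remotePrice(ctx context.Context, baseURL, sku string) (int, error) {
	// Bound the call: fallacy #2 (latency is zero) forces us to budget time.
	ctx, cancel := context.WithTimeout(ctx, 500*time.Millisecond)
	defer cancel()

	req, err := http.NewRequestWithContext(ctx, http.MethodGet, baseURL+"/prices/"+sku, nil)
	if err != nil {
		return 0, err
	}

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		// Fallacy #1 (the network is reliable): the call itself can fail.
		return 0, fmt.Errorf("price service unreachable: %w", err)
	}
	defer resp.Body.Close()

	if resp.StatusCode != http.StatusOK {
		return 0, errors.New("price service returned " + resp.Status)
	}

	body, err := io.ReadAll(resp.Body)
	if err != nil {
		return 0, err
	}

	var price int
	if _, err := fmt.Sscanf(string(body), "%d", &price); err != nil {
		return 0, fmt.Errorf("unexpected response %q: %w", body, err)
	}
	return price, nil
}

func main() {
	fmt.Println("local:", localPrice("sku-42"))

	// The URL below is a stand-in for any downstream service.
	if price, err := remotePrice(context.Background(), "http://pricing.internal:8080", "sku-42"); err != nil {
		fmt.Println("remote call failed:", err)
	} else {
		fmt.Println("remote:", price)
	}
}
```

Even this small sketch only touches the first two fallacies; authentication, retries, and observability would add further code on top.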

These fallacies are the reason that architectural decisions have trade-offs:

  • Choosing a distributed architecture buys you independent deployability, scalability, and fault isolation
  • But you pay for it by having to explicitly handle:
    • Network unreliability
    • Latency budgets
    • Service-to-service communication security
    • Complex distributed infrastructure operations
  • The fallacies force architects to be honest about whether the benefits justify the costs

In practice:

  • Architects who understand these fallacies design systems with retry logic, circuit breakers, timeouts, distributed tracing, service meshes, and other patterns specifically to mitigate the risks (see the sketch after this list)
  • Those who ignore the fallacies build systems that work in development but fail catastrophically in production
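
As one illustration of those mitigation patterns, below is a minimal Go sketch (not the book's implementation) of a retry loop wrapped around a deliberately simple circuit breaker: after a few consecutive failures the breaker opens and fails fast, and after a cooldown it lets a probe call through again. Production systems usually get this behavior from a resilience library or a service mesh rather than hand-rolling it.

```go
package main

import (
	"errors"
	"fmt"
	"sync"
	"time"
)

// ErrCircuitOpen is returned when the breaker refuses calls outright,
// protecting both the caller and the struggling downstream service.
var ErrCircuitOpen = errors.New("circuit breaker is open")

// Breaker is a deliberately minimal circuit breaker: after maxFailures
// consecutive errors it opens and fails fast until cooldown has elapsed.
type Breaker struct {
	mu          sync.Mutex
	failures    int
	maxFailures int
	cooldown    time.Duration
	openedAt    time.Time
}

func NewBreaker(maxFailures int, cooldown time.Duration) *Breaker {
	return &Breaker{maxFailures: maxFailures, cooldown: cooldown}
}

// Call runs fn through the breaker, tracking consecutive failures.
func (b *Breaker) Call(fn func() error) error {
	b.mu.Lock()
	if b.failures >= b.maxFailures && time.Since(b.openedAt) < b.cooldown {
		b.mu.Unlock()
		return ErrCircuitOpen // fail fast instead of piling up slow calls
	}
	b.mu.Unlock()

	err := fn()

	b.mu.Lock()
	defer b.mu.Unlock()
	if err != nil {
		b.failures++
		if b.failures >= b.maxFailures {
			b.openedAt = time.Now() // trip (or re-trip) the breaker
		}
		return err
	}
	b.failures = 0 // any success closes the breaker again
	return nil
}

func main() {
	breaker := NewBreaker(3, 2*time.Second)
	flaky := func() error { return errors.New("connection refused") } // stand-in for a remote call

	// Retry a few times with a short pause; the breaker trips after three
	// failures and the remaining attempts fail fast without touching the network.
	for attempt := 1; attempt <= 6; attempt++ {
		err := breaker.Call(flaky)
		fmt.Printf("attempt %d: %v\n", attempt, err)
		time.Sleep(100 * time.Millisecond)
	}
}
```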

Sources

  • Richards, Mark and Neal Ford (2020). Fundamentals of Software Architecture: An Engineering Approach. O’Reilly Media. ISBN: 978-1-492-04345-4.

  • Deutsch, Peter (1994-1997). “The Eight Fallacies of Distributed Computing.” Originally articulated at Sun Microsystems.

    • Original source identifying first seven fallacies
    • Widely documented in industry literature and technical articles
    • Historical context available through various technical publications

Note

This content was drafted with assistance from AI tools for research, organization, and initial content generation. All final content has been reviewed, fact-checked, and edited by the author to ensure accuracy and alignment with the author’s intentions and perspective.