Core Idea
“The topology never changes” is the fifth of the Fallacies of Distributed Computing: the false assumption that network topology remains static once deployed. In reality, topology changes constantly through server additions, removals, relocations, and failures, as well as load balancer changes, firewall updates, DNS modifications, cloud auto-scaling, container orchestration, and ongoing infrastructure evolution. Distributed systems must therefore handle dynamic topology without manual reconfiguration.
What Is the “Topology Never Changes” Fallacy?
The False Assumption: The “topology never changes” fallacy is the assumption that once a distributed system is deployed, the physical and logical arrangement of servers, networks, and services remains constant. This assumption is false because modern infrastructure is inherently dynamic:
- Servers fail and are replaced
- Services scale up and down
- Networks are reconfigured for performance or security
- Containers migrate across hosts
- Cloud infrastructure changes continuously through automation and orchestration
Monolithic vs Distributed:
- Monolithic applications: Topology is trivial—the application runs on a known server at a fixed address, and clients connect to that address. While the server might occasionally move, such changes are rare events managed through scheduled maintenance.
- Distributed systems: Fundamentally depend on topology—services must discover and communicate with multiple other services, and any change in where those services exist requires updating connections, routes, and configurations.
Dimensions of Topology Change:
Physical Topology Changes:
- Servers added to handle increased load
- Servers removed for maintenance
- Servers relocated across data centers for disaster recovery
Logical Topology Changes:
- Services deployed to new ports
- Services assigned new IP addresses
- Services moved behind different load balancers
Cloud Environment Dynamism:
- Auto-scaling adds and removes instances based on demand
- Container orchestration (Kubernetes, ECS) continuously reschedules containers across hosts
- Serverless platforms create and destroy function instances on demand, so the set of running instances shifts continuously with traffic
Topology Changes Through Failures:
- Network switches fail
- Routing rules change during incident response
- DNS entries updated to redirect traffic
- Services crash and restart at different addresses
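One client-side defense against the failure scenarios above is to avoid caching a resolved address forever. The sketch below (hostname, port, and retry count are illustrative, not from the original text) re-queries DNS on every connection attempt, so a service that crashes and restarts at a different address is picked up as soon as its DNS record is updated:

```python
import socket

def connect_with_fresh_resolution(hostname: str, port: int,
                                  attempts: int = 3) -> socket.socket:
    """Connect to hostname:port, re-resolving DNS on each attempt
    rather than trusting an address cached at startup."""
    last_error: OSError | None = None
    for _ in range(attempts):
        # Re-query DNS each attempt: if the service restarted at a
        # new address, the updated record is used automatically.
        for *_ignored, sockaddr in socket.getaddrinfo(
                hostname, port, type=socket.SOCK_STREAM):
            try:
                return socket.create_connection(sockaddr[:2], timeout=2.0)
            except OSError as exc:
                last_error = exc
    raise ConnectionError(f"could not reach {hostname}:{port}") from last_error
```

Production clients usually combine this with backoff and TTL-respecting DNS caches, but the principle is the same: resolution happens per attempt, not once per process lifetime.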
Security-Driven Topology Changes:
- Firewalls reconfigured to isolate compromised systems
- VPNs route traffic through new endpoints
- Network segmentation creates new topology boundaries
Deployment-Driven Topology Changes:
- Rolling updates temporarily run old and new service versions simultaneously at different addresses
- Blue-green deployments switch entire environments
- Canary releases gradually shift traffic percentages
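Canary traffic shifting can be sketched as weighted request routing. The version labels and percentage knob below are illustrative assumptions, not any particular router's API:

```python
import random

def pick_backend(stable: str, canary: str, canary_percent: float) -> str:
    """Route one request, sending roughly canary_percent of traffic
    to the canary version and the rest to the stable version."""
    return canary if random.random() * 100 < canary_percent else stable
```

Gradually shifting traffic then amounts to raising `canary_percent` in steps (e.g. 1, 5, 25, 100) while watching error rates, and dropping it back to 0 to roll back.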
Addressing the Fallacy - Dynamic Discovery and Configuration:
- Service discovery mechanisms (Consul, Eureka, Kubernetes DNS): Allow services to locate dependencies by name rather than hardcoded addresses, automatically updating when topology changes
- Load balancers: Abstract individual server addresses, presenting a stable endpoint while distributing requests across dynamically changing backend instances
- Health checks: Detect failed instances and remove them from routing tables without manual intervention
- Configuration management systems (etcd, ZooKeeper, Consul): Centralize topology information, allowing services to query current infrastructure state rather than embedding static assumptions
- Service meshes (Istio, Linkerd): Handle topology changes automatically through control-plane coordination of sidecar proxies
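To make the discovery-plus-health-check pattern concrete, here is a minimal in-memory sketch in the spirit of the registries named above (Consul, Eureka). The class, method names, and TTL mechanics are illustrative assumptions, not a real client library:

```python
import time

class ServiceRegistry:
    """Toy registry: instances register by service name and must
    heartbeat periodically; lookups return only live instances."""

    def __init__(self, ttl_seconds: float = 30.0):
        self.ttl = ttl_seconds
        # service name -> {address: last heartbeat timestamp}
        self._instances: dict[str, dict[str, float]] = {}

    def register(self, service: str, address: str) -> None:
        self._instances.setdefault(service, {})[address] = time.monotonic()

    def heartbeat(self, service: str, address: str) -> None:
        # Instances that stop heartbeating expire out of lookups, so
        # crashed or relocated servers vanish without manual cleanup.
        self._instances[service][address] = time.monotonic()

    def lookup(self, service: str) -> list[str]:
        """Return only addresses whose heartbeat is within the TTL."""
        now = time.monotonic()
        live = self._instances.get(service, {})
        return [addr for addr, seen in live.items() if now - seen <= self.ttl]
```

Clients call `lookup("orders")` by name instead of hardcoding an address, so scaling events, failures, and relocations are absorbed by the registry rather than by every caller's configuration.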
Why This Matters
Operational Failures When Ignored:
- Services hardcoded to connect to specific IP addresses break when those servers move or fail, requiring manual configuration updates during incidents when speed is critical
- Deployments that assume fixed topology cannot handle rolling updates—new service versions cannot coexist with old versions at different addresses
- Auto-scaling fails when services don’t discover newly added instances, leaving capacity unused during demand spikes
- Cloud migrations fail when services assume on-premise topology constraints that don’t apply to elastic infrastructure
Multiplied Complexity in Distributed Architectures: The assumption of static topology becomes particularly problematic in distributed architectures because distribution multiplies topology dependencies:
- Monolith: Minimal external topology—perhaps a database and cache server
- Microservices system (50 services): Might have hundreds of inter-service connections, each dependent on correct topology information
- When topology changes affect multiple services simultaneously, cascading failures occur as services lose connectivity to dependencies
Architectural Trade-offs: This fallacy drives architectural decisions:
- Implementing dynamic topology handling adds complexity:
  - Service discovery requires additional infrastructure
  - Health checking adds monitoring overhead
  - Load balancers add network hops
- For stable infrastructure: The operational cost of dynamic topology might outweigh benefits, favoring simpler static configuration
- For cloud-native systems: With frequent deployments, auto-scaling, and container orchestration, static topology assumptions guarantee operational failure
Impact on Operational Characteristics: Understanding this fallacy shapes operational characteristics like deployability and scalability:
- Architectures embracing topology changes (through service discovery and load balancing): Enable zero-downtime deployments, elastic scaling, and resilient failure handling
- Architectures assuming static topology: Sacrifice these capabilities, trading operational flexibility for implementation simplicity
- The choice depends on deployment environment and operational requirements
Related Concepts
- Fallacies-of-Distributed-Computing — The complete set of eight fallacies this belongs to
- Fallacy-The-Network-Is-Reliable — Related fallacy about network failure assumptions
- Monolithic-vs-Distributed-Architectures — The architectural decision this fallacy impacts
- Operational-Characteristics — Deployability and scalability require handling topology changes
- Microservices-Architecture-Style — Style requiring dynamic service discovery
- Trade-Offs-and-Least-Worst-Architecture — Dynamic topology handling exemplifies architectural trade-offs
Sources
- Richards, Mark and Neal Ford (2020). Fundamentals of Software Architecture: An Engineering Approach. O’Reilly Media. ISBN: 978-1-492-04345-4.
- Chapter 9: Foundations
- Discusses the Fallacies of Distributed Computing and their architectural implications
- Available: https://www.oreilly.com/library/view/fundamentals-of-software/9781492043447/
- Deutsch, Peter (1994–1997). “The Eight Fallacies of Distributed Computing.” Originally articulated at Sun Microsystems.
- Fifth fallacy in the original list
- Identified through observing distributed systems failing when network topology changed
- Widely referenced in distributed systems literature
Note
This content was drafted with assistance from AI tools for research, organization, and initial content generation. All final content has been reviewed, fact-checked, and edited by the author to ensure accuracy and alignment with the author’s intentions and perspective.