Core Idea
“Bandwidth is infinite” is the third of the Fallacies of Distributed Computing—the false assumption that network capacity is unlimited and that any amount of data can be transferred without regard to throughput constraints. In reality, bandwidth is finite, expensive, and often becomes a bottleneck when distributed systems transfer large payloads, stream data, or handle high-frequency communication between services.
What Is the “Bandwidth Is Infinite” Fallacy?
The “bandwidth is infinite” fallacy is the assumption that network connections can transmit unlimited amounts of data without capacity constraints. This assumption leads architects to design distributed systems that freely pass large objects, transfer entire datasets between services, or implement chatty protocols without considering cumulative bandwidth consumption. However, bandwidth—the rate at which data can be transmitted over a network connection—is always finite and varies dramatically based on infrastructure.
Local network connections within a data center might offer 1-10 Gbps bandwidth. Connections between data centers or cloud regions typically provide 100 Mbps to 1 Gbps. Internet connections to end users range from 10 Mbps to 1 Gbps, with mobile networks often limited to 5-50 Mbps. These constraints mean that transferring a 100 MB object over a 100 Mbps connection takes at least 8 seconds—before accounting for network overhead, protocol inefficiencies, and congestion from other traffic.
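As a sanity check, the arithmetic above can be reproduced directly. The sketch below computes the ideal transfer time from payload size and link speed; it ignores protocol overhead, TCP slow start, and congestion, so real transfers are slower.

```java
public class TransferTime {
    // Ideal transfer time in seconds: payload size in megabits divided by
    // link speed in megabits per second. Ignores protocol overhead, TCP
    // slow start, and congestion, so real transfers take longer.
    static double transferSeconds(double payloadMegabytes, double linkMbps) {
        return payloadMegabytes * 8 / linkMbps; // 1 byte = 8 bits
    }

    public static void main(String[] args) {
        // 100 MB over a 100 Mbps link: 800 Mb / 100 Mbps = 8 seconds
        System.out.printf("100 MB @ 100 Mbps: %.1f s%n", transferSeconds(100, 100));
        // The same object over a 50 Mbps mobile link takes twice as long
        System.out.printf("100 MB @ 50 Mbps: %.1f s%n", transferSeconds(100, 50));
    }
}
```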
The fallacy becomes problematic when systems are designed without bandwidth budgeting:
- Example: A microservices architecture where each service calls others with full domain objects rather than minimal DTOs
- If a Customer object is 5 KB and an Order service fetches customer data for 1000 orders per second, that’s 5 MB/s just for customer lookups—40 Mbps of bandwidth dedicated to a single data flow (see the sketch after this list)
- Add similar patterns across dozens of services, and available bandwidth saturates, causing network congestion, increased latency, and timeouts
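This budgeting arithmetic generalizes. The sketch below uses the figures from the example above; the 25-flow extrapolation is a hypothetical illustration of how quickly per-flow consumption adds up against a 1 Gbps link.

```java
public class BandwidthBudget {
    // Sustained bandwidth of one request/response flow, in megabits per second.
    static double flowMbps(double payloadKilobytes, double requestsPerSecond) {
        return payloadKilobytes * 8 * requestsPerSecond / 1000; // kilobits -> megabits
    }

    public static void main(String[] args) {
        // 5 KB Customer object fetched 1000 times per second:
        // 5 * 8 * 1000 = 40,000 kbps = 40 Mbps for this one flow
        double customerLookups = flowMbps(5, 1000);
        System.out.printf("Customer lookups: %.0f Mbps%n", customerLookups);
        // Twenty-five similar flows already saturate a 1 Gbps link
        System.out.printf("25 similar flows: %.0f Mbps%n", 25 * customerLookups);
    }
}
```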
Bandwidth constraints compound when combined with the latency fallacy:
- Large payloads not only consume bandwidth but take longer to transmit, increasing end-to-end latency
- A 1 MB response over a 10 Mbps connection requires 800 ms just for data transfer, on top of network latency (worked through in the sketch after this list)
- Streaming large datasets between services can exhaust network capacity, starving other services of bandwidth and creating cascading performance degradation
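Because transmission time adds to propagation delay, end-to-end response time is roughly round-trip latency plus payload size divided by bandwidth. The sketch below uses an illustrative 50 ms round trip, an assumed figure rather than one from the text.

```java
public class EndToEndTime {
    // Approximate response time in milliseconds: network round trip plus
    // transmission time (payload bits / link bits per second). Ignores
    // server processing time and protocol overhead.
    static double responseMillis(double rttMillis, double payloadMegabytes, double linkMbps) {
        double transferMillis = payloadMegabytes * 8 / linkMbps * 1000;
        return rttMillis + transferMillis;
    }

    public static void main(String[] args) {
        // 1 MB over 10 Mbps with an illustrative 50 ms round trip:
        // 50 ms latency + 800 ms transfer = 850 ms total (transfer dominates)
        System.out.printf("1 MB @ 10 Mbps: %.0f ms%n", responseMillis(50, 1, 10));
        // A trimmed 5 KB payload on the same link: latency dominates instead
        System.out.printf("5 KB @ 10 Mbps: %.0f ms%n", responseMillis(50, 0.005, 10));
    }
}
```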
Addressing this fallacy requires deliberate data transfer strategies:
- Use minimal DTOs that contain only required fields rather than passing entire domain objects
- Implement compression to reduce payload sizes (this and minimal DTOs are sketched after this list)
- Cache frequently accessed data locally to avoid repeated transfers
- Use pagination and lazy loading for large datasets
- Employ bulk operations to reduce round-trip overhead
- Design asynchronous message-based communication to decouple producers and consumers, smoothing bandwidth usage over time rather than creating spikes
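To make the first two strategies concrete, here is a minimal sketch contrasting a full domain object with a trimmed DTO and compressing a batched payload with the JDK’s built-in GZIPOutputStream. All class and field names are hypothetical, and a real service would use a JSON library rather than hand-built strings.

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.util.zip.GZIPOutputStream;

public class PayloadTrimming {
    // Hypothetical full domain object: everything the Customer service knows.
    record Customer(String id, String name, String email, String address,
                    String phone, String preferences, String auditHistory) {}

    // Minimal DTO: only the fields the Order service actually needs.
    record CustomerSummary(String id, String name) {}

    static CustomerSummary toSummary(Customer c) {
        return new CustomerSummary(c.id(), c.name());
    }

    // Gzip a payload using the JDK's built-in GZIPOutputStream.
    static byte[] gzip(String payload) throws IOException {
        ByteArrayOutputStream buffer = new ByteArrayOutputStream();
        try (GZIPOutputStream gz = new GZIPOutputStream(buffer)) {
            gz.write(payload.getBytes(StandardCharsets.UTF_8));
        }
        return buffer.toByteArray();
    }

    public static void main(String[] args) throws IOException {
        // Compression pays off on larger payloads: a batch of orders with
        // repetitive field names compresses very well. (Tiny payloads can
        // actually grow, since gzip adds fixed header overhead.)
        String order = "{\"id\":\"o-1\",\"customerId\":\"c-1\",\"total\":42.0}";
        String batch = "[" + (order + ",").repeat(999) + order + "]";
        System.out.printf("batch: %d bytes, gzipped: %d bytes%n",
                batch.length(), gzip(batch).length);
    }
}
```

In HTTP-based systems, the standard Accept-Encoding and Content-Encoding headers let clients and servers negotiate compression transparently, so it rarely needs to be hand-rolled per service.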
Why This Matters
This fallacy directly impacts system performance, cost, and scalability:
- Cloud providers charge for data transfer—both egress (data leaving their network) and inter-region transfers
- A chatty architecture that transfers gigabytes of data between services daily can incur thousands of dollars in monthly bandwidth costs (estimated in the sketch after this list)
- Worse, bandwidth exhaustion causes performance bottlenecks that no amount of CPU or memory scaling can resolve
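The cost side is straightforward to estimate. The sketch below assumes an illustrative egress rate of $0.09 per GB; actual pricing varies by provider, region, and volume tier, so treat the rate as a placeholder.

```java
public class EgressCost {
    // Illustrative egress rate in dollars per GB; real rates vary by
    // provider, region, and volume tier, so treat this as an assumption.
    static final double DOLLARS_PER_GB = 0.09;

    static double monthlyDollars(double gigabytesPerDay) {
        return gigabytesPerDay * 30 * DOLLARS_PER_GB;
    }

    public static void main(String[] args) {
        // The 40 Mbps customer-lookup flow above is 5 MB/s sustained:
        // 5 * 86,400 / 1000 = 432 GB per day if it crosses a billed boundary
        double gbPerDay = 5.0 * 86_400 / 1000;
        // 432 GB/day * 30 days * $0.09/GB is roughly $1,166/month for one
        // flow; a handful of similar flows reaches thousands of dollars
        System.out.printf("%.0f GB/day -> $%.0f/month%n", gbPerDay, monthlyDollars(gbPerDay));
    }
}
```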
Understanding bandwidth constraints forces architects to make trade-offs when designing distributed systems:
- Monolithic architectures avoid this problem entirely because in-process method calls transfer data through shared memory at gigabytes per second, essentially “infinite” compared to network bandwidth
- Distributed architectures must carefully manage what data crosses service boundaries and how often
This fallacy particularly affects systems that process media, analytics, or large datasets:
- Video streaming services, data pipelines, and machine learning platforms must architect around bandwidth constraints, typically by:
- Pre-positioning data near compute resources
- Using content delivery networks (CDNs)
- Accepting higher latency in exchange for reduced bandwidth consumption
- For these domains, bandwidth is a primary operational characteristic that shapes the entire architecture
Related Concepts
- Fallacies-of-Distributed-Computing — The complete set of eight fallacies this belongs to
- Fallacy-Latency-Is-Zero — Related fallacy about network delay; interacts with bandwidth limitations
- Fallacy-The-Network-Is-Reliable — Related fallacy about network failure assumptions
- Fallacy-Transport-Cost-Is-Zero — Related fallacy about the cost of data transfer
- Monolithic-vs-Distributed-Architectures — The architectural decision this fallacy impacts
- Operational-Characteristics — Performance, cost, and scalability are affected by bandwidth
- Trade-Offs-and-Least-Worst-Architecture — Bandwidth costs exemplify architectural trade-offs
- Microservices-Architecture-Style — Style requiring careful bandwidth management
Sources
- Richards, Mark and Neal Ford (2020). Fundamentals of Software Architecture: An Engineering Approach. O’Reilly Media. ISBN: 978-1-492-04345-4.
- Chapter 9: Foundations
- Discusses the Fallacies of Distributed Computing and their architectural implications
- Available: https://www.oreilly.com/library/view/fundamentals-of-software/9781492043447/
- Deutsch, Peter (1994-1997). “The Eight Fallacies of Distributed Computing.” Originally articulated at Sun Microsystems.
- Third fallacy in the original list
- Identified through observing repeated bandwidth exhaustion in distributed systems
- Widely referenced in distributed systems literature
Note
This content was drafted with assistance from AI tools for research, organization, and initial content generation. All final content has been reviewed, fact-checked, and edited by the author to ensure accuracy and alignment with the author’s intentions and perspective.