Why Backbone Communication Links Are Becoming More Expensive

January 23, 2026

While the whole world is talking about AI and news feeds are packed with announcements like “Nano Banana,” far less inspiring discussions are taking place behind the scenes of the industry. While consumers debate new possibilities, providers are increasingly confronted with growing constraints and rising traffic costs. The problem has become so acute that no one is eager to discuss it openly with end users.

Yet indirect signs suggest that the structural imbalance has already taken hold, and that the spiral of rising costs is only beginning to turn. It is important to understand that this situation cannot be viewed as an isolated case. It is a systemic condition of the entire industry, and every consumer has felt its effects to one degree or another. Only a few, however, have recognized the real cause of what is happening.

To untangle this knot, we need to dive into the past and trace the stages at which key industry events occurred, as well as analyze their consequences. This will provide a key to understanding current processes and allow us to form a fairly concrete forecast for the coming years.

«At 3HCloud, we observe this shift not as an abstract industry trend, but as a daily operational reality. Over the past few years, demand for bandwidth has been growing faster than both customer expectations and the economic models behind “cheap” connectivity. Businesses need more throughput, lower latency, and more consistency - yet react painfully to any sign of higher pricing or degraded network quality caused by congestion or overselling.»

From the very beginning of the internet’s development, the primary metric was how many megabits could be transmitted for one dollar. Starting in the early 1990s, investors poured enormous sums into telecommunications companies, which in turn built infrastructure on a colossal scale. In addition to network nodes, thousands of kilometers of fiber-optic backbone cables were laid “for the future.” All of this naturally drove prices down: high-speed communication links became cheaper and more accessible year after year.

This came to an end with the dot-com bubble crash. At the same time, it became clear that telecommunications companies had literally “buried” millions of dollars in the ground that generated no immediate revenue. Thousands of backbone fibers were recorded simply as “dark fiber” and were effectively unused. Nevertheless, practice showed that this was a sound long-term investment.

A major turning point occurred around 2010, driven by the exponential growth of traffic. Rising demand for backbone capacity pushed prices upward, and providers had little choice: business growth required increased capacity, which inevitably meant expanding throughput. Files grew larger, web services heavier, and the rapid development of mobile internet multiplied consumption many times over.

At a certain point it became clear that, at such growth rates, existing backbones would simply no longer suffice. The previously stalled construction flywheel began spinning again. But laying backbone cables is not only extraordinarily expensive - it is also extremely slow. Permits from regulatory authorities, compliance with international law, contractor selection - all of this is complex, time-consuming, and often takes years. This is especially true for cables laid not only on land, but also along the seabed.

Key factors

A general trend emerged: the disappearance of small players from the market. Large providers could withstand the high capital expenditures required for construction and turn new lines into profitable investments. For small local providers this was impossible; they had to negotiate leases with the large ones. The latter could confidently dictate their terms, which often proved onerous.

Ultimately, this usually ended in standard scenarios: acquisition of small companies by large ones, or outright bankruptcy. However, there was another serious factor that had a global impact on the entire industry. Backbone cables are long-lived, but they still require maintenance, and sometimes complete replacement. This process is complex and quite costly.

Another factor was the rise in electricity prices. It is important to understand that any backbone also implies hundreds of supporting devices: channel multiplexing equipment, optical regenerators, and networking hardware in data centers consume significant amounts of power. Even a small increase in the cost of a kilowatt-hour translates into millions of dollars in additional operating expenses. These costs cannot be reasonably optimized - they are fundamental to backbone operation.
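
A rough, back-of-the-envelope illustration of the scale involved; the number of amplifier sites, per-site power draw, and tariff increase below are assumptions, not figures from any particular operator:

```python
# Illustrative estimate of how a small electricity price increase scales
# across backbone support equipment. All input figures are assumptions.

amplifier_sites = 2_000          # in-line amplifier/regenerator huts along the routes
avg_power_kw_per_site = 5.0      # average continuous draw per site, kW
hours_per_year = 24 * 365

price_increase_per_kwh = 0.02    # +2 cents per kWh

extra_annual_cost = (
    amplifier_sites * avg_power_kw_per_site * hours_per_year * price_increase_per_kwh
)
print(f"Extra annual cost: ${extra_annual_cost:,.0f}")
# ~$1.75M per year from a 2-cent/kWh increase, before counting data-center equipment
```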

It is also worth noting the introduction of increasingly strict regulatory requirements by governments. States are more frequently demanding data and traffic localization, redundant connectivity, and compliance with resilience requirements. From an economic perspective, this results in a multiple increase in capital expenditures without proportional revenue growth. A backup link generates no income, yet must exist.

The same applies to specialized traffic-monitoring equipment. Formally, it is used to support law enforcement, but in practice it often becomes a tool for suppressing freedom of speech. Providing connectivity and power for such equipment falls on telecom operators, who in turn pass these costs on to end users.

«Backbone connectivity is no longer a neutral commodity. High capital costs, maintenance of aging infrastructure, rising energy prices, and regulatory pressure systematically favor large operators and interfere with smaller ones. For end users, this translates into weaker competition, fewer real choices, and pricing that increasingly reflects structural constraints rather than service differentiation.»

Traffic structure

Alongside the factors described above, backbone networks have faced fundamental changes. In the past, they primarily carried “human internet” data: web pages, scripts, email, and streaming media. Today, the majority of the load consists of service traffic. Synchronization with cloud services, distribution of machine-learning models, data replication, and data collectors all generate massive volumes of traffic that are largely insensitive to cost in the classical sense.

A typical cloud provider, when faced with even a modest increase in the cost per megabit, does not attempt to optimize these expenses - it simply passes them further down the chain. For end users, this creates a genuine cognitive dissonance: they are accustomed to the internet becoming faster and cheaper. Providers, meanwhile, feel growing pressure from rising costs in a market that is unwilling to accept higher prices. As a result, the cost of backbone traffic increasingly becomes disguised in various ways.

The main thing to keep in mind is that a gigabit of bandwidth isn’t a universal unit. Its cost varies dramatically depending on many factors: from geography and traffic profile to delivery guarantees. A gigabit delivered at a major Internet exchange hub, consumed asymmetrically and cached via CDNs, is fundamentally different from a gigabit delivered to a remote region, used symmetrically, and routed through multiple transit networks.

The cumulative influence of latency guarantees, redundancy requirements, peak-to-average ratio constraints, and chains of upstream dependencies drives service costs up sharply, to the point where identical nominal speed tiers can differ in price by an order of magnitude within the same country. The disparity is often made worse by pricing models that do not transparently reflect these underlying infrastructural factors, leaving consumers to shoulder costs they cannot easily quantify or compare.
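
A minimal sketch of how such factors compound; the base price and multipliers are hypothetical, chosen only to illustrate the spread rather than taken from any real price list:

```python
# Hypothetical cost model: the same nominal gigabit, priced with multiplicative
# factors for geography, traffic profile, and delivery guarantees.
# All base prices and multipliers are illustrative assumptions.

def monthly_cost_per_gbit(base_price: float, geography: float, symmetry: float,
                          transit_hops: float, sla: float) -> float:
    """Return an illustrative monthly cost for 1 Gbit/s of delivered capacity."""
    return base_price * geography * symmetry * transit_hops * sla

# Gigabit at a major exchange point: asymmetric, CDN-cached, best effort.
ix_gigabit = monthly_cost_per_gbit(base_price=100, geography=1.0,
                                   symmetry=1.0, transit_hops=1.0, sla=1.0)

# Gigabit in a remote region: symmetric, several transit networks, latency SLA.
remote_gigabit = monthly_cost_per_gbit(base_price=100, geography=3.0,
                                       symmetry=1.8, transit_hops=2.0, sla=1.5)

print(ix_gigabit, remote_gigabit)   # 100.0 vs 1620.0 - an order of magnitude apart
```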

The most obvious manifestation is the appearance of “unlimited” plans with asterisked conditions: once a certain threshold is exceeded, access speed is significantly reduced. This artificial limitation helps curb traffic volumes. Another common measure is asymmetric plans - for example, a download speed of 1 Gbit/s paired with an upload speed of only 250 Mbit/s. This allows significant savings without greatly affecting most users. In many cases such limits go unnoticed, while providers gain the ability to pack more users into a single fiber.

These constraints are aimed not at speed per se, but at keeping consumption predictable. Traffic shaping, fair-use policies, asymmetric upload/download ratios - all of these mechanisms exist to protect the provider from subscribers who turn residential access into quasi-backbone usage. Without such constraints, even a small number of heavy users could significantly increase the cost of an entire network segment.

A particularly problematic factor in today’s networks is that many subscribers route their ordinary home traffic through VPN servers in other countries. This puts excessive load on international trunk links and creates traffic patterns that backbone networks were never designed for. Instead of traffic remaining largely within national or regional boundaries, where peering is cheap and capacity is relatively abundant, it is artificially forced through long international routes.

At first glance, the situation looks strange: backbone technologies are evolving and advertised speeds keep rising, yet users are losing access to truly unlimited plans while seeing no proportional reduction in prices. The key reason is the difference between peak technical capacity and economically sustainable average usage.

Modern backbones work like banks: their plans are sold on the assumption that not all users will consume maximum bandwidth simultaneously. As long as traffic patterns remain bursty, this model works. However, once a growing share of subscribers begins to generate constant, high-volume traffic, the economic assumptions collapse.
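
A simple sketch of that collapse, using assumed subscriber counts and average per-subscriber rates:

```python
# Illustrative oversubscription model: a 100 Gbit/s uplink shared by 2,000
# subscribers on nominal 1 Gbit/s plans. All figures are assumptions.

uplink_gbit = 100
subscribers = 2_000   # 20:1 nominal oversubscription

def avg_load_gbit(share_heavy: float,
                  heavy_avg_gbit: float = 0.5,    # near-constant machine traffic
                  light_avg_gbit: float = 0.02):  # bursty human usage, low average
    heavy = subscribers * share_heavy * heavy_avg_gbit
    light = subscribers * (1 - share_heavy) * light_avg_gbit
    return heavy + light

for share in (0.00, 0.02, 0.05, 0.10):
    load = avg_load_gbit(share)
    print(f"{share:.0%} heavy users -> {load:.0f} Gbit/s "
          f"({load / uplink_gbit:.0%} of the uplink)")
# 0% -> 40, 2% -> 59, 5% -> 88, 10% -> 136 Gbit/s: a small share of constant-rate
# subscribers exhausts the headroom the oversubscription model depends on.
```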

A less obvious form of masking is the widespread deployment of CDNs. In many cases, content caching, even accounting for substantial infrastructure costs, can drastically reduce backbone load. In practice, any means that squeeze more out of the “pipe” without expanding it are welcome.
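
A rough illustration of the offload effect, with assumed demand and cache-hit figures:

```python
# Illustrative effect of CDN caching on backbone load for a content-heavy service.
# The demand figure and hit ratios are assumptions, not measurements.

peak_demand_gbit = 400          # total peak demand from an operator's subscribers

for cache_hit_ratio in (0.0, 0.5, 0.8, 0.95):
    backbone_gbit = peak_demand_gbit * (1 - cache_hit_ratio)
    print(f"hit ratio {cache_hit_ratio:.0%}: {backbone_gbit:.0f} Gbit/s "
          "crosses the backbone")
# 0% -> 400, 80% -> 80, 95% -> 20 Gbit/s: the same "pipe" serves far more users
# when popular content is served from caches inside the operator's network.
```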

Another important factor worth mentioning is internet reselling by providers. This is not inherently problematic. Many resellers invest in traffic engineering, peering optimization, caching, and customer segmentation, selling available capacity in line with actual consumption patterns.

Problems arise when ordinary reselling becomes purely financial arbitrage: buying cheap transit and overselling it without regard for geography, sustained usage, or redundancy. This amplifies congestion, lowers service quality, and accelerates price inflation upstream.

«Unlimited access is economically incompatible with machine-generated, sustained traffic. Providers are forced to introduce asymmetric speeds, traffic shaping, and fair-use policies to preserve oversubscription models.»

The Shannon limit

Increasing the capacity of backbone links would be impossible without DWDM (dense wavelength-division multiplexing). Its essence lies in carrying multiple data streams through a single fiber, each transmitted by an optical transceiver on its own wavelength. All these signals travel through one cable simultaneously, enabling extreme aggregate throughput, such as 10-20 Tbit/s.
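
The headline figures follow directly from the number of wavelength channels multiplied by the per-channel line rate; a minimal sketch with typical, assumed values:

```python
# Illustrative DWDM arithmetic: aggregate fiber throughput is simply the number
# of wavelength channels times the per-channel line rate. The channel counts and
# transceiver rates below are typical values, assumed for the example.

def fiber_capacity_tbit(channels: int, per_channel_gbit: int) -> float:
    """Aggregate throughput of one fiber in Tbit/s."""
    return channels * per_channel_gbit / 1000

# 96-channel C-band system with 100G coherent transceivers
print(fiber_capacity_tbit(96, 100))   # 9.6 Tbit/s

# The same channel plan with 200G transceivers
print(fiber_capacity_tbit(96, 200))   # 19.2 Tbit/s -> the "10-20 Tbit/s" range
```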

Modern DWDM systems, combined with high-quality new backbone lines, can deliver 20-40 Tbit/s per fiber, but this already requires much more expensive equipment. In laboratory conditions, significantly higher figures have been achieved. For example, in 2024 Japanese researchers set a world record of 402 Tbit/s over a single, fairly standard fiber.

In real-world conditions, however, even a quarter of that speed can be considered a practical limit, and modern backbones are approaching it rapidly. Beyond this point, economics becomes the decisive factor. Each additional gigabit costs more than the previous one, and in many cases it is far more cost-effective to deploy additional links than to continue increasing density.

Real-world operation reveals many non-obvious nuances familiar only to network engineers. For instance, adding a new DWDM channel can unexpectedly degrade the performance of existing ones. Moreover, real backbone routes often consist of fibers manufactured in different years, using different types of amplifiers, splices, and connectors. The result is an uneven signal-to-noise ratio (SNR) across the spectrum.

Naturally, these effects are addressed using all available methods: adaptive modulation modules, reduced per-channel power, and adaptive compensation profiles. All of this helps extract the maximum from existing backbones, but such solutions are always compromises.
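
The hard ceiling behind these compromises is the Shannon limit: the capacity of a channel grows only logarithmically with its signal-to-noise ratio, C = B × log2(1 + SNR). A minimal sketch with illustrative bandwidth and SNR values:

```python
import math

def shannon_capacity_gbit(bandwidth_ghz: float, snr_db: float) -> float:
    """Shannon limit C = B * log2(1 + SNR), for a single polarization."""
    snr_linear = 10 ** (snr_db / 10)
    return bandwidth_ghz * math.log2(1 + snr_linear)   # GHz * bit/s/Hz = Gbit/s

# A 50 GHz DWDM channel at different signal-to-noise ratios (illustrative values)
for snr_db in (10, 15, 20, 25):
    print(f"SNR {snr_db} dB -> up to {shannon_capacity_gbit(50, snr_db):.0f} Gbit/s")
# 173, 251, 333, 415 Gbit/s: each +5 dB (about 3x more signal power) buys only
# a roughly fixed ~80 Gbit/s, because capacity grows logarithmically with power.
```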

«Each additional gigabit of backbone throughput costs more than the previous one, and returns diminish rapidly. This makes extreme speeds possible only in carefully engineered and premium scenarios, while mass-market connectivity becomes increasingly constrained despite impressive headline numbers.»

The role of hyperscalers

In addition to traditional Tier-1 operators, the cost of backbone connectivity is increasingly influenced by the largest corporations, primarily the “big four”: Google, Microsoft, Amazon, and Meta. Over recent decades, they have invested tens of billions of dollars in building their own network infrastructure, including core networks comparable in scale to the largest backbones on the planet. Some even own their own subsea cables.

These corporations do not “rent the internet”. They have built their own. However, it is used as internal infrastructure rather than a commercial product. At a certain point, hyperscalers came to own backbone networks that surpass those of Tier-1 operators by virtually every metric. This has placed strong pressure on the market, as hyperscalers have gradually attracted the largest customers.

The result is a paradoxical situation: traffic volumes continue to grow, yet the backbone transit market shows steadily declining revenues. Where operators once built backbones and easily sold capacity to major content sources, that model has lost relevance. Today, content sources themselves own the transport, leaving operators with only the crumbs of servicing the “residual” internet.

For customers seeking 10 or 20 Gbit/s connectivity, a hard trade-off inevitably emerges. One can choose at most two of the following:

  • high speed,
  • guaranteed quality,
  • simplicity of access.

High speed and simplicity lead to best-effort connections with strict usage policies. High speed and quality require dedicated links, redundant channels, and long-term contracts. Simplicity and quality cap achievable throughput at economically reasonable levels. This is not a temporary limitation. It is a direct consequence of infrastructure costs, physical constraints, and risk distribution.

«Hyperscalers have exited the public internet economy by building private global backbones. The remaining “open internet” becomes harder to sustain, and its costs are redistributed downward to smaller providers and end users.»

Future trends

At the time of writing, in late 2025, we have reached a point where the global public network infrastructure has, for the first time in several decades, become a bottleneck. Several cycles that once catalyzed industry growth have ended simultaneously.

Old investments in backbones, even the largest ones, have run their course. Now it is time for equally significant investments in both maintaining existing lines and building new ones. This coincides with rising electricity prices, regulatory pressure, and the explosive growth of machine-generated traffic. The resulting price increases are not a temporary spike or a localized crisis: they represent a long-term systemic trend.

Market stratification is likely to continue at an accelerated pace. Only large players with their own backbones and direct access to capital will remain relatively comfortable. They will be able to maintain current links and likely continue expanding them. Local providers, however, will face far more challenging conditions, with “last-mile” revenues shrinking to minimal margins.

End users will be hit hardest. A sharp increase in internet access prices is unlikely, but over time more and more restrictions will appear, both hidden and explicit, such as the abandonment of unlimited plans. These will become the norm even in countries that are currently actively expanding fiber networks, increasing access speeds while lowering prices.

«Internet access will not suddenly become much more expensive, but it will become more restricted. Unlimited plans will fade, symmetric access will remain costly, and guarantees for latency or redundancy will sharply increase prices. The industry is entering a phase where growth continues, but freedom and simplicity steadily decline.»
