The artificial intelligence boom is built on a series of bets—on the belief that the logic of exponential growth can define the future.
Exponential growth is not just an increase in numbers but an acceleration of that increase. The tech industry is convinced that these ever-steepening curves will simultaneously improve technology itself, boost human demand for its products, and expand society’s capacity to meet AI’s growing appetite for computing power, energy, and data. Trillions of dollars have already been invested in this idea—a wager made not by individual companies but, in effect, by the entire world.
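To see concretely what "an acceleration of the increase" means, here is a minimal sketch contrasting linear growth with doubling; the starting values and rates are arbitrary illustrations, not industry figures:

```python
# Illustrative only: contrast linear growth with exponential (doubling) growth.
# Starting value, step, and rate are arbitrary assumptions, not real data.

def linear(start: float, step: float, periods: int) -> list[float]:
    """Add a fixed amount each period."""
    return [start + step * t for t in range(periods + 1)]

def exponential(start: float, rate: float, periods: int) -> list[float]:
    """Multiply by a fixed factor each period; the increase itself grows."""
    return [start * rate**t for t in range(periods + 1)]

for t, (lin, exp) in enumerate(zip(linear(1, 1, 10), exponential(1, 2, 10))):
    print(f"period {t:2d}: linear {lin:5.0f}   doubling {exp:5.0f}")
```

After ten periods the linear series has crept from 1 to 11, while the doubling series has reached 1,024; a few more doublings and the gap becomes absurd, which is precisely the dynamic the industry is betting on.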
AI developers believe the technology can fuel its own advancement, creating a "virtuous cycle." Skeptics, however, warn that another round of acceleration could end in a collision with a wall, followed by a global financial crisis. In theory, exponential curves can continue indefinitely; in reality, each eventually hits a limit. Moore's Law, which defined the pace of semiconductor progress, ran up against physical constraints: heat dissipation and feature sizes approaching the atomic scale. Metcalfe's Law, which ties a network's value to the square of its number of users, collided with the limits of human society and the imperfections of its institutions. Artificial intelligence, unless it proves an exception to history's rule, will one day reach its own boundary.
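For reference, the two laws can be stated roughly as follows (the constants are stylized rules of thumb, not exact figures):

$$ \text{Moore:}\quad T(t) \approx T_0 \cdot 2^{t/2} \qquad \text{(transistor count doubling roughly every two years)} $$

$$ \text{Metcalfe:}\quad V(n) \propto n^2 \qquad \text{(a network's value growing with the square of its } n \text{ users)} $$

Both held for decades; neither held forever.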
Optimists believe that such an exception is possible: the moment AI begins to build itself. They anticipate a "takeoff moment," the point at which gains in capability and reliability tip into a phase of self-rewriting models, closing the loop of self-improvement. From this notion have sprung both utopian and apocalyptic visions of the future.
Yet exponential equations describe closed, predictable systems. The world we inhabit is different—it is chaotic, interdependent, and unpredictable. “All things are deeply intertwingled,” observed technology visionary Ted Nelson half a century ago. “People keep pretending they can arrange everything in hierarchies, categories, and sequences, but in fact they can’t.”
The machine-learning models that underpin AI are themselves an interconnected network. But the biggest players, OpenAI, Google, and Anthropic, have chosen the path of brute force. They rely on so-called "scaling laws," which link model size to performance gains. Their executives believe that if they keep building the same systems, only larger, the models will inevitably get better, until they cross some mysterious threshold beyond which self-improvement begins.
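The "scaling laws" in question are empirical, not mathematical theorems. One widely cited form, from Kaplan et al. (2020), says that a model's test loss falls as a power law in its parameter count:

$$ L(N) \approx \left(\frac{N_c}{N}\right)^{\alpha_N} $$

where $N$ is the number of parameters and $N_c$, $\alpha_N$ are fitted constants (the original study reported $\alpha_N \approx 0.076$). Note what a small exponent implies: each doubling of model size shaves off only a fixed few percent of loss, so even while the law holds, every further gain costs more than the last. That is part of why the bet on sheer scale is so expensive.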
History shows that every exponential acceleration eventually levels off. The only question is how long the AI curve can hold and how many “doublings” the technology can endure before it reaches its limit.
That limit could be environmental: data centers consuming more energy and water than nations can supply, or climate disruptions provoking a backlash against the technology's wastefulness. It could be financial: a sharp market collapse triggered by external shocks such as war, a pandemic, or a political crisis, or by failures within the industry itself. And finally, the limit may simply be disillusionment, if the promises of PhD-level personal tutors, the conquest of aging, abundance, and the colonization of Mars fail to materialize.
Silicon Valley was built on the capitalist idea of infinite growth. But growth without fair distribution of its benefits becomes a source of corruption and social unrest, while growth without ethical grounding turns into exploitation and chaos. “Growth for the sake of growth is the ideology of the cancer cell,” said environmentalist Edward Abbey.
The AI industry has already absorbed one "bitter lesson," in researcher Richard Sutton's phrase: instead of training machines on hand-structured human knowledge, it chose to scale up systems capable of learning on their own. But if it continues to expand without purpose or direction, it will face new, and far more painful, lessons.
The central question remains the same: are we ready to believe that "this time is different," or will we remember the old rule that no one has yet managed to escape: everything that grows eventually falls.