AI as Infrastructure: The Next Phase Is Stewardship
Artificial intelligence has captured the attention of markets, media, startup culture, and public discourse. It is widely framed as the next great machine of production, a revolutionary factory poised to redefine work and generate unprecedented economic output. That framing is understandable, but it is fundamentally wrong.
AI is not a factory. It is not a living thing trapped in a machine. It is a tool.
More precisely, the AI at the center of this moment is the large language model. Its output is language: words, relationships, context, synthesis. At its core, it is a workflow automation tool that learns from human input and reflects that thinking back to us. AI does not think independently. It mirrors us. Its intelligence is not autonomous. It is derivative of human language, human values, human intent.
This distinction matters because it collapses much of the mythology surrounding AI. When viewed clearly, AI is not here to replace humanity or eliminate meaningful work. It is here to remove the vast amount of work that never should have existed in the first place.
Good tools simplify. They reduce friction. They eliminate unnecessary steps and return time and mental space to the user. Less, when done well, produces more thinking. AI is an extremely powerful tool precisely because it operates at the level of language, which is where modern work has become most bloated. Documentation, reporting, compliance theater, performative productivity, duplicated workflows, endless systems talking past one another. Much of what we call work today is not creation. It is maintenance of unnecessary complexity.
The internet accelerated this problem. The early promise of connectivity unleashed extraordinary innovation, but it also produced redundancy at massive scale. Dashboards multiplied. Apps stacked on apps. Tools designed to solve narrow problems persisted long after their usefulness, accumulating into sprawling digital ecosystems that demand constant attention simply to justify their existence.
AI does not represent the next phase of building more of this. It represents the opportunity to clean it up.
A more accurate way to understand AI is as a combination of a vacuum cleaner, a sewing machine, and a library. It can identify waste, remove redundancy, and stitch together the best parts of existing systems into something simpler and more coherent. It can automate repetitive documentation, reconcile fragmented workflows, and allow people to return to the work they originally set out to do when they sat down in front of a screen.
This is why framing AI as an overnight cash register is misguided. AI will generate enormous value, but not primarily through novelty or spectacle. Its value will come from subtraction. From removing bullshit. From restoring clarity. From telling the truth about how systems actually function.
Knowledge itself behaves like infrastructure. Libraries are not owned in the traditional sense. They are stewarded. Roads, bridges, sidewalks, and power grids are not speculative assets. They are funded because societies depend on them. The internet functions the same way. No one truly owns it. It is a medium through which information flows, sustained by collective participation and governance.
AI belongs in this category. It is cognitive infrastructure.
This is why openness is not an ethical nicety but a structural advantage. The most effective AI systems will not be the most closed. They will be the most open. Openness allows systems to learn from broader context, discern truth from distortion, and improve through exposure rather than insulation. Attempts to build moats around fundamental knowledge misunderstand how gravity works. The strongest systems attract participation. They do not wall themselves off.
Charging tolls for access to essential infrastructure has always produced the same outcome. People build alternatives, find workarounds, or abandon the system entirely. Knowledge systems become indispensable by being trustworthy and useful, not by being exclusive.
This framing has direct implications for economics. We already know how societies fund infrastructure. We do not rely on speculative equity markets to build bridges, libraries, or power grids. We issue bonds: long-horizon instruments designed to reward patience, stability, and stewardship rather than volatility. These investments are intentionally boring because the systems they support must endure.
AI, if treated honestly as infrastructure, belongs in this funding model. The long-term demands of power generation, grid expansion, maintenance, and governance require stable capital, not speculative hype. This naturally selects for the right incentives and filters out those seeking quick returns at the expense of long-term trust.
Much of the fear surrounding AI stems from misunderstanding and delay. The gap between public exposure and full capability has allowed worst-case narratives to flourish. But what AI will erode are not meaningful jobs. It will erode institutions and workflows built on unnecessary complexity, dishonesty, and performative process. These systems are already fragile. AI simply exposes that fragility faster.
This is not destruction. It is correction.
There is also a deeply human dimension to this shift. Modern technology overwhelms attention and distorts perception. We consume more information than we can process, much of it contradictory or false. Over time, this creates cognitive dissonance and exhaustion. At a fundamental level, mental distress often arises when internal truth collides with external distortion, when what we intuitively know to be true is contradicted by the systems we are forced to engage with daily.
AI, used with integrity, has the potential to reverse this dynamic. By stripping away noise, surfacing consistency, and reinforcing truth, it can help realign conscious and subconscious understanding. Honest systems produce calmer minds. Reduced bullshit produces healthier people. Fulfillment follows when work once again aligns with meaning rather than maintenance of unnecessary systems.
For this to work, leadership and governance matter. Truth must be prioritized above growth theatrics. Accuracy above engagement. Stewardship above dominance. Oversight should be institutional, transparent, and grounded in first principles. Properly designed AI systems will be largely self-regulating, because systems built on honesty naturally filter out distortion. Regulation in this context is not about control. It is about validation and trust.
AI’s highest value is not in what it creates, but in what it removes. It advances us not by accelerating attention, but by protecting it. It allows us to move forward by looking backward, cleaning up what already exists, and freeing people to think, create, and live with greater clarity.
This is not a revolution.
It is a return to first principles.
-charles macmillan