This week, I was mentoring a team of US-based university students preparing to submit a paper for the 2025 ACM FAccT Conference (Conference on Fairness, Accountability, and Transparency). They’re diving into the complex world of intellectual property and AI governance in the United States—not the easiest of topics.
Their paper was dense, layered, and brilliant. It took me a few reads, highlighter in hand, to fully grasp everything they were arguing, and it triggered a great conversation.
In a team meeting we discussed innovation and the role that literacy played in advancing the First Industrial Revolution (1760-1840). We talked about how earlier, during the Age of Enlightenment (1685-1815), people were encouraged to think critically, to read and write, and to value science and education. The ability to share ideas, take notes, and build on others’ work became a foundation for the technological and social progress that followed. The Enlightenment period created the conditions for innovation to take hold and flourish.
Then one student asked, “Could we be seeing the end of innovation?” The question came at just the right moment.
We had been debating the tension between regulation and innovation—or, in the US context, deregulation and its impact on content creators. His question triggered a connection. I had never considered that by deregulating AI, we might actually be killing innovation, rather than fueling it.
The word innovation comes from the Latin innovare, meaning “to renew or alter.” It first appeared in English in the 15th century, usually to describe changes in religious or political practices. Over time, especially during the Industrial Revolution and the 20th century, it became more closely tied to progress in science, technology, and business.
Innovation comes from human curiosity, creativity, and our instinct to improve things. It requires human judgment to know when and how to let new ideas grow. We innovate by asking questions, testing ideas, learning from failure, and sharing knowledge. So what happens if we stop and let machines take over the work of thinking and experimenting?
At the Paris AI Summit in February 2025, US Vice President JD Vance warned that “excessive regulation” could hold back innovation, especially for smaller companies that can’t easily navigate compliance. Around the same time, Meta’s Chief AI Scientist, Yann LeCun, argued that “regulation can kill open source,” cautioning governments against moving too fast with legislation.
But what if the real threat to innovation isn’t regulation, but deregulation? What if the lack of oversight has led to an overdependence on tools like generative AI, which have entered our work and personal lives so rapidly that we begin to lose the very skills needed to innovate? What if, in the absence of regulation, we surrender our human agency—our drive to experiment, our ability to think critically, and our capacity to connect complex ideas into meaningful, tangible innovations?
A recent MIT study suggests that tools like ChatGPT might already be eroding some of our critical thinking skills. So that student’s question sits with me: What happens if we forget how to ask questions?
Could AI deregulation, and not AI regulation, be the death of innovation?