Saturday, November 23, 2024

AI regulation: People should be players, not pawns

Paul Graham of Y Combinator recently shared an anecdote that perfectly encapsulates the challenge of regulating artificial intelligence (AI). When he asked someone helping the British government with AI policy what they would regulate, the response was refreshingly honest: “I’m not sure yet.” As Graham noted, this might be the most intelligent thing anyone has said about AI regulation thus far.

AI creates a “pacing problem,” an idea Larry Downes explains in his book The Laws of Disruption: technology changes exponentially, but the corresponding social, economic and legal systems change only incrementally. Regulators are trying to govern Hogwarts with rules written for a Muggle school. Good luck stopping magical mischief with detention and a dress code.

Regulators also face the Collingridge Dilemma, the regulatory equivalent of being stuck between a rock and a hard place: when a technology is young, its consequences are hard to foresee, and by the time those consequences become clear, the technology is too entrenched to steer. A 2023 paper in Science, Technology & Human Values analysed 50 cases of emerging technologies and found that in 76% of cases, early regulation stifled innovation, while in 82% of cases, late regulation failed to adequately address societal impact.

Regulate too early, and you might accidentally outlaw the cure for cancer. Regulate too late, and you might find yourself in a Black Mirror episode.

Governments are aware of the need for regulation, but it is a tough job. A 2022 report by the Belfer Center at Harvard University found that only 18% of US federal agencies have employees with both technical and policy expertise in AI. A similar study by the AI Now Institute found that only 23% of government agencies across OECD countries have such expertise. This skills gap is likely just as wide elsewhere in the world.

The EU’s AI Act and the US’s proposed AI Foundation Model Transparency Act, which mandate several disclosures to increase the transparency of AI models, are welcome steps. But on their own, these measures are inadequate.

So, who can help? Big Tech? Can we count on it to self-regulate? That sounds a little like asking foxes to guard the henhouse. So far, Big Tech has cared little about societal polarization, disinformation on its platforms, or the ecological footprint of its inventions.

OpenAI, which has the word ‘open’ in its name, has repeatedly stated that it will not be transparent about most aspects of its flagship model, GPT-4. It is in it for profit. I would even argue that it doesn’t pay its fair share of taxes, but that’s a debate for another day.

What options does that leave us with? In my view, we should try to make this a fairer fight: empower ordinary citizens with tools to manage their data and control who has access to it, so that they can protect themselves and, if possible, profit.

The US Second Amendment protects individuals’ right to possess firearms and use them for purposes like self-defence within their homes. By the same logic, individuals should have the means to defend themselves against the misuse of their personal data. Call it digital self-defence.

Is this a radical solution? Not at all. Study after study points the same way.

The 2022 Gartner Privacy Survey reveals that 75% of consumers want more control over their data. A 2023 report by the Oxford Internet Institute argues that user-centric data governance models are essential to ensure that AI is developed and deployed in a way that respects user privacy and autonomy. 

A 2023 report by the World Economic Forum emphasizes the importance of digital identity solutions in enabling individuals to control their data and participate in the digital economy.

So, it’s not such a radical idea. And it’s nothing new. These solutions could take the self-sovereign identity (SSI) approach, in which users hold and present their own credentials, and could use zero-knowledge proofs (ZKPs), which let someone prove a claim (say, being over 18) without revealing the underlying data. These are matters of detail. A 2023 study in MIT Technology Review demonstrated that decentralized identity systems could reduce data breaches by up to 70% while giving users granular control over their information.
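To make the ZKP idea concrete, here is a minimal sketch of a textbook Schnorr-style proof of knowledge in Python. Everything here, from the toy group parameters to the function names, is my own illustration rather than anything from the studies above; a real credential system would use large, standardised groups from a vetted cryptographic library.

```python
import hashlib
import secrets

# Toy group parameters: NOT secure, for illustration only.
# Real systems use ~256-bit elliptic curves or 2048-bit+ modular groups.
p = 2039   # safe prime: p = 2q + 1
q = 1019   # prime order of the subgroup we work in
g = 4      # generator of the order-q subgroup (a quadratic residue mod p)

def fiat_shamir_challenge(*values: int) -> int:
    """Derive a non-interactive challenge by hashing the transcript."""
    data = b"|".join(str(v).encode() for v in values)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

def prove(x: int) -> tuple[int, int, int]:
    """Prove knowledge of x with y = g^x mod p, revealing only (y, t, s)."""
    y = pow(g, x, p)
    r = secrets.randbelow(q)          # one-time random nonce
    t = pow(g, r, p)                  # commitment to the nonce
    c = fiat_shamir_challenge(g, y, t)
    s = (r + c * x) % q               # response: x is blinded by r
    return y, t, s

def verify(y: int, t: int, s: int) -> bool:
    """Check g^s == t * y^c (mod p) without ever learning x."""
    c = fiat_shamir_challenge(g, y, t)
    return pow(g, s, p) == (t * pow(y, c, p)) % p

secret = 123                          # e.g. a credential attribute the user holds
assert verify(*prove(secret))         # verifier is convinced; x stays private
```

The verifier ends up convinced that the prover knows the secret, yet the transcript reveals nothing about the secret itself; that is the property an SSI wallet would rely on to prove an attribute without handing over the data.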

An EU pilot project, meanwhile, found that user-controlled data sharing increased willingness to participate in AI-driven services by more than 60%. It’s like having a personal bouncer for your data: you decide who gets in and who gets turned away.
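To picture what that bouncer might look like in code, here is a toy sketch of my own devising (not the EU pilot’s design): a user-held “data wallet” that releases a field only to a requester with a current, scoped grant, and lets the user revoke access at any time.

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta, timezone

@dataclass
class Grant:
    requester: str
    scope: str                # e.g. "email", "location"
    expires: datetime

@dataclass
class DataWallet:
    """Illustrative 'bouncer': the user holds the data and the guest list."""
    data: dict[str, str]
    grants: list[Grant] = field(default_factory=list)

    def grant(self, requester: str, scope: str, ttl_hours: int) -> None:
        expiry = datetime.now(timezone.utc) + timedelta(hours=ttl_hours)
        self.grants.append(Grant(requester, scope, expiry))

    def revoke(self, requester: str, scope: str) -> None:
        self.grants = [g for g in self.grants
                       if not (g.requester == requester and g.scope == scope)]

    def request(self, requester: str, scope: str) -> str | None:
        """Release the field only if an unexpired, matching grant exists."""
        now = datetime.now(timezone.utc)
        allowed = any(g.requester == requester and g.scope == scope
                      and g.expires > now for g in self.grants)
        return self.data.get(scope) if allowed else None

wallet = DataWallet(data={"email": "me@example.com"})
wallet.grant("news-app", "email", ttl_hours=24)    # on the guest list for a day
print(wallet.request("news-app", "email"))         # -> me@example.com
print(wallet.request("ad-broker", "email"))        # -> None: not on the list
```

The design choice worth noting is that consent here is scoped, time-limited and revocable by default, which is precisely what today’s blanket terms-of-service “consent” is not.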

The cost of implementing such systems is significant but manageable. While India’s Aadhaar digital identity programme cost $1 billion, a World Bank report estimates that the appropriate agencies could set up a global decentralized identity infrastructure for $25 billion.

That’s a fraction of the $7 trillion Sam Altman is asking for to reshape the chip industry to power his AI dreams. The point is to stop businesses from taking ever more of our data, only to misuse it under the cover of some dubious consent.

Regulating AI remains a challenge akin to nailing jelly to a wall. Providing citizens with tools to manage their data offers a pragmatic approach to mitigating risks and ensuring that AI development respects individual autonomy. 

As we navigate this rapidly evolving landscape, empowering users may be our best defence against the potential misuse of AI and our surest path to harnessing its benefits ethically and equitably. After all, in the high-stakes game of technological progress, it’s better to be the player than a pawn.
