November 19, 2024
7 min read

Should AI be Regulated? The Arguments For and Against

Adrien Book

Ever since OpenAI released ChatGPT into the wild in late 2022, the world has been abuzz with talk of generative artificial intelligence and the future it could create. Capitalism’s fanboys see the technology as a net positive: the logical continuation of the digital world, which has contributed to the creation of untold wealth… for a select few. Boomers, meanwhile, recall the best of ’80s sci-fi, and fear we may be well on our way to creating our own HAL / SHODAN / Ultron / SkyNet / GLaDOS.

These are the loud minorities. Most people presented with the possibilities offered by Generative Artificial Intelligence understand that technology is merely a tool, without a mind of its own. The onus is on users to “do good” with it. And if that is not possible because “good” is inherently subjective… then democratic governments need to step in and regulate.

How (and whether) this is to be done is still hotly debated. The European Union was first out of the gate with the proposed AI Act. It is an imperfect first draft, but it has the benefit of being a real attempt at managing a highly disruptive technology rather than letting tech billionaires call the shots. So, should AI be regulated? Below is a summary of the proposed law, along with the pros and cons of such regulation.


What is in the EU’s AI Act

The EU’s AI Act, a landmark framework for regulating artificial intelligence, was proposed by the European Commission in April 2021 and finalized through agreement by the European Parliament and the Council in December 2023. According to the European Commission, the Act is designed to address potential risks to citizens’ health, safety, and fundamental rights while offering clear requirements and obligations for developers and deployers regarding specific uses of AI. By adopting a balanced approach, the regulation also aims to reduce administrative and financial burdens for businesses, fostering innovation while ensuring accountability.

The AI Act adopts a risk-based approach, categorizing AI systems into different risk levels (a structure sketched in code below):

  • Prohibited AI Practices: Certain AI practices considered harmful and in contradiction with EU values, such as social scoring, are banned outright.
  • High-Risk AI Systems: These are subject to strict obligations, including requirements on training data and governance, technical documentation, transparency, and human oversight.
  • Limited-Risk AI Systems: These are subject to specific transparency obligations (users must, for instance, be told when they are interacting with a chatbot) so they can make informed decisions.
  • Minimal-Risk AI Systems: The vast majority of applications (spam filters, AI in video games…), which face no new obligations.

The Act also includes specific provisions for general-purpose AI models, imposing transparency and documentation obligations on their providers.
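To make the tiering concrete, here is a minimal, purely illustrative sketch of that triage in code. The tier names mirror the Act; the example use cases and the matching logic are assumptions for illustration, not legal guidance.

```python
# A toy sketch of the AI Act's risk-based triage, NOT legal advice:
# tier names mirror the regulation, but the example use cases and the
# lookup logic are illustrative assumptions only.
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "prohibited"  # banned outright (e.g., social scoring)
    HIGH = "high"              # strict obligations (documentation, oversight)
    LIMITED = "limited"        # transparency duties (e.g., chatbot disclosure)
    MINIMAL = "minimal"        # largely unregulated

# Hypothetical use-case-to-tier mapping, for illustration.
EXAMPLE_USE_CASES = {
    "social scoring of citizens": RiskTier.PROHIBITED,
    "cv screening for hiring": RiskTier.HIGH,
    "customer service chatbot": RiskTier.LIMITED,
    "spam filter": RiskTier.MINIMAL,
}

def classify(use_case: str) -> RiskTier:
    """Return the illustrative tier for a use case, defaulting to MINIMAL."""
    return EXAMPLE_USE_CASES.get(use_case.lower(), RiskTier.MINIMAL)

if __name__ == "__main__":
    for case in EXAMPLE_USE_CASES:
        print(f"{case!r} -> {classify(case).value}")
```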

While the AI Act entered into force in August 2024, its provisions apply in stages: bans on prohibited practices from February 2025, rules for general-purpose models from August 2025, and most remaining obligations from August 2, 2026, giving stakeholders time to adapt to the new requirements.

Why AI should not be regulated

Plenty has been written about the fact that tech billionaires say they want AI to be regulated. Let’s make one thing clear: that is a front. Mere PR. They do not want regulation, and if it comes, they want it on their own terms. Below are some of the best arguments presented by them and their minions over the past few months.

1. Stifling Innovation and Progress

The case could be made that regulations will slow down AI advancements and breakthroughs; that not allowing companies to test and learn will make them less competitive internationally. However, we have yet to see definitive proof that this is true. Even if it were, the question would remain: is unbridled innovation right for society as a whole? Profits are not everything. Maybe the EU will fall behind China and the US when it comes to creating new unicorns and billionaires. Is that so bad, as long as we still have social safety nets, free healthcare, parental leave and six weeks of holiday a year? If having all this, thanks to regulation, means a multi-millionaire cannot become a billionaire, so be it.

The domestic competitiveness argument is a lot more relevant to the discussion at hand: regulation can create barriers to entry (high costs, standards, or requirements imposed on developers or users) for new companies, strengthening the hand of incumbents. The EU has already seen this happen with the GDPR. Regulations will need to carve out a space for very small companies to experiment, something that is already being discussed at EU level. And if they are so small, how much harm can SMEs do anyway? (Given how quickly AI capabilities scale, that question is worth revisiting regularly.)

2. Complex and Challenging Implementation

Regulations relating to world-changing technologies are often too vague or broad to be applicable, which makes them difficult to implement and enforce across different jurisdictions. This is particularly true given the lack of clear standards in the field. After all, what are risks and ethics if not culturally relative?

This makes the balance between international standards and national sovereignty a particularly touchy subject. AI operates across borders, and regulating it requires international cooperation and coordination, which is complex given varying legal frameworks and cultural differences. This, at least, is what they will say.

There are, however, few voices calling for a single worldwide regulation. AI is (in so many ways) not the same as the atomic bomb, whatever the doomsayers calling for a “New START” approach may claim. The EU will have its own laws, and so will other world powers. All we can ask for is a common understanding of the risks posed by the technology, and limited cooperation to cover blind spots within and between regional laws.

3. Potential for Overregulation and Unintended Consequences

Furthermore, we know that regulation often fails to adapt to the fast-paced nature of technology. AI is a rapidly evolving field, with new techniques and applications emerging regularly. New challenges, risks and opportunities continuously appear, and regulators need to remain agile / flexible enough to deal with them. Keeping up with advancements and regulating cutting-edge technologies can be challenging for governing bodies… but that has never stopped anyone, and the world still stands.

Meanwhile, governments must make sure that adjacent industries (not considered AI) are not caught in the scope of the regulation, with unexpected consequences. We wouldn’t want, for example, environmental work to suffer because a carbon capture system uses technology akin to generative AI to recommend regions to target for cleanup.

It is important to avoid excessive bureaucracy and red tape… but that is not a reason to do nothing. The EU’s proposed risk-based governance is a good answer to these challenges: risks are defined clearly enough to apply to everyone across the Union, while leaving room for changes should the nature of artificial intelligence evolve.

There are, in truth, few real risks in regulating AI… and plenty of benefits.

Why AI should be regulated

There are many reasons why AI should be regulated, especially when looked at through the prism of risks to under-privileged or defenceless populations. It can be easy not to take automated, wide-scale discrimination seriously… when you’ve never been discriminated against. Looking at you, tech bros.

1. Ensuring Ethical Use of Artificial Intelligence

Firstly (and obviously), regulation is needed to apply and adapt existing digital laws to AI technology. This means protecting the privacy of users (and their data). AI companies should invest in strong cyber-security capabilities when dealing with data-heavy algorithms… and forgo some revenue, as user data should not be sold to third parties. This is a concept American companies seem to wilfully misunderstand in the absence of regulation.

As mentioned in the AI Act, it is also crucial that tech companies remove the potential for bias and discrimination from algorithms dealing with sensitive topics. That entails A) ensuring none is purposefully injected, and B) ensuring naturally occurring biases are removed rather than reproduced at scale. This is non-negotiable, and if regulatory crash testing is needed, so be it.
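What might such a crash test measure? One common (though far from sufficient) fairness metric is the demographic parity gap: the spread in positive-outcome rates across groups. A minimal sketch follows, assuming toy loan-approval outputs and an arbitrary 0.1 tolerance that no regulation actually prescribes.

```python
# A minimal sketch of one possible bias "crash test": the demographic
# parity gap, i.e. the spread in positive-outcome rates across groups.
# The metric choice, the toy data and the 0.1 tolerance are assumptions;
# the AI Act does not prescribe any specific test.
from collections import defaultdict

def positive_rate_by_group(decisions, groups):
    """Share of positive (True) decisions for each group label."""
    totals, positives = defaultdict(int), defaultdict(int)
    for decision, group in zip(decisions, groups):
        totals[group] += 1
        positives[group] += int(decision)
    return {g: positives[g] / totals[g] for g in totals}

def parity_gap(decisions, groups):
    """Largest difference in positive rates between any two groups."""
    rates = positive_rate_by_group(decisions, groups)
    return max(rates.values()) - min(rates.values())

# Hypothetical loan-approval outputs for two demographic groups.
decisions = [True, True, False, True, False, False, True, False]
groups    = ["a",  "a",  "a",   "a",  "b",   "b",   "b",  "b"]

gap = parity_gap(decisions, groups)   # 0.75 vs 0.25 -> gap of 0.50
tolerance = 0.1                       # assumed threshold, not regulatory
print(f"parity gap: {gap:.2f} -> {'pass' if gap <= tolerance else 'fail'}")
```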

More philosophically, regulation can help foster trust, transparency, and accountability among users, developers, and stakeholders of generative AI. By having all actors disclose the source, purpose, and limitations of AIs’ outputs, we will be able to make better choices… and trust the choices of others. The fabric of society needs this.

2. Safeguarding Human Rights and Safety

Beyond the “basics”, regulation needs to protect populations at large from AI-related safety risks, of which there are many.

Most will be human-related risks. Malicious actors can use generative AI to spread misinformation or create deepfakes. This is very easy to do, and companies seem unable to put a stop to it themselves, mostly because they are unwilling (not unable) to tag AI-generated content. Our next elections may depend on regulations being put in place… while our teenage daughters may ask why we didn’t act sooner.
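What would “tagging AI-generated content” even involve? In production it means things like C2PA provenance manifests or statistical watermarks; as a gist, here is a toy sketch that binds a signed provenance record to a content hash. The key, fields and format are illustrative assumptions only.

```python
# A toy sketch of content tagging: a signed provenance record bound to
# the content's hash. Real schemes (e.g., C2PA manifests, statistical
# watermarks) are far more robust; the key, fields and format here are
# illustrative assumptions.
import hashlib, hmac, json

SECRET_KEY = b"demo-key-not-for-production"  # hypothetical signing key

def tag_content(content: bytes, generator: str) -> dict:
    """Build a provenance record and sign it with an HMAC."""
    record = {
        "sha256": hashlib.sha256(content).hexdigest(),
        "generator": generator,
        "ai_generated": True,
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(SECRET_KEY, payload, "sha256").hexdigest()
    return record

def verify_tag(content: bytes, record: dict) -> bool:
    """Check the signature and that the record matches this content."""
    claimed = dict(record)
    signature = claimed.pop("signature")
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(SECRET_KEY, payload, "sha256").hexdigest()
    return (hmac.compare_digest(signature, expected)
            and claimed["sha256"] == hashlib.sha256(content).hexdigest())

image_bytes = b"...pretend these are image bytes..."
tag = tag_content(image_bytes, generator="hypothetical-image-model-v1")
print(verify_tag(image_bytes, tag))  # True
print(verify_tag(b"tampered", tag))  # False
```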

We also need to prevent humans from doing physical harm to other humans using generative artificial intelligence: it has been reported that AI can be used to describe the best way to build a dirty bomb. Here again, if a company cannot prevent this to the best of its abilities, I see no reason for us to continue to allow it to exist in its current form.

All this is without even going into the topic of AI-driven warfare and autonomous weapons, the creation of which must be avoided at all costs. That scenario is, however, so catastrophic that it is often used to hide AI’s many other problems. Why concentrate on data privacy when Terminator is right around the corner, right? Don’t let the doomers distract you from the very boring, but very real fact: without strong AI regulation tackling the above, society may die a death of a thousand cuts rather than by one singular weaponised blow.

This is why we must ensure that companies agree to create systems that align with human values and morals. Easier said than done, but having a vision is a good start.

3. Mitigating Social and Economic Impact

There are important topics that the AI Act (or any other proposed regulation) does not completely cover. They will need to be further assessed over the coming years, but their very nature makes regulating without over-regulating difficult, though no less necessary.

Firstly, rules are needed to fairly compensate the people whose data is used to train algorithms that will bring so much wealth to so few. Without this, we are only repeating the mistakes of the past and making a deep economic chasm deeper. This will be difficult; there are few legal precedents to inform what is happening in the space today.

It will also be vital to address generative-AI-driven job displacement and unemployment. Most roles are expected to be affected by artificial intelligence, and with greater automation often comes greater unemployment. According to a report by BanklessTimes.com, AI could displace 800 million jobs (30% of the global workforce) by 2030.

The disruption may look manageable at the macro-economic level (“AI could also shift job roles and create new ones by automating some aspects of work while allowing humans to focus on more creative or value-adding tasks”, they’ll say), but it means decades of despair for others. We need a regulatory plan for those whose jobs are automated away by AI (retraining, UBI…).

Finally, it will be important to continuously safeguard the world’s economies against AI-driven monopolies. Network effects mean that catching up to an internet giant is almost impossible today, for lack of data or compute. Anti-trust laws have been left largely untouched for decades, and that can no longer go on. Regulation will not make us less competitive in this case; it may well make the economy more so.

United Kingdom's Approach to AI Regulation

In contrast to the EU’s regulatory framework, the United Kingdom initially refrained from immediate AI-specific legislation. In November 2023, Viscount Jonathan Camrose, the UK’s first minister for AI and intellectual property, stated that there would be no UK law on AI “in the short term”, due to concerns that heavy-handed regulation could curb industry growth.

However, this stance evolved over time. By November 2024, the UK government announced plans to introduce legislation within the next year to address AI risks. Technology Secretary Peter Kyle emphasized the necessity of turning the country's voluntary AI testing agreements into legally binding codes and establishing the UK's AI Safety Institute as an independent government body to ensure the protection of British citizens.

This shift indicates the UK's recognition of the need for a structured approach to AI regulation, balancing innovation with safety and ethical considerations.

Final thoughts...

Ultimately, AI should be regulated to some degree to ensure its responsible development and use. Moving forward, governments will need to collaborate and cooperate to establish broad frameworks while promoting knowledge sharing and interdisciplinary collaboration.

These frameworks will need to be adaptive and collaborative, lest they become unable to keep up with AI’s latest developments. Regular reviews and updates will be key, as will agile experimentation in sandbox environments.

Finally, public engagement and inclusive decision-making will make or break any rules brought forward. This means involving diverse stakeholders in regulatory discussions and engaging the public in AI policy decisions. These rules are for us / them, and communicating that fact well will help governments counteract tech companies’ lobbying.

The regulatory road ahead is long: today, no major foundation model fully complies with the EU AI Act. Meanwhile, China’s regulation concentrates on content control rather than risk, further tightening the Party’s grip on free expression.

The regulatory game has just started. But… we’ve started, and that makes all the difference.

