What is the EU’s Artificial Intelligence Act and how does it plan to rein in tech like ChatGPT?
- Members of the European Parliament reached a preliminary deal this week on a new draft of the European Union’s ambitious Artificial Intelligence Act.
Key highlights
- The AI Act was drafted in 2021 with the aim of bringing transparency, trust, and accountability to AI.
- It also aims to create a framework to mitigate risks to the safety, health, fundamental rights, and democratic values of the EU.
- Following a risk-based approach, it prohibits certain AI systems outright and sets out several obligations for the development, placing on the market, and use of others.
- The Act envisages establishing an EU-wide database of high-risk AI systems and setting parameters so that future technologies can be included if they meet the high-risk criteria.
- The legislation seeks to strike a balance between promoting the uptake of AI while mitigating or preventing harms associated with certain uses of the technology.
- The EU’s AI Act mainly addresses providers of AI systems.
About Artificial Intelligence:
- Artificial intelligence (AI) is the ability of a computer or a robot controlled by a computer to do tasks that are usually done by humans because they require human intelligence and discernment.
- The term is frequently applied to the project of developing systems endowed with the intellectual processes characteristic of humans, such as the ability to reason, discover meaning, generalise, or learn from experience.
- AI algorithms are trained on large datasets so that they can identify patterns, make predictions, and recommend actions, often faster and at a larger scale than a human could.
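The train-then-predict idea in the bullets above can be sketched in a few lines of Python. This is an illustrative toy (a simple least-squares line fit on made-up numbers), not a description of any specific AI system: the "training" step estimates parameters from example data, after which the model can score a case it has never seen.

```python
# Toy illustration of "training": estimate the parameters of y = a*x + b
# from example data using ordinary least squares (pure Python, no libraries).

def fit_line(xs, ys):
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # Slope = covariance of (x, y) divided by variance of x
    a = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
        sum((x - mean_x) ** 2 for x in xs)
    b = mean_y - a * mean_x
    return a, b

# "Training set": hours of study vs. test score (made-up numbers)
xs = [1, 2, 3, 4, 5]
ys = [52, 55, 61, 64, 70]
a, b = fit_line(xs, ys)

# The fitted model can now "predict" an unseen case (6 hours of study)
predicted = a * 6 + b
print(round(a, 2), round(b, 2), round(predicted, 1))
```

Real AI systems fit millions of parameters to far larger datasets, but the principle is the same: parameters are learned from data rather than hand-coded.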
Experts’ Concern with Artificial Intelligence:
- Recently, a group of more than 1,000 technology leaders and AI researchers, including Elon Musk, wrote an open letter calling for a six-month pause in developing systems more powerful than OpenAI’s newly launched GPT-4.
- The letter argues that powerful AI systems should be developed only once we are confident that their effects will be positive and their risks manageable.
- An example of why such a moratorium may be needed:
- As many as 300 million full-time jobs around the world could be automated in some way by the latest AI, according to Goldman Sachs economists.
Should AI be Regulated before it’s too late?
- Artificial Intelligence already raises three key concerns: privacy, bias, and discrimination.
- Currently, governments do not have any policy tools to halt work in AI development.
- If left unchecked, it can start infringing on – and ultimately take control of – people’s lives.
- Businesses across industries are increasingly deploying AI to analyse preferences and personalise user experiences, boost productivity, and fight fraud.
- For example, ChatGPT has already been integrated by Snapchat, Unreal Engine, and Shopify into their applications.
- This growing use of AI has already transformed the way the global economy works and how businesses interact with their consumers.
- However, in some cases it is also beginning to infringe on people’s privacy.
- Hence, AI should be regulated so that the entities using the technology act responsibly and are held accountable.
Benefits of Regulating AI outweigh Potential Losses:
- It is true that regulating AI may adversely impact business interests. It may slow down technological growth and suppress competition.
- However, taking a cue from the General Data Protection Regulation (GDPR), governments can create more AI-focused regulations that have a positive long-term impact.
- GDPR is the European Union’s law on the protection of individuals with regard to the processing of personal data and on the free movement of such data.
- Governments must engage in meaningful dialogues with other countries on a common international regulation of AI.
Where Does Global AI Governance Currently Stand?
- The rapidly evolving pace of AI development has led to diverging global views on how to regulate these technologies.
- The U.S. does not currently have comprehensive AI regulation and has taken a fairly hands-off approach.
- On the other end of the spectrum, China has over the last year introduced some of the world’s first nationally binding regulations targeting specific types of algorithms and AI.
- It enacted a law to regulate recommendation algorithms with a focus on how they disseminate information.
- In the case of India, the Union Minister for Electronics and Information Technology said that the government is not considering any law to regulate the growth of AI in India.