
Europe’s world-first AI rules set for final approval

The EU is expected to give final approval to the 27-nation bloc’s artificial intelligence law (AP)

European Union legislators are set to give final approval to the 27-nation bloc’s artificial intelligence law, putting the world-leading rules on track to take effect later this year.

Members of the European Parliament are poised to vote in favour of the Artificial Intelligence Act, five years after the rules were first proposed.

The AI Act is expected to act as a global signpost for other governments grappling with how to regulate the fast-developing technology.

Dragos Tudorache, a Romanian lawmaker who was a co-leader of the Parliament negotiations on the draft law, said: “The AI Act has nudged the future of AI in a human-centric direction, in a direction where humans are in control of the technology and where it — the technology — helps us leverage new discoveries, economic growth, societal progress and unlock human potential.”

Big tech companies generally have supported the need to regulate AI while lobbying to ensure any rules work in their favour.

OpenAI chief executive Sam Altman caused a minor stir last year when he suggested the ChatGPT maker could pull out of Europe if it cannot comply with the AI Act — before backtracking to say there were no plans to leave.

Like many EU regulations, the AI Act was initially intended to act as consumer safety legislation, taking a “risk-based approach” to products or services that use artificial intelligence.

The riskier an AI application, the more scrutiny it faces. Low-risk systems, such as content recommendation systems or spam filters, will only face light rules such as revealing that they are powered by AI. The EU expects most AI systems to fall into this category.

High-risk uses of AI, such as in medical devices or critical infrastructure like water or electrical networks, face tougher requirements like using high-quality data and providing clear information to users.

Some AI uses are banned outright because they are deemed to pose an unacceptable risk, such as social scoring systems that govern how people behave, some types of predictive policing, and emotion recognition systems in schools and workplaces.

Other banned uses include police scanning faces in public using AI-powered remote “biometric identification” systems, except for serious crimes like kidnapping or terrorism.

The law’s early drafts focused on AI systems carrying out narrowly limited tasks, like scanning CVs and job applications. The astonishing rise of general purpose AI models, exemplified by OpenAI’s ChatGPT, sent EU policymakers scrambling to keep up.

They added provisions for so-called generative AI models, the technology underpinning AI chatbot systems that can produce unique and seemingly lifelike responses, images and more.

Developers of general purpose AI models – from European start-ups to OpenAI and Google – will have to provide a detailed summary of the text, pictures, video and other data on the internet that is used to train the systems as well as follow EU copyright law.

The world-leading set of rules is aimed at the fast-developing technology (AP)

AI-generated deepfake pictures, video or audio of existing people, places or events must be labelled as artificially manipulated.

There is extra scrutiny for the biggest and most powerful AI models that pose “systemic risks”, which include OpenAI’s GPT-4 – its most advanced system – and Google’s Gemini.

The EU says it is worried that these powerful AI systems could “cause serious accidents or be misused for far-reaching cyber attacks”.

Officials also fear that generative AI could spread “harmful biases” across many applications, affecting many people.

Companies that provide these systems will have to assess and mitigate the risks; report any serious incidents, such as malfunctions that cause someone’s death or serious harm to health or property; put cybersecurity measures in place; and disclose how much energy their models use.