
Britain’s first formal rules for exploring artificial intelligence: what’s next? | Real Time Headlines

On April 26, 2023, in Suqian City, Jiangsu Province, China, an Internet user viewed ChatGPT on his mobile phone.

Future Publishing | Future Publishing | Getty Images

LONDON — Britain is on the verge of introducing its first-ever artificial intelligence laws — but Prime Minister Keir Starmer’s new Labour government faces a delicate balancing act: setting rules that are strict enough to manage risk while leaving room for innovation.

In the King’s Speech on Wednesday, King Charles III set out the agenda of Starmer’s government, which said it would “seek to establish the appropriate legislation to place requirements on those working to develop the most powerful artificial intelligence models.”

But the speech made no mention of an actual artificial intelligence bill, which many tech executives and commentators have been waiting for.

In the European Union, authorities have passed a comprehensive law, the AI Act, which places tighter restrictions on companies developing and using artificial intelligence.

Many tech companies – big and small – hope the UK doesn’t take the same approach to imposing rules they say are too harsh.

What a UK Artificial Intelligence Bill might look like

Labour is still expected to introduce formal rules for artificial intelligence, as the party set out in its election manifesto.

Starmer’s government has pledged “binding regulation of the small number of companies developing the most powerful artificial intelligence models” and legislation to ban explicit deepfake content.

By targeting the most powerful artificial intelligence models, Labour would be aiming at the likes of OpenAI, Microsoft, Google and Amazon, as well as artificial intelligence startups including Anthropic, Cohere and Mistral.

Matt Calkins, chief executive of software company Appian, told CNBC: “The largest artificial intelligence companies may face more scrutiny than before.”

“What we need is an environment conducive to broad-based innovation, governed by a clear regulatory framework that provides fair opportunities and transparency for everyone.”

Lewis Liu, head of artificial intelligence at contract management software company Sirion, warned that governments should avoid a “broad hammer approach to regulating every use case”.

Use cases involving sensitive medical data, such as clinical diagnostics, shouldn’t be put in the same bucket as things like enterprise software, he said.

“The UK has an opportunity to seize this nuance and bring huge benefits to its tech industry,” Liu told CNBC, adding that so far he had seen “positive signs” from Labour’s AI plans.


Legislation on artificial intelligence would mark a stark contrast with the approach of Starmer’s predecessor. Under former Prime Minister Rishi Sunak, the government chose a softer touch on artificial intelligence, seeking instead to apply existing rules to the technology.

The previous Conservative government said in a policy paper in February that introducing binding measures too early could “ineffectively address risks, become outdated quickly or stifle innovation”.

In February, Peter Kyle, now the UK’s technology secretary, said Labour would legally require companies to share with the government the results of safety tests on their artificial intelligence models.

Kyle, then the shadow science and technology secretary, told the BBC at the time: “We would legally compel the release of those test data results to the government.”

Sunak’s government struck deals with technology companies to share safety testing information with the AI Safety Institute, a state-backed body that tests advanced artificial intelligence systems. But that was done only on a voluntary basis.

The risk of inhibiting innovation

The UK government wants to avoid imposing AI rules so heavy-handed that they ultimately stifle innovation. The Labour Party also said in its manifesto that it wants to “support diverse business models that bring innovation and new products to the market.”

Salesforce UK and Ireland chief executive Zahra Bahrololoumi told CNBC any regulation would need to be “nuanced” and assign responsibilities “accordingly”, adding that she welcomed the government’s call for “appropriate legislation”.

Matthew Houlihan, senior director of government affairs at Cisco, said any artificial intelligence rules need to be “centered on a thoughtful, risk-based approach.”

Other proposals already put forward by British politicians offer some insight into what might be included in Labour’s artificial intelligence bill.

Chris Holmes, a Conservative peer in the House of Lords, introduced a bill last year proposing to regulate artificial intelligence. The bill passed its third reading in the Lords in May and was sent to the House of Commons.

A private member’s bill like Holmes’ has a lower chance of becoming law than legislation proposed by the government. However, it offers some ideas for how Labour might craft its own AI legislation.

The bill introduced by Holmes includes proposals to create a centralized artificial intelligence agency that would oversee enforcement of the technology’s rules.


Under the bill, companies would have to provide the AI authority with records of third-party data and intellectual property used in training their models, and ensure that any such data and intellectual property is used with the consent of its original source.

This somewhat echoes the EU’s AI Office, which oversees the development of advanced artificial intelligence models.

Another of Holmes’ proposals is for companies to appoint individual AI officers, who would be tasked with ensuring the company uses AI safely, ethically and fairly, and that the data used in any AI technology is unbiased.

How it compares to other regulators

Matthew Holman, a partner at law firm Cripps, told CNBC that based on Labor’s commitments so far, any such law would inevitably be “far removed from the far-reaching scope of the EU Artificial Intelligence Act”.

Holman added that the UK was more likely to find a “middle ground” rather than requiring sweeping disclosures from AI model makers. For example, the government could require AI companies to share what they are working on in closed-door sessions with the AI Safety Institute, without revealing trade secrets or source code.

Kyle, now science secretary, previously said at London Tech Week that Labour would not pass strict laws like the EU’s AI Act, because it did not want to hinder innovation or deter investment from large artificial intelligence developers.

Even so, a UK AI law would still be a step ahead of the U.S., which currently has no federal AI legislation of any kind. China’s regulations, meanwhile, are stricter than anything the EU — and likely the UK — is set to propose.

Last year, Chinese regulators finalized rules governing generative artificial intelligence, aiming to eliminate illegal content and strengthen security protections.

Sirion’s Liu said one thing he hopes the government won’t do is restrict open source AI models. “It is vital that the UK’s new AI regulations do not stifle open source or fall into regulatory traps,” he told CNBC.

“There’s a huge difference between the harm done by a big LLM like OpenAI and the harm done by a specific custom open source model used by a startup to solve a specific problem.”

Herman Narula, CEO of metaverse technology company Improbable, agreed that limiting open-source AI innovation would be a bad idea. “New government action is needed, but this action must be focused on creating a viable world for open source AI companies, which is necessary to prevent monopolies,” Narula told CNBC.
