AI Bill Set to Be Introduced in UK Parliament by End of the Year

As the UK ramps up its efforts to regulate artificial intelligence (AI), a new AI Bill is expected to be introduced to Parliament before the end of the year. While this development wasn't highlighted in the King's Speech, the government has been keen to push forward on the matter. The bill could still face delays if other governmental priorities arise, but indications suggest that the government intends to move swiftly.

Peter Kyle, the Secretary of State for Science, Innovation and Technology, has been a key proponent of the bill. His discussions about the legislation predate the last general election, signalling the growing importance of AI regulation on the government's agenda.

Voluntary Agreements to Become Legally Binding

The AI Bill will focus on transitioning current voluntary agreements into legally binding frameworks. These agreements, established after the previous government’s AI Summit, allowed tech companies to share AI models with governments for testing before deployment. Large companies, such as OpenAI, Microsoft, and Google, have already signed these agreements. The bill aims to ensure that AI models undergo thorough government risk and vulnerability assessments before they are deployed.

Additionally, the AI Safety Institute, a body set up under Rishi Sunak’s government, is expected to receive formal recognition under the bill, giving it a stronger legal footing.

UK’s Approach: A Balancing Act

Although some companies have already signed up for the voluntary agreements, they have not been adopted globally. Oliver Nelson-Smith, a commentator on AI legislation, notes that territories attending the AI Summit, including members of the OECD, have been involved, but the global reach is still limited.

The government has signaled that the AI Bill will not be a "Christmas Tree Bill," meaning it will not offer extensive concessions to the sector. Instead, the legislation aims to hold tech companies accountable. While this firm approach has raised concerns in the tech industry, it aligns with international moves to regulate AI.

The European Union's AI Act, passed last year, has already set prescriptive requirements and risk categorisations for AI models. Similarly, the Biden administration in the U.S. has issued an executive order mandating safety, security, and trustworthiness standards for AI. In comparison, the UK's approach, while still in development, might appear to be trailing behind these major global powers.

Addressing Concerns Over AI’s Impact

One argument against further AI regulation is that existing data laws, such as the UK General Data Protection Regulation (UK GDPR), already cover many potential risks posed by AI models. For example, the Information Commissioner’s Office (ICO) has been investigating the use of AI in biometric systems, including facial recognition and emotion analysis software, under current regulations. Emotion analysis, in particular, has raised ethical concerns, especially as some companies have developed hiring software that attempts to detect nervousness or dishonesty in job applicants.

This kind of AI application, however, has faced criticism. Nelson-Smith suggests that while such technologies might promise better decision-making, relying on AI to detect human emotions is risky, as even humans are not good at interpreting these signals accurately.

ICAEW’s Perspective and Recommendations

The Institute of Chartered Accountants in England and Wales (ICAEW) has long called for stronger regulation of AI, particularly in relation to financial crime. With advancements in deepfake technology and AI-driven financial scams, ICAEW believes that existing frameworks are insufficient to address the rapidly evolving risks. They argue that while regulators in specific industries have the authority to manage AI’s impact within their sectors, broader legislation is necessary to cover the increasing use of general-purpose AI models, such as ChatGPT and Microsoft’s Copilot.

There are also concerns that the UK could fall behind its international counterparts. However, Nelson-Smith believes that being a step behind may not be a negative, as it allows the UK to learn from the approaches taken by the European Union and the United States.

Preparing for AI’s Future

ICAEW emphasises that regulation should encourage responsible innovation. While AI models can revolutionise industries, the institute advocates for international alignment in AI governance to ensure that UK-developed models, such as Google DeepMind’s AlphaGo, can continue to compete globally.

Furthermore, ICAEW is pushing for increased funding to help regulators manage the complexities of AI. Nelson-Smith points out that the £10 million allocated by the government to upskill regulators in AI is insufficient, given the breadth of sectors AI could affect.

In the meantime, ICAEW is actively preparing its members for the future of AI. The organisation has launched an AI hub offering free learning modules and plans to release more content on AI ethics and use cases in mergers and acquisitions.

Looking Ahead

As the AI Bill nears introduction, the UK government is poised to strengthen its regulatory stance on AI, moving from voluntary guidelines to enforceable laws. While some sectors might express concerns, this legislation is expected to provide a robust framework to manage AI's growing influence across industries, particularly in areas where existing laws fall short. With ongoing global efforts to regulate AI, the UK must carefully balance innovation with safety, ensuring it remains competitive in the international AI landscape.
