AI Regulations: Balancing Innovation and Compliance

Introduction

“AI will probably be smarter than any single human next year. By 2029, AI is probably smarter than all humans combined.”

This view of AI’s transformative power, voiced by Elon Musk in March 2024, is echoed by Jensen Huang, CEO of Nvidia: “Generative AI is the single most significant platform transition in computing history. In the last 40 years, nothing has been this big. It’s bigger than PC, it’s bigger than mobile, and it’s going to be bigger than the internet, by far.”

New technologies have always disrupted our lives. However, the disruption caused by Artificial Intelligence technologies is unprecedented in both scale and speed.

Clearly, something so pervasive and powerful needs to be regulated. That’s where AI regulations come in.

Governments around the world are concerned not only about the progress of this new technology but also about how it will affect the rights of their citizens. Compliance with AI laws that mitigate the risks associated with AI technologies is the way forward. In this blog, we will look at these risks and the AI regulations designed to address them.

Importance of AI Regulations

In a meeting with the chief executives of AI companies, US President Joe Biden said, “What you’re doing has enormous potential and enormous danger.” AI technology has often been perceived as a double-edged sword, and this sentiment was echoed widely in the US Senate hearing on the Oversight of A.I.: Rules for Artificial Intelligence. In the hearing, Sam Altman, CEO of OpenAI, said, “I think if this technology goes wrong, it can go quite wrong. And we want to be vocal about that. We want to work with the government to prevent that from happening.”


Governments across the world have therefore crafted regulations to mitigate the risks associated with AI technologies, though more comprehensive AI regulations will need to be developed over time. On 13th March 2024, the European Union passed the world’s first Artificial Intelligence Act, which will pave the way for other countries to develop their own laws to regulate AI technologies.


Assessing the Risks of Artificial Intelligence

It has been difficult for regulators worldwide to assess the risks linked to AI. The diverse range of technologies grouped under Artificial Intelligence, the speed at which they develop, and the technical knowledge required to oversee them all make it challenging for governments to establish AI regulations.

Here are some of the risks associated with AI technologies:

  1. Potential Data Breach: AI programmes can analyse huge volumes of data at incredible speed. The sheer volume of data accessible to AI programmes, which will only continue to grow, also makes them attractive targets for data breaches.
  2. Deception: AI technologies can generate video and audio deepfakes designed to deceive people. AI’s potential to erode the boundary between fact and fiction is dangerous and could have troubling repercussions.
  3. Jobs: The risks of AI displacing jobs are significant. According to the World Economic Forum’s report on the future of jobs, “approximately 85 million jobs may be eliminated by the shift between humans and machines by 2025, while 97 million new roles may emerge.” While the net increase in jobs matters, the people who lose their jobs and those who gain the new ones might not be the same.
  4. Bias and Discrimination: AI systems may develop biases if they are not trained and tested on diverse populations. In 2015, a couple of months after the release of the Google Photos app, its image recognition algorithm misidentified images in a way that reflected centuries of internalised racial stereotypes. Because AI systems can quickly scale up and influence vast numbers of people globally, they also amplify the risk of discriminatory outcomes.
  5. Security Threats: Advanced AI can be used by bad actors for cyber-attacks and therefore poses significant cyber security risks to organisations.
  6. Transparency: With Gen-AI technologies such as natural language processing and speech synthesis, it is often difficult to know whether one is talking to a human or a machine. This problem will only grow as the technologies become more sophisticated, making communication with machines indistinguishable from communication with humans.
  7. Surveillance: Using advanced AI for surveillance raises numerous ethical concerns and can affect citizens’ rights.

International AI Regulations and Compliance

Balancing technological innovation with protecting citizens’ rights through regulation is definitely a challenge. Countries around the world have established various AI regulations to manage risks associated with artificial intelligence. The OECD Framework for the Classification of AI Systems is designed to help policymakers characterise AI for specific contexts and use cases. In terms of regulating AI, the passage of the EU’s AI Act, as mentioned before, is a significant milestone.

The European Union’s Artificial Intelligence Act has taken a risk-based approach to regulating AI. It has categorised the level of risks associated with AI technology into four different tiers: unacceptable, high, limited, and minimal. Technologies with limited and minimal risks are required to meet only transparency obligations. On the other hand, AI systems that fall under the unacceptable category, such as government social scoring and real-time biometric identification in public spaces, are banned, with a few exceptions. For technologies in high-risk categories, AI regulations require developers and users to conduct rigorous testing, properly document data quality, and implement an accountability framework that includes detailed human oversight.
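The tiered structure described above can be pictured as a simple lookup from risk tier to obligation. The sketch below is purely illustrative (not legal advice, and not an official taxonomy): the tier names follow the Act’s four categories, but the obligation summaries are our own simplified paraphrases.

```python
# Illustrative sketch of the EU AI Act's risk-based approach.
# Tier names follow the Act's four categories; the obligation
# summaries are simplified paraphrases, not legal text.

EU_AI_ACT_TIERS = {
    "unacceptable": "banned (with narrow exceptions)",
    "high": "rigorous testing, data-quality documentation, human oversight",
    "limited": "transparency obligations",
    "minimal": "transparency obligations",
}

def obligations_for(tier: str) -> str:
    """Return the simplified obligation summary for a given risk tier."""
    key = tier.strip().lower()
    if key not in EU_AI_ACT_TIERS:
        raise ValueError(f"Unknown risk tier: {tier!r}")
    return EU_AI_ACT_TIERS[key]
```

For example, `obligations_for("high")` returns the summary of the heaviest compliance burden, while a system classed as unacceptable is simply prohibited.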

The White House Office of Science and Technology Policy has also published a Blueprint for an AI Bill of Rights that outlines five principles meant to shape how automated systems are designed, used, and implemented to safeguard the American public. On October 30, 2023, US President Joe Biden issued an Executive Order on managing the risks of Artificial Intelligence, which includes new standards for safety and security meant to protect US citizens’ privacy and other important rights.

UAE: Potential World Leader in AI

The UAE’s Minister of State for Artificial Intelligence, Digital Economy, and Remote Work Applications, Omar Sultan Al Olama, said, “Whoever is going to lead in the Artificial Intelligence race will lead the future. This technology will change the world.” On 29th April 2024, Sheikh Hamdan, the Crown Prince of Dubai, launched a new annual plan called the Dubai Universal Blueprint for Artificial Intelligence. Under the first phase of the plan, Chief AI Officers will be appointed in all Dubai government entities to oversee the implementation of Artificial Intelligence, including adherence to AI regulations. A new AI company licence will subsequently be introduced, and an AI Week will be launched across schools to encourage the use of AI.

Recently, the Financial Times reported that the Biden Administration has been pushing US tech companies to collaborate with the UAE on matters related to artificial intelligence. The UAE is clearly positioned to emerge as a leader in the AI space, not only in innovation but in AI regulation as well. Sam Altman of OpenAI has suggested that the UAE could become a “regulatory sandbox” for experimenting with AI, and, as reported in Bloomberg, the UAE backs the idea of transforming itself into a global leader in testing and regulating AI technologies.

Compliance with AI Regulations

Although several countries have already framed AI regulations to mitigate different kinds of risks associated with this technology, the EU is the first to enact comprehensive legislation, and AI-specific laws will soon be a reality in many more countries. Organisations that have adopted AI technologies should establish clear protocols for the responsible use of AI, maintain transparency about the technology, and continuously improve their systems by removing biases. They should strictly adhere to their country’s data protection law to mitigate one of the most fundamental risks of AI misuse, and strengthen their cyber security by following their country’s cyber security law and obtaining certifications such as ISO 27001. It is also important for organisations to keep adequate documentation and train their staff on AI systems.

Conclusion

As reported in Forbes, the global artificial intelligence market will grow at a whopping compound annual growth rate (CAGR) of 37.3% from 2023 to 2030, reaching an estimated $1,811.8 billion by 2030. Growth of this scale transcends mere statistics and becomes part of our lived reality. Few things about the times we are living in are certain, but the influence of AI technologies on the direction of human civilisation is one of them. Governments will therefore continue to implement stricter and more comprehensive regulations, and a forward-thinking business organisation should assess how AI laws and regulations will evolve in the coming months and years and prepare its own blueprint for adapting to them.

Find out how AKW Consultants can help you integrate AI technologies into your business operations in a way that your business becomes compliant with International AI regulatory standards.
