Monday, November 25, 2024

AI regulation in India must take a targeted approach to keep risk levels low

As India aspires to become a global AI leader, it is crucial to establish a robust approach to regulating high-risk AI systems, one that prioritizes consumer protection while fostering innovation.

Focus on high-risk AI systems: These could significantly impact critical sectors such as healthcare, finance and law enforcement, and thus need regulation to mitigate potential harms. The EU’s AI Act uses such a risk-based approach, imposing stricter regulations on high-risk applications.

We should build on the successes of other nations while considering local needs. To this end, India’s convening of the recent Global India AI Summit 2024 was a step in the right direction. 

Its key highlights included recommendations on how AI technologies could be integrated into health systems worldwide while remaining attuned to local challenges. But integration alone is insufficient. It is crucial to ensure that such systems counter, rather than amplify, bias arising from pre-existing societal discrimination.

For instance, in healthcare, AI-driven diagnostic tools must undergo stringent testing to ensure diagnostic accuracy and patient safety. In the financial sector, AI algorithms used for credit scoring and fraud detection should be audited regularly for bias and accuracy, ensuring fair treatment of all consumers.
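
To make the idea of a regular bias-and-accuracy audit concrete, here is a minimal sketch of how a lender’s compliance team might compare approval rates and error rates across consumer groups. It is illustrative only: the column names and the disparity measure are assumptions chosen for the example, not requirements drawn from any law or standard.

```python
# Illustrative sketch only: a minimal bias-and-accuracy check for a credit-scoring model.
# The column names ("group", "approved", "actual_default") are hypothetical placeholders.
import pandas as pd

def audit_credit_model(df: pd.DataFrame) -> dict:
    """Compare approval rates and false-negative rates across consumer groups."""
    report = {}
    for group, sub in df.groupby("group"):
        approval_rate = sub["approved"].mean()
        # False-negative rate: creditworthy applicants (no default) who were still rejected.
        creditworthy = sub[sub["actual_default"] == 0]
        fnr = 1 - creditworthy["approved"].mean() if len(creditworthy) else float("nan")
        report[group] = {
            "approval_rate": round(approval_rate, 3),
            "false_negative_rate": round(fnr, 3),
            "applicants": len(sub),
        }
    rates = [v["approval_rate"] for v in report.values()]
    # A simple disparity measure: lowest group approval rate over the highest.
    report["approval_rate_ratio"] = round(min(rates) / max(rates), 3) if max(rates) else None
    return report

# Toy usage: two groups, three applicants each.
audit = audit_credit_model(pd.DataFrame({
    "group":          ["A", "A", "A", "B", "B", "B"],
    "approved":       [1,   1,   0,   1,   0,   0],
    "actual_default": [0,   1,   0,   0,   0,   1],
}))
print(audit)
```

An auditor could flag the model whenever the approval-rate ratio between groups falls below an agreed threshold; that threshold is something regulators or the businesses themselves would have to set.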

Finally, to combat the threat that deepfakes pose to democratic processes during elections, legislation should be enacted to criminalize the malicious use of AI to spread misinformation.

Enhance consumer protection to build trust: This must be a cornerstone of AI regulation. Ensuring that AI systems operate transparently and fairly is crucial to earning public trust.

Canada’s Directive on Automated Decision-Making mandates algorithmic impact assessments to identify and mitigate risks before AI systems are deployed, emphasizing transparency and accountability. India’s redressal mechanism under the Consumer Protection Act of 2019 (CPA) protects individuals adversely affected by unfair trade practices.

As businesses increasingly deploy AI-driven algorithms in their products and services, the CPA’s protections will automatically extend to consumers harmed by AI bias. Still, India can go further and mandate similar impact assessments for high-risk AI systems.

This would involve establishing a dedicated regulatory body to oversee AI-related consumer protection issues, enhance accountability and provide clear guidelines for ethical AI deployment.
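
To illustrate what such an impact assessment could look like in practice, the sketch below scores a short yes/no questionnaire and maps the result to a risk tier that determines the level of scrutiny before deployment. The questions, weights and tiers are invented for illustration; they do not reproduce Canada’s actual assessment tool or any proposed Indian requirement.

```python
# Schematic sketch of scoring an algorithmic impact assessment before deployment.
# The questions, weights and risk tiers are invented for illustration only.
QUESTIONNAIRE = {
    "affects_access_to_essential_services": 3,
    "uses_sensitive_personal_data": 2,
    "decisions_fully_automated": 2,
    "affected_people_can_appeal": -1,    # mitigations reduce the score
    "model_decisions_are_explainable": -1,
}

def impact_tier(answers: dict) -> str:
    """Turn yes/no answers into a coarse pre-deployment risk tier."""
    score = sum(weight for question, weight in QUESTIONNAIRE.items() if answers.get(question))
    if score >= 5:
        return "high risk: independent audit and human oversight before deployment"
    if score >= 2:
        return "moderate risk: documented review and ongoing monitoring"
    return "low risk: standard due diligence"

# A fully automated credit-eligibility tool using sensitive data, with an appeal channel:
print(impact_tier({
    "affects_access_to_essential_services": True,
    "uses_sensitive_personal_data": True,
    "decisions_fully_automated": True,
    "affected_people_can_appeal": True,
}))  # score 3 + 2 + 2 - 1 = 6 -> "high risk: ..."
```

A dedicated regulatory body could publish and periodically revise such a questionnaire, with the high-risk tier triggering the kind of independent audits discussed above.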

The EU and Singapore have shown similar promise in creating balanced regulatory environments. A study by the European Parliamentary Research Service highlights the effectiveness of the risk-based framework in the EU’s AI Act in regulating high-risk AI applications while promoting innovation in low-risk areas. 

Similarly, Singapore’s Infocomm Media Development Authority has been praised for the public consultation on its draft Model AI Governance Framework for Generative AI, which offers practical guidelines on ethical AI deployment and a proactive approach to generative AI concerns while facilitating innovation.

These efforts are leading to increased industry compliance and public trust in AI systems. India can learn from these successes and adopt similar strategies to balance innovation and regulation.

Leverage existing legal frameworks: Given AI’s extensive data processing needs, it is crucial to learn from the interpretation and application of data protection laws. Australia provides a valuable case study. 

Despite not having AI-specific regulations, it applies existing laws on consumer protection, privacy and online safety to AI systems. For example, Australian consumer law was applied in a federal court case in which Trivago was fined for misleading consumers through algorithmic decision-making.

Similarly, once India’s Digital Personal Data Protection Act comes into force, it will also apply to AI systems, ensuring that personal data is processed ethically, responsibly and in accordance with the law.

India’s National Strategy for Artificial Intelligence and its Responsible AI for All report both focus on ethical AI deployment and on how AI could be leveraged for growth in sectors like healthcare, agriculture and smart cities.

But since we do not yet have a cohesive and stringent framework aimed specifically at high-risk AI applications, risks persist. By regulating high-risk AI systems and defining ‘no-go’ zones for businesses and internet intermediaries that use AI in consumer-facing applications, the draft Digital India Act 2023 should reduce these risks.

Unquestionably, AI-driven growth would accelerate India’s transition to a mature economy and support the critical sectors expected to drive the country’s economic success. This will require a balanced and responsible regulatory environment focused on high-risk AI systems.

While aligning our rules with international standards, we must also address local challenges of discrimination and bias in data. This would help India harness the full potential of AI to promote sustainable growth while safeguarding societal interests.

