Wednesday, May 29, 2024

Potential approach for Artificial Intelligence governance in India

The CAS Framework Proposal sets out five core principles for regulating the use of AI:

(i) Instituting Guardrails and Partitions to ‘Prevent Wildfire’ – This principle emphasizes (i) the need to limit the operation of AI systems to specific, predefined technical boundaries so as to control their unpredictable behavior; and (ii) the creation of separate partitions for different AI processes through strict separation protocols and techniques.
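To make the guardrail and partition ideas concrete, the sketch below shows one minimal way such boundaries could be encoded in software. The action set, partition names, and resource labels are illustrative assumptions, not part of the CAS proposal.

```python
# Illustrative sketch only: the allowed actions and partitions below are
# hypothetical examples, not requirements drawn from the CAS proposal.

ALLOWED_ACTIONS = {"summarize", "classify", "translate"}  # predefined technical boundary
PARTITIONS = {"nlp": {"gpu-0"}, "vision": {"gpu-1"}}      # strict separation of processes

def within_guardrails(action: str) -> bool:
    """Return True only if the requested action lies inside the predefined envelope."""
    return action in ALLOWED_ACTIONS

def partition_allows(process: str, resource: str) -> bool:
    """Return True only if the process stays inside its own partition."""
    return resource in PARTITIONS.get(process, set())
```

Under this scheme, an out-of-envelope request is simply refused before it reaches the AI system, which is the "prevent wildfire" intent of the principle.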

(ii) Ensuring Human Control through Manual ‘Overrides’ and ‘Authorization Chokepoints’ – This principle focuses on ensuring human oversight of AI behavior: humans must be able to intervene in, control, and remediate unpredictable or non-standard behavior of AI systems. Further, where high-risk decisions are to be taken by AI systems, this principle calls for human validation of those decisions through a hierarchical governance process, which in turn requires imparting specialized AI training to the human validators.
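An authorization chokepoint of this kind could be sketched as a routing function that auto-approves low-risk decisions but refuses high-risk ones unless a trained human validator signs off. The risk threshold and function names are hypothetical, chosen only for illustration.

```python
def decide(action: str, risk: float, human_approver=None) -> str:
    """Route a high-risk AI decision through a human 'authorization chokepoint'.

    Hypothetical threshold: a risk score of 0.7 or above requires human sign-off.
    """
    if risk < 0.7:
        return f"auto-approved:{action}"
    if human_approver is None:
        return f"blocked:{action}"          # no human available: fail safe
    verdict = human_approver(action, risk)  # the manual override point
    return f"human-{'approved' if verdict else 'rejected'}:{action}"
```

The fail-safe default (block when no validator is available) reflects the principle's emphasis on human control over unpredictable behavior.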

(iii) Transparency and Accountability – This principle (i) promotes the use of open-source licenses for AI algorithms so that external auditors can evaluate them for bias and risks; (ii) emphasizes documentation, in a uniform format for consistency of interpretation, detailing AI system development (i.e., via coding or learning), logs, data sources, training procedures, performance metrics, and known limitations; and (iii) mandates dynamic monitoring through regular audits of AI systems, the disclosure of extreme outcomes, and the use of debugging and monitoring tools to track AI systems’ decisions in real time.
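The "uniform format" documentation requirement could, for example, take the shape of a machine-readable audit record emitted for every decision. The field names below are an assumed schema for illustration; the CAS proposal does not prescribe one.

```python
import datetime
import json

def log_decision(model_id: str, inputs: dict, output, limitations: list) -> str:
    """Emit one AI decision as a uniform, machine-readable JSON audit record.

    The schema (model_id, inputs, output, known_limitations) is a hypothetical
    example of the consistent documentation format the principle calls for.
    """
    record = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "model_id": model_id,
        "inputs": inputs,
        "output": output,
        "known_limitations": limitations,
    }
    return json.dumps(record, sort_keys=True)
```

Because every record shares one schema, external auditors and monitoring tools can parse the logs consistently, which is the point of mandating a uniform format.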

(iv) Distinct Accountability – This principle requires predefined liability protocols for cases of AI system malfunction or non-standard behavior. In other words, to ensure accountability, any malfunction or non-standard behavior should be attributable to a specific individual or department within an entity, and traceability mechanisms built into the AI systems would help ensure the safety of all components and actors involved in their functioning, while also mitigating any negative impact that such behavior may cause. Further, this principle requires the establishment of mechanisms to report and investigate AI system failures and non-standard behaviors.
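A minimal sketch of such traceability, under the assumption that each component has a predefined owner, is a registry that attributes every reported incident to that owner. The component and team names are invented for illustration.

```python
# Hypothetical ownership map: each AI component has a predefined accountable party.
OWNERS = {"ranking_model": "data-science-team", "serving_api": "platform-team"}

incidents = []  # running incident register for later investigation

def report_incident(component: str, description: str) -> dict:
    """Attribute a malfunction to its predefined owner and record it for review."""
    entry = {
        "component": component,
        "owner": OWNERS.get(component, "unassigned"),
        "description": description,
    }
    incidents.append(entry)
    return entry
```

Because liability is assigned before anything goes wrong, an incident never lands in an accountability vacuum, which is the core of the "distinct accountability" idea.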

(v) Specialized, Agile Regulatory Body – This final principle focuses on (i) the establishment of a separate, independent expert regulatory body mandated to respond swiftly to emerging AI challenges without red-tapism; (ii) equipping the regulatory body with tools and methods for scrutinizing the AI domain for compliance gaps and/or any other matter that warrants regulatory attention and intervention; and (iii) encouraging the regulatory body to coordinate with academia and industry bodies and to take their feedback into account when issuing directives. Further, this principle specifies the use of real-time monitoring tools to check AI systems’ behaviors against set standards; the adoption of automated systems to flag any potential non-standard behavior; the establishment of a centralized database of AI algorithms to support regulatory compliance and promote innovation; and the establishment of a national registry of non-standard behaviors, which would give the regulator the feedback needed to course-correct AI systems.
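The real-time monitoring and automated notification the principle describes can be sketched as a comparison of live metrics against set standards, with breaches recorded for the regulator. The metric names and limits here are assumed examples, not standards from the proposal.

```python
# Hypothetical "set standards" a regulator might monitor against.
STANDARDS = {"error_rate": 0.05, "latency_ms": 200}

alerts = []  # stand-in for an automated notification channel

def monitor(metrics: dict) -> list:
    """Compare live metrics with the set standards and flag any breaches."""
    breaches = [name for name, limit in STANDARDS.items()
                if metrics.get(name, 0) > limit]
    for name in breaches:
        alerts.append(f"non-standard:{name}")  # automated notification hook
    return breaches
```

Feeding such alerts into a national registry of non-standard behaviors would close the feedback loop the principle envisages.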

Source: Bar & Bench
