
Hung said the draft law is built on the principle of both managing risks and promoting the development and innovation of AI.
The law "ensures the development of AI for humans, placing humans at the center," with the State playing the leading role in the management, coordination, and development planning of AI.
The draft law establishes a risk-based management mechanism to ensure that the development, application, and use of AI are safe, transparent, controllable, and accountable.
The Government aims for flexible and effective AI development and encourages innovation. Specifically, there are four levels of risk: unacceptable (the highest level), high risk, medium risk, and low risk.
Providers must classify their systems before putting them into circulation and are responsible for the classification results. For medium- and high-risk systems, providers must notify the Ministry of Science and Technology (MST).
Systems posing unacceptable risk are prohibited from being developed, supplied, deployed, or used under any circumstances.
The prohibited list includes systems used for illegal behavior, systems that use falsification to deceive or manipulate and thereby cause serious harm, systems that exploit the vulnerabilities of at-risk groups (children, the elderly, etc.), and systems that create fabricated content that seriously threatens national security.
Under the draft law, organizations and individuals that violate its provisions will face disciplinary action, administrative penalties, or criminal prosecution; if they cause damage, they must pay compensation in accordance with civil law.
For serious violations, the maximum fine can reach 2 percent of the organization's revenue from the previous year. In the case of repeated violations, the maximum penalty is 2 percent of global revenue from the previous year.
The maximum administrative fine is VND2 billion for organizations and VND1 billion for individuals.
A key provision states that damage caused by high-risk AI systems is considered to be caused by a source of high danger. Accordingly, providers and deployers of such systems must pay compensation even when they are not at fault, except in cases eligible for exemption under the Civil Code.
The draft law requires labeling for: content generated or edited by AI that involves falsification, simulates real people or real events, and may cause viewers, listeners, or readers to mistake it for reality; and AI-generated content used for communication, advertising, propaganda, or publicly provided information.
Regarding incentives, the Government proposes various policies to support research, investment, and high-quality human resource training, and to enable enterprises, organizations, and individuals to participate in the development and application of AI.
The draft law also establishes a national AI ethics framework to ensure that AI systems are developed and used for people, do no harm, avoid bias, and uphold humanistic values.
Avoiding bottlenecks in AI R&D
Presenting the appraisal report, Chair of the NA's Committee for Science, Technology and Environment Nguyen Thanh Hai said the appraisal body agrees with the major policies in the draft law.
The appraisal committee recommends adding core principles to ensure data quality for AI, such as guaranteeing that data are accurate, sufficient, clean, up to date, and standardized for shared use, and establishing mechanisms for interconnected and shared data to avoid fragmentation and prevent bottlenecks in AI research and development.
The Committee for Science, Technology and Environment also recommends establishing mandatory principles for cybersecurity, data protection, and defense measures for national AI infrastructure to prevent risks such as system hijacking or data leakage.
According to the appraisal body, AI can perform actions and make errors comparable to those committed by humans. Meanwhile, legal responsibility for AI itself remains a complex and debated issue, making it difficult to define liability in the traditional sense. When incidents occur, disputes over administrative, civil, and criminal responsibility may arise.
The committee recommends adding principles to differentiate responsibilities among stakeholders, including foreign providers offering cross-border AI services, and to distinguish between intentional violations, unintentional violations, and errors caused by unpredictable technical limitations.
Du Lam