On November 21, Minister of Science and Technology Nguyen Manh Hung presented the draft Law on Artificial Intelligence to Vietnam's National Assembly. The draft law is guided by the twin goals of managing risks while fostering development and innovation in the sector.

According to Minister Hung, the law is designed to promote human-centered AI development and positions the state as a key actor in regulating, coordinating, and enabling the growth of artificial intelligence technologies.

The law proposes a risk-based regulatory framework to ensure that the development, application, and use of AI is safe, transparent, accountable, and controllable. It outlines four levels of risk: unacceptable, high, medium, and low.

Minister of Science and Technology Nguyen Manh Hung presents the draft to the National Assembly. Photo: National Assembly

AI providers must classify their systems before release and are responsible for the accuracy of the classification. For systems deemed medium or high risk, the provider must notify the Ministry of Science and Technology.

Systems classified as “unacceptable” will be banned from development, distribution, deployment, or use in any form. This includes systems used for legally prohibited activities, deceptive deepfakes, manipulation that causes serious harm, exploitation of vulnerable groups (such as children or the elderly), or the creation of falsified content that endangers national security.

Organizations or individuals violating the law may face disciplinary action, administrative penalties, or criminal prosecution. If harm is caused, compensation must be paid under civil law.

In severe cases, fines may reach 2% of the organization's revenue from the previous year. Repeat offenders could be fined up to 2% of their global revenue. The maximum administrative fine would be 2 billion VND (roughly $82,000) for organizations, and 1 billion VND (around $41,000) for individuals.

Importantly, the law defines harm caused by high-risk AI systems as harm from “sources of extreme danger,” meaning that providers and operators may be liable for compensation even in the absence of fault, except in cases covered by civil law exemptions.

A key provision requires labeling of AI-generated or AI-modified content that includes fabricated elements or simulated people or events that could mislead viewers into believing the content is real. This also applies to AI-generated content used in media, advertising, propaganda, or public information.

The draft law includes incentives to encourage research, investment, and high-quality workforce development. It also provides a national AI ethics framework to ensure that systems are developed and used for human benefit, without harm or bias, and in line with humanistic values.

No barriers to AI research

Presenting the review report on the draft law, Nguyen Thanh Hai, Chair of the Committee for Science, Technology, and Environment, expressed broad support for the draft's main policies.

She proposed adding core principles to ensure AI-related data is accurate, complete, transparent, updated in real time, and shared across systems. This would prevent data fragmentation and bottlenecks that hinder research and development.

The committee also emphasized the need for mandatory cybersecurity and data protection rules to defend national AI infrastructure from potential control breaches or data leaks.

Recognizing that AI can make the same mistakes as humans, Hai noted the legal complexities of assigning responsibility to AI, which lacks legal personhood. This could lead to disputes involving administrative, civil, or criminal liability.

The committee called for clarification of responsibility among the parties involved, particularly foreign providers offering cross-border AI services, and for differentiation between intentional violations, negligence, and technical limitations beyond foreseeable control.

Tran Thuong
