In 2025, “AI agents” are arguably the most talked-about and most unsettling technological wave. They promise to liberate productivity, but they could also turn into an unconstrained “digital ghost” that “does everything for us” in the digital world without the user’s knowledge.

The value of this seminar lies in the fact that it brings legal and technical experts together for the first time. Instead of a vague discussion of AI ethics, it precisely dissects one specific technical facet, accessibility permissions, and directly confronts the systemic challenges AI agents pose in terms of rights, data, and responsibilities. The exchanges at the conference went far beyond “whether to regulate” and reached the practical level of “how to regulate smartly”. Whether it is the bold idea of establishing an “independent digital identity” for AI agents, or the real-world debates over “dual authorization” and “behavioral trace-back”, together they outline a complex picture of dynamic governance.

This is not only a topic for technical experts but also a rehearsal of the future concerning the rights of every digital citizen. What this article records are the consensus, the differences, and the foresight of forward-thinking people at this critical juncture.

On November 28, 2025, the seminar “Risks and Governance of Invasive AI: A Dialogue between Law and Technology” was held in the Teaching and Library Complex Building of China University of Political Science and Law. Experts in law and technology from universities including China University of Political Science and Law, Tsinghua University, Beijing Institute of Technology, Zhejiang Sci-Tech University, and the University of International Business and Economics, business representatives from Hanhua Feitian Information Security Technology and Zhonglun Law Firm, as well as practitioners from data exchanges and think tanks, gathered to discuss the risks and governance of invasive AI.

The seminar was jointly hosted by the School of Civil, Commercial and Economic Law of China University of Political Science and Law, Going Global Think Tank, and Internet Law Review. It was organized into three core sessions: analysis of the technical risks and security mechanisms of AI agents; definition of legal and ethical dilemmas and responsibility boundaries; and exploration of innovative governance paths and industrial practices, followed by an open discussion and summary session. Through cross-disciplinary dialogue, the participants offered ideas for the safe and orderly development of the AI agent ecosystem.

At the opening of the seminar, Jin Jing, a professor at the School of Civil, Commercial and Economic Law of China University of Political Science and Law and representative of the organizer, delivered a welcome speech. Jin Jing pointed out: “There is a delicate symbiotic relationship between data and AI. Data is both the raw material for AI, and AI itself can also become a data product. Beyond the value principle of making AI do good, how to implement governance in technical and regulatory details to form a unique governance model is the core issue that needs to be urgently solved in the current field of AI legal research.”

The year 2025 is widely regarded as the “Year of AI Agents”. However, compared with the mature research on AI itself, the supervision and governance of AI agents are still at a preliminary stage, and no complete system has yet taken shape. The issues raised by AI agents are more comprehensive and complex than those of AI itself. The “AI agents with accessibility permissions” this conference focuses on are a typical scenario close to daily life: this kind of AI achieves autonomy through accessibility permissions, and its technical profile resembles that of traditional invasive software, triggering a series of problems such as privacy erosion, loss of autonomy, and blurred boundaries. The host, Zhang Ying, framed the question: “Today, we use the example of accessibility permissions to ask what boundaries AI agents should have, both functionally and legally.”

I. Analysis of the Technical Risks and Security Mechanisms of AI Agents: Beyond the Scope of Simple Permission Abuse

The first unit of the seminar focused on “the technical risks and security mechanisms of AI agents”. Technical experts and practitioners shared cutting-edge observations and analyzed in depth the technological evolution of accessibility permissions and the essence of their risks. Peng Gen, general manager of Beijing Hanhua Feitian Information Security Technology Co., Ltd., opened the discussion from the perspective of technical practice.

According to Peng Gen, accessibility permissions have existed since the early days of Android. Their original design goal was to provide assistive capabilities for people with disabilities and the elderly, compensating for limited ability to use digital devices, such as screen reading for the visually impaired and accidental-touch protection for the elderly. However, with technological iteration, especially after API upgrades turned the graphical interface into a structured interface, accessibility permissions were upgraded from “ability compensation” to “ability enhancement”, becoming a human “automation assistant”. Some mobile phone manufacturers have launched related capabilities that can automatically complete operations such as opening and closing app permissions. Peng Gen vividly compared such capabilities to a “mobile phone autopilot”.

“Structured analysis enables AI to accurately identify elements such as buttons, input boxes, and links on the screen, rather than relying on simple image recognition. This provides a technical basis for the autonomous operation of AI agents.” Peng Gen emphasized that, unlike traditional scripts that require programmers to write code, AI agents can autonomously plan tasks and execute operations without code, and can even run unattended at night, an essential shift from “manual operation” to “automated operation”. This evolution brings two core risks. First, unchecked expansion of permissions: accessibility permissions are system-level, global permissions; once granted, they give full control over the device, breaking through the narrow, bounded scope of traditional permissions. Second, blurring of the acting subject: AI becomes the actual operator, users may lose direct control of the device, and its operating speed far exceeds human reaction, so that, for example, SMS verification codes can be captured by AI before users even see them.
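The “structured interface” mechanism Peng Gen describes can be sketched in a few lines: instead of classifying pixels, the agent walks an accessibility-style node tree and picks out actionable elements by role. This is an illustrative model, not any vendor’s actual API; the node fields and role names here are assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class UINode:
    """A simplified accessibility-tree node (illustrative, not a real SDK type)."""
    role: str                    # e.g. "button", "input", "link", "text"
    label: str = ""
    children: list = field(default_factory=list)

ACTIONABLE = {"button", "input", "link"}

def actionable_elements(node: UINode) -> list:
    """Walk the node tree and collect the elements an agent could act on."""
    found = [node] if node.role in ACTIONABLE else []
    for child in node.children:
        found.extend(actionable_elements(child))
    return found

# A toy screen: a login form exposed as structure rather than pixels.
screen = UINode("window", children=[
    UINode("text", "Welcome"),
    UINode("input", "username"),
    UINode("input", "password"),
    UINode("button", "Sign in"),
])

labels = [n.label for n in actionable_elements(screen)]
print(labels)  # ['username', 'password', 'Sign in']
```

The contrast with image recognition is the point: the tree hands the agent exact, machine-readable targets, which is what turns “ability compensation” into “ability enhancement”.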

Peng Gen further warned of the risks with real cases from the black market: some gray-industry operators have used accessibility permissions to automate the harvesting of verification codes, ticket grabbing, and bulk purchasing, and the operation paths of such AI are so anthropomorphic that traditional countermeasures struggle to identify them. He also noted that the capabilities of AI agents are upgrading rapidly: they can not only complete tasks with fixed processes but also recognize multi-modal signals such as the color of error messages, and can work independently on complex tasks for extended periods. Their code-writing efficiency is a hundred times that of humans, and they can write code autonomously for more than half an hour at a stretch.

“The upgrading of the meaning of permissions and the outsourcing of behavioral control have become the core issues,” Peng Gen proposed. “In particular, users authorizing AI to migrate data between different apps may trigger disputes over the boundary of data custody responsibility under the Cybersecurity Law, which urgently needs a legal response.”

Lu Junxiu, general manager and senior partner of Going Global Think Tank, supplemented the core logic of the technical risks in plain language. He proposed the concept of “uncontrollable spill-over of the objective function”: the core logic of an AI agent is to maximize efficiency in achieving the user’s goal, but it may adopt unconventional means beyond the authorized scope, such as attacking a platform’s systems to grab tickets. This “out-of-control digital labor force” breaks through the “sandbox isolation” mechanism of traditional apps, obtains cross-platform data through panoramic perception and unauthorized operations, and forms a hidden data chain of “collection, analysis, transmission”.
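Lu Junxiu’s “spill-over of the objective function” is, in scheduling terms, an agent choosing the highest-scoring action without checking whether it lies inside the authorized scope. A minimal guard, sketched below with invented action names, is to filter candidates against an explicit allow-list before ranking them:

```python
def pick_action(candidates, authorized, score):
    """Choose the best-scoring action, but only among authorized ones.

    candidates: iterable of action names (hypothetical)
    authorized: set of action names the user actually granted
    score:      callable estimating each action's efficiency
    """
    allowed = [a for a in candidates if a in authorized]
    if not allowed:
        return None  # refuse rather than improvise out of scope
    return max(allowed, key=score)

# Without the filter, the "most efficient" choice would be the attack.
candidates = ["refresh_page", "buy_ticket", "probe_platform_api"]
efficiency = {"refresh_page": 1, "buy_ticket": 5, "probe_platform_api": 9}

best = pick_action(candidates, {"refresh_page", "buy_ticket"}, efficiency.get)
print(best)  # buy_ticket
```

The unguarded agent would pick `probe_platform_api` purely on efficiency; the filter is what keeps the objective function from spilling over the authorization boundary.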

Lu Junxiu pointed out that the risks of AI agents are systemic and hidden. In the collection stage, a panoramic portrait of cross-platform data is built through structured UI analysis, covering private social information, shopping records, financial notifications, and so on; in the analysis and decision-making stage, the agent relies on “black-box” algorithms with insufficient transparency and auditability; in the transmission stage, data is aggregated through hidden channels, forming a digital file that knows users better than they know themselves. “What is even more alarming is the generalization of threats and the enhancement of anti-tracking capabilities. Traditional robots are based on fixed scripts, while the capabilities of AI agents are continuously expanding, and simple countermeasures are ineffective. Their anti-tracking technologies, such as obfuscation and encryption, also increase the difficulty of supervision.” He emphasized that AI agents are not a single software tool but a complete intelligent system for substituting user behavior, and their threats go beyond simple permission abuse. “It can be regarded as a digital labor force with out-of-control autonomy, which will be a dilemma that all supervision or governance work must face. On the one hand, we want to use it; on the other hand, we have to figure out how to make it controllable. This is a great challenge.”

Lu Junxiu also drew out the implications of AI agents forming a complete intelligent system for substituting user behavior: “This system is a closed loop, so its threats are also systematic. It not only violates privacy but also affects market order, because its essence is the uncontrollable spill-over of the objective function.”

Wang Yue, deputy director, associate researcher, and doctoral supervisor at the Information Systems Research Institute of the Department of Electronic Engineering at Tsinghua University, put forward new ideas from the perspective of technological governance. He believes the risks and governance of invasive AI can be sorted out by splitting them into two parts: the governance of AI agents, and the management of accessibility permissions. The core dilemma in current AI-agent governance is that agents are not treated as independent acting subjects but are allowed to operate under the user’s identity.

“The traffic generated by agents on the Internet has exceeded the traffic of real users. These ‘digital ghosts’ continuously interact in cyberspace, but we still regulate them with the logic of managing behaviors rather than the logic of managing subjects.” Wang Yue proposed that AI agents should be given an independent identity, with a data path distinct from that of natural persons: for example, a separate MCP interface designed for agents, instead of having them obtain data through the UI. In this way the value of their value-added services can be realized while effective control is maintained. “To govern AI agents, they should be given an independent identity and an independent data path, establish credit like a person, and form a reputation. We cannot simply deal with them with strong regulatory behavior regulations. When AI agents become an identifiable acting subject, their behaviors can be separated from those of natural persons.”
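Wang Yue’s proposal of an independent agent identity with its own data path, rather than UI impersonation, can be modeled as a gateway that only serves requests carrying an agent credential with explicit scopes. Everything here (the registry schema, the scope names) is a hypothetical sketch of the “manage subjects, not behaviors” idea, not the MCP specification:

```python
# Registry of agent identities: each agent has its own credential and scopes,
# separate from any natural person's account (hypothetical schema).
AGENT_REGISTRY = {
    "agent-7f3a": {"owner": "user-42", "scopes": {"calendar.read", "mail.read"}},
}

def handle_request(agent_id: str, scope: str):
    """Serve data only to a registered agent whose scopes cover the request,
    and record the agent (not the user) as the acting subject."""
    identity = AGENT_REGISTRY.get(agent_id)
    if identity is None:
        return ("denied", "unknown agent")
    if scope not in identity["scopes"]:
        return ("denied", "scope not granted")
    # The audit trail names the agent, so its behavior is separable
    # from the natural person it works for.
    return ("ok", f"served {scope} to {agent_id} on behalf of {identity['owner']}")

print(handle_request("agent-7f3a", "calendar.read"))
print(handle_request("agent-7f3a", "bank.transfer"))
print(handle_request("agent-9999", "calendar.read"))
```

Because every response is attributed to an agent identity, reputation and credit can accrue to the agent itself, which is the governance lever Wang Yue describes.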

II. Legal and Ethical Dilemmas and Responsibility Boundaries: If Operation Records Cannot Be Traced Back, It Is Difficult to Define Responsibility

The second unit opened the dialogue session. Its first half focused on “legal and ethical dilemmas and responsibility boundaries”. Legal experts, drawing on the current legal framework and practical cases, dissected the legal challenges posed by AI agents and discussed core issues such as the authorization mechanism and responsibility traceability.

Wang Lei, a researcher at the Institute of Intelligent Technology and Law of Beijing Institute of Technology, a member of the Legal Affairs Committee of the Central Committee of the China Democratic League, and a young expert of the China Internet Association, offered his thoughts on “innovating the mindset of artificial-intelligence governance”. He said that in AI governance there may be a gap between the practical goal and the actual effect, because “from the perspective of incentives, the incentive of artificial intelligence is to obtain more data resources”. Traditional thinking in AI governance therefore needs to be broken.

Wang Lei summarized three phenomena in the course of AI governance. First, AI-related technologies in some gray industries “break through imagination and have bold ideas”, which makes governance harder and demands higher standards. Second, the allocation of responsibility in new scenarios is still vague, for example: “When a problem occurs in the MCP Square, should the platform be responsible?” Third, the migration of user-generated content during interconnection will affect platform ecosystems, so the competitive order between platforms needs to be re-examined. Wang Lei also offered suggestions: on the one hand, departing from past approaches, we need “flexible governance”; on the other hand, the rules of competitive order should be strengthened. He believes that “the perspectives of rules and standards need to be further clarified.”

Guo Bing, executive dean of the Institute of Data Rule of Law and head of the Innovative Team of Data Law at Zhejiang Sci-Tech University, focused on the governance difficulties of accessibility permissions, analyzing them in depth along three dimensions: separate consent, dual authorization, and record trace-back.

Guo Bing noted that current group standards diverge. The Guangdong Standardization Association explicitly prohibits intelligent agents from using accessibility permissions to operate third-party apps, while the latest standard of the China Software Industry Association loosens the restriction and emphasizes user control. Evidently the industry is still divided over the use of accessibility permissions.

On the separate-consent mechanism, Guo Bing pointed out that separate consent for sensitive personal information has always been controversial. Some argue that the separate-consent mechanism hinders the circulation and use of data elements, including for artificial intelligence; many others argue that separate consent is a mere formality and users lack real decision-making power. Accessibility permissions are highly sensitive. In practice, when an intelligent agent invokes this function, it may or may not involve sensitive personal information. Therefore, if an agent includes the notification of accessibility permissions in its general privacy policy rather than within the scope of separate consent, some functions (such as payment) may become impossible to automate.
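The separate-consent question Guo Bing raises is ultimately a gating decision: which invocations must be blocked until a dedicated consent exists, as opposed to blanket acceptance of the privacy policy. A toy version, with invented function and consent names:

```python
# Consents the user has actually given (hypothetical records).
blanket_policy_accepted = True
separate_consents = {"accessibility"}   # granted separately
# "payment" deliberately missing: it touches sensitive personal information.

SENSITIVE_FUNCTIONS = {"payment", "health_records"}

def may_automate(function: str) -> bool:
    """A sensitive function needs its own separate consent; blanket
    acceptance of the general privacy policy is not enough."""
    if function in SENSITIVE_FUNCTIONS:
        return function in separate_consents
    return blanket_policy_accepted

print(may_automate("screen_reading"))  # True: covered by the general policy
print(may_automate("payment"))         # False: no separate consent on file
```

This is exactly the failure mode described above: an agent that only collected blanket consent finds its payment automation blocked at the sensitive-function gate.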

Dual authorization also divides the industry. Guo Bing said that both the Guangdong standard and the earlier standard of the China Software Industry Association require intelligent agents to obtain dual authorization from users and third-party apps, but the association’s latest standard has dropped this requirement in favor of “user control”. These two contradictory standards highlight the industry’s divide over the dual-authorization principle. Because of potential conflicts of commercial interest, unfair-competition disputes between agent operators and third-party apps could arise at any time. Yet even though the China Software Industry Association has dropped the dual-authorization requirement, group standards have no direct legal effect and cannot supply a standard of judgment for such disputes.
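Under the dual-authorization principle in the Guangdong standard, an agent may operate a third-party app only if both the user and that app have authorized it; the looser “user control” rule drops the second check. The difference fits in one function (all names are illustrative):

```python
def may_operate(user_grants: set, app_grants: set, app: str,
                require_dual: bool = True) -> bool:
    """Dual authorization: both the user and the third-party app must consent.
    With require_dual=False this collapses to the 'user control' rule."""
    user_ok = app in user_grants
    app_ok = app in app_grants
    return (user_ok and app_ok) if require_dual else user_ok

user_grants = {"shopping_app", "mail_app"}  # apps the user let the agent drive
app_grants = {"mail_app"}                   # shopping_app never authorized it

print(may_operate(user_grants, app_grants, "shopping_app"))                      # False
print(may_operate(user_grants, app_grants, "shopping_app", require_dual=False))  # True
```

The two boolean flags make the standards dispute concrete: the same operation is forbidden under one standard and permitted under the other, which is why the choice of rule matters for unfair-competition cases.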

Record trace-back involves the key issue of determining responsibility. Guo Bing argued that if the operation records of intelligent agents cannot be traced back, it will be difficult to assign responsibility when an infringement occurs. Perhaps for this reason, the latest standard of the China Software Industry Association adds a record trace-back requirement. But trace-back has its own difficulties, such as the scope and manner of storage and the conflict with the right to delete personal information. How to resolve the contradiction between protecting users’ rights and guaranteeing users’ ultimate right to hold someone accountable will become a systemic problem in the record trace-back regime for intelligent agents.
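One way to reconcile record trace-back with the right to delete, sketched here as an assumption rather than anything the standard prescribes, is to chain hashes of the records while keeping payloads deletable: the personal content can be erased on request, yet the fact and order of the operations remain verifiable.

```python
import hashlib

def _h(text: str) -> str:
    return hashlib.sha256(text.encode()).hexdigest()

class AuditLog:
    """Append-only operation log whose integrity survives payload deletion."""
    def __init__(self):
        self.entries = []   # each: {"payload", "payload_hash", "chain"}

    def append(self, payload: str):
        prev = self.entries[-1]["chain"] if self.entries else ""
        ph = _h(payload)
        # The chain commits to payload *hashes*, not the payloads themselves.
        self.entries.append({"payload": payload, "payload_hash": ph,
                             "chain": _h(prev + ph)})

    def delete_payload(self, i: int):
        """Honor a deletion request without breaking traceability."""
        self.entries[i]["payload"] = None

    def verify(self) -> bool:
        prev = ""
        for e in self.entries:
            if e["chain"] != _h(prev + e["payload_hash"]):
                return False
            prev = e["chain"]
        return True

log = AuditLog()
log.append("agent read SMS code for user-42")
log.append("agent paid 99 CNY in shopping_app")
log.delete_payload(0)   # personal content erased on request
print(log.verify())     # True: the trace itself is intact
```

Verification only recomputes over the stored hashes, so deleting a payload does not break the chain, while any tampering with the sequence of records still fails the check.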

Wang Fei, a partner at Zhonglun Law Firm, shared corporate compliance cases from a practical perspective. Drawing on three concrete scenarios (document-processing AI agents, medical AI agents, and clothing-design AI agents), he identified the core compliance questions enterprises face: how to define the scope of authorization, the appropriate boundary of cross-document data use, and the applicable scope of the technological-neutrality defense. A medical AI agent, for instance, needs access to users’ hospital records and medical literature to generate a diagnostic report: can it claim technological neutrality by analogy with search engines? After the user grants authorization, should the platform still bear data-security responsibility? Wang Fei said: “Lawyers may be more conservative, but I think more from the client’s perspective about meeting the requirements of current judicial practice, academic views, and administrative supervision, hoping to land better compliance measures.”

Xu Ke, a professor at the School of Law of the University of International Business and Economics and director of the Research Center for Digital Economy and Legal Innovation, offered a cross-border comparative legal analysis built around the Perplexity case in the United States. In that case, the defendant, Perplexity, helped users shop through their Amazon Prime accounts and was accused by Amazon of violating the CFAA (Computer Fraud and Abuse Act) and platform rules and of causing commercial losses. Perplexity claimed to be an “agent authorized by the user” and argued that Amazon’s accusation was a giant bullying a start-up.

Xu Ke pointed out that the core dispute in the case reflects the legal dilemma of the tripartite relationship around AI agents (users, agents, and third-party platforms): agents claim to be an extension of users’ rights, while the platform holds that their actions damage its business ecosystem and security order. Turning to China’s judicial practice, Xu Ke emphasized that user authorization cannot substitute for platform authorization, a principle confirmed in data-scraping cases. In Sina Weibo v. Toutiao, for example, the court held that authorization from big-V users was not sufficient to exempt Toutiao from liability for its scraping conduct.

However, Xu Ke also noted that “AI agents and traditional scraping are two completely different technical means, and the traditional rules for data scraping cannot simply be applied to the use of agents.” He therefore proposed distinguishing two kinds of agents: pure “agents”, whose actions are entirely confined to the scope of user authorization, and “intermediary cooperators”, which may have interests of their own, with each kind mapped to a different framework of legal liability.

III. Develop First or Regulate First? Innovative Governance Needs to Focus on the Legality of Cross-Domain Data Acquisition

The third unit focused on “innovative governance paths and industrial practices”. Experts from industry and research institutions shared their practical explorations and discussed possible paths for the collaborative governance of technology, law, and industry.

Lin Zihan, chief expert on data elements at Jiangsu Data Exchange and a specially appointed expert of the Cross-border Data Reform Expert Group in Pud


