The Standing Committee on Industry, Science and Technology is one of several House and Senate committees currently grappling with the legal, regulatory, and policy challenges and opportunities presented by AI. I appeared before the committee yesterday alongside Yoshua Bengio and Colin Bennett. Bengio unsurprisingly garnered the lion’s share of the questions, but the committee did give me the chance to highlight my thoughts on policy priorities and to address a few questions. I plan to post some reflections on the policy tensions in the coming days. In the meantime, the video and text of my opening statement are posted below.
Appearance before the House of Commons Standing Committee on Industry, Science and Technology, March 23, 2026
Good afternoon. My name is Michael Geist. I am a law professor at the University of Ottawa, where I hold the Canada Research Chair in Internet and E-commerce Law. I appear in a personal capacity representing only my own views.
I think we all recognize that we are at a moment when there is mounting pressure to do something quickly on AI regulation. That pressure is understandable but dangerous. I submit that we can’t simply fall back on “doing something.” The goal must be well-considered legal and regulatory frameworks that balance facilitating innovation with safeguards against potential risks and harms. And I have concerns that our initial efforts to find that balance have led to a haphazard amalgam of proposals that risk doing more harm than good. Let me provide four quick examples of where I have concerns, then shift to a few recommendations.
First, Bill C-27, the former privacy and AI bill, always felt like a rushed response to that pressure “to do something” on AI. It largely mirrored the EU approach, which has failed to find broad global support. Reviving it under a new name would repeat the same mistake and potentially undermine our AI competitiveness. The risk-based analysis may have a role to play in future legislation, but even some European countries, such as France, have slowly backed away from the EU AI Act.
Second, the recent push to add AI chatbots to online harms legislation is equally ill-conceived. Applying it would not merely extend those online safety rules to a new technology beyond the original social media focus. The Online Harms Act explicitly exempted private messaging from the regulatory regime, and it did not require services to engage in proactive monitoring. Extending the Act to AI chatbots would require gutting the very privacy protections the government added after its earlier proposals were widely criticized.
Third, calls for copyright reform to address the use of works in large language models are premature. In fact, we should consider adding a text-and-data mining exception to keep us competitive. Many copyright cases are working their way through the courts right now, which will produce legal guidance and market deals. Legislating too quickly risks locking in rules that don’t match the legal and market landscape.
Fourth, the emphasis on data or digital sovereignty often presents Canadian infrastructure as a solution to sovereignty concerns. Yet the real issue is whether Canadian laws apply to Canadian data, regardless of its location. The answer is that they often don’t. The push for domestic AI infrastructure looks like sovereignty, but if Canadian privacy laws don’t apply to how Canadian data is used, the servers could be in Gatineau and it wouldn’t matter.
So what to prioritize?
First, prioritize passing modernized privacy and data governance laws. There is consensus that the current law is badly outdated. Modernized privacy law would help establish much-needed safeguards for the use of AI data, fix weak privacy enforcement, and go a long way toward addressing data sovereignty concerns.
Second, introduce and pass an AI Transparency Act. The lack of transparency around AI systems is directly correlated with diminished public trust. The recent concerns about OpenAI and the Tumbler Ridge shooter are a case in point. It should not take a meeting with company executives for the Minister – or anyone else – to learn about a company’s policies on banning user accounts or reporting conduct to the police. An AI Transparency Act should do three things: (1) ensure AI company policies are publicly accessible, (2) mandate transparency about which works are included in large language models, giving creators the information they need to potentially seek content removals, and (3) require transparency reports on government and law enforcement efforts targeting users or content removals.
Third, as Professor Scassa noted to this committee, there are already many disparate guidelines and guidance documents on the use of AI. Existing laws also apply to AI as they do in other contexts. We need to dial down the rhetoric, avoid panic-driven policies, and provide Canadians and businesses with a clearer sense of what has been done and how the strategy fits together. That includes maintaining an emphasis on facilitating AI development by making datasets available, supporting training, and fostering private investment. And it should also include acting on consultations based on what the government actually hears from stakeholders, not on what it would like to hear. The recent reports on the expert and public responses to the AI 30-day sprint consultation did not fully reflect the responses.
Canada has a real opportunity here. We have AI talent, growing public attention to governance, and cross-party interest in getting this right. The worst thing we could do is waste that opportunity on the wrong legislation.