Legal Innovation Hub and LTEC Lab Present - AI in the Law Today: Challenges and Opportunity in Practice, Education, and Policy
Authored by: Daina Elias, Windsor Law Student & LTEC Lab Research Assistant
March 18, 2025

On February 13th, 2025, the Legal Innovation Hub and LTEC Lab came together at Windsor Law to host the panel “AI in the Law Today.” The event was introduced by Austin Ratos, Legal Innovation Hub VP and 3L Windsor Law student, and the panel comprised:
Abdi Aidid, Assistant Professor at the University of Toronto Faculty of Law and Visiting Associate Professor at Yale Law School;
Annette Demers, Reference Librarian at the University of Windsor Faculty of Law;
Sam Ip, Partner at Osler, Hoskin, and Harcourt LLP’s Technology Group; and
Dr. Pascale Chapdelaine, Associate Professor at the University of Windsor Faculty of Law and Director of LTEC Lab (Moderator).
The event, titled “AI in the Law Today: Challenges and Opportunity in Practice, Education, and Policy,” involved a dynamic discussion on the impact of artificial intelligence (“AI”), featuring experts from a leading law firm and academia. AI is reshaping legal practice and education, presenting ethical and regulatory challenges in urgent need of resolution. Thanks to the panelists’ thoughtful expertise, audience members were able to grasp the importance of AI literacy and the evolving role of lawyers in shaping AI policy and governance.
Researching and teaching in the areas of torts, civil procedure, privacy law, and data governance, Abdi Aidid brings a unique blend of legal and technological expertise to the conversation. For instance, in his most recent role as Vice President of Legal Research at Blue J, Abdi directed the creation of technology solutions for legal research, leveraging machine learning and analytics. A career bridging rule-of-law principles with cutting-edge AI, ensuring justice and efficiency in legal tech solutions, is nothing short of inspirational and, as Abdi demonstrates, doable. While Abdi remains skeptical about deploying AI in all contexts, he urges us to consider deploying it “responsibly” within our particular legal environment.
As a legal information professional, Annette Demers has kept a close eye on AI and its progression, especially in legal research and writing. After suggesting the creation of a working group on AI to the Canadian Association of Law Libraries, Annette co-led the team, which included librarians from courthouses, government bodies, and law firms. Together, they are developing a procurement checklist to ensure library managers acquire AI products responsibly. Additionally, Annette joined the Standards Council of Canada, Canada’s member body of the International Organization for Standardization (“ISO”), to observe international standards work with respect to AI. Her website and the Don & Gail Rodzik Law Library feature three useful guides: an overview of new market technologies, a summary of the regulatory landscape, and a guide to help students use AI tools appropriately.
As a young lawyer searching for a unique niche, Sam Ip found AI to be a novel field that resonated with him, resulting in his engagement with the subject long before chat-based systems emerged. Over time, he carved out a distinct practice at his firm, becoming a commercial lawyer deeply involved in regulatory work, including contributions to Bill C-27, part of which is known as the Artificial Intelligence and Data Act. Now leading Osler, Hoskin, and Harcourt LLP’s AI practice, with AI governance comprising roughly 50–60% of his focus, Sam guides businesses through ever-evolving challenges and aims to shape the legal perspectives of future practitioners.
Pascale Chapdelaine moderated the panel discussion. Her years of experience in corporate, commercial, and intellectual property law prior to joining the University of Windsor Faculty of Law in 2014 inspired her research at the intersection of law, technology, and society, including AI’s impact on e-commerce and copyright-protected works. Some of Pascale’s recent work examines how algorithmic business practices and personal data extraction lead to price personalization and other forms of discrimination in e-commerce, media content, and social media.
Regardless of one’s interest in AI, Abdi notes the difficulty law students and legal practitioners encounter in trying to define relevant terms. AI in general lacks a clear and complete definition, especially given the various competing ideas about what it is. For instance, while Bill C-27 and the E.U.’s AI Act both describe systems that generate outputs like predictions and recommendations, Bill C-27 emphasizes specific techniques for autonomous data processing, whereas the E.U.’s AI Act highlights adaptive design and variable levels of autonomy that influence environments. Given these definitional challenges, Abdi finds it helpful to examine historical techniques and current applications to understand the intended purposes and practical implications of AI developments.
According to Abdi, “generative AI” is envisioned to generate entirely new information through creative processes akin to human thought, while “machine learning” techniques, which constituted the first wave of AI tools, focus on unearthing patterns within vast amounts of existing data. These latter systems train algorithms on data sets too large or complex for manual analysis, effectively revealing insights that humans, constrained by our cognitive limitations, might otherwise miss or struggle to synthesize.
The discussion naturally turned to large language models (“LLMs”), where Abdi pointed to current efforts to restrict LLM outputs to vetted sources and to develop domain-specific models. Unfortunately, challenges persist, such as the risk of “hallucination,” that is, the production of content that appears convincing but is incorrect. Attentive users inevitably detect hallucinations, and so, as Pascale jokingly remarked, it is no wonder that ChatGPT is subject to continued scolding in our quest for the perfect answer.
Offering a law firm perspective, Sam mentioned AI’s two main uses: document preparation and legal research. AI can deliver faster outcomes, which challenges the traditional billable-hours model and reduces reliance on junior associates for routine tasks. Still, concerns are being raised across law practice about how associates are to develop their skills and maintain accountability as AI use increases. Legal advice must remain accurate and confidential. For this reason, law firms and clients are increasingly demanding disclosures about each other’s AI usage, underscoring what Sam refers to as the “need for strong AI governance” to mitigate risks while unlocking AI’s benefits.
Annette also offered a refreshing recap of the differences between the academic world and law firms in AI utilization, delving into legal services such as Blue J, Alexi, vLex, CiteRight, and the like. In turn, students in attendance could begin to grasp the current state of AI tools and legal publishers, understand the limitations of their offerings, and evaluate the extent to which vendors have been responsive to feedback from the user community, much like the work Annette does.

Bearing in mind AI’s potential to improve our legal institutions’ performance, efficiency, and costs, Pascale asked Abdi to summarize the main thesis of his book, The Legal Singularity: How Artificial Intelligence Can Make Law Radically Better (2023). In doing so, Abdi placed emphasis on the word “can,” highlighting that while AI has the potential to transform the legal field significantly, realizing these benefits requires concerted effort and caution, as the downsides are substantial. For this reason, he introduces the concept of “legal singularity”: a future state where legal uncertainty is nearly eliminated through on-demand, universally accessible law and advanced legal protection.
AI tools like ChatGPT have ultimately shown promising potential, particularly for self-represented litigants. By offering accessible legal services, AI could enhance the quality of legal representation for those who might otherwise lack it, gradually transforming the legal landscape and improving outcomes over time. Moreover, there is a growing sense of anticipation among legal practitioners and law students about the evolving role of AI. As these technologies become increasingly integrated into legal practice, they are expected not only to level the playing field for students with varying academic strengths but also to foster a more efficient, accessible, and innovative legal system.
Further situating future practitioners in the conversation, Annette pointed to Rule 3.1-2, commentary [4A], of the Law Society of Ontario’s Rules of Professional Conduct. This commentary highlights the necessity for lawyers to continuously update their technological competence, specifically in understanding and using AI legal tools, in order to maintain professional competence, fulfill their ethical obligation to protect client confidentiality, and navigate the complexities of modern legal practice. As the panelists expressed, what matters is basic knowledge that transfers across tools, allowing lawyers to adapt to any software or platform regardless of its interface. Abdi notes, “If we trained our students specifically on how to use Blue J, etc., we would be making bets on which ones are likely to have a long shelf life,” which, oftentimes, is not the case.
In discussing the ramifications of AI for legal frameworks and innovation economies, Sam highlighted the challenges posed by Canada’s first proposed federal AI legislation, Bill C-27, particularly its failure to differentiate between high- and low-risk AI applications, which initially hampered innovation. Sam emphasized the influence of the E.U. AI Act, noting its adoption as a standard by many Canadian clients and stressing the importance of students understanding such frameworks as Canadian businesses operate globally. The discussion also touched on the role of standards, like the ISO/IEC 42001 AI management system standard, which serves as a regulatory proxy and is designed to be digested quickly.
The seminar closed with a question period for audience members.

Austin Ratos commenced the Q&A by asking about lawyers’ duty of competence, especially when it comes to attributing liability when using AI. Specifically, Austin asked whether lawyers must investigate AI processes, such as the sources of data or the algorithms’ operations, to maintain their duty of competence. Sam responded that, in Ontario, a white paper on generative AI had led to consultations with law firms, including his own. He emphasized that AI, like any tool, does not alter the foundational rules of professional conduct; lawyers must still ensure the advice they provide is sound. The profession’s obligations, then, must remain “technology-agnostic,” which highlights the importance of understanding AI within that technology rubric.
Annette added a perspective on practice variations, explaining that the impact of AI would differ significantly between sole practitioners and large firms with dedicated technology departments. Large firms might have departments that focus on AI procurement, assessing algorithm transparency, bias, copyright, and the scope of the data sets used. In contrast, sole practitioners need to manage these aspects themselves, often relying on their local law associations for support and advice.
Transitioning smoothly, 1L Windsor Law Student Alex Kelly questioned the impact of AI on access to justice, particularly the concern that rising costs might limit AI access to wealthier firms. In response, Sam affirmed the disparity between large tech companies and smaller legal practices, noting the significant expenses associated with data necessary for training AI models. He suggested that government initiatives and local law associations could help mitigate these disparities by facilitating access to essential data and resources, thus supporting smaller practices in harnessing AI without bearing prohibitive costs.
Sam also shared insights from ongoing litigation and consultations about copyright, highlighting the complexities of managing AI’s impact on content creation and the potential biases that may arise without access to diverse data sets. Pascale added that the law around copyright infringement by generative AI systems remains murky; ultimately, future regulation will rest on political will. There have been groundbreaking developments in Europe, such as the Directive on Copyright in the Digital Single Market, particularly Article 17, which introduced a new communication-to-the-public right making content platforms such as YouTube liable for copyright infringement and creating an entirely new regime in that area.
All in all, the seminar underscored the pressing need for legal professionals to adapt and to integrate a comprehensive understanding of AI into their practice. We ought not merely to keep pace with technological advancements but to leverage these tools ethically and effectively in our legal work.
Windsor Law’s Legal Innovation Hub and LTEC Lab extend a heartfelt thank you to all the panelists and attendees for sharing their passion, expertise, and valuable insights into this vital and ever-evolving area. This dialogue is shaping the future of legal education and practice in ways we can only begin to imagine. For those who wish to revisit the discussions or could not attend, the recording of the event is available here.