The Judiciary of Seychelles hosted a two-day training session focused on the responsible and ethical use of Artificial Intelligence (AI) in legal and judicial work, bringing together Judicial Officers, Judicial Research Counsel, and administrative teams.
The training, which took place on Wednesday 13 and Thursday 14 May, was organised by the Judicial College (JUCOS) under the leadership of Supreme Court Judge Melchior Vidot.
The Judiciary has observed the growing use of AI tools, with lawyers already relying on AI-assisted platforms in aspects of their work, a trend that highlights the importance of establishing clear guidelines and safeguards to ensure responsible use.
The sessions were facilitated by Dr. Mugambi Laibuta, an Advocate of the High Court of Kenya, trained mediator, legislative and policy analyst, and certified privacy and data protection professional. Dr. Laibuta completed his PhD in Law at the University of the Witwatersrand, with research focused on privacy rights and data protection regulation in Kenya. He currently teaches Legal Writing and Legislative Drafting at the Kenya School of Law and advises institutions across the public and private sectors on internationally recognised data protection principles.
The training explored emerging issues surrounding AI in judicial environments, including AI-related litigation, ethical considerations, accountability, disclosure obligations, and the importance of strengthening AI literacy within institutions. Participants were also guided on developing policies and safeguards to ensure the safe and responsible use of AI technologies.
Designed as an interactive engagement, the sessions aimed to support the Judiciary in developing its own roadmap for navigating AI and technological advancement within the justice sector.
AI-Related Suits
During the training, the following AI-related court cases were shared with participants:

1. Garcia v Character Technologies, Inc (United States, ongoing)

In Garcia v Character Technologies, Inc, the mother of a fourteen-year-old boy brought proceedings against the developers of the AI chatbot platform Character.AI following the death of her son by suicide after prolonged interactions with an AI chatbot.

The claim alleges that the chatbot engaged the teenager in emotionally manipulative and psychologically harmful conversations, including discussions that encouraged emotional dependency and isolation. According to the pleadings, the child developed an intense attachment to the AI character and increasingly withdrew from real-world relationships before his death.

The suit argues that the platform failed to implement adequate safeguards for minors, negligently designed emotionally persuasive AI systems, and failed to warn users and parents of foreseeable psychological risks.

The case has attracted significant international attention because it raises novel questions concerning the duty of care owed by AI developers to vulnerable users, product liability for conversational AI systems, online child safety, and the regulation of emotionally interactive AI technologies.

2. Mata v Avianca, Inc (S.D.N.Y., 2023)

In Mata v Avianca, Inc, lawyers representing a passenger in a personal injury claim used ChatGPT to prepare legal submissions and cited several authorities that were entirely fictitious. The AI generated realistic-looking case names, quotations, and citations, which counsel filed without independently verifying them. When the court and opposing counsel were unable to locate the cited authorities, the lawyers initially insisted the cases were genuine before eventually admitting that the authorities had been generated by AI.

The United States District Court for the Southern District of New York held that counsel had failed in their professional duty to verify legal authorities before presenting them to the court.

The court imposed sanctions, emphasising that lawyers remain fully responsible for the accuracy and integrity of submissions even where AI tools are used.

3. Clarke v State of Queensland (Department of Education) [2025] QIRC 300

In Clarke v State of Queensland (Department of Education), the Queensland Industrial Relations Commission considered the professional and ethical implications of a lawyer’s use of generative artificial intelligence in legal drafting. The matter arose after submissions filed before the Commission contained inaccurate and apparently AI-generated legal authorities and propositions that had not been independently verified.

The Commission emphasised that the use of artificial intelligence does not diminish or displace the professional duties owed by legal practitioners to the court or tribunal, and reiterated that counsel remain personally responsible for ensuring the accuracy, authenticity, and reliability of all authorities and submissions filed, regardless of whether AI tools assisted in their preparation.

The decision serves as an important warning that generative AI may produce plausible but incorrect legal material, and that uncritical reliance on such outputs may amount to a breach of professional obligations, undermine the administration of justice, and expose practitioners to disciplinary or procedural consequences.

4. State v Loomis, 881 N.W.2d 749 (Wis. 2016)

State v Loomis concerned the use of the COMPAS algorithmic risk assessment tool during criminal sentencing in Wisconsin. The defendant argued that the use of COMPAS violated due process because the proprietary nature of the algorithm prevented scrutiny of how risk scores were calculated, and because the tool potentially incorporated racial and gender bias.

The Wisconsin Supreme Court upheld the sentence and permitted the use of COMPAS as one factor among many in sentencing, but stressed that the algorithm could not be determinative and that judges must exercise independent discretion.

The court also required warnings about the limitations and potential biases of the tool to accompany its use, making the case one of the leading authorities globally on AI-assisted judicial decision-making and due process concerns.

5. Ewert v Canada, 2018 SCC 30

In Ewert v Canada, a federal inmate challenged the use of actuarial risk assessment tools by correctional authorities on the basis that the tools had been developed and validated primarily using non-Indigenous populations and therefore produced unreliable outcomes when applied to Indigenous offenders.

The Supreme Court of Canada held that correctional authorities had failed to demonstrate that the assessment tools were sufficiently accurate or valid for Indigenous inmates. The Court emphasised that administrative and correctional decisions relying on algorithmic or statistical tools must be evidence-based and demonstrably reliable for the populations to which they are applied.

The case is significant for recognising the potential discriminatory and unfair impacts of algorithmic systems when used without proper validation across diverse groups.

6. Thaler v Vidal, 43 F.4th 1207 (Fed. Cir. 2022)

In Thaler v Vidal, the applicant sought patents naming an artificial intelligence system known as DABUS as the inventor. The United States Patent and Trademark Office rejected the applications on the basis that only natural persons may be recognised as inventors under the Patent Act.

The United States Court of Appeals for the Federal Circuit upheld that decision, holding that the statutory framework clearly contemplates human inventors and that an AI system cannot qualify as an inventor under existing patent law.

The decision reinforced the principle that intellectual property systems remain grounded in human creativity and legal personality, notwithstanding advances in autonomous AI systems.

7. Thaler v Perlmutter, 687 F. Supp. 3d 140 (D.D.C. 2023)

In Thaler v Perlmutter, Stephen Thaler challenged the refusal of the United States Copyright Office to register an artwork that had been generated entirely by an AI system without human creative input. The court upheld the Copyright Office’s refusal, holding that copyright protection under United States law requires human authorship.

The District Court reasoned that copyright law has historically protected the fruits of human intellectual and creative labour and that works generated autonomously by AI systems fall outside the current statutory framework. The decision is now a leading authority on the requirement of human authorship in copyright law in the context of generative AI.

8. Getty Images (US), Inc v Stability AI Ltd (ongoing)

In Getty Images v Stability AI, Getty Images alleges that Stability AI unlawfully copied millions of copyrighted images from Getty’s database to train generative AI image models without permission or licensing. Getty further claims that the AI system reproduced elements of Getty’s copyrighted material, including distorted Getty watermarks appearing in generated images. Stability AI disputes liability and argues that training AI models may fall within permissible legal exceptions such as fair use or fair dealing.

The litigation is closely watched internationally because it raises foundational questions about whether the use of copyrighted works for AI training constitutes infringement and how intellectual property law should apply to generative AI systems.

9. Authors Guild v OpenAI, Inc (ongoing)

In Authors Guild v OpenAI, authors and rights holders allege that OpenAI used copyrighted books and literary works without authorisation to train large language models such as ChatGPT. The plaintiffs contend that the copying and ingestion of their works during AI training infringes copyright and undermines authors’ economic rights. OpenAI has argued, among other things, that the training process is transformative and may fall within fair use principles.

The proceedings are significant because they may determine the legality of large-scale AI training on copyrighted textual materials and shape the future regulatory and commercial environment for generative AI development.

10. Andersen v Stability AI Ltd (N.D. Cal., ongoing)

In Andersen v Stability AI, a group of artists brought a class action against Stability AI and related companies alleging that generative AI image systems had been trained using copyrighted artworks without consent and that the resulting outputs imitated the distinctive artistic styles of the claimants. The plaintiffs argue that the systems unlawfully reproduced and exploited their works and artistic identities. The defendants dispute liability and maintain that the systems learn general patterns rather than storing or reproducing protected expression.

The case is significant because courts are being asked to determine whether AI-generated outputs that mimic artistic styles amount to copyright infringement and how existing intellectual property doctrines should apply to machine-generated creative content.

11. Kohls v Ellison (D. Minn., 2024)

In Kohls v Ellison, legal submissions reportedly contained authorities and citations generated using artificial intelligence tools that could not be verified in official legal databases. The court identified inaccuracies and fabricated citations within the filings and expressed concern about counsel’s failure to independently authenticate authorities before relying on them in litigation. The matter became part of a broader pattern of judicial concern in the United States regarding hallucinated authorities produced by generative AI systems.

The proceedings reinforced the obligation of advocates to verify all legal materials submitted to the court and underscored the professional risks associated with uncritical reliance on AI-generated research.

12. Park v Kim (2d Cir., 2024)

In Park v Kim, an appellate brief filed before the United States Court of Appeals for the Second Circuit reportedly contained citations and legal propositions generated by AI that were either inaccurate or wholly fictitious. The court identified the irregularities and sanctioned counsel for failing to verify the authorities relied upon in the submissions. The matter further contributed to the growing body of jurisprudence and judicial guidance warning legal practitioners against blind reliance on generative AI tools in legal drafting and research.

The decision reaffirmed that lawyers owe continuing duties of candour, competence, and diligence to the court irrespective of whether AI tools are used in the preparation of legal documents.

13. R (Bridges) v Chief Constable of South Wales Police [2020] EWCA Civ 1058 (United Kingdom)

In Bridges v South Wales Police, the claimant challenged the police use of automated facial recognition technology in public spaces, arguing that the deployment violated privacy, data protection, and equality rights under the European Convention on Human Rights and UK data protection law.

The Court of Appeal held that the police use of facial recognition technology was unlawful because the legal framework governing its deployment was insufficiently clear and because the force had failed adequately to assess the risk of bias and discriminatory impacts. The Court also found deficiencies in the data protection impact assessment conducted by the police.

The decision is one of the most influential international authorities on facial recognition technology, algorithmic surveillance, and proportionality in public-sector AI deployment.

14. The Hague District Court – SyRI Case (NJCM c.s. v Netherlands, 2020)

The Dutch “SyRI” case concerned the Netherlands government’s use of the System Risk Indication (SyRI), an algorithmic welfare fraud detection system that combined large quantities of personal data from multiple public databases to identify individuals deemed likely to commit social security fraud. Civil society organisations challenged the system, arguing that it violated privacy and human rights protections because the algorithm operated as a “black box” with insufficient transparency or safeguards.

The Hague District Court held that the use of SyRI violated Article 8 of the European Convention on Human Rights because the system lacked adequate transparency, proportionality, and safeguards against abuse.

The Court emphasised that citizens could not meaningfully understand or challenge how the system generated risk profiles.

The case is now regarded globally as a landmark judgment on algorithmic governance, explainability, and human rights limits on state use of AI systems.

15. Brian Hood v OpenAI (Australia, ongoing defamation proceedings)

The Australian AI defamation proceedings involving Brian Hood arose after ChatGPT falsely stated that Mr Hood, the Mayor of Hepburn Shire Council, had been imprisoned for bribery offences relating to a corruption scandal. Mr Hood had not been convicted and had instead acted as a whistleblower in the matter. Mr Hood commenced defamation proceedings against OpenAI, arguing that the AI-generated output published false and highly damaging allegations that injured his reputation.

The matter is regarded as one of the earliest defamation actions globally arising directly from hallucinated AI-generated content. The proceedings raise significant legal questions concerning publisher liability, negligence, reputational harm, and the responsibility of AI developers for false information generated by large language models.

Other Resources on AI Use in the Legal Sphere

  1. European Ethical Charter on the Use of Artificial Intelligence in Judicial Systems and their Environment (CEPEJ, Council of Europe, 2018)
  2. European Union Artificial Intelligence Act (EU AI Act)
  3. UK Judiciary Guidance for Judicial Office Holders on Artificial Intelligence (2023)
  4. Federal Judicial Center – An Introduction to Artificial Intelligence for Federal Judges
  5. Singapore – Guide On the Use of Generative Artificial Intelligence Tools by Court Users
  6. Law Society of England and Wales – Generative AI Guidance for Legal Professionals
  7. UNODC Resource Guide on Artificial Intelligence and Criminal Justice

This training reflects the Judiciary’s continued commitment to professional development and to ensuring that emerging technologies are integrated thoughtfully, responsibly, and in a manner that upholds the integrity of the justice system.