
On the radio the other day, the host mentioned an explosion of AI-generated music hitting streaming platforms and a growing chorus of artists who feel the ground shifting under their feet. It isn't just a vibe; the numbers are moving. In April 2025, Deezer estimated that around 18% of the tracks uploaded to its platform each day were AI-generated. The company also flagged a rise in "streaming farms" and impersonation, issues that can distort royalty payments at scale.
The core tension is simple: models learn from vast corpora of human work. That training enables astonishing new tools for composition, voice cloning and production. But it also raises a hard question: who gets to copy, who gets paid, and who gets credit?
AI, Creativity, and Intellectual Property: The First Front Line
The creative sector has become the canary in the coal mine for the coming clash between human originality and machine production.
- Authors vs Salesforce: A new lawsuit accuses the company of using copyrighted books to train its xGen models without permission.
- Anthropic's $1.5 billion settlement: Earlier this year, the firm resolved claims that its Claude model was trained on pirated works.
- AI-generated music and impersonation: Viral tracks imitating artists like Drake and The Weeknd forced streaming platforms to purge millions of songs and rewrite their policies on synthetic media.
These cases show what happens when innovation outruns the law. Artists are demanding new forms of ownership, transparency, and credit for data-derived creations.
AI in the Legal Profession: When the Law Meets Its Mirror
AI's impact on the legal industry is already visible. Many firms use AI for document review, legal research, contract drafting, and internal knowledge management.
- A Thomson Reuters survey found that nearly three-quarters of lawyers now use AI for research, review, or summarisation.
- Firms like A&O Shearman and Clifford Chance have deployed AI copilots such as Harvey to streamline drafting and analysis.
Yet the promise of speed comes with risk. A UK barrister was recently sanctioned for citing fictitious cases generated by AI. In the U.S., a lawyer was publicly rebuked after a Supreme Court brief referenced imaginary precedents. These incidents expose a structural tension between AI's statistical reasoning and the law's demand for authority and truth.
Fault Lines: Where AI Collides with Legal Integrity
1. Hallucination and False Authority: AI tools can generate plausible but non-existent citations or misapply case law. Accuracy and attribution, the cornerstones of legal credibility, are suddenly fragile.
2. Transparency and Explainability: AI operates as a black box. When an algorithm recommends an argument or predicts a judgment, we often cannot trace how it reached that conclusion.
3. Bias and Inequality: Models trained on historical data reproduce the patterns, and the prejudices, of the past. If AI is used to assess bail risk, sentencing, or contract negotiation, bias can be quietly automated at scale.
4. Confidentiality and Client Privilege: When lawyers input sensitive data into third-party AI systems, they risk breaching confidentiality or waiving legal privilege.
5. De-skilling and Dependence: Overreliance on AI could erode professional judgment. Junior lawyers may lose the chance to develop core analytical skills if machines do the thinking.
6. Legitimacy and Public Trust: Perhaps the deepest risk is societal. If citizens believe that court outcomes depend on proprietary algorithms rather than transparent reasoning, public trust in the rule of law itself could falter.
How the System Is Responding
Legal institutions are waking up — albeit unevenly.
- The American Bar Association issued its first formal ethics guidance on AI, insisting lawyers remain responsible for outputs.
- The UK Bar Standards Board and judiciary are exploring protocols on disclosure, transparency, and risk management.
- National court systems have published frameworks for "AI-in-court" use, emphasising fairness, validation, and human oversight.
- Governments are drafting targeted laws such as the Generative AI Copyright Disclosure Act (U.S.) and Europe's AI Act.
Future Scenarios: Three Possible Paths
1. AI as Supplement: In the optimistic scenario, AI amplifies human expertise. Lawyers use it to automate drudgery and focus on reasoning, empathy, and advocacy.
2. Dual-Track Justice: AI powers low-cost, automated legal services for minor claims, while complex cases remain human-led. Justice becomes efficient — but stratified.
3. Algorithmic Adjudication: In some jurisdictions, AI could move beyond assistance to decision-making. Without strict oversight, this risks a slide into opaque, data-driven justice.
Building Guardrails: Ethics for the Algorithmic Bar
- Transparency and auditability: All AI outputs in legal contexts should carry source trails and reasoning metadata.
- Human accountability: Every AI-generated recommendation must be reviewed and verified by a qualified lawyer.
- Bias and fairness audits: Independent bodies should test AI systems for systemic bias and unintended discrimination.
- Data protection and privilege: Firms must adopt clear frameworks for how client information interacts with AI tools.
- Liability clarity: Regulators must define accountability between lawyers, vendors, and model developers when AI causes harm.
- Education and ethics literacy: Continuous training in AI ethics should become a professional requirement.
The Bigger Picture: Law as a Human Project
Ultimately, law is more than logic — it is a human conversation about fairness, empathy, and responsibility. The danger with AI is not just that it might make mistakes, but that it might make perfectly logical choices divorced from human values.
In the same way that artists now demand consent, credit, and compensation, societies must demand transparency, accountability, and humanity in how AI shapes the law.
As AI begins to draft our arguments, interpret our laws, and even shape our judgments, how will you ensure that justice remains a human enterprise?