De Jure Chambers – Artificial Intelligence (AI) Usage Policy
Jurisdiction: United Kingdom & European Union
Effective Date: 28 April 2025 (Version 1.1 – Revised)
1. Our Commitment to Professional Integrity and Client Trust
De Jure Chambers is a law firm regulated by the Solicitors Regulation Authority (SRA). We believe that the delivery of legal services must be grounded in human judgement, professional expertise and rigorous ethical standards. In light of recent instances in which generative AI tools led to the submission of fictitious authorities and other erroneous outputs, and of the guidance issued by the Law Society of England and Wales, we adopt a precautionary approach: generative AI shall not be used in any part of the handling of client matters (including drafting, research, submissions, pleadings, witness statements or tribunal documents).
2. Prohibition on Generative AI in Client Work
2.1. The firm prohibits the use of generative AI tools (including but not limited to large language models, "chatbot" tools and automated drafting systems that produce substantive legal content) in respect of client matters.
2.2. All legal work undertaken for a client must be performed, or supervised, by a qualified lawyer, who remains responsible for the outcome and all professional obligations.
2.3. The prohibition covers all phases of client work: research, drafting, editing, summarising, briefing, and any engagement with third-party tools that purport to deliver substantive legal content or legal analysis generated by AI.
2.4. Internal non-client administrative or operational uses may be permitted (see section 4) but must be strictly segregated from client work and subject to internal approval, governance and audit.
3. Regulatory, Ethical and Data-Protection Rationale
3.1. The SRA’s Principles (including Principle 7: act in the best interests of each client) and Codes of Conduct impose duties of competence, confidentiality, client care and proper supervision.
3.2. The Law Society’s article on “Compliance and the use of AI in law firms” emphasises that the use of AI in law firms creates risk, and firms must ensure that deployment is safe, appropriately governed and aligned with professional obligations.
3.3. Recent reports of fictitious authorities being submitted to court, generated by AI systems, underscore the risk to the administration of justice, the profession’s reputation and the risk of professional sanctions.
3.4. Data-protection obligations (UK GDPR / EU GDPR) require transparency, accountability, fairness and human oversight; many generative AI tools do not yet reliably provide audit trails or guarantee accuracy.
4. Permitted Internal Uses
4.1. The only permissible uses of AI tools within the firm are for purely internal administrative or operational tasks, provided they do not involve generating substantive legal content for a client matter. Examples of uses that may be permitted include transcription of meetings, summarising internal meeting notes, automated scheduling and resource allocation.
4.2. Before any internal use is deployed, the Head of Compliance must approve the specific tool, assess its data security, ensure that no client-matter data is uploaded unless approved, and ensure segregation between the client-work workflow and the administrative workflow.
4.3. Even for internal uses, outputs must be reviewed by a human, and full audit records maintained of tool usage, including date, user, purpose and any review comments.
5. Client Rights and Transparency
5.1. Clients may request clarification of whether any AI tool has been used in relation to their matter and may raise objections to such use.
5.2. Because generative AI is prohibited in client work, clients may be reassured that no drafting, research or substantive content in their matter has been produced by such tools.
5.3. Our engagement letter will include a clause confirming this position and a reminder that all work is performed by qualified legal professionals within the firm.
6. Confidentiality, Data-Security and Governance
6.1. The firm shall maintain strict controls around any tool that processes personal or sensitive data. All internal tools must comply with the firm’s data-protection, confidentiality and cybersecurity policies.
6.2. Client-matter data must never be uploaded to external generative AI models or platforms unless expressly approved in writing by the Head of Compliance and subject to a risk assessment and contractual safeguards (no such approval will be given for generative AI use in client work).
6.3. The firm will maintain logs of all AI/automation tool usage (internal or otherwise), monitor compliance and conduct periodic audits.
7. Review and Governance
7.1. This policy will be reviewed at least annually, or earlier in the event of material changes in regulation, professional guidance or the firm’s systems.
7.2. Any breach of this policy may lead to disciplinary action, including, where necessary, referral to regulatory bodies or notification to affected clients.
7.3. The Head of Compliance (or equivalent senior partner) is responsible for oversight of this policy and must report annually to the management board on tool usage, risk assessments and any incidents.
8. Effective Date & Scope
This version of the policy is effective from 28 April 2025 and replaces any prior AI-use statements. It applies to all partners, associates, trainees, support staff, contractors and any other person performing services for or on behalf of the firm.
