
How can lawyers use OpenClaw to build compliance workflows?

2026/03/06 21:39
15 min read

Original author: Yi Haotian

In the wave of AI, how can lawyers truly utilize AI without violating confidentiality obligations? Client contracts cannot be directly pasted into ChatGPT, otherwise disciplinary action may be taken. This article introduces my setup from the perspectives of lawyers' confidentiality obligations, precautions, and the selection of AI service providers.


Lawyer's duty of confidentiality

1. China: Article 33 of the Lawyers Law

First, there is Article 33 of the well-known "Law of the People's Republic of China on Lawyers," which stipulates:

"Lawyers shall keep confidential any state secrets or trade secrets they learn in the course of their practice, and shall not disclose the privacy of their clients. Lawyers shall also keep confidential any information or circumstances that their clients or other persons do not wish to disclose that they learn in the course of their practice."

In China, the confidentiality obligation under the Lawyers Law has been elevated to the level of criminal liability. Article 309 of the Criminal Law stipulates the crime of disclosing case information that should not be made public. In addition, Article 38 of the Measures for the Administration of Lawyers' Practice explicitly prohibits lawyers from disclosing trade secrets and personal privacy learned in the course of their practice.

Currently, local bar associations and the Ministry of Justice lack detailed guidelines for lawyers using generative artificial intelligence. Therefore, we can refer to the requirements of our American counterparts.

2. United States: ABA Model Rule 1.6 and NY RPC Rule 1.6

If you hold a New York State bar license (or any other U.S. state license), your obligation to keep client information confidential is not just a matter of professional ethics, but an enforceable disciplinary rule.

New York State RPC Rule 1.6 states:

"A lawyer shall not knowingly reveal confidential information... unless the client gives informed consent."

The term "confidential information" here is extremely broad—it is not limited to court secrets, but encompasses all information that a lawyer learns during the course of their representation, including the client's name, address, financial data, transaction terms, and business strategies, regardless of the source of the information.

More importantly, Rule 1.6(c) provides:

"A lawyer shall make reasonable efforts to prevent the inadvertent or unauthorized disclosure of, or unauthorized access to, information relating to the representation of a client."

This means we must not only refrain from actively disclosing client information, but must also take reasonable measures to prevent inadvertent or unauthorized disclosure.

In July 2024, the ABA officially released Formal Opinion 512 —the first comprehensive ethical guideline from the US legal profession regarding the use of generative AI. This opinion explicitly states:

Before inputting information relating to a client's representation into a (generative AI) tool, lawyers must assess the likelihood that such information will be "disclosed to" or "accessed by" other persons inside or outside the firm.

Opinion 512 likens AI tools to cloud computing services, requiring lawyers to:

  • Investigate the reliability, security measures, and data-handling policies of the AI tools they use.
  • Ensure the tool is configured to protect confidentiality and security.
  • Confirm that confidentiality obligations are enforceable (e.g., contractually binding).
  • Monitor for violations or changes in provider policies.

Simply put: we cannot just paste client contracts into ChatGPT unless we have conducted a thorough compliance assessment.

This means that regardless of which jurisdiction we practice in, the obligation of confidentiality is an inviolable bottom line.

3. Why does AI make confidentiality obligations more complicated?

When we enter client contracts into consumer AI applications (ChatGPT, Claude, Kimi, etc.), the text is transmitted to a third-party server. Even if the provider claims not to use the data to train its models, the following risks remain:

  • Data transmission: client PII (personally identifiable information) leaves our control and enters third-party infrastructure.
  • Training risk: consumer-grade products may use inputs for model training (service agreements should be reviewed carefully).
  • Breach exposure: we now depend on the provider's security measures to fulfill our own ethical obligations.
  • Audit gap: we cannot verify what happens to the data after transmission.
  • Informed consent: obtaining client consent for every AI interaction is impractical.
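The first risk, uncontrolled transmission of PII, can at least be mitigated with a pre-flight check before any text leaves the device. Below is a deliberately minimal, illustrative sketch in Python; the pattern set and the `scan_for_pii` helper are my own inventions, not part of any real compliance tool, and regex matching alone is nowhere near adequate for real client data:

```python
import re

# Illustrative patterns only; real PII detection needs far more than regex.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "phone": re.compile(r"\b\d{3}[-.\s]?\d{3,4}[-.\s]?\d{4}\b"),
}

def scan_for_pii(text: str) -> list[tuple[str, str]]:
    """Return (category, matched_text) pairs found in the text."""
    hits = []
    for label, pattern in PII_PATTERNS.items():
        for match in pattern.findall(text):
            hits.append((label, match))
    return hits

draft = "Contact the seller at zhang.wei@example.com or 138-0013-8000 re the SPA."
findings = scan_for_pii(draft)
for label, value in findings:
    # A real workflow would block the request and route it to the local model.
    print(f"blocked: {label} detected ({value}); refusing to send to cloud API")
```

In a production workflow such a check would sit in front of every cloud API call and fall back to the local model whenever it fires.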

Most lawyers either avoid AI altogether (losing a competitive edge) or adopt a "use first, worry later" approach (risking disciplinary action). Neither is a good answer. I discuss the relevant considerations in detail in the section on confidentiality below.

OpenClaw: How to get started?

1. What is OpenClaw?

OpenClaw is an open-source multi-agent AI assistant platform. Simply put, it's an "AI gateway" running on our own hardware that can manage multiple AI agents simultaneously, each with its own role, memory, and tools.

2. Core Functions

3. How does it work?

OpenClaw runs as a "gateway" on our local device:

OpenClaw itself is free and open source, but we need:

  • A device that stays running.

Any spare computer will do, even a Mac Mini. That said, I recommend a Mac, mainly because the current OpenClaw ecosystem is built primarily around Mac/Linux. Windows ports are in progress, but Mac is currently more stable.

Alternatively, rent a VPS from a provider such as Alibaba Cloud or Tencent Cloud. Kimi recently launched a one-click OpenClaw deployment tool; if you want to try OpenClaw at low cost, this is a good starting point.

  • API keys for AI models (e.g., when using cloud-based models)

You can purchase API access directly from an LLM provider, such as Google Gemini, Alibaba Cloud, or Moonshot AI. Besides price, one advantage of buying directly from the model developer is that some providers offer batch APIs: for large but non-urgent jobs, they give a 50% discount in exchange for returning results within 24 hours.

The second option is an LLM aggregator, such as OpenRouter or SiliconFlow. Aggregators offer a unified interface, a wide choice of models, and built-in routing that switches automatically between different LLMs.

Alternatively, you can run open-source models locally with Ollama (if you don't want to rely on the cloud). How well this works depends on your hardware.

4. Why do I use a Mac Mini?

  • Local operation and production environment isolation: a dedicated machine ensures that a misbehaving OpenClaw agent cannot delete my important work files. Renting a VPS also provides physical isolation, but a VPS is usually a Linux box in the cloud, so the experience is less smooth than running locally. For tasks that need a good network environment, a VPS is still a fine choice; just note that cheap VPSs are low-spec, and high-spec ones are not cheap.
  • Apple Silicon unified memory: the M4's unified memory architecture lets large AI models load directly into memory and run without an expensive GPU. Unlike a typical Windows PC, where system RAM and GPU VRAM are separate pools, unified memory can be allocated flexibly when running large models, and it is cheaper than buying a discrete graphics card.
  • 32GB of memory : sufficient to run a 35B parameter MoE model (such as Qwen 3.5 35B), with an inference speed of approximately 18 tokens/second.
  • Extremely low power consumption, compact size, and extremely low noise : The Mac Mini consumes approximately 5W in standby mode and 15-30W when running AI models at full load. Running it 24/7 for a month costs less than 10 yuan in electricity. The new Mac Mini is only palm-sized, easily placed on a bookshelf or in a corner of your desk. Even when running AI models at full load, it operates with extremely low noise.
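The electricity figure above is easy to sanity-check. A quick back-of-the-envelope calculation, assuming an average draw of about 20 W (between the 5 W standby and 30 W full-load figures) and a residential rate of roughly 0.6 yuan/kWh (my assumption, not from the article):

```python
avg_watts = 20        # assumed average draw between 5 W standby and 30 W full load
hours = 24 * 30       # one month of 24/7 operation
price_per_kwh = 0.6   # assumed residential rate, yuan/kWh

kwh = avg_watts * hours / 1000   # energy used in a month
cost = kwh * price_per_kwh       # monthly electricity cost in yuan
print(f"{kwh:.1f} kWh -> {cost:.1f} yuan/month")  # 14.4 kWh -> 8.6 yuan/month
```

Under these assumptions the monthly cost comes out under 10 yuan, consistent with the claim in the text.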

What should we pay attention to when maintaining confidentiality?

When using OpenClaw or any AI tool for legal work, we need to pay attention to three levels of confidentiality.

1. Confidentiality of communication channels

Our communication channel with the AI assistant is the first line of defense.

I recommend that highly confidential legal work use end-to-end encrypted software as its primary communication channel. Some might ask: "My clients usually contact me via WeChat, so what's the issue?" Fair enough. If a client chooses to use WeChat, there is an implied consent: the client agrees to WeChat as the transmission channel. But if we ourselves transmit confidential client information through unencrypted channels, we should at the very least obtain the client's written consent first.

2. Choosing an API Provider: Cost-Effective vs. Confidentiality

This is the most crucial yet most easily overlooked issue.

  • Coding Plan

In recent years, domestic cloud vendors have launched highly attractive "Coding Plans": providing top-level model API access at extremely low prices.

Taking Alibaba Cloud's Bailian platform as an example:

  • Lite plan: ¥7.9 for the first month, ¥20 for the second month, and ¥40/month thereafter.
  • Pro Plan: ¥39.9 for the first month, ¥100 for the second month, and ¥200/month thereafter.
  • Models included: Qwen3.5-Plus, Kimi K2.5, GLM-5, MiniMax M2.5

The price is indeed attractive. And with a subscription model, there's no need to worry about exceeding API spending limits. However, please note this sentence in the data policy of the Bailian Coding Plan:

"During the use of Coding Plan, the model inputs and the content generated by the model will be used for service improvement and model optimization."

This means that everything we input, including legal documents that may contain client information, will be used for model training and optimization. For lawyers, this directly violates their confidentiality obligations.

  • Key information to consider when choosing an API

Since Coding Plans cannot be used for confidential information (and of course, cloud providers never intended them for that), directly purchasing API tokens may be a better option. When choosing an AI model API, lawyers must review the following points in the service contract:

  • Comparison of major API providers

It is crucial to emphasize that even if an API provider claims zero data retention (ZDR) and no training on inputs, lawyers still cannot fully verify that those promises are kept; cloud providers are unlikely to grant individual users audit access to such claims. Returning to ABA Opinion 512: lawyers should investigate an AI tool's security measures and verify that confidentiality is actually implemented. If we cannot verify that, then in my view the API does not meet the requirements of Opinion 512. An LLM is a black box; we cannot confirm exactly what happens to our data after it is transmitted.

3. The safest option: local model

If we have the most stringent confidentiality requirements, running the model locally is the only option that guarantees 100% data confidentiality.

Advantages:

• Data never leaves the device: 100% private.

• No API fees, no usage limits.

• No network dependency, always available.

• Unaffected by changes in provider policies.

Disadvantages:

• Slower inference (18 tok/s vs. 100+ tok/s in the cloud).

• Capabilities weaker than cutting-edge cloud models (GPT-4o, Claude Opus, etc.).

• Upfront hardware cost.

• Context window limited by available memory.

Recommended local model:

Note: MoE (Mixture of Experts) is a model architecture in which, although the model has 35B (billion) total parameters, only about 3B are activated per inference step, sharply reducing compute and memory requirements. This is why a 35B model runs smoothly on a Mac Mini with 32GB of RAM.

My configuration

Given that I am a practicing lawyer in New York, the following is my actual configuration of OpenClaw based on Opinion 512.

1. Communication Channel

Signal (end-to-end encrypted) serves as the primary channel for legal work. All conversations with the legal agent (Counsel) run through Signal, ensuring encryption at the communication layer. Routine, non-confidential work goes through Telegram.

2. Model Configuration

I adopted a hybrid model strategy:

3. Core security process: Anonymized pipeline

This is the most important part of the entire configuration. When I need to use powerful cloud AI to draft or review sensitive documents:

Key point: the mapping.json file (the table mapping real data to placeholders) never leaves our device. The cloud AI only sees "{COMPANY_1} acquires 30% of {COMPANY_2}"; it does not know, and cannot know, who the real parties are.
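The pipeline can be sketched as follows. This is a minimal illustration of the idea only, not the actual LDA tool from the repository: the function names, the hand-supplied entity list, and the mapping format are my assumptions. The invariant to notice is that mapping.json is written and read only locally, while the cloud model sees nothing but placeholders.

```python
import json

def anonymize(text: str, entities: dict[str, str]) -> tuple[str, dict[str, str]]:
    """Replace real names with placeholders; return redacted text + local mapping.

    `entities` maps real name -> placeholder. In a real tool the entity list
    would come from named-entity recognition, not be supplied by hand.
    """
    mapping = {}
    for real, placeholder in entities.items():
        text = text.replace(real, placeholder)
        mapping[placeholder] = real
    return text, mapping

def deanonymize(text: str, mapping: dict[str, str]) -> str:
    """Restore real names locally after the cloud model returns its draft."""
    for placeholder, real in mapping.items():
        text = text.replace(placeholder, real)
    return text

# --- local side ---
original = "Acme Ltd acquires 30% of Beta Corp from Zhang Wei."
entities = {"Acme Ltd": "{COMPANY_1}", "Beta Corp": "{COMPANY_2}", "Zhang Wei": "{PERSON_1}"}
redacted, mapping = anonymize(original, entities)

# mapping.json stays on our device; it is never transmitted anywhere.
with open("mapping.json", "w", encoding="utf-8") as f:
    json.dump(mapping, f, ensure_ascii=False)

# Only `redacted` goes to the cloud model; suppose it returns an edited draft:
cloud_output = redacted.replace("acquires", "will acquire")

# --- back on the local side ---
with open("mapping.json", encoding="utf-8") as f:
    restored = deanonymize(cloud_output, json.load(f))
print(restored)  # Acme Ltd will acquire 30% of Beta Corp from Zhang Wei.
```

Note the simple string replacement here is fragile (overlapping names, inflected forms); a production pipeline needs more careful matching, but the data-flow guarantee is the same.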

4. Why choose Claude Code, a consumer-grade AI, as a cloud-based editing tool?

  • Subscription: the Max plan costs $100 or $200 per month, more economical than pay-as-you-go API billing.
  • Latest and most powerful models : Subscribers can directly use the latest released models (such as Claude Opus 4).
  • Compared with API pricing: the Claude API costs $3 per million input tokens and $15 per million output tokens. Reviewing a complex contract can consume millions of tokens, so pay-as-you-go costs can far exceed the subscription fee. If price were no object, calling the Opus API directly after anonymization would be smoother, just more expensive; at my current token consumption, relying exclusively on the Claude API would cost roughly $500+ per month.
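The cost comparison in the last bullet can be reproduced with simple arithmetic. The per-token prices come from the text; the monthly token volumes below are illustrative assumptions, not the author's actual usage:

```python
input_price = 3 / 1_000_000    # USD per input token (Claude API, per the text)
output_price = 15 / 1_000_000  # USD per output token

# Assumed monthly workload; illustrative figures only.
monthly_input_tokens = 100_000_000   # ~100M tokens read across contracts
monthly_output_tokens = 10_000_000   # ~10M tokens generated

api_cost = (monthly_input_tokens * input_price
            + monthly_output_tokens * output_price)
print(f"Pay-as-you-go: ${api_cost:.0f}/month vs. $100-200/month subscription")
```

At this assumed volume the pay-as-you-go bill lands around $450/month, in line with the "$500+" estimate above, which is why the flat subscription wins.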

This solution fundamentally meets the requirements of ABA Formal Opinion 512 because the cloud AI never receives confidential information.

5. Hardware Configuration

6. Cost Calculation

In contrast, enterprise-level legal AI platforms such as Harvey AI are priced at $1,000-1,200 per user per month (approximately ¥7,200-8,600) and typically require a minimum of 20 seats.

7. Open source projects

I've open-sourced this configuration and workflow on GitHub:

VibeCodingLegalTools (https://github.com/Reytian/VibeCodingLegalTools): a Rule 1.6-compliant AI workflow for legal practice

The project includes:

  • A complete anonymization/deanonymization tool (LDA)

  • OpenClaw Configuration Template
  • Agent Workspace Template
  • Customer Memory System Template
  • Detailed ethical compliance analysis

My thoughts on legal AI

1. Complete localization is ideal, but not realistic at present.

In an ideal world, lawyers should run AI entirely locally—all data remains on their own devices, with zero risk of leakage. But the reality is:

  • Model capability gap: there is a significant gap between locally runnable models (around 35B parameters) and cutting-edge cloud models (on the order of a trillion parameters). For simple legal consultations and information retrieval, local models are sufficient; for complex contract drafting, multi-step legal reasoning, and high-quality text generation, they still fall short.
  • Hardware costs : Running truly powerful local models (such as 70B+ parameters) requires 64GB or more of memory, causing hardware costs to rise rapidly. This is economically infeasible for independent lawyers and small law firms.
  • Model update lag : Open source models are always updated slower than commercially cutting-edge models.

2. Complete reliance on the cloud also has its problems.

On the other hand, relying entirely on cloud APIs is not a solution either:

Even if the API provider promises ZDR (zero data retention) and that the data will not be used for training, it is practically impossible for a lawyer to investigate a suspected leak.

An LLM is a black box. We cannot open it to check whether our data was used for training; we can only trust the provider's promises.

As lawyers, "trust" is not a compliance strategy. Rule 1.6 requires "reasonable efforts"—not reasonable trust.

3. The hybrid model is currently the optimal solution.

This is why I chose the hybrid model strategy :

1. Routine Consultation → Local Model : Simple legal issues, information retrieval, and preliminary analysis are all completed locally.

2. Complex Tasks → Cloud APIs : Use trusted APIs when stronger reasoning capabilities are required, but avoid transmitting sensitive information.

3. Sensitive Documents → Anonymization Pipeline : When confidential documents require cloud-based AI processing, they are first anonymized locally, then handed over to the cloud for processing, and finally restored locally.
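The three routes above can be expressed as a small dispatch policy. A sketch under my own assumptions (the tier names and the two boolean inputs are illustrative simplifications; OpenClaw's actual agent routing is configured differently):

```python
from enum import Enum, auto

class Route(Enum):
    LOCAL_MODEL = auto()           # routine questions, retrieval, first-pass analysis
    CLOUD_API = auto()             # complex reasoning, no sensitive content
    ANONYMIZE_THEN_CLOUD = auto()  # confidential documents needing strong models

def route_task(is_complex: bool, contains_client_info: bool) -> Route:
    """Dispatch a task according to the hybrid strategy described in the text."""
    if contains_client_info:
        # Confidential material never reaches the cloud in identifiable form.
        return Route.ANONYMIZE_THEN_CLOUD if is_complex else Route.LOCAL_MODEL
    return Route.CLOUD_API if is_complex else Route.LOCAL_MODEL

assert route_task(False, False) is Route.LOCAL_MODEL
assert route_task(True, False) is Route.CLOUD_API
assert route_task(True, True) is Route.ANONYMIZE_THEN_CLOUD
```

The useful property of writing the policy down as code is that it fails closed: any task flagged as containing client information can never be routed to the plain cloud path.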

The core idea of this solution is to bridge the trust gap by technical means (anonymization). We don't need to trust any AI provider to safeguard our client data, because the provider never receives that data.

The AI in the cloud only ever sees "{COMPANY_1}" and "{PERSON_1}", never our clients' real names.

Conclusion

AI will not replace lawyers. But lawyers who know how to use AI will eventually replace those who don't.

The key is not whether to use AI, but how to use it. Confidentiality is the cornerstone of legal practice; it should not be an obstacle to embracing AI, but rather a standard for choosing AI solutions.

What is legal AI actually selling? I think two things:

1. Knowledge;

2. Tools.

I believe that most lawyers already possess sufficient knowledge; they simply need a more suitable tool. When the price of a Mac Mini is less than a month's subscription fee to Harvey AI, building your own compliance tool might be a more pragmatic choice for independent lawyers.

A Mac Mini, an OpenClaw suite, and an encrypted communication channel—that's all it takes to create a compliant AI legal workstation.

This article does not constitute legal advice. Attorneys should evaluate the workflows described herein in accordance with the specific ethical rules of their jurisdiction and seek professional ethical guidance where necessary.

References:

  • ABA Model Rules of Professional Conduct, Rule 1.6
  • ABA Formal Opinion 512 — Generative Artificial Intelligence Tools (2024)
  • Lawyers Law of the People's Republic of China (2017 Revision)
  • OpenClaw (https://openclaw.ai/)
  • VibeCodingLegalTools—GitHub (https://github.com/Reytian/VibeCodingLegalTools)
