AI Part II: When AI Gets Legal Questions Wrong and Why “Mostly Accurate” Can Still Create Risk
- Jackie Piscitello

- May 3
- 5 min read
AI tools are increasingly used to answer day-to-day legal questions: Is this clause enforceable? Do we need to notify customers? What are the penalties? Are we exposed?
They can be genuinely useful for getting oriented quickly. But there is a consistent failure mode that creates real risk: the answer can be technically correct and still wrong for your situation. Part I of this series covered contract drafting risks. This Part II focuses on how AI is used to guide legal and business decisions, and where that approach breaks down.
The problem is not that AI is wrong. It is that it is incomplete.
Most legal questions are not single-rule lookups. Outcomes depend on facts, contracts, incentives, and how issues play out in practice. AI tends to compress that into a clean, readable answer. That makes it useful for orientation, but risky for decisions.
Where this shows up in practice
Teams often rely on AI to answer questions like:
Is this clause enforceable?
AI may summarize general legal principles, but enforceability often depends on jurisdiction, recent regulatory developments, and how the clause interacts with the rest of the agreement. A provision that looks standard in isolation may not hold up when applied to your facts.
Do we need to notify customers?
AI may describe general notification frameworks, but the answer depends on the specific data involved, contractual obligations, timing requirements, and how the issue is characterized internally and externally.
What are the penalties for this issue?
AI may correctly cite statutory penalties, but miss where the real exposure comes from, including litigation risk, indemnity obligations, or downstream commercial consequences.
Are we exposed here?
AI can outline general risk categories, but it cannot assess exposure without understanding the full context, including contracts, facts, and how responsibility is allocated between parties.
Is this provision standard?
AI often compares language to generalized patterns. But “standard” does not mean appropriate for your business model, your risk tolerance, or your industry.
These are not abstract questions. They are decision points. Small misunderstandings at this stage can lead to real financial, operational, and regulatory consequences.
Why “mostly accurate” answers create risk
The issue is not that AI gets the rule wrong. It is that it answers the question you asked, not the one that actually determines your risk.
In many cases, the framework is clear. The contract specifies governing law. The high-level rule is easy to find. But outcomes are rarely driven by the rule alone.
What actually matters is how the facts, incentives, and contract structure interact in the real world.
Facts drive outcomes. Small details can change the analysis entirely. What was said, when it was said, how something was implemented, and whether conduct was repeated or documented properly can all shift exposure in ways a generic answer will not capture.
Claims are not limited to the obvious theory. A narrow legal question can have broader consequences. What starts as a compliance issue can lead to multiple claims, including ones that create leverage through fee-shifting, statutory damages, or other remedies that are not obvious from the initial question.
Contracts often create the real exposure. Indemnities, limitations of liability, termination rights, and other provisions can expand or shift risk in ways that are not reflected in a high-level legal answer.
Incentives and coverage matter. How a situation plays out often depends on economics. For example, a threatened employment claim may look high-risk in the abstract, but if the company has employment practices liability insurance, that changes how the claim is evaluated, defended, and potentially resolved. AI does not account for insurance coverage, defense dynamics, or how costs and leverage are actually allocated.
Risk is often hidden at the outset. The most significant exposure is not always in the clause or issue being analyzed, but in how that issue interacts with other provisions, other claims, or the practical realities of enforcement.
Example
A team asks whether a particular compliance issue carries “low penalties.” The AI correctly cites a statute with modest per-violation fines, but does not identify the more significant exposure, including potential litigation, indemnity obligations, and defense costs. The result is a risk assessment that looks reassuring but is incomplete in the ways that matter.
AI does not build this kind of picture. It delivers an answer based on generalized patterns and the information provided. That is useful for orientation, but it is not the same as evaluating how a situation will actually unfold.
AI can help you understand the rule. It cannot tell you how that rule will apply to your specific facts, contracts, and business realities. That is the difference between starting the analysis and finishing it.
The disclaimer problem
AI tools often state that they are not providing legal advice, while still offering suggested next steps, risk assessments, or conclusions. This creates a false sense of security. The disclaimer signals caution, but the content can function like advice, and teams may rely on it as if it were vetted.
How to use AI effectively without relying on it as your lawyer
AI can be a valuable tool when used deliberately:
- Use it to understand terminology and get initial orientation
- Ask it to identify issues and surface what might matter
- Use it to generate questions you should be asking
- Treat outputs as a starting point, not a conclusion
Then validate with experienced counsel when decisions affect cost, customer commitments, compliance, or risk allocation.
A simple rule of thumb
If your question is effectively “What should we do?” or “Are we safe?”, you are no longer just learning. You are making a decision. That is where context matters most and where unvalidated answers create the most expensive mistakes.
Bottom line
AI is a powerful tool for speed and efficiency. It can help teams get oriented, move faster, and prepare better questions. But it does not replace legal judgment.
The costliest mistakes are not obvious errors. They are decisions based on answers that look right but fail when applied to real facts, real contracts, and real-world incentives.
Companies that use AI effectively treat it as the beginning of the analysis, not the end. They move quickly, but they validate where it matters.
Our team works with companies that are already using AI in their workflows. We help assess where the analysis holds up, where it breaks down, and what actually matters for the business. If you are relying on AI to move faster, we can help make sure you are also moving in the right direction.
About the Author
Jacqueline Piscitello is a Founding Partner of ExecutiveGC, LLP, where she and her team provide practical, business-focused legal counsel to growing companies. Contact Jacqueline at jackie@executive-gc.com to discuss how ExecutiveGC can support your business.
This article is for general informational purposes and does not constitute legal advice.


