The Year In-House Caught Up
For two years after ChatGPT's launch, most corporate legal departments watched from the sidelines while BigLaw experimented. By early 2025, that posture had become untenable. GCs were fielding questions from boards, CEOs, and their own business clients. They needed answers — and fast.
We spent six months interviewing forty general counsels at companies ranging from $500M to $80B in revenue. What emerged was not a single playbook but a taxonomy of approaches — and a clear picture of which strategies are actually working.
Key Takeaways:
(1) Centralized AI governance with decentralized use works best.
(2) Pilots fail most often due to change management, not technology.
(3) "AI-ready" contract data is the bottleneck most teams underestimate.
What GCs Are Actually Buying
The most common first purchase is a contract analysis and management tool. Of the forty GCs we interviewed, thirty-one had deployed or were piloting AI features within a contract lifecycle management (CLM) platform. The runner-up, at twenty-two GCs, was a legal research tool. A distant third was document drafting, cited by fourteen.
The procurement process has shifted significantly since 2023, when most GC AI purchases were championed by a single enthusiast and bypassed normal IT security review. By 2025, the majority of purchases we saw followed a structured RFP process with sign-off from IT, infosec, and data governance.
The compliance cost of the informal approach has become undeniable after several high-profile incidents involving AI tools that lacked appropriate data residency and access controls.
The Governance Structures That Work
The clearest pattern in our interviews: the GCs with the most successful deployments had created a small AI governance body early, typically a committee of three to five people including the GC, a senior IT partner, a data privacy officer, and at least one senior business stakeholder.
This committee's job is narrow: review proposed AI deployments against a standardized checklist, approve or reject, and maintain the AI inventory. It is not a policy-writing body. It does not approve prompts. It does the minimum governance needed to prevent the worst outcomes.
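To make the checklist and inventory concrete, a single inventory record might look like the sketch below; every field name is an illustrative assumption, not a standard template or any specific committee's form.

    # A minimal sketch of one AI-inventory record; all field names are
    # illustrative assumptions, not a standard or a real committee's template.
    inventory_entry = {
        "tool": "ExampleCLM AI add-on",           # hypothetical product
        "business_owner": "Commercial Contracts",
        "data_touched": ["customer contracts", "pricing terms"],
        "checklist": {
            "infosec_review": "passed",
            "data_residency_confirmed": True,
            "privacy_impact_assessment": "completed",
            "business_sponsor_sign_off": True,
        },
        "decision": "approved",                   # approved | rejected | pilot-only
        "review_date": "2025-03-14",
    }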
The failure mode at the other end of the spectrum: GCs who attempted to write comprehensive AI use policies before deploying anything. Universally, these teams are still writing the policy.
Why Pilots Fail
Of the forty GCs, twenty-three reported at least one AI pilot that was terminated or never scaled. When we asked what went wrong, the most common answer was not "the technology didn't work." It was "we couldn't get the lawyers to use it."
The change management challenge in legal is acute for a specific reason: lawyers are trained to be risk-averse about their own process, so introducing a new tool into a live workflow feels like introducing risk. The tools that have succeeded in scaling are the ones introduced first on the least risky matters (standard contracts, routine research), letting lawyers build comfort before moving to high-stakes work.
The deployment teams that consistently fail are the ones that start with a high-visibility, high-stakes pilot to demonstrate ROI quickly.
The Bottleneck Nobody Talks About
Ask GCs what surprised them most about their AI deployments, and a majority will cite some version of the same answer: "We didn't realize how bad our contract data was."
AI contract analysis tools require structured, searchable contract data. Most legal departments have years of contracts stored as scanned images with no extractable text layer, organized inconsistently, and categorized unreliably. Getting the data AI-ready is often a larger project than the AI deployment itself.
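That audit can start simply. Below is a minimal sketch in Python, assuming contracts sit in one folder as PDFs and using the open-source pypdf library; the folder path and the 100-character threshold are illustrative assumptions.

    # A minimal sketch of a text-layer audit: which PDFs have extractable text,
    # and which are scans that still need OCR before any AI tool can use them.
    from pathlib import Path
    from pypdf import PdfReader

    def has_text_layer(pdf_path: Path, min_chars: int = 100) -> bool:
        """True if the first pages of the PDF yield extractable text."""
        try:
            reader = PdfReader(pdf_path)
        except Exception:
            return False  # corrupt or unreadable file: flag it for review
        sample = ""
        for i in range(min(3, len(reader.pages))):
            sample += reader.pages[i].extract_text() or ""
        return len(sample.strip()) >= min_chars

    contracts = list(Path("contracts").rglob("*.pdf"))   # assumed folder
    needs_ocr = [p for p in contracts if not has_text_layer(p)]
    print(f"{len(needs_ocr)} of {len(contracts)} files have no text layer and need OCR")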
The GCs who moved fastest were the ones who had already invested in a document management system with consistent metadata. The ones who are still struggling two years in are, almost universally, the ones who had not.
What the Leaders Are Doing Next
The GCs who are twelve to eighteen months ahead of their peers on AI deployment share a consistent next-step focus: agentic workflows. Rather than AI that answers questions, they are piloting AI that takes actions: drafting and routing contracts, flagging compliance issues and creating tickets, identifying outside counsel billing discrepancies and escalating them for review.
The agentic frontier is real but narrow. The workflows that are actually in production are highly scoped, well-audited, and bounded by clear human review gates. Nobody is running unchecked AI agents on live legal work. But the infrastructure for doing so carefully is being built now.
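The review gate itself is simple to build; the discipline lies in never letting the agent bypass it. Below is a minimal sketch of the pattern, assuming a hypothetical contract-routing agent; the class and method names are illustrative, not any vendor's API.

    # A minimal sketch of the human-review-gate pattern: the agent can only
    # queue actions; a named reviewer must approve before anything executes.
    from dataclasses import dataclass, field

    @dataclass
    class ReviewGate:
        pending: list = field(default_factory=list)

        def submit(self, action: str, payload: str) -> int:
            # Agent output is queued, never executed directly.
            self.pending.append({"action": action, "payload": payload, "status": "pending"})
            return len(self.pending) - 1  # ticket number for the reviewer

        def approve(self, ticket: int, reviewer: str) -> dict:
            item = self.pending[ticket]
            item.update(status="approved", reviewer=reviewer)
            return item  # only an approved item may drive downstream actions

    gate = ReviewGate()
    ticket = gate.submit("route_for_signature", "NDA draft v3, Acme Corp")
    approved = gate.approve(ticket, reviewer="deputy-gc@example.com")
    print(approved["status"], "by", approved["reviewer"])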
