Generative AI Act 2
Most Generative AI pilots prove the technology works. The problem is that many never deliver lasting business value. In this post, we explore the shift from “Act I” experimentation to “Act 2” enterprise systems and how organizations can design AI around real workflows, decisions, and measurable outcomes.
Generative AI Act 2: What changes when you build from the customer back
If you felt a wave of pressure in 2023 to “do something with GenAI,” you were not alone. Many teams moved fast to ship pilots and prove that large language models could draft content, answer questions, and summarize documents.
But launching quickly meant that many teams and organizations struggled with adoption, trust, and sustained value. Companies now need to refocus on why they implemented AI in the first place.
Sequoia Capital frames this shift as the move from “Act I” to “Act 2.” Act I is about discovering the potential of a new technology like GenAI. Act 2 is where teams solve real problems end-to-end and deliver value with sustainable, user-centric applications.
This article builds on that idea, with a practical lens on what you should do differently in an enterprise setting right now.
A breakdown of Act I vs Act 2
Generative AI Act I
In Act I, teams typically start with the model and look for places to apply it. Organizations move quickly to ship lightweight apps that show what GenAI can do, often generating early excitement and “nice demo” reactions.
But many of these projects struggle with retention and repeatability. Sequoia pointed out that engagement across many GenAI apps remained weak, with a median DAU/MAU of just 14%.
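DAU/MAU is a simple stickiness ratio: daily active users divided by monthly active users, i.e. what fraction of a month's users show up on a typical day. A quick sketch of the calculation (the user counts are made up for illustration):

```python
def stickiness(daily_active: int, monthly_active: int) -> float:
    """DAU/MAU ratio: the share of monthly users active on a given day."""
    if monthly_active == 0:
        return 0.0
    return daily_active / monthly_active

# Hypothetical app: 14,000 daily actives out of 100,000 monthly actives.
ratio = stickiness(14_000, 100_000)
print(f"{ratio:.0%}")  # prints "14%" -- the median Sequoia observed
```

For comparison, Sequoia noted that sticky consumer apps sit far above that figure, which is what made the 14% median look weak.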
The focus during this stage is experimentation and innovation. Teams explore what is possible with the technology, sometimes without fully considering the bigger picture. That can lead to limited long-term value, inflated expectations, and concerns around governance, ethics, or security.
Generative AI Act 2
Act 2 flips the starting point. Instead of beginning with the model, teams start with the workflow and the human outcome they want to improve.
From there, they design a full solution that includes the supporting data, user interface, approvals, integrations, and monitoring. The goal is to embed AI directly into how work already happens so the solution becomes part of the process rather than an optional tool.
In this model, the AI is treated as an engine powering the system, not as the product itself.
Why so many enterprise GenAI pilots stalled
When we review GenAI pilots that did not make it past a proof-of-concept, the failure mode usually looks like one of these.
1. The pilot optimized novelty, not throughput
A chat interface impressed leadership, then quietly faded because it did not reduce cycle time, rework, or risk.
If you cannot tie GenAI output to a downstream action, you basically have a writing assistant, not a genuinely innovative product.
2. Teams treated “knowledge retrieval” as the whole job
Retrieval-augmented generation (RAG) helped reduce hallucinations and improve relevance. But many implementations stopped at the point where a user asks a question and receives an answer.
Act 2 requires you to finish the job. The system needs to route the answer into a decision, generate the artifact required by the workflow, trigger the next system action, and log what happened for auditability.
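As a sketch, “finishing the job” means the answer is only the first step of a pipeline: it gets turned into the artifact the workflow needs, triggers the next action, and leaves an audit trail. The function and payload names below are hypothetical placeholders, not a real API:

```python
import json
from datetime import datetime, timezone
from typing import Callable

AUDIT_LOG: list[dict] = []

def complete_workflow_step(question: str,
                           answer: str,
                           build_artifact: Callable[[str], dict],
                           trigger_next: Callable[[dict], str]) -> dict:
    """Turn a retrieved answer into the artifact and action the workflow needs."""
    artifact = build_artifact(answer)   # e.g. a drafted response or filled form
    action = trigger_next(artifact)     # e.g. update a ticket, file a document
    AUDIT_LOG.append({                  # log what happened for auditability
        "ts": datetime.now(timezone.utc).isoformat(),
        "question": question,
        "artifact": artifact,
        "action": action,
    })
    return artifact

# Hypothetical usage: route a support answer into a ticket update.
artifact = complete_workflow_step(
    question="Why was invoice 1042 rejected?",
    answer="Missing PO number.",
    build_artifact=lambda ans: {"type": "ticket_comment", "body": ans},
    trigger_next=lambda art: f"posted {art['type']}",
)
print(json.dumps(AUDIT_LOG[-1], indent=2))
```

The point of the structure is that the answer never dead-ends in a chat window; every response produces a logged artifact and a next action.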
3. No one owned the workflow end-to-end
It’s a tale as old as time in the technology space even before AI! If IT owns the model and the business owns the process, but no one owns the combined system, you get a pilot with no path to production.
What Generative AI Act 2 looks like in a real enterprise build
Here are the shifts that separate Act 2 teams from everyone else:
You design the interface around decisions instead of conversations
A plain old empty chat box invites open-ended prompts, but enterprise work rarely behaves that way.
Act 2 interfaces usually guide users through the task with structured interactions: explicit input fields, constrained choices, and prefilled context instead of a blank prompt.
If your GenAI tool cannot explain what it needs from the user to do the job well, you will end up with inconsistent inputs and inconsistent outputs.
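One way to make those required inputs explicit is a structured request schema instead of a free-text prompt. A minimal sketch using a dataclass; the field names and the contract-redline task are illustrative assumptions:

```python
from dataclasses import dataclass

@dataclass
class RedlineRequest:
    """Structured inputs a contract-redline task needs -- no blank chat box."""
    contract_type: str                 # e.g. "NDA", "MSA"
    counterparty: str
    clauses_in_scope: list[str]
    risk_tolerance: str = "standard"   # constrained choice, not free text

    def validate(self) -> list[str]:
        """Return human-readable prompts for anything still missing."""
        missing = []
        if not self.contract_type:
            missing.append("Which contract type is this?")
        if not self.clauses_in_scope:
            missing.append("Which clauses should be reviewed?")
        return missing

req = RedlineRequest(contract_type="NDA", counterparty="Acme", clauses_in_scope=[])
print(req.validate())  # the UI asks for these before calling the model
```

Because the schema states what the job requires, the interface can ask for missing inputs up front rather than letting inconsistent prompts produce inconsistent outputs.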
You don’t build an AI feature. Instead, you build an AI system.
Successful enterprise implementations treat AI as one component inside a larger operational system. That system typically includes the supporting data pipelines, a task-specific user interface, approval steps, integrations with systems of record, and monitoring.
Sequoia describes this as moving from demos to whole product experiences and, increasingly, agentic systems that can solve problems end-to-end.
You optimize for sustained value
The most successful Act 2 programs treat GenAI as a living product rather than a one-time project.
That means running evaluation cycles frequently, embedding feedback capture directly into the workflow, and maintaining model and prompt versioning over time. Teams also implement regression testing for critical scenarios so changes do not introduce unexpected failures.
If a team launches a pilot and immediately moves on, Act 2 can feel difficult until that operating cadence changes.
A practical Act 2 playbook you can use this quarter
If you want to move from experimentation to measurable outcomes, start with a structured sequence.
Step 1: Pick one workflow with a clear owner and a measurable constraint
Good candidates include support ticket routing, contract redline triage for standard agreements, sales proposal first drafts with compliance checks, and procurement intake paired with supplier risk summaries.
Step 2: Map the workflow as decisions and artifacts
Do not start with prompts. Start by identifying the decisions the workflow requires, who owns each one, the artifacts each step must produce, and the systems those artifacts live in.
You will usually find that the AI should not “answer questions.” It should “produce the next artifact the process needs.”
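The mapping itself can be captured as plain data before any prompt exists. A sketch, with a made-up procurement-intake workflow (the steps, owners, and system names are illustrative):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Step:
    decision: str   # the question someone must answer at this point
    owner: str      # who is accountable for the decision
    artifact: str   # what the process needs produced next
    system: str     # where that artifact lives

# Hypothetical map for a procurement-intake workflow.
WORKFLOW = [
    Step("Is the supplier already approved?", "procurement",
         "supplier risk summary", "SAP"),
    Step("Does the request need legal review?", "legal",
         "review checklist", "SharePoint"),
    Step("Can the PO be issued?", "finance",
         "purchase order draft", "SAP"),
]

# The AI's job at each step is to produce the artifact, not to chat.
for step in WORKFLOW:
    print(f"{step.decision} -> produce: {step.artifact} (in {step.system})")
```

A map like this makes the AI's job concrete at every step: produce the next artifact the process needs, in the system where the work already lives.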
Step 3: Add routing and guardrails early
Routing helps match tasks to the right level of capability: simple, repetitive requests can go to lighter models, while complex or high-risk cases escalate to more capable models or human review.
Guardrails ensure the system behaves safely and predictably. These typically include policy filters, data boundary enforcement, citation requirements for sourced claims, and safe failure behaviors that escalate uncertain results.
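A minimal routing-and-guardrail sketch; the complexity score, confidence threshold, and tier names are illustrative assumptions, not a prescribed design:

```python
def route(task_complexity: float, risk: str) -> str:
    """Match the task to the cheapest capability that can handle it safely."""
    if risk == "high":
        return "human_review"      # safe failure: escalate, don't guess
    if task_complexity < 0.3:
        return "small_model"       # cheap tier for simple, repetitive work
    return "large_model"

def guardrails(answer: str, citations: list[str], confidence: float) -> str:
    """Block or escalate instead of returning unsafe or unsourced output."""
    if not citations:
        return "blocked: sourced claims require citations"
    if confidence < 0.7:
        return "escalated: low confidence, routed to a human"
    return answer

print(route(0.2, "low"))                                        # small_model
print(guardrails("Clause 7 caps liability.", ["MSA s.7"], 0.9))
```

Adding these checks early is cheaper than retrofitting them: once a pilot ships without escalation paths, every uncertain answer reaches users directly.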
Step 4: Build evaluation like you build software tests
Create a structured test set that includes common scenarios, difficult cases, edge conditions, and examples that previously caused failures.
Run this evaluation set every time the system changes. If quality cannot be measured consistently, it cannot be improved.
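Run the set like a regression suite. A sketch with a stubbed system under test; in practice `generate` would be your real pipeline, and the cases and pass bar here are illustrative:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class EvalCase:
    name: str
    prompt: str
    check: Callable[[str], bool]   # a property the output must satisfy

def run_evals(generate: Callable[[str], str],
              cases: list[EvalCase]) -> float:
    """Return the pass rate; fail the build if it drops below your bar."""
    passed = sum(1 for c in cases if c.check(generate(c.prompt)))
    return passed / len(cases)

# Stub standing in for the real system under test.
def generate(prompt: str) -> str:
    return "Escalating to a human reviewer." if "refund" in prompt else "Drafted."

# Mix common scenarios, edge conditions, and past failures.
CASES = [
    EvalCase("common draft", "Draft a reply to ticket 101",
             lambda out: "Draft" in out),
    EvalCase("risky refund", "Approve a refund of $9,000",
             lambda out: "human" in out),
]

rate = run_evals(generate, CASES)
assert rate == 1.0, f"regression: pass rate {rate:.0%}"
```

Running this on every prompt, model, or retrieval change gives the same safety net that unit tests give ordinary software.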
Step 5: Ship into the system of record
If the work lives in ServiceNow, Salesforce, SharePoint, SAP, or another enterprise platform, deploy the AI capability directly inside that system.
Adoption tends to follow gravity. People rarely want to adopt an entirely new tool if the work already exists somewhere else.
Start with the workflow, not the copilot
If your GenAI strategy starts with “we need a copilot,” you will likely build another chat interface that competes with your own process. If your strategy starts with “we need to remove 20% of cycle time from this workflow without adding risk,” you will design a system people keep using.
Act 2 rewards teams that treat GenAI like product engineering plus process engineering plus governance. That combination takes effort, but it is also where defensible value lives.