What counts as useful proof
Good proof names the workflow, the baseline, the data boundary, the human review point, the outcome, and the limitations. It should help a buyer inspect whether a narrow AI-assisted workflow was useful, not imply that every AI pilot succeeds.
- Workflow named clearly enough to understand the operational context.
- Baseline captured before the sprint where possible.
- Outcome linked to time, turnaround, error, rework, risk-control or service quality.
- Human review and decision ownership kept visible.
- Limitations and sample size kept beside the claim.
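The checklist above can be mirrored as a simple record shape, so that incomplete proof is caught before it is used. This is a minimal sketch; the field names and the completeness check are assumptions for illustration, not a standard from the original.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class ProofRecord:
    # Field names are hypothetical; they mirror the checklist above.
    workflow: str        # operational context, named clearly
    baseline: str        # captured before the sprint where possible
    outcome: str         # time, turnaround, error, rework, risk-control or service quality
    data_boundary: str   # what data stayed where
    human_review: str    # who reviewed outputs and owned the decision
    limitations: str     # kept beside the claim
    sample_size: Optional[int] = None

    def missing_fields(self) -> List[str]:
        """Return the names of checklist fields left empty."""
        text_fields = ("workflow", "baseline", "outcome",
                       "data_boundary", "human_review", "limitations")
        return [name for name in text_fields
                if not getattr(self, name).strip()]
```

A record with an empty `baseline`, for example, would report `["baseline"]`, signalling that the claim is not yet inspectable.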
What can be shared externally
External proof must be permission-led. If named permission is not recorded, use anonymised sector learning only and keep buyer, client, member, matter or citizen details out of the claim.
- Named case studies need explicit permission.
- Quotes should be approved word for word.
- Metrics should include caveats and sample context.
- Anonymised proof should not make the buyer identifiable by accident.
- Public wording should avoid implying regulatory approval or autonomous decision-making.
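The permission rule above can be expressed as a small gate: if named permission is not recorded, only an anonymised sector claim is produced. A minimal sketch, assuming hypothetical record keys (`buyer`, `sector`, `outcome`, `limitations`); the wording templates are illustrative, not prescribed.

```python
def external_claim(record: dict, named_permission: bool) -> str:
    """Return wording that may be shared externally under the permission rule.

    Keys on `record` are assumptions for this sketch: buyer, sector,
    outcome, limitations.
    """
    if named_permission:
        # Named case study: the caveat still travels with the metric.
        return (f"{record['buyer']}: {record['outcome']} "
                f"(caveat: {record['limitations']})")
    # No recorded permission: anonymised sector learning only,
    # with no buyer-identifying detail.
    return (f"A {record['sector']} organisation: {record['outcome']} "
            f"(caveat: {record['limitations']})")
```

Note the anonymised branch never touches `record['buyer']`, which is the point of the gate.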
How proof is used in a sprint
Proof is collected to make the closeout decision clearer. The useful outcome may be to scale, improve, pause, or stop. A stop decision can still be valuable if it prevents a weak AI project from consuming attention.
- Separate internal evidence from public claims.
- Agree permission before using proof in proposals, posts or website copy.
- Keep data-boundary and review notes with the evidence.
- Use proof to improve the next workflow hypothesis.
- Retire proof when tools, policy, buyer context or claims change.
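The closeout outcomes and the retirement rule above can be sketched together. The enum names mirror the four options in the text; the change-trigger keys in `still_valid` are assumptions for illustration.

```python
from enum import Enum

class Closeout(Enum):
    # The four closeout outcomes named in the text; a STOP is a
    # legitimate result, not a failure state.
    SCALE = "scale"
    IMPROVE = "improve"
    PAUSE = "pause"
    STOP = "stop"

def still_valid(proof_context: dict, current_context: dict) -> bool:
    """Proof is retired when tools, policy, buyer context or claims change.

    The dict keys are hypothetical labels for those four triggers.
    """
    triggers = ("tools", "policy", "buyer_context", "claims")
    return all(proof_context.get(k) == current_context.get(k)
               for k in triggers)
```

Any single changed trigger retires the proof, which keeps stale claims out of proposals and public copy.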