AI Governance
Governance should help people use AI safely, not bury them in policy. The aim is practical control: what data can be used, who reviews outputs, what gets logged, and when work needs a private environment.
Core controls
Maintain an AI-use register: record use cases, owners, tools, data sensitivity, review requirements, and status.
Make public, internal, confidential, and personal-data boundaries easy for staff to follow.
Keep judgement and sign-off explicit where client, regulatory, or reputational risk is present.
Capture workflow maps, approval notes, limitations, test results, and adoption decisions.
Set practical rules for staff experimentation, approved tools, and escalation.
Track hallucination, confidentiality, bias, supplier, operational, and auditability risks.
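The AI-use register described above can be sketched as a simple record structure. This is a minimal illustration, not a prescribed schema; the field and type names below are assumptions chosen to mirror the fields listed in the control.

```python
from dataclasses import dataclass
from enum import Enum

class Sensitivity(Enum):
    """Data-sensitivity boundaries staff are expected to follow."""
    PUBLIC = "public"
    INTERNAL = "internal"
    CONFIDENTIAL = "confidential"
    PERSONAL_DATA = "personal-data"

@dataclass
class RegisterEntry:
    """One row of the AI-use register (hypothetical field names)."""
    use_case: str
    owner: str
    tool: str
    sensitivity: Sensitivity
    review_required: bool
    status: str = "proposed"  # e.g. proposed, piloting, approved, retired

# Example entry: an internal-data use case that needs human review.
entry = RegisterEntry(
    use_case="Summarise internal meeting notes",
    owner="Operations lead",
    tool="Approved chat assistant",
    sensitivity=Sensitivity.INTERNAL,
    review_required=True,
)
```

Even a spreadsheet with these columns is enough to start; the point is that every use case has a named owner, a sensitivity label, and an explicit review requirement before it moves beyond experimentation.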
These controls mark the point at which informal AI use has moved beyond curiosity and needs a practical operating model.
Next step
Start with a practical governance pack, then use it to qualify the first workflow worth testing.