Your Client Wants Claude Co-Work. Use This Governance Checklist First.

Most MSPs are asking the wrong question.
Not: "Should we allow Claude?"
The right one: "Under what controls do we allow it?"
If you ban AI tools outright, users route around you. If you allow everything, you get silent data exposure and no ownership when something breaks.
This is the middle path that actually works.
The risk is not "AI". It is scope and access to data.
Treat each deployment mode as its own risk topic:
1. Web chat usage
User pastes content into a browser chat. Main controls: data classification, training/privacy settings, DLP, approved accounts.
2. Desktop app with local file access
Tool can interact with local files, directories, and workflows. Main controls: endpoint policy, folder scope, app controls, monitoring, role restrictions.
3. Agentic coding workflows
Tool can run commands, modify code, and automate tasks. Main controls: permission mode policy, sandboxing, project boundaries, audit trail, least privilege.
When your client says "we want to use Claude," force this differentiation first; the risk conversation gets easier immediately. The same risk-scoping approach applies to Microsoft Copilot rollouts and any other agentic AI tool hitting your clients' environments.
Why "no" fails in real client environments
MSP owners already know this from shadow IT:
- Users install tools anyway
- Personal accounts get used for work
- Security team loses visibility
- Incidents get discovered after the fact
A blanket no creates hidden use.
A governed yes creates:
- Visibility
- Ownership
- Enforceable policy
- Clean escalation path
Your value as an MSP is not saying no. It is building the decision system. If you're fuzzy on where MSP compliance liability starts and ends when AI agents act on behalf of client users, sort that out before you green-light anything.
The 4-question gate to run before any Claude Desktop rollout
Use this in every client conversation.
1) What data classes are allowed?
Define 3 tiers in plain language:
- Tier 1: Public (marketing copy, public docs)
- Tier 2: Internal (process docs, non-sensitive operations)
- Tier 3: Restricted (PII, financials, client secrets, regulated data)
Rule: no Tier 3 in any unapproved AI workflow.
2) What deployment mode is requested?
- Browser only?
- Desktop app with local file access?
- Agentic coding workflows?
Never approve "AI" as a single bucket. Approve by mode.
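To make "approve by mode" concrete, here is a minimal sketch of that decision as data: a default-deny lookup keyed by (tier, mode). The tier and mode names follow this article; the specific matrix values are illustrative examples, not a recommendation:

```python
from enum import Enum

class Tier(Enum):
    PUBLIC = 1       # marketing copy, public docs
    INTERNAL = 2     # process docs, non-sensitive operations
    RESTRICTED = 3   # PII, financials, client secrets, regulated data

class Mode(Enum):
    WEB = "browser chat"
    DESKTOP = "desktop app with local file access"
    AGENTIC = "agentic coding workflow"

# One explicit decision per (tier, mode) pair; unlisted pairs are denied by default.
APPROVED = {
    (Tier.PUBLIC, Mode.WEB): True,
    (Tier.PUBLIC, Mode.DESKTOP): True,
    (Tier.INTERNAL, Mode.WEB): True,   # with human review, per the data handling rules below
    # No Tier 3 entries: Restricted data is never pre-approved in any mode.
}

def is_approved(tier: Tier, mode: Mode) -> bool:
    """Default-deny: anything not explicitly approved is blocked."""
    return APPROVED.get((tier, mode), False)

print(is_approved(Tier.RESTRICTED, Mode.AGENTIC))  # False
```

The default-deny lookup is the point: you only ever maintain a list of approved combinations, and everything else stays blocked without extra work.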
3) What controls are enforced before access?
Minimum baseline:
- SSO-required business account
- Approved user groups
- Documented retention/training settings
- Endpoint visibility enabled
- Acceptable use policy signed
4) Who owns output risk and incident response?
Pick owners up front:
- Technical owner (IT/security)
- Business owner (department leader)
- Incident contact and response SLA
If no owner exists, there is no rollout.
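If you track these answers in an intake form, the whole gate reduces to a handful of required fields. A minimal sketch, assuming a Python intake record with hypothetical field names:

```python
from dataclasses import dataclass

@dataclass
class RolloutRequest:
    data_tiers_defined: bool = False      # Q1: tiers documented and mapped
    deployment_mode: str | None = None    # Q2: "web", "desktop", or "agentic"
    controls_enforced: bool = False       # Q3: SSO, groups, retention, visibility, AUP
    technical_owner: str | None = None    # Q4: IT/security owner
    business_owner: str | None = None     # Q4: department leader
    incident_contact: str | None = None   # Q4: incident contact with response SLA

def gate(req: RolloutRequest) -> list[str]:
    """Return the list of blockers; an empty list means the rollout can proceed."""
    blockers = []
    if not req.data_tiers_defined:
        blockers.append("Q1: data tiers not defined")
    if req.deployment_mode not in {"web", "desktop", "agentic"}:
        blockers.append("Q2: deployment mode not specified")
    if not req.controls_enforced:
        blockers.append("Q3: baseline controls not enforced")
    if not (req.technical_owner and req.business_owner and req.incident_contact):
        blockers.append("Q4: missing owner - no owner, no rollout")
    return blockers

print(gate(RolloutRequest(deployment_mode="desktop")))  # Q1, Q3, Q4 still block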
A simple policy structure MSPs can deploy fast
You do not need a 50-page policy to start. You need one page that people can follow.
Section A: Approved tools and plans
List exactly what is allowed:
- Approved vendor
- Approved subscription tier
- Approved auth method
- Approved user groups
Anything not listed is blocked by default.
Section B: Data handling rules
Map your data tiers to allowed actions. Example:
- Tier 1: allowed in approved business AI tools
- Tier 2: allowed with human review and no customer identifiers
- Tier 3: prohibited unless explicit exception approved by security + legal
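Expressed as data, Section B is just a tier-to-rule mapping that your tooling or your documentation can reuse. A sketch mirroring the example rules above; the structure is the point, not the specific values:

```python
# Section B as data: each tier maps to an allowed/prohibited decision plus conditions.
RULES = {
    "tier_1_public":     {"allowed": True,  "conditions": []},
    "tier_2_internal":   {"allowed": True,  "conditions": ["human review", "no customer identifiers"]},
    "tier_3_restricted": {"allowed": False, "conditions": ["explicit exception: security + legal"]},
}

def describe(tier: str) -> str:
    rule = RULES[tier]
    if rule["allowed"]:
        extra = f" ({'; '.join(rule['conditions'])})" if rule["conditions"] else ""
        return "allowed" + extra
    return f"prohibited unless {rule['conditions'][0]}"

for tier in RULES:
    print(f"{tier}: {describe(tier)}")
```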
Section C: Use case guardrails
Acceptable examples:
- Drafting internal documentation
- Summarizing public vendor docs
- Creating first-pass SOP outlines
Prohibited examples:
- Uploading client contracts to personal AI accounts
- Generating final compliance statements without human sign-off
- Using AI output as sole source of truth for security actions
Section D: Incident reporting
One channel, one SLA, one owner.
Example: "If sensitive data was entered into the wrong tool, report within 1 hour to [contact]."
Section E: Review cadence
Set a recurring review schedule. Quarterly at minimum, and immediately on:
- New feature release
- Policy or regulatory changes
- Any AI-related incident
Technical controls MSPs should prioritize first
If your client is on the Microsoft stack, start with controls they already own.
Endpoint and app visibility
- Detect where desktop tools are installed
- Restrict unauthorized installs where possible
- Track usage patterns for approved vs unapproved apps
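As a crude starting point, you can sweep for known install paths with a script your RMM pushes; the real inventory belongs in your RMM or endpoint tool. The watchlist paths below are best-effort guesses and vary by installer and version:

```python
import os
import platform
from pathlib import Path

# Hypothetical watchlist: app name -> candidate install paths (guesses; verify per client).
WATCHLIST = {
    "Claude Desktop": [
        Path(os.environ.get("LOCALAPPDATA", "")) / "AnthropicClaude",  # Windows (guess)
        Path("/Applications/Claude.app"),                              # macOS (guess)
    ],
}

def find_installed() -> list[str]:
    """Return watchlisted app names whose candidate paths exist on this machine."""
    return [name for name, paths in WATCHLIST.items() if any(p.exists() for p in paths)]

hits = find_installed()
print(f"{platform.node()}: {', '.join(hits) or 'no watchlisted AI apps found'}")
```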
Identity and consent governance
- Require SSO for approved tool access
- Restrict who can grant third-party app consent
- Audit high-privilege delegated access regularly
DLP and egress safeguards
- Block or warn on restricted data classes headed to unapproved AI endpoints
- Prefer approved enterprise plans and known retention settings
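Real DLP belongs in the platform (on the Microsoft stack, Purview policies), but the decision logic is worth seeing in miniature. A toy sketch with deliberately simplistic example patterns and a hypothetical approved-endpoint list, not a substitute for a DLP product:

```python
import re

# Simplistic example patterns for restricted data classes; real DLP uses far richer detection.
RESTRICTED_PATTERNS = {
    "ssn_like": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "card_like": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}
APPROVED_ENDPOINTS = {"claude.ai"}  # example: your approved enterprise plan's domain

def egress_decision(text: str, endpoint: str) -> str:
    """Warn on approved endpoints, block on unapproved ones, when restricted data is detected."""
    hits = [name for name, pat in RESTRICTED_PATTERNS.items() if pat.search(text)]
    if not hits:
        return "allow"
    return "warn" if endpoint in APPROVED_ENDPOINTS else f"block ({', '.join(hits)})"

print(egress_decision("SSN 123-45-6789", "random-ai-tool.example"))  # block (ssn_like)
```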
Permission mode policy for agentic workflows
- Default to ask/approve mode
- Allow higher autonomy only in sandboxed, scoped environments
- Never normalize full bypass modes for broad end-user populations
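Here is a tool-agnostic sketch of the "default to ask" rule: a wrapper that only lets an agent act autonomously inside a sandbox, and only against an explicit allowlist. The mode names and allowlist are hypothetical; real agent tools expose their own permission settings, which you should map onto this policy:

```python
from enum import Enum

class PermissionMode(Enum):
    ASK = "ask"                   # default: every action needs human approval
    AUTO_SANDBOXED = "sandboxed"  # higher autonomy, sandboxed and scoped only
    BYPASS = "bypass"             # never normalized for broad end-user populations

def may_run(action: str, mode: PermissionMode, sandboxed: bool, allowlist: set[str]) -> bool:
    """Least-privilege check before an agent executes anything."""
    if mode is PermissionMode.BYPASS:
        return True   # acceptable only in isolated lab environments
    if mode is PermissionMode.AUTO_SANDBOXED:
        return sandboxed and action in allowlist
    return False      # ASK: defer to a human approval flow

print(may_run("run_tests", PermissionMode.AUTO_SANDBOXED, True, {"run_tests"}))  # True
print(may_run("deploy_prod", PermissionMode.AUTO_SANDBOXED, True, {"run_tests"}))  # False
```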
Client conversation script that works
Use this language in QBRs and steering calls:
"We are not blocking innovation. We are implementing a controlled rollout model. You get productivity gains without blind data risk. We approve by data tier and deployment mode, not by hype cycle."
Then show the 4-question gate and ask for decisions. Most clients respond well because this is practical, not theoretical.
Packaging this as an MSP advisory offer
This work is highly valuable to most SMBs, and it should not be free. Turn it into a paid engagement with clear deliverables:
AI Governance Sprint (example package)
Week 1: Current-state AI usage inventory, shadow AI risk snapshot
Week 2: One-page acceptable use policy, deployment mode matrix (browser vs desktop vs agentic)
Week 3: Control plan and owner assignment, pilot group launch checklist
Week 4: Rollout decision memo + executive readout
Outputs are clear. Scope is clear. Client sees value quickly.
Common failure patterns to avoid
- Policy without enforcement: Great PDF. Zero controls. No behavior change.
- Controls without ownership: Tools configured, nobody accountable.
- Approval without mode separation: Web and desktop treated as same risk.
- No incident playbook: Team knows what to prevent, not what to do when prevention fails.
- One-time setup mentality: AI feature surface changes monthly. Governance must be iterative.
Final takeaway
If your client asks you to purchase or install Co-Work, do not answer "yes" or "no."
Turn their question into a risk conversation.
- What data is allowed?
- In which tools?
- Under which controls?
- With which owner?
That is the framework that prevents shadow AI, keeps clients moving, and positions your MSP as a strategic advisor instead of a tool gatekeeper. If you want to turn this advisory work into recurring revenue, client technology roadmaps are the natural next step: AI governance rolls into quarterly roadmap reviews, and that's where the real value stacks up.
If your client asks for Claude Co-Work this week, run the 4-question gate first. It will give you a better answer than yes or no.
For HIPAA-regulated clients, the governance bar is higher. Our HIPAA compliance guide for MSPs covers the specific controls and documentation requirements you need before any AI tool touches PHI.


