
Your Client Wants Claude Co-Work. Use This Governance Checklist First.

Scopable Team · 6 min read

Most MSPs are asking the wrong question.

Not: "Should we allow Claude?"

The right one: "Under what controls do we allow it?"

If you ban AI tools outright, users route around you. If you allow everything, you get silent data exposure and no ownership when something breaks.

This is the middle path that actually works.

The risk is not "AI". It is scope and access to data.

Treat these deployment modes as separate risk conversations:

1. Web chat usage

User pastes content into a browser chat. Main controls: data classification, training/privacy settings, DLP, approved accounts.

2. Desktop app with local file access

Tool can interact with local files, directories, and workflows. Main controls: endpoint policy, folder scope, app controls, monitoring, role restrictions.

3. Agentic coding workflows

Tool can run commands, modify code, and automate tasks. Main controls: permission mode policy, sandboxing, project boundaries, audit trail, least privilege.

When your client says "we want to use Claude," force this differentiation first. The risk conversations get easier immediately. This same risk-scoping approach applies to Microsoft Copilot Co-Work rollouts and any agentic AI tool hitting your clients' environments.

Why "no" fails in real client environments

MSP owners already know this from shadow IT:

  1. Users install tools anyway
  2. Personal accounts get used for work
  3. Security team loses visibility
  4. Incidents get discovered after the fact

A blanket no creates hidden use.

A governed yes creates:

  1. Visibility
  2. Ownership
  3. Enforceable policy
  4. Clean escalation path

Your value as an MSP is not saying no. It is building the decision system. If you're fuzzy on where MSP compliance liability starts and ends when AI agents act on behalf of client users, sort that out before you green-light anything.

The 4-question gate to run before any Claude Desktop rollout

Use this in every client conversation.

1) What data classes are allowed?

Define 3 tiers in plain language:

  1. Tier 1: Public (marketing copy, public docs)
  2. Tier 2: Internal (process docs, non-sensitive operations)
  3. Tier 3: Restricted (PII, financials, client secrets, regulated data)

Rule: no Tier 3 in any unapproved AI workflow.

2) What deployment mode is requested?

  1. Browser only?
  2. Desktop app with local file access?
  3. Agentic coding workflows?

Never approve "AI" as a single bucket. Approve by mode.

3) What controls are enforced before access?

Minimum baseline:

  1. SSO-required business account
  2. Approved user groups
  3. Documented retention/training settings
  4. Endpoint visibility enabled
  5. Acceptable use policy signed

4) Who owns output risk and incident response?

Pick owners up front:

  1. Technical owner (IT/security)
  2. Business owner (department leader)
  3. Incident contact and response SLA

If no owner exists, there is no rollout.
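The four questions above can be captured as a lightweight pre-rollout check that blocks any request with an unanswered question. A minimal sketch in Python; the field names and tier labels are illustrative, not a vendor schema:

```python
# Hypothetical pre-rollout gate: all four questions must have concrete answers.
REQUIRED_FIELDS = {
    "data_tiers_allowed",   # Q1: which data classes (e.g. {"tier1", "tier2"})
    "deployment_mode",      # Q2: "browser" | "desktop" | "agentic"
    "controls_enforced",    # Q3: e.g. {"sso", "approved_groups", "aup_signed"}
    "risk_owner",           # Q4: named technical + business owner
}

def gate_rollout(request: dict) -> tuple[bool, list[str]]:
    """Return (approved, blockers). Any unanswered question blocks the rollout."""
    blockers = [f"missing: {f}" for f in REQUIRED_FIELDS if not request.get(f)]
    if "tier3" in request.get("data_tiers_allowed", set()):
        blockers.append("tier3 data requires an explicit security+legal exception")
    return (not blockers, blockers)

ok, why = gate_rollout({
    "data_tiers_allowed": {"tier1", "tier2"},
    "deployment_mode": "desktop",
    "controls_enforced": {"sso", "approved_groups", "aup_signed"},
    "risk_owner": "IT security + dept lead",
})
print(ok)  # True: all four questions answered, no Tier 3 in scope
```

The point of the sketch is that "no owner" and "Tier 3 in scope" fail the gate automatically, rather than relying on someone remembering to ask.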

A simple policy structure MSPs can deploy fast

You do not need a 50-page policy to start. You need one page that people can follow.

Section A: Approved tools and plans

List exactly what is allowed:

  1. Approved vendor
  2. Approved subscription tier
  3. Approved auth method
  4. Approved user groups

Anything not listed is blocked by default.

Section B: Data handling rules

Map your data tiers to allowed actions. Example:

  1. Tier 1: allowed in approved business AI tools
  2. Tier 2: allowed with human review and no customer identifiers
  3. Tier 3: prohibited unless explicit exception approved by security + legal
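The Section B mapping can live as policy-as-code, so the same rules drive both the one-page document and any enforcement tooling. A minimal sketch mirroring the example tiers above; names and conditions are illustrative:

```python
# Data tier -> handling rule, mirroring Section B. Illustrative, not a vendor schema.
TIER_POLICY = {
    "tier1": {"allowed": True,  "conditions": []},
    "tier2": {"allowed": True,  "conditions": ["human_review", "no_customer_identifiers"]},
    "tier3": {"allowed": False, "conditions": ["security_exception", "legal_exception"]},
}

def is_permitted(tier: str, satisfied: set[str]) -> bool:
    """Permit a tier only when every required condition (or exception) is met."""
    rule = TIER_POLICY[tier]
    if rule["allowed"]:
        return set(rule["conditions"]) <= satisfied
    # Prohibited tiers pass only with the full exception set approved.
    return bool(rule["conditions"]) and set(rule["conditions"]) <= satisfied
```

Keeping the tiers in one structure means a policy change is a one-line diff, not a document rewrite plus a separate tool update.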

Section C: Use case guardrails

Acceptable examples:

  • Drafting internal documentation
  • Summarizing public vendor docs
  • Creating first-pass SOP outlines

Prohibited examples:

  • Uploading client contracts to personal AI accounts
  • Generating final compliance statements without human sign-off
  • Using AI output as sole source of truth for security actions

Section D: Incident reporting

One channel, one SLA, one owner.

Example: "If sensitive data was entered in the wrong tool, report within 1 hour to [contact]."

Section E: Review cadence

Set a recurring review schedule: quarterly at minimum, and immediately on:

  1. New feature release
  2. Policy or regulatory changes
  3. Any AI-related incident

Technical controls MSPs should prioritize first

If your client is on the Microsoft stack, start with controls they already own.

Endpoint and app visibility

  1. Detect where desktop tools are installed
  2. Restrict unauthorized installs where possible
  3. Track usage patterns for approved vs unapproved apps

Identity and consent governance

  1. Require SSO for approved tool access
  2. Restrict who can grant third-party app consent
  3. Audit high-privilege delegated access regularly

DLP and egress safeguards

  1. Block or warn on restricted data classes headed to unapproved AI endpoints
  2. Prefer approved enterprise plans and known retention settings

Permission mode policy for agentic workflows

  1. Default to ask/approve mode
  2. Allow higher autonomy only in sandboxed, scoped environments
  3. Never normalize full bypass modes for broad end-user populations
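The three rules above can be encoded as a mode-assignment policy in which a full-bypass mode simply is not representable, so it can never be handed out by accident. A minimal sketch, assuming hypothetical group and environment names; the mode strings are illustrative, not any specific vendor's settings schema:

```python
# Permission modes an MSP is willing to assign. A full-bypass mode is
# intentionally absent from this tuple, so no code path can grant it.
ASSIGNABLE_MODES = ("ask", "auto_approve_sandboxed")

def assign_permission_mode(user_group: str, environment: str) -> str:
    """Default to ask/approve; grant autonomy only in sandboxed pilot scopes."""
    if environment == "sandbox" and user_group == "pilot_engineers":
        return "auto_approve_sandboxed"  # higher autonomy, but scoped and sandboxed
    return "ask"  # broad end-user populations always stay in ask/approve mode
```

If your agentic tool reads its permission mode from a config file, generate that file from a policy function like this rather than letting users edit it by hand.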

Client conversation script that works

Use this language in QBRs and steering calls:

"We are not blocking innovation. We are implementing a controlled rollout model. You get productivity gains without blind data risk. We approve by data tier and deployment mode, not by hype cycle."

Then show the 4-question gate and ask for decisions. Most clients respond well because this is practical, not theoretical.

Packaging this as an MSP advisory offer

This work is highly valuable to most SMBs, and it should not be free. Turn it into a paid engagement with clear deliverables:

AI Governance Sprint (example package)

Week 1: Current-state AI usage inventory, shadow AI risk snapshot

Week 2: One-page acceptable use policy, deployment mode matrix (browser vs desktop vs agentic)

Week 3: Control plan and owner assignment, pilot group launch checklist

Week 4: Rollout decision memo + executive readout

Outputs are clear. Scope is clear. Client sees value quickly.

Common failure patterns to avoid

  1. Policy without enforcement: Great PDF. Zero controls. No behavior change.
  2. Controls without ownership: Tools configured, nobody accountable.
  3. Approval without mode separation: Web and desktop treated as same risk.
  4. No incident playbook: Team knows what to prevent, not what to do when prevention fails.
  5. One-time setup mentality: AI feature surface changes monthly. Governance must be iterative.

Final takeaway

If your client asks you to purchase or install Claude Co-Work, do not answer "yes" or "no."

Turn their question into a risk conversation.

  1. What data is allowed?
  2. In which tools?
  3. Under which controls?
  4. With which owner?

That is the framework that prevents shadow AI, keeps clients moving, and positions your MSP as a strategic advisor instead of a tool gatekeeper. If you want to turn this advisory work into recurring revenue, client technology roadmaps are the natural next step: AI governance rolls into quarterly roadmap reviews, and that's where the real value stacks up.

If your client asks for Claude Co-Work this week, run the 4-question gate first. It will give you a better answer than yes or no.

For HIPAA-regulated clients, the governance bar is higher. Our HIPAA compliance guide for MSPs covers the specific controls and documentation requirements you need before any AI tool touches PHI.
