Most teams discover the problem after the fact. Someone pastes a client contract into ChatGPT to get a summary. Someone uploads a spreadsheet of employee records to generate a report. Someone sends an internal memo through an AI writing assistant to clean up the language.
None of this feels risky in the moment. It feels like using a tool.
Here is the question worth asking first: Where does this data go, and who might read it?
Why the Question Matters
Cloud-based AI tools process your input on remote servers. Depending on the provider's data handling terms, that content may be:
- Used to improve the model (training data)
- Reviewed by human annotators as part of quality processes
- Retained in logs or backups that persist beyond your session
- Subject to legal requests or security incidents outside your control
For everyday tasks — drafting a blog post, summarising public news — this is usually fine. For sensitive business information, it deserves a second look.
What Counts as Sensitive
Not all data carries the same risk. These categories warrant particular care:
Client and customer information
Names, contact details, transaction records, or anything covered by a client confidentiality agreement. Pasting this into a third-party AI tool may breach your obligations, even accidentally.
Employee records
Salaries, performance notes, medical information, or disciplinary matters. Most privacy regulations treat this as a protected category.
Commercially sensitive material
Unreleased product plans, pricing models, acquisition discussions, or internal forecasts. Once sent to an external system, you have limited control over what happens to it.
Legally privileged communications
Advice from lawyers or material prepared in anticipation of litigation. Sharing it with an external service can waive or weaken privilege.
Regulated data
Health records, financial data, and personal identifiers often fall under regulations such as the Privacy Act, GDPR, or sector-specific frameworks. These carry compliance obligations that do not pause because a tool is convenient.
The Practical Check
Before pasting anything into an AI tool, ask:
- Would it cause problems if this text appeared in a data breach disclosure?
- Does my organisation have a policy on what can be shared with third-party AI services?
- Could this identify a specific person, client, or confidential business matter?
If the answer to any of these is yes or uncertain, pause before proceeding.
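One way to make that pause concrete is a quick automated screen before anything leaves your machine. The sketch below is a rough heuristic under stated assumptions: the patterns and the screen_for_identifiers helper are illustrative inventions, and a regex pass will flag only the most obvious identifiers, never everything a careful reviewer would catch.

```python
import re

# Illustrative patterns only: a heuristic first pass, not a compliance check.
# Patterns can overlap (a card number also looks like a phone number);
# duplicate warnings are fine for this purpose.
PATTERNS = {
    "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "phone number": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
    "card-like number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "money amount": re.compile(r"[$€£]\s?\d[\d,]*(?:\.\d+)?"),
}

def screen_for_identifiers(text: str) -> list[str]:
    """Return a warning for each identifier-like string found in the text."""
    warnings = []
    for label, pattern in PATTERNS.items():
        for match in pattern.finditer(text):
            warnings.append(f"possible {label}: {match.group(0)!r}")
    return warnings

if __name__ == "__main__":
    draft = "Invoice for Jane Doe, jane.doe@example.com, card 4111 1111 1111 1111."
    for warning in screen_for_identifiers(draft):
        print(warning)
    # If anything prints, stop and review before pasting the text anywhere.
```

Anything the script flags is a prompt to stop and think, not a verdict; anything it misses is why the human questions above still matter.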
A Better Path for Sensitive Work
The alternative is not to stop using AI tools. It is to use them in a way that keeps sensitive data inside your control.
InBay is designed for exactly this situation — a local AI environment that runs on your own infrastructure, processes data without sending it to external servers, and gives your team the productivity benefits of AI without the exposure.
This is particularly useful for organisations handling documents covered by professional privilege, regulatory requirements, or client confidentiality — common situations in legal, financial, healthcare, and government contexts.
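To make the local pattern concrete, here is a minimal sketch assuming a model served on your own hardware behind an OpenAI-compatible HTTP endpoint, a convention many local runtimes follow. The URL, model name, and summarise_locally helper are placeholders for illustration, not InBay's actual interface.

```python
import json
import urllib.request

# Assumed setup: a model running on your own infrastructure, exposed at a
# local address. Nothing in this flow touches a third-party server.
LOCAL_ENDPOINT = "http://localhost:8000/v1/chat/completions"  # placeholder URL

def summarise_locally(document: str) -> str:
    """Ask a locally hosted model for a summary; traffic stays on your network."""
    payload = {
        "model": "local-model",  # placeholder model name
        "messages": [
            {"role": "system", "content": "Summarise the document in three sentences."},
            {"role": "user", "content": document},
        ],
    }
    request = urllib.request.Request(
        LOCAL_ENDPOINT,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(request) as response:
        body = json.load(response)
    return body["choices"][0]["message"]["content"]

if __name__ == "__main__":
    with open("contract.txt", encoding="utf-8") as f:
        print(summarise_locally(f.read()))
```

The point is in the address: requests go to localhost or your internal network, so the document never reaches an external server and the questions above largely fall away.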
Summary
The question is not whether AI tools are useful. They are. The question is whether the data you are about to share with one deserves more careful handling.
One habit — pausing to ask where this data goes — is enough to catch most of the situations that would otherwise cause problems.
If your organisation regularly works with sensitive documents and wants to explore a private, local AI setup, get in touch.