Data Privacy & AI: How Swiss SMEs Protect Their Data When Using Artificial Intelligence

Artificial intelligence is more accessible than ever for Swiss SMEs. ChatGPT, Claude, Gemini and others promise massive productivity gains – from automated quote generation to intelligent customer communication. But one question holds many businesses back: What happens to our data? Data privacy in AI means maintaining control over which data flows to which AI systems, where it is processed, and who has access to it. This guide shows you as an SME decision-maker exactly how to use AI in a privacy-compliant way – with practical measures you can implement immediately.

The Swiss Data Protection Act (nDSG) and AI

Since 1 September 2023, Switzerland's revised Data Protection Act (nDSG) has been in force. For SMEs using AI, the following principles are particularly relevant:

Transparency: Data subjects must be informed when their data is processed by AI. If you enter customer data into an AI tool, this must be disclosed in your privacy policy.

Purpose limitation: Data may only be used for the purpose for which it was collected. Customer data collected for order processing may not simply be used for AI training without further basis.

Data minimisation: Only as much data as necessary for the purpose may be processed. Rather than loading complete customer dossiers into an AI tool, only relevant information should be submitted.

Proportionality: The use of AI must be proportionate. Automated decisions that significantly affect individuals require particular care.

For SMEs with EU customers, the GDPR (General Data Protection Regulation) also applies, which imposes even stricter requirements in many areas – such as the right to explanation for automated decisions. In practical terms: if you as an accounting firm enter customer data into ChatGPT to create a tax analysis, you must ensure that data processing is transparent, purpose-bound and proportionate.

Server Locations: Where Is Your Data Processed?

The server location of an AI provider determines which legal framework your data is subject to – and who potentially has access to it. This is not a theoretical question: the US CLOUD Act allows US authorities to access data held by US companies – regardless of where the servers are physically located.

Key AI Providers and Their Server Locations

  • OpenAI (ChatGPT): Primarily US servers (Microsoft Azure). Enterprise customers can choose EU data processing. The Free and Plus versions process data in the USA.
  • Anthropic (Claude): Primarily US servers (AWS/Google Cloud). The API processes data in the USA by default. Enterprise options for regional data processing are available.
  • Google (Gemini): Global infrastructure with selectable regions. EU regions available, Swiss data centre in Zurich for Google Cloud.
  • Microsoft (Copilot): Azure regions freely selectable, including Switzerland (Zurich and Geneva). For Microsoft 365 customers, existing data processing agreements apply.
  • Local open-source models: Full control – data never leaves your own infrastructure.

For Swiss SMEs with sensitive data, the recommendation is: check the server location before using any AI tool in production. For highly sensitive data (personal data, financial data, health data), EU/CH-based solutions or local models are the safer choice.

Customer Data and AI: What Goes In, What Doesn't?

Not all data is equally sensitive. A pragmatic categorisation helps create clear guidelines for AI use:

Safe – freely usable:

  • General questions and research without personal reference
  • Publicly available information
  • Already anonymised or synthetic data
  • Internal process descriptions without confidential details

Caution – only with safeguards:

  • Internal business data (revenue figures, strategy papers)
  • Aggregated customer data without individual reference
  • Email templates with placeholders instead of real names

Off-limits – never without anonymisation:

  • Personal customer data (names, addresses, phone numbers)
  • Health data and particularly sensitive personal data
  • Financial data (account details, credit card information)
  • Passwords, access credentials, internal security information
  • Confidential contracts and legal documents

Is Your Data Used for AI Training?

A key risk: many AI providers use input data to improve their models. This means your business data could potentially flow into future model responses.

  • ChatGPT Free/Plus: Inputs are used for training by default. An opt-out is available in the settings but must be activated manually.
  • ChatGPT Team/Enterprise/API: No use for training. Contractually guaranteed.
  • Claude (Anthropic) API: No use for training. Clearly documented in the terms of service.
  • Claude.ai Free: Data may be used for training unless deactivated.
  • Local open-source models: No data leaves your infrastructure – no training risk.

Three Rules for Employees

  • Rule 1: Never enter real customer names, addresses or contact details into AI tools – use placeholders instead.
  • Rule 2: Before using a new AI tool, check: Is data used for training? If yes, activate opt-out.
  • Rule 3: When in doubt, anonymise – better one time too many than one time too few.
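The first two rules can be partially automated with a pre-flight check that scans a prompt for obvious PII before it leaves the company. A minimal sketch, assuming simple regex detectors (names cannot be reliably caught this way – dedicated tools handle those better):

```python
import re

# Pre-flight check: flag prompts containing obvious PII before they are
# sent to an external AI tool. The patterns are illustrative, not
# exhaustive -- a real deployment should use a dedicated PII detector.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "swiss_phone": re.compile(r"(\+41|0041|0)\s?\d{2}\s?\d{3}\s?\d{2}\s?\d{2}"),
}

def find_pii(prompt: str) -> list[str]:
    """Return the categories of PII detected in a prompt."""
    return [name for name, pattern in PII_PATTERNS.items() if pattern.search(prompt)]

def is_safe_to_send(prompt: str) -> bool:
    """True if no known PII pattern matched (Rule 1 as a gate)."""
    return not find_pii(prompt)
```

A check like this can sit in front of any internal chat interface or automation, blocking or warning before data reaches the provider.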

Open Source vs. Closed Source AI: The Data Privacy Perspective

Open source in AI means that the source code and often the model weights are publicly accessible. The model can be downloaded, run on your own servers and customised. Closed-source models, by contrast, can only be used via the provider's servers.

For data privacy, this distinction is fundamental:

Comparison: Open Source vs. Closed Source

  • Data control – Open Source: Full control, locally operable, no data leaves the company. Closed Source: Data is transmitted to and processed on provider servers.
  • Hosting – Open Source: Own servers, own cloud or on-premise. Closed Source: Provider cloud, location depends on provider.
  • Costs – Open Source: Hardware and hosting costs, no licence fees. Closed Source: Subscription or API costs, no hardware needed.
  • Quality – Open Source: Good to very good, depending on model and task. Closed Source: Generally state-of-the-art for complex tasks.
  • Setup effort – Open Source: High, IT expertise or external partner needed. Closed Source: Low, immediately usable via browser or API.
  • Updates – Open Source: Manual, new versions must be deployed yourself. Closed Source: Automatic, always the latest version.
  • Data privacy – Open Source: Maximum, no data leaves your own infrastructure. Closed Source: Dependent on contract, terms of service and server location.

Specific Models at a Glance

Open-source models:

  • Llama 3.1 / 3.2 (Meta): Powerful models that can be run locally. Ideal for data-sensitive business applications. Available in various sizes (8B to 405B parameters).
  • Mistral / Mixtral (Mistral AI, France): EU-based company with strong performance. Particularly interesting for European SMEs that value GDPR compliance.
  • Qwen (Alibaba): Powerful and open source, but Chinese provider – running the model locally gives full control; cloud usage falls under Chinese data protection laws.

Closed-source models:

  • ChatGPT / GPT-4 (OpenAI): Market leader with broad functionality. Enterprise version with contractual privacy guarantees. API usage without training on customer data.
  • Claude (Anthropic): Strong focus on safety and responsible AI. API usage contractually excludes training on customer data. Particularly strong with long documents and complex analyses.
  • Gemini (Google): Integration into the Google Workspace ecosystem. Seamlessly usable for existing Google customers, but data processing via Google infrastructure.

Our Recommendation: The Hybrid Approach

For most SMEs, a hybrid approach is optimal: use closed-source models like ChatGPT or Claude for general, non-sensitive tasks – text drafts, research, brainstorming. Deploy open-source models locally for tasks involving sensitive data – customer data analyses, internal documents, confidential communications. This combines the user-friendliness of closed-source services with the data control of open-source models.
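The hybrid approach boils down to a routing decision per task. A minimal sketch, in which the sensitivity labels and endpoints are illustrative assumptions (a self-hosted model via Ollama locally, the Claude API in the cloud):

```python
# Hybrid routing policy sketch: sensitive workloads go to a locally hosted
# open-source model, everything else to a cloud API. The task labels and
# endpoints are illustrative assumptions, not recommendations.
SENSITIVE_TASKS = {"customer_analysis", "contract_review", "internal_docs"}

ROUTES = {
    "local": "http://localhost:11434/api/generate",    # e.g. self-hosted Llama via Ollama
    "cloud": "https://api.anthropic.com/v1/messages",  # e.g. the Claude API
}

def route_task(task_type: str) -> str:
    """Pick the model endpoint based on the task's data sensitivity."""
    return ROUTES["local"] if task_type in SENSITIVE_TASKS else ROUTES["cloud"]
```

Making the routing explicit in one place also gives you a single point to audit when guidelines change.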

Anonymisation Architecture: Processing Sensitive Data Safely with AI

The most elegant solution for data privacy in AI is anonymisation: sensitive data is masked before being passed to an AI model. After processing, the placeholders are replaced with the real data again. This way you use the full power of an AI model without exposing sensitive data.

Practical Example: n8n Workflow with Claude

n8n is an open-source automation tool that is perfect for anonymisation workflows. It can be self-hosted – your automation logic and data stay on your own servers. Here's how a typical anonymisation pipeline works:

Step 1 – Data input: A customer enquiry arrives by email. It contains the customer name, contract number and a specific question about a product.

Step 2 – Automatic anonymisation: The n8n workflow detects sensitive data fields and replaces them with placeholders. "Mr Müller, Contract CH-2024-4582" becomes "[CUSTOMER_1], Contract [CONTRACT_1]". Names, addresses, phone numbers, emails and IDs are systematically masked.

Step 3 – AI processing: The anonymised data is sent to the Claude API. Claude analyses the enquiry and generates a response – without ever seeing the real customer data.

Step 4 – Re-identification: The n8n workflow replaces the placeholders in Claude's response with the real data. "Dear [CUSTOMER_1]" becomes "Dear Mr Müller".

Step 5 – Output: The finished, personalised response is delivered – via email, to the CRM or to another system.
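The round trip above can be sketched in a few lines of code. This is a minimal sketch: the detectors are deliberately simple, and the Claude API call is replaced by a stub function `call_model` for illustration – in production that step would be a real API request to Claude or a local model:

```python
import re

# Detectors for the example from the article: a salutation + name, and a
# Swiss-style contract number. Real pipelines use broader PII detection.
PATTERNS = {
    "CUSTOMER": re.compile(r"\b(?:Mr|Mrs|Ms) [A-ZÄÖÜ][A-Za-zäöüé-]+"),
    "CONTRACT": re.compile(r"\bCH-\d{4}-\d{4}\b"),
}

def anonymise(text):
    """Step 2: replace sensitive values with placeholders; keep the mapping."""
    mapping = {}
    for label, pattern in PATTERNS.items():
        for i, value in enumerate(dict.fromkeys(pattern.findall(text)), start=1):
            placeholder = f"[{label}_{i}]"
            mapping[placeholder] = value
            text = text.replace(value, placeholder)
    return text, mapping

def deanonymise(text, mapping):
    """Step 4: restore the real values in the model's response."""
    for placeholder, value in mapping.items():
        text = text.replace(placeholder, value)
    return text

def call_model(masked_prompt):
    """Step 3 stub: stands in for the Claude API (or local model) call."""
    return f"Dear {masked_prompt.split(',')[0]}, thank you for your enquiry."

def handle_enquiry(enquiry):
    masked, mapping = anonymise(enquiry)   # step 2
    reply = call_model(masked)             # step 3 (model never sees real data)
    return deanonymise(reply, mapping)     # step 4
```

The key property: the mapping never leaves your infrastructure, so the model only ever sees placeholders.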

Why n8n?

  • Open source and self-hostable: Your workflows and data stay on your servers – including the anonymisation logic.
  • Visual workflows: No programming skills needed. Workflows are created via drag-and-drop and are transparent for the entire team.
  • Flexible integration: n8n connects to over 400 tools – from email and CRM to AI APIs like Claude, ChatGPT or local models.
  • Audit trail: Every workflow run is logged – important for accountability requirements under the nDSG.

Further Anonymisation Approaches

  • Microsoft Presidio: Open-source tool specifically for detecting and anonymising personally identifiable information (PII). Automatically recognises names, addresses, phone numbers and other data categories.
  • Custom regex patterns: For simple cases, regular expressions (regex) can be used to detect and mask known patterns like email addresses, AHV numbers or phone numbers.
  • Differential privacy: Advanced technique where data is noised so that individuals can no longer be identified, while statistical insights are preserved.
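The custom-regex approach can be sketched as a small masking function. The patterns below are illustrative assumptions; note that new-format Swiss AHV numbers always start with the country code 756 (format 756.XXXX.XXXX.XX):

```python
import re

# Regex-based masking for known Swiss PII patterns. Illustrative only --
# regex catches known formats, not free-form identifiers like names.
RULES = [
    (re.compile(r"\b756\.\d{4}\.\d{4}\.\d{2}\b"), "[AHV]"),          # AHV number
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"), "[EMAIL]"),            # email address
    (re.compile(r"\+41\s?\d{2}\s?\d{3}\s?\d{2}\s?\d{2}"), "[PHONE]"),  # CH phone
]

def mask(text: str) -> str:
    """Replace every match of a known PII pattern with its placeholder."""
    for pattern, placeholder in RULES:
        text = pattern.sub(placeholder, text)
    return text
```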

Practical Checklist: Using AI in a Privacy-Compliant Way

These seven steps help your SME use AI securely and compliantly:

  • 1. Create a data inventory: Document which data flows into which AI tools. Create a simple overview: Tool → Data type → Sensitivity → Server location.
  • 2. Vet providers: Check for each AI provider: Where are the servers? Is data used for training? Is there a data processing agreement (DPA)? How is data encrypted?
  • 3. Define internal guidelines: Create an AI usage policy for your employees. Clearly define which data may be entered into which tools – and which may not.
  • 4. Implement anonymisation: Set up an anonymisation workflow for all processes where sensitive data needs to be processed with AI. Tools like n8n make this possible even without a development team.
  • 5. Train employees: Conduct an AI workshop that also covers the data privacy aspect. Employees need to understand why certain data doesn't belong in AI tools.
  • 6. Review contracts: Ensure a data processing agreement (DPA) exists with every AI provider. This regulates what the provider may do with your data – and what they may not.
  • 7. Audit regularly: Review quarterly which AI tools are in use, whether guidelines are being followed, and whether providers' privacy terms have changed.
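Steps 1 and 2 become much easier to audit if the inventory is machine-readable. A minimal sketch – the entries are illustrative assumptions, not recommendations:

```python
import csv
import io

# Data inventory (checklist step 1): one row per AI tool, recording
# data type, sensitivity and server location. Entries are examples.
INVENTORY = [
    {"tool": "ChatGPT Plus", "data_type": "text drafts", "sensitivity": "low", "server_location": "USA"},
    {"tool": "Claude API", "data_type": "anonymised enquiries", "sensitivity": "medium", "server_location": "USA"},
    {"tool": "Llama 3.1 (local)", "data_type": "customer analyses", "sensitivity": "high", "server_location": "on-premise"},
]

def needs_review(entry):
    """Checklist step 2: flag highly sensitive data processed outside CH/EU or on-premise."""
    return entry["sensitivity"] == "high" and entry["server_location"] not in {"CH", "EU", "on-premise"}

def to_csv(rows):
    """Export the inventory, e.g. for the quarterly audit (step 7)."""
    buffer = io.StringIO()
    writer = csv.DictWriter(buffer, fieldnames=list(rows[0]))
    writer.writeheader()
    writer.writerows(rows)
    return buffer.getvalue()
```

Even a spreadsheet serves the same purpose; the point is that the inventory exists in one place and can be checked automatically.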

Frequently Asked Questions

Can I use ChatGPT for customer data as an SME?

Not in the free version – inputs there flow into training. With ChatGPT Enterprise or the API (with training deactivated), privacy-compliant use is possible, provided you conclude a data processing agreement and update your privacy policy. For particularly sensitive data, we still recommend prior anonymisation.

Are open-source AI models automatically more secure?

Not automatically – but they offer more control. An open-source model running insecurely on a public server is less secure than ChatGPT Enterprise. The advantage of open source lies in having full control over the infrastructure and never having to transmit data to third parties.

What happens if my SME violates the nDSG?

The revised nDSG provides for fines of up to CHF 250,000 – against the responsible natural person, not the company. This means: management is personally liable. Additionally, there are reputational damages and loss of customer trust.

Do I need a data protection consultant for AI use?

For simple use cases (e.g., ChatGPT for text drafts without personal data), internal guidelines suffice. Once you systematically use AI with customer data, we recommend professional data protection consulting – ideally combined with AI strategy consulting so that privacy and benefit are optimised together.

How do I recognise whether an AI provider is GDPR/nDSG compliant?

Check: Does the provider have a data processing agreement (DPA)? Where are the servers? Is data used for training? Is there a data protection impact assessment? Reputable providers like Anthropic (Claude) or OpenAI (Enterprise) make this information transparently available.

Using AI Securely – With the Right Strategy

Data privacy and AI are not contradictory – quite the opposite. Swiss SMEs that integrate data privacy into their AI strategy from the start gain not only compliance security but also their customers' trust. The combination of clear guidelines, the right choice between open source and closed source, and a well-designed anonymisation architecture makes it possible to fully exploit AI's productivity gains – without risk.

At INFLECT, we help Swiss SMEs develop AI strategies that consider data privacy at their core. From analysing your data flows to implementing anonymisation workflows to training your employees – we accompany you every step of the way.

Start with a free initial consultation and discover how your SME can use AI securely and in a privacy-compliant way.
