
Data protection in the age of AI: What companies need to know now

While the use of AI in companies continues its strong upswing, one question is increasingly coming to the fore: how do these tools affect data protection, and how can and must companies respond? The rules are particularly strict in Switzerland and the EU. We outline the risks and appropriate solutions.

There is no doubt that the AI boom is here to stay. ChatGPT alone now has 400 million users worldwide every week. Companies, for their part, spent 13.8 billion US dollars on such tools last year, according to the venture capital firm Menlo Ventures, up from 2.3 billion in 2023. That corresponds to growth of 500%.

And even if productivity gains have not yet reached the level hoped for, companies seem satisfied with their AI initiatives: according to the consulting firm Deloitte, 74% of the organizations surveyed believe they have achieved or exceeded their goals. 78% intend to increase their expenditure this year.

There is a simple reason for the strong interest: AI tools can speed up many everyday tasks. They can, for example, find the desired facts and figures in large volumes of data within seconds. They can compare and summarize information. And last but not least, they create content of all kinds, whether a draft for an email or an illustration for a presentation.

Despite all the enthusiasm for the digital helpers, there is one important question: what about data protection in the brave new world of AI?

In a nutshell, the answer is: not good. Companies in particularly strictly regulated industries such as the financial sector need to be aware of several problem areas.

Background: the regulatory framework

When using AI, companies are operating in a complex regulatory environment. Two sets of regulations are particularly relevant for companies in Switzerland and the EU: the relatively new EU AI Act and the already familiar General Data Protection Regulation (GDPR) together with its Swiss counterpart, the Data Protection Act (DPA).

The aim of the EU AI Act is to ensure the safety, transparency and trustworthiness of AI systems. It is primarily aimed at the providers of these systems, but not only at them: those who deploy AI also have obligations. This applies in particular to so-called high-risk AI, for example for tasks in the HR department or when granting loans. Annex III of the EU AI Act lists exactly which applications fall into this category.

According to Article 26, companies and institutions must then, among other things:

  • ensure that the systems are used correctly,
  • check input data for quality and relevance,
  • and make it transparent to those affected that AI is involved.

In turn, the GDPR and DPA are essentially about the handling of personal data, and this is also highly relevant when using AI.

Some key points:

  • Purpose limitation: Data may only be processed for the intended purpose.
  • Data minimization: Only what is absolutely necessary may be processed.
  • Legal basis: All data processing requires a legal basis such as consent or a legitimate interest.

For more details, see Article 5 and Article 6 of the GDPR and Article 6 of the DPA.

In all of this, companies and organizations must ensure not only that they themselves comply with these regulations, but also that the service providers they commission do so. US providers regularly pose a problem here, as data protection is handled differently in the USA than in Europe. Agreements between the EU and the USA that were supposed to solve this problem have failed in court several times. The latest version, the Data Privacy Framework, faces similar criticism.

From this point of view, providers from Europe are the better choice; in the AI sector, one example is the French company Mistral. But even then, the data transfer must of course comply with the guidelines mentioned above.

Practical risks and solutions

In view of this regulatory framework, it is highly problematic, for example, to upload personal data to a service such as ChatGPT without first anonymizing or pseudonymizing it. But organizations also need to be careful in other areas.

In the following, we take a closer look at two key risks and possible solutions:

Accidental violations by employees

In many companies, employees use AI-based tools to work more efficiently. It is therefore important to raise their awareness of data protection. Otherwise, internal information, confidential customer data or even trade secrets can easily end up being fed into an AI system.

Providers such as OpenAI promise their paying customers that their chats will not be used to train future models. Information uploaded in this way should therefore not suddenly become part of the "world knowledge" of an AI such as ChatGPT. But even if that promise holds, it is irrelevant from a data protection perspective: merely transferring the information in this form may already be unlawful.
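
What "anonymizing before uploading" can look like in practice is shown by the following minimal sketch in Python. It is purely illustrative and only masks obvious identifiers such as e-mail addresses and phone numbers; real anonymization or pseudonymization requires considerably more.

    import re

    # Purely illustrative sketch: mask obvious personal identifiers
    # before a text is passed to an external AI service. Real-world
    # anonymization needs far more than two regular expressions.
    EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
    PHONE = re.compile(r"\+?\d[\d\s/-]{7,}\d")

    def redact(text: str) -> str:
        text = EMAIL.sub("[EMAIL]", text)
        text = PHONE.sub("[PHONE]", text)
        return text

    prompt = "Summarize: Max Muster, max.muster@example.com, +41 44 123 45 67"
    print(redact(prompt))
    # Output: Summarize: Max Muster, [EMAIL], [PHONE]

Note that the name in the example remains visible: names, free-text references and context are exactly why guidelines and training are needed in addition to technical filters.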

This is why organizations need AI guidelines that clearly regulate the use of these tools. Appropriate training is also essential.

The data hunger of AI tools

Another risk is that sensitive company data can be discovered unintentionally, for example through incorrectly configured document shares or unsecured file storage locations.

This is not immediately apparent in everyday life. Search engines such as Google, for example, voluntarily comply with website operators' specifications as to what they may and may not index. This is regulated, among other things, by a file called robots.txt.
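
How this voluntary compliance works can be sketched in a few lines of Python: a well-behaved crawler reads a site's robots.txt and checks each URL before fetching it. The domain and user agent below are illustrative examples, not real services.

    from urllib import robotparser

    # Sketch of how a cooperative crawler honours robots.txt.
    # Domain and user agent are illustrative examples only.
    rp = robotparser.RobotFileParser()
    rp.set_url("https://www.example.com/robots.txt")
    rp.read()  # download and parse the file

    url = "https://www.example.com/internal/report.pdf"
    if rp.can_fetch("ExampleBot", url):
        print("robots.txt allows fetching", url)
    else:
        print("robots.txt disallows", url, "- a cooperative crawler skips it")

Whether a crawler actually performs this check is entirely up to the crawler, however: robots.txt is a convention, not an access control mechanism.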

However, the situation is different for AI providers. Their hunger for data is enormous, because data is the foundation of their business: their tools only acquire their skills and knowledge through vast amounts of training material. One example is the "Common Crawl" project, which regularly crawls billions of web pages.

And as it turns out, in the race for supremacy in the hot AI market, restrictions in robots.txt files are sometimes simply ignored.

It is therefore more important than ever to pay close attention to how documents are stored and made available. This is where solutions such as those from DSwiss can help: thanks to consistent encryption and access control, sensitive documents remain protected even if a storage location is technically reachable. Unlike files in publicly accessible cloud folders, such documents can only be opened via authorized shares and are simply invisible to web crawlers. This prevents confidential information from unintentionally ending up in large training data sets.
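
The underlying principle can be illustrated, independently of any specific product, with client-side encryption: a document is encrypted before it ever reaches a storage location, so anyone who stumbles upon the file, including a crawler, sees only ciphertext. The following Python sketch uses the "cryptography" package and a hypothetical file name; it is not a description of DSwiss's implementation, and key management in particular is far more involved in practice.

    from cryptography.fernet import Fernet

    # Illustrative sketch of client-side encryption before upload.
    # The key must be stored securely and separately from the data.
    key = Fernet.generate_key()
    fernet = Fernet(key)

    with open("contract.pdf", "rb") as src:   # hypothetical document
        ciphertext = fernet.encrypt(src.read())

    with open("contract.pdf.enc", "wb") as dst:
        dst.write(ciphertext)  # only this opaque blob reaches the storage location

    # Without the key, the file is useless to a crawler or training pipeline;
    # authorized access decrypts it with fernet.decrypt(ciphertext).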

Key Takeaways

Ultimately, one thing remains to be said: proactive data protection is already a must, but it becomes even more urgent in the AI era. One reason: AI systems don't forget. Once data has found its way into them, it is almost impossible to get it back out. This makes it all the more important to act now, before minor negligence ends up causing major problems.

Jan Tißler

Author
