Google's GenAI faces privacy risk assessment scrutiny in Europe
Ireland's Data Protection Commission (DPC), Google's lead privacy regulator in the European Union, has opened an investigation into whether Google's use of personal information to train its generative AI complies with the bloc's data protection laws. The inquiry looks specifically at whether the tech giant should have conducted a data protection impact assessment (DPIA) to proactively examine the risks its AI technology poses to the rights and freedoms of the people whose information was used to train its models. Generative AI tools are notorious for producing plausible-sounding falsehoods.
That tendency, combined with an ability to surface personal information on request, creates significant legal risk for the companies behind these tools. Google uses the technology to power AI chatbots, improve web search, and more. At the heart of these consumer AI tools is Google's large language model (LLM), PaLM 2, which was introduced at last year's I/O developer conference.
Training GenAI models typically requires vast amounts of data, and the type of information LLM makers obtain, and how and where they obtain it, has come under increasing scrutiny over a range of legal concerns, including copyright and privacy. On the privacy side, any training material containing the personal data of people in the EU is subject to the bloc's data protection rules, regardless of whether it was scraped from the public internet or collected directly from users.
This is why many LLM makers are already facing privacy questions and GDPR enforcement, including OpenAI, maker of GPT (and ChatGPT), and Meta, which develops the Llama models. Elon Musk-owned X has also attracted GDPR complaints and the DPC's wrath over using people's data to train its AI, leading to a court case in which X undertook to limit its data processing but was not sanctioned. X could still face GDPR fines, however, if the DPC finds that its processing of user data to train its AI tool, Grok, breached the regulation. The DPC's DPIA investigation into Google's GenAI is the latest regulatory action in this area.
"The statutory inquiry concerns the question of whether Google has complied with any obligations that it may have had to undertake an assessment, pursuant to Article 35 of the General Data Protection Regulation (data protection impact assessment), prior to engaging in the processing of the personal data of EU/EEA data subjects associated with the development of its foundational AI model, Pathways Language Model 2 (PaLM 2)," the DPC said in a press release. It noted that a DPIA can be of crucial importance in ensuring that the fundamental rights and freedoms of individuals are adequately considered and protected when the processing of personal data is likely to result in a high risk.
"This statutory inquiry forms part of the wider efforts of the DPC, working in conjunction with its EU/EEA (European Economic Area) peer regulators, in regulating the processing of the personal data of EU/EEA data subjects in the development of AI models and systems," the DPC added. Google did not engage with questions about the sources of data used to train its GenAI tools, but spokesperson Jay Stoll sent a statement in which Google wrote: