Thursday, September 12, 2024


Google’s Pathways Language Model 2 Under Investigation by EU Privacy Watchdog


Ireland’s Data Protection Commission (DPC) is investigating Google’s Pathways Language Model 2 (PaLM2) over concerns about the use of personal data in the development of the artificial intelligence model. The DPC, which serves as the regulatory body for companies headquartered in Ireland, announced its investigation on September 12th. The inquiry aims to determine whether Google has complied with the EU’s General Data Protection Regulation (GDPR) in relation to PaLM2.

The DPC is collaborating with partners in the European Economic Area to regulate the processing of personal data belonging to EU users in the development of AI models and systems. The commission will particularly focus on assessing whether Google has properly evaluated the potential risks to the rights and freedoms of individuals in the EU resulting from the data processing involved in PaLM2.

PaLM2 is described by Google as a “next-generation language model with improved multilingual, reasoning, and coding capabilities.” It is built upon the company’s previous research in machine learning and AI. The model has been trained on a vast amount of data, including webpages, source code, and other datasets. It boasts the ability to translate languages, perform mathematical tasks, answer questions, and even write computer code.

Google’s plan is to integrate PaLM2 into over 25 new products and features, including its email and Google Docs services. This move aligns with the ongoing trend of adopting and expanding the use of AI in various industries. However, the DPC’s investigation shines a light on the potential risks associated with the use of EU user data in training generative AI models.

The DPC has previously raised concerns about the use of EU user data in training AI models with other major tech companies. For instance, the watchdog recently concluded proceedings against Elon Musk’s social media platform, X, after the company agreed to permanently cease processing European user data for its generative AI chatbot, Grok. This action by the DPC marked the first time it had taken such measures.

The DPC has been actively working to address issues related to the use of personal data in AI models throughout the industry. It has sought an opinion from the European Data Protection Board (EDPB) to initiate discussions and gain clarity on this complex subject. The EDPB’s opinion will explore various aspects, including the extent to which personal data, both first-party and third-party, is processed during the training and operation of an AI model.

In addition to Google and X, the DPC has engaged with Meta Platforms regarding its plans to use content posted by European users to train the latest version of its large language model. As a result of the regulator’s intensive engagement, Meta Platforms decided to pause those training plans.

The ongoing investigations by the DPC highlight the increasing scrutiny surrounding the use of personal data in the development of AI models. The protection of individuals’ fundamental rights and freedoms is a key concern, and regulators are taking steps to ensure compliance with GDPR rules. It remains to be seen how these investigations will impact the future use of EU user data in AI model training, and whether additional regulations or guidelines will be put in place to address the privacy risks associated with these technologies.

