Legislative Response To Swift’s Deepfakes
In the wake of the Taylor Swift deepfake scandal, US lawmakers are ramping up efforts to introduce comprehensive legislation targeting the production and dissemination of AI-generated deepfake content. The incident, in which explicit fake images of the singer circulated across social media platforms, sparked widespread outrage and calls for urgent action.
Representative Joe Morelle, a vocal advocate for privacy rights, has spearheaded the push for legal measures to combat the spread of deepfakes. Morelle’s proposed legislation, the “Preventing Deepfakes of Intimate Images Act,” would criminalize the creation and distribution of non-consensual deepfake content.
In a statement released on social media platform X, Morelle condemned the dissemination of the fabricated images as “appalling” and emphasized the need for swift legislative action to address the growing threat posed by deepfake technology. Democratic Representative Yvette D. Clarke echoed Morelle’s sentiments, highlighting the long-standing issue of women falling victim to deepfake manipulation.
Clarke emphasized that advances in AI have made it increasingly easy and inexpensive for malicious actors to create and disseminate deceptive content, posing significant risks to individuals’ privacy and safety.
X Takes Action
In response to the outcry over the Taylor Swift deepfake incident, social media platforms like X have implemented stringent measures to curb the spread of non-consensual intimate images. The platform has pledged to actively remove all identified deepfake content and ban accounts responsible for posting such material.
The incident has also reignited discussions surrounding the regulation of deepfake technology globally. In the United Kingdom, the sharing of deepfake pornography was recently criminalized as part of the Online Safety Act, underscoring the growing recognition of the need for robust legal frameworks to combat digital manipulation.
A 2023 report found that the majority of deepfake content shared online is pornographic, and that about 99% of the individuals targeted in such material are women. The World Economic Forum (WEF) has also sounded the alarm on the adverse impacts of AI, citing deepfakes among the unintended negative consequences of advances in generative AI.
Concerns about the proliferation of AI-generated content have likewise been raised by international bodies such as the United Nations, as well as by Canada’s primary national intelligence agency, the Canadian Security Intelligence Service (CSIS).
With deepfake technology becoming increasingly sophisticated and accessible, the battle against digital manipulation will likely intensify in the years to come, underscoring the importance of proactive measures to protect against the misuse of AI technologies.
Italy’s Data Protection Authority Fines City Over AI Misuse
Meanwhile, the Italian city of Trento is at the center of a growing controversy over AI and data privacy. The Italian Data Protection Authority recently fined the municipality approximately $54,400, the first time an Italian city has been penalized for misusing AI technology.
The fine stemmed from Trento’s involvement in two AI-related research projects, in which the Italian privacy watchdog found clear violations of data protection legislation.
According to the authority, the data collected for these projects was inadequately anonymized and improperly shared with third parties, raising significant concerns about infringements of individuals’ fundamental privacy rights. The ruling serves as a warning to municipalities and organizations worldwide pursuing AI-driven initiatives, and Trento’s case demonstrates the importance of robust data protection policies.