Microsoft’s New AI Technology Raises Concerns About Deepfakes
Microsoft recently showcased its latest framework, VASA-1, which can create realistic videos of people speaking scripted words from only a still image, an audio sample, and a text script. The AI-generated videos can convincingly animate an individual speaking in a cloned voice, raising concerns about potential misuse of the technology.
The US Federal Trade Commission has warned about the risks of impersonation fraud that such advanced technology could enable. Although Microsoft’s research team developed the technology, it decided not to release it to the public, citing ethical considerations.
The research focused on generating virtual interactive characters rather than impersonating real individuals. Kevin Surace, Chair of Token, sees positive applications, such as personalizing emails and other business communications. However, concerns remain about whether the use of deepfake technology can be effectively regulated.
Countries including Canada, China, the UK, and the US have already implemented regulations addressing deepfake technology. There have also been proposals to create civil claims for victims of non-consensual deepfake images. A Senate hearing held in April further highlighted the danger deepfakes pose to public trust and democracy.
Despite these concerns, such advanced AI also has potential marketing applications. Microsoft’s decision not to release VASA-1 to the public, however, reflects the company’s recognition of the risks and ethical implications of its misuse.