Microsoft has recently made changes to its Copilot tool after concerns were raised about the generation of violent, sexual, and illicit images by the AI technology. An engineer at the company, Shane Jones, brought these issues to the attention of the Federal Trade Commission (FTC).
Prompts such as “pro choice,” “four twenty,” and “pro life” are now blocked in Copilot, with warnings that repeated policy violations may lead to suspension. Previously, users were able to enter prompts depicting children playing with assault rifles, but these are now flagged as violations of Copilot’s ethical principles.
Some violent imagery can still be generated through certain prompts, however, and the AI can still create images of copyrighted works such as Disney characters. Jones discovered that even relatively benign prompts could lead to disturbing images that violated Microsoft’s responsible AI principles.
Microsoft stated that it is continuously monitoring the system and making adjustments to strengthen safety filters and prevent misuse. These changes come in response to growing concerns about the potential harm that AI technology can cause if left unchecked.