Microsoft Software Engineer Raises Alarm About Security Vulnerability in AI Image Generation Tool
Microsoft Corp. software engineer Shane Jones has raised concerns about a security vulnerability in the company’s AI image generation tool, Copilot Designer. In letters sent to Microsoft’s board, lawmakers, and the Federal Trade Commission, Jones described a flaw he discovered in OpenAI’s DALL-E image generator model that allows users to bypass guardrails meant to prevent the creation of harmful images.
Jones urged Microsoft to remove Copilot Designer from public use until better safeguards are implemented. He specifically pointed to the tool’s potential to generate sexually objectified images, as well as content depicting political bias, underage drinking, and conspiracy theories. The Federal Trade Commission has confirmed receiving the letter but declined to comment further.
Microsoft has stated that it is investigating reports of disturbing responses from its Copilot chatbot and is committed to addressing employee concerns about technology safety. Jones believes that transparency about AI risks, especially risks to children, should be a priority for companies like Microsoft.
Jones says he raised his concerns internally at Microsoft for three months before writing to lawmakers, urging an investigation into the risks of AI image generation technologies and the responsible AI practices of the companies behind them. OpenAI did not respond to requests for comment, and the lawmakers Jones contacted did not immediately respond either.
It is clear that the issue raised by Jones has significant implications for the use of AI technology, especially in terms of safety and responsible practices. As companies continue to develop and implement AI tools, ensuring the security and ethical use of such technology is crucial.