Microsoft Engineer Warns of AI Tool Creating Harmful Imagery

  • Paul Smith

    An AI engineer at Microsoft released a letter on Wednesday saying that the company’s artificial intelligence image generator lacks standard safeguards against producing violent and sexualized images. In the letter, Microsoft engineer Shane Jones says that his repeated attempts to warn Microsoft management about the problems failed to lead to any action. Jones said he sent the letter to the Federal Trade Commission and Microsoft’s board of directors.

    A Microsoft spokesperson denied that the company has neglected safety concerns, saying it has “robust internal reporting channels” to address problems with generative AI tools. Jones did not immediately respond to a request for comment.

    The letter focuses on the risks of Microsoft’s Copilot Designer, a tool that generates images from text prompts and is powered by OpenAI’s DALL-E 3 AI system. It is among many generative artificial intelligence image makers that have launched over the past year, part of a boom for the industry that has also raised concerns about artificial intelligence being used to spread disinformation or generate misogynistic, racist and violent content.

    Copilot Designer has “systemic problems” with generating harmful content, Jones says in the letter, and should be removed from public use until the company fixes the issues. Jones specifically notes that Copilot Designer lacks appropriate restrictions on its use and tends to generate images that sexually objectify women even when given entirely unrelated prompts.

    Microsoft Copilot

    Microsoft said that it has dedicated teams that evaluate potential safety concerns and that the company facilitated meetings for Jones with its Office of Responsible AI.

    “We are committed to addressing any and all concerns employees have in accordance with our company policies, and appreciate the employee’s effort in studying and testing our latest technology to further enhance its safety,” a Microsoft spokesperson said in a statement provided to the media.

    Microsoft launched its Copilot “AI companion” a year ago and has heavily promoted it as a transformative way to integrate AI tools into business and creative endeavors. The company pitched Copilot as an accessible product for public use and touted it last month in a Super Bowl ad with the tagline “Anyone. Anywhere. Any device.” Jones argues that telling consumers Copilot Designer is safe for anyone to use is irresponsible, and that the company is failing to disclose well-known risks associated with the tool.

    This is not the first time Jones has publicly aired his concerns. He said Microsoft initially suggested he take his findings directly to OpenAI.

    When that did not work, he also publicly posted a letter to OpenAI on Microsoft-owned LinkedIn in December, leading a manager to inform him that Microsoft’s legal team “demanded that I delete the post, which I reluctantly did,” according to his letter to the board of directors.

    A number of impressive artificial intelligence image generators first came on the scene in 2022, including OpenAI’s second-generation DALL-E 2. That, along with the subsequent release of OpenAI’s chatbot ChatGPT, set off public fascination that put commercial pressure on technology giants such as Microsoft and Google to release their own versions.

    But without effective safeguards, the technology poses risks, including the ease with which users can generate harmful “deepfake” images of political figures, war zones, or nonconsensual nudity that falsely appear to show real people with recognizable faces. Google has temporarily suspended its Gemini chatbot’s ability to generate images of people following outrage over how it was depicting race and ethnicity, such as by putting people of color in Nazi-era military uniforms.
