Microsoft has changed its policy to ban US police departments from using generative AI for facial recognition through the Azure OpenAI Service, the company’s fully managed, enterprise-focused wrapper around OpenAI technologies.
Language added Wednesday to the Azure OpenAI Service terms of service prohibits integrations with the service from being used “by or for” police departments for facial recognition in the US, including integrations with OpenAI’s text- and speech-analysis models.
A separate new bullet point covers “any law enforcement globally” and explicitly prohibits the use of “real-time facial recognition technology” on mobile cameras, such as body cameras and dash cams, to attempt to identify a person in “uncontrolled, in-the-wild” environments.
The changes in terms come a week after Axon, a maker of technology and weapons products for the military and law enforcement, announced a new product that leverages OpenAI’s GPT-4 generative text model to summarize audio from body cameras. Critics were quick to point out potential dangers, such as hallucinations (even the best current generative AI models make up facts) and racial biases introduced from training data (which is especially concerning given that people of color are far more likely to be stopped by police than their white peers).
It is unclear whether Axon was using GPT-4 through the Azure OpenAI service and, if so, whether the updated policy was in response to Axon’s product launch. OpenAI had previously restricted the use of its models for facial recognition through its APIs. We’ve reached out to Axon, Microsoft, and OpenAI and will update this post if we hear back.
The new terms leave Microsoft some wiggle room.
The complete ban on the use of the Azure OpenAI service applies only to US police, not international police. And it does not cover facial recognition performed with stationary cameras in controlled environments, such as a back office (although the terms prohibit any use of facial recognition by US police).
This tracks with the recent approach of Microsoft and its close partner OpenAI to AI-related defense and law enforcement contracts.
In January, a Bloomberg report revealed that OpenAI is working with the Pentagon on a number of projects including cybersecurity capabilities, a departure from the startup’s previous ban on providing its AI to the military. Elsewhere, Microsoft has pitched using OpenAI’s image generation tool, DALL-E, to help the Department of Defense (DoD) build software to execute military operations, according to The Intercept.
The Azure OpenAI service became available in Microsoft’s Azure Government product in February, adding additional compliance and management features aimed at government agencies, including law enforcement. In a blog post, Candice Ling, senior vice president of Microsoft’s government division, Microsoft Federal, promised that the Azure OpenAI service would be “submitted for additional authorization” to the Department of Defense for workloads supporting DoD missions.
Update: After publication, Microsoft said that its original change to the terms of service contained an error and that the ban in fact applies only to facial recognition in the US. It is not a blanket ban on police departments using the service.