OpenAI has entered into an agreement with the U.S. Department of Defense, stating that it will implement protective "red lines" to ensure cautious use of its artificial intelligence technology. However, users and experts remain unconvinced, citing concerns about hidden risks in the agreement's terms. Following the announcement, a notable decline in ChatGPT usage was observed, while Anthropic's Claude app emerged as the most downloaded alternative on the App Store.

OpenAI, a leading AI company, has amassed a large user base through its ChatGPT service. The deal aims to provide advanced AI technology to defense agencies, accompanied by specific ethical and safety guidelines. Despite this, users worry the Pentagon partnership could undermine OpenAI's transparency and user privacy.

As AI adoption grows rapidly, agreements between companies and government agencies are becoming more common, yet their often complex and opaque conditions foster user distrust. In this environment, Anthropic's Claude has gained preference for its ethical standards and user-friendly policies.

Experts warn that without effective enforcement of protective measures, AI misuse could increase, posing risks to user data. They recommend that companies prioritize full transparency and user protection in such agreements.
Source: Decrypt