The United States Federal Trade Commission (FTC) has opened an investigation into OpenAI, the maker of ChatGPT, over whether the technology has harmed consumers who provided their data, the New York Times reported on Thursday.
The FTC sent a 20-page letter to the company informing it that the agency is looking into its security practices. According to the New York Times, the FTC has asked the company multiple questions, including how it trains its artificial intelligence models and how it handles personal data.
OpenAI released ChatGPT in November, and since then the technology has attracted users who have used it to write articles, poetry and other literary works. A judge in Pakistan has used the technology to help make a decision. However, the technology is not entirely reliable, and experts have said its impact remains unclear.
The technology uses a neural network, the same kind of system used to translate between languages on Google Translate. It learns its skills by analysing large amounts of data.
The FTC’s move is the first of its kind against the Microsoft-backed startup, whose chatbot has fuelled a broader conversation about generative artificial intelligence.
Concerned voices within the US have been vocal about the technology’s impact. Senate Majority Leader Chuck Schumer has called for “comprehensive legislation” to ensure that AI is regulated.
In March, OpenAI faced issues in Italy, where the regulator took ChatGPT offline over accusations that the company had violated the European Union’s GDPR, a wide-ranging privacy regime enacted in 2018. The service was later reinstated after OpenAI agreed to add verification features and let European users block their information from being used to train the AI model.