OpenAI researchers alerted board to dangers of their upcoming AI before firing Sam Altman

admin
3 Min Read

Days before Sam Altman was fired from OpenAI, the company's researchers wrote a letter to the board of directors expressing concern about the dangers of an upcoming AI, warning, among other things, that it "could threaten humanity," Reuters has revealed.

According to the outlet, which corroborated the information with two sources familiar with the matter, the researchers warned that Altman wanted to launch very powerful AI products without first evaluating the consequences, among other complaints. This could have led the board to fire him, although he has ultimately returned to his position. The board announced his dismissal on November 17, saying it had lost confidence in him.

One of the products that OpenAI, under Altman, allegedly wanted to release without weighing how harmful it could be to humanity appears to be a step toward artificial general intelligence (AGI). The company has been developing it through an internal project called Q* (pronounced Q-Star).

This AI, in particular, would be capable of performing autonomous tasks advanced enough to replace humans in various sectors.

The AI developed through the Q* project stands out for its ability to solve mathematical problems, albeit, for now, only at the level of an elementary school student. It is nevertheless a considerable advance, given that current AI models are not particularly good at this type of operation; they excel, above all, at writing, content generation, and translation.

Therefore, if an AI can understand and solve mathematical problems, where there is only one correct answer, it would be approaching the level of intelligence of a human being. This is something that has worried OpenAI researchers. In fact, in the letter sent to the board of directors, they note that they have long debated the danger of such intelligent AI, wondering, among other things, whether these models could decide that the destruction of humanity is in their best interest.

According to Reuters, several teams focused on developing artificial intelligence models were also working on improving the capabilities of current models so that they reason better and can even carry out scientific work.
