OpenAI, DeepMind Workers Warn Of AI Risks In Open Letter

Current and former staff members of OpenAI and Google DeepMind warn of a lack of safety oversight in the AI industry

Current and former workers at “frontier AI companies” have published an open letter to warn of the “serious risks” posed by these technologies.

The open letter was signed by 13 current and former workers at OpenAI and Google DeepMind, and was endorsed by leading AI experts including Professor Yoshua Bengio, Professor Stuart Russell, and Dr Geoffrey Hinton, who is often described as the “godfather of AI”.

In the letter, the current and former workers warned of a lack of protective measures and safety oversight, and of the risk of AI being used for “manipulation and misinformation.”

Open letter

The open letter began by recognising the “potential of AI technology to deliver unprecedented benefits to humanity”, but also outlined the “serious risks posed by these technologies.”

The letter said these risks range from the further entrenchment of existing inequalities, to manipulation and misinformation, to the loss of control of autonomous AI systems potentially resulting in human extinction.

These risks have been raised previously by the likes of Elon Musk and Steve Wozniak, as well as by the late Professor Stephen Hawking. In May this year Musk reportedly said that AI would take all of our jobs.

In 2015 Professor Hawking warned that artificial intelligence could spell the end of life as we know it on planet Earth. He also predicted that humanity has just 100 years left before the machines take over.

The authors of the letter said they are “hopeful that these risks can be adequately mitigated with sufficient guidance from the scientific community, policymakers, and the public. However, AI companies have strong financial incentives to avoid effective oversight, and we do not believe bespoke structures of corporate governance are sufficient to change this.”

The letter stated that AI companies possess “substantial non-public information about the capabilities and limitations of their systems, the adequacy of their protective measures, and the risk levels of different kinds of harm.”

“However, they currently have only weak obligations to share some of this information with governments, and none with civil society,” the letter warned. “We do not think they can all be relied upon to share it voluntarily.”

Whistleblower protections

The letter stated that so long as there is “no effective government oversight of these corporations, current and former employees are among the few people who can hold them accountable to the public.”

The letter said that ordinary whistleblower protections are “insufficient because they focus on illegal activity, whereas many of the risks we are concerned about are not yet regulated. Some of us reasonably fear various forms of retaliation, given the history of such cases across the industry. We are not the first to encounter or speak about these issues.”

The letter calls on advanced AI companies to refrain from entering into or enforcing any agreement that prohibits “disparagement” or criticism of the company; to facilitate a verifiably anonymous process for current and former employees to raise risk-related concerns; to support a culture of open criticism; and not to retaliate against current and former employees who publicly share risk-related confidential information after other processes have failed.

Safety concerns

The letter comes after two senior OpenAI figures, co-founder Ilya Sutskever and key safety researcher Jan Leike, resigned from the company last month.

After his departure, Leike alleged that OpenAI had abandoned a culture of safety in favour of “shiny products”.

Last week OpenAI said it had formed a committee to oversee the safety of ‘superintelligent’ AI, after it disbanded its ‘superalignment’ team.
