Ex-US NSA Chief Paul Nakasone Joins OpenAI’s Safety Committee


Retired U.S. Army General Paul M. Nakasone has joined OpenAI’s Board of Directors and will serve on its Safety and Security Committee, according to a blog post published by the company on June 13, 2024. The Committee is responsible for making recommendations to the full Board on critical safety and security decisions for all OpenAI projects and operations.

According to the blog, Nakasone is a former head of the National Security Agency (NSA), where he worked to safeguard the United States’ digital infrastructure. He played a key role in establishing U.S. Cyber Command and was the longest-serving leader of USCYBERCOM. Nakasone has also worked with cyber units in the United States, the Republic of Korea, Iraq, and Afghanistan.

Earlier, in February, Nakasone wrote a Washington Post opinion piece arguing that the US government should reauthorize the Foreign Intelligence Surveillance Act, which was subsequently reauthorized on April 20, 2024. He focused specifically on Section 702 of the Act, which permits the government to conduct targeted surveillance of foreign persons located outside the United States, with the compelled assistance of electronic communication service providers, to acquire foreign intelligence information.

Regarding joining the Board, Nakasone said, “OpenAI’s dedication to its mission aligns closely with my own values and experience in public service. I look forward to contributing to OpenAI’s efforts to ensure artificial general intelligence is safe and beneficial to people around the world.”

In recent months, there has been a noticeable shift in OpenAI’s stance on the interplay between artificial intelligence and the military. OpenAI relaxed its usage policies around military and warfare applications and confirmed a partnership with the US Defense Advanced Research Projects Agency (DARPA) in January this year. In March 2024, reports emerged that the US military was experimenting with OpenAI’s generative AI tools to assess their battlefield effectiveness. From an Indian perspective, this may be concerning, since the country’s data protection law currently does not cover publicly available personal data, which could be used to train generative AI tools that target individuals.
