AI Safety Summit: what to expect


Government officials and tech companies are set to come together for the world’s first AI Safety Summit on the 1st and 2nd of November.

Spread over two days, the AI Safety Summit at Bletchley Park is set to explore how artificial intelligence can be kept secure and safe for users through regulation as the technology evolves, as well as how businesses in the space can stay compliant.

The first day, which most tech firms and government officials are set to attend, will be hosted by UK technology secretary Michelle Donelan, with the second day led by Prime Minister Rishi Sunak and focused on the political implications of AI, reported The Times.

The guest list features leaders from the most prominent AI start-ups, including ChatGPT developer OpenAI, Google’s DeepMind and Claude creator Anthropic, as well as representatives from key AI investors such as Amazon, Meta and Microsoft.

Additionally, government officials set to attend include US vice-president Kamala Harris and French president Emmanuel Macron.


Regulation of AI innovation, already the subject of substantial discussion across governments, has surged up the legislative agenda alongside the increasing use of generative AI tools such as ChatGPT, which was publicly released in November last year.

This has led to the establishment of a global summit dedicated to the long-term safety of the technology, with risks including misinformation and bias set to be addressed.

A statement by the Department for Science, Innovation and Technology said the conference at Bletchley Park “builds on a wide range of engagements leading up to the summit to ensure a diverse range of opinions and insights can directly feed into the discussions”.

According to Nicklas Lundblad, director of public policy at DeepMind, two key outcomes should be sought: “an international understanding of the opportunity and risk; and mechanisms to co-ordinate.”

Lundblad added: “It’s hard for issues such as climate change and poverty — but if we can at least get to a first understanding between the participating countries that these are the mechanisms, these are the principles, that would be a huge win.”

Natalie Cramp, CEO of Profusion, commented: “The AI Safety Summit is a very welcome initiative and it has the potential to be a very productive event, however, it really should just be the start of ongoing serious debate in the UK about how we want AI to develop.

“It’s critical that we move forward with putting adequate rules in place now to reduce the risk of AI getting out of control. We saw the damage that has been done through lax regulation of social media – it’s very hard to put the genie back in the bottle.

“If the UK Government is serious about using AI to drive forward an economic revolution, businesses, innovators and investors need certainty about what the rules of the game will be. Otherwise, the most exciting AI tech start-ups will simply go to the EU or US where there is likely to be much more legal clarity.”


Ahead of next week’s AI Safety Summit, 23 artificial intelligence experts have co-signed policy proposals calling for AI vendors to be held liable for harms caused by their systems, reported The Guardian.

Academics involved include two of the three 2018 Turing Award winners and “godfathers of AI”, Geoffrey Hinton and Yoshua Bengio.

Hinton resigned from his position at Google Brain earlier this year in order to discuss the possible risks of AI more freely, while Bengio previously stated, in an interview with Information Age, that AI development needs state control.

Policies recommended in the open document, addressed to governments globally, include holding vendors liable for harms caused by their AI systems.

Stuart Russell, professor of computer science at the University of California, Berkeley, said: “It’s time to get serious about advanced AI systems. These are not toys. Increasing their capabilities before we understand how to make them safe is utterly reckless.”
