AIhub monthly digest: October 2023 – probabilistic logic shields, a responsible journalism toolkit, and what the public think about AI

Welcome to our October 2023 monthly digest, where you can catch up with any AIhub stories you may have missed, peruse the latest news, find out about recent events, and more. This month, we talk AI, bias, and ethics with Aylin Caliskan, learn more about probabilistic logic shields, knowledge bases, and sparse reward tasks, and find out why everyone should learn a little programming.

AIhub ambassador Andrea Rafai met with Aylin Caliskan at this year’s International Joint Conference on Artificial Intelligence (IJCAI 2023), where she was giving an IJCAI Early Career Spotlight talk, and asked her about her work on AI, bias, and ethics. In this interview they discuss topics including bias in generative AI tools and the associated research and societal challenges.

In their IJCAI article, Safe Reinforcement Learning via Probabilistic Logic Shields, which won a distinguished paper award at the conference, Wen-Chi Yang, Giuseppe Marra, Gavin Rens and Luc De Raedt provide a framework to represent, quantify, and evaluate safety. They define safety using a logic-based approach rather than a numerical one; Wen-Chi tells us more in this blog post.

Maurice Funk and his co-authors Balder ten Cate, Jean Christoph Jung and Carsten Lutz were also recognised with an IJCAI distinguished paper award for their work SAT-Based PAC Learning of Description Logic Concepts. In this interview, Maurice tells us more about knowledge bases and querying, why this is an interesting area of study, and their methodology and results.

The 26th European Conference on Artificial Intelligence (ECAI 2023) took place at the beginning of October in Krakow, Poland. Xuan Liu, winner of an outstanding paper award at the conference, told us about her work on selective learning for sample-efficient training in multi-agent sparse reward tasks. You can also find out what participants got up to at the conference in our round-up.

In July this year, 2,500 participants congregated in Bordeaux for RoboCup 2023. The competition comprises a number of leagues, among them RoboCupJunior, which is designed to introduce RoboCup to schoolchildren, with a focus on education. There are three sub-leagues: Soccer, Rescue and OnStage. Marek Šuppa serves on the Executive Committee for RoboCupJunior, and he told us about this year’s competition and the latest developments in the Soccer league.

This year’s AAAI Fall Symposium Series, which took place at the end of October, comprised seven symposia. We were able to attend virtually, and covered the plenary talk by Patrícia Alves-Oliveira on human-robot interaction design, which was part of the symposium on Artificial Intelligence for Human-Robot Interaction (AI-HRI).

Code to Joy: Why Everyone Should Learn a Little Programming is a new book from Michael Littman, Professor of Computer Science at Brown University and a founding trustee of AIhub. We spoke to Michael about what the book covers, what inspired it, and how we are all familiar with many programming concepts in our daily lives, whether we realise it or not.

On 30 October, President Biden issued an Executive Order on “Safe, Secure, and Trustworthy Artificial Intelligence”. A fact sheet from the White House states that the order “establishes new standards for AI safety and security, protects Americans’ privacy, advances equity and civil rights, stands up for consumers and workers, promotes innovation and competition, advances American leadership around the world, and more.”

An evidence review from the Ada Lovelace Institute examines what the public think about AI, and highlights the importance of meaningfully involving people in decision-making when it comes to AI safety and governance.

With a new interactive resource from the Swiss Institute of Bioinformatics and the University of Basel, you can navigate through catalogued natural proteins. The Protein Universe Atlas is a sequence similarity network and contains around 53 million unique protein sequences.

In an essay entitled The Artificiality of Alignment, Jessica Dai asks how we are actually “aligning AI with human values”. “For all the pontification about cataclysmic harm and extinction-level events, the current trajectory of so-called ‘alignment’ research seems under-equipped — one might even say misaligned — for the reality that AI might cause suffering that is widespread, concrete, and acute.”

In this article for Rest of World, Victoria Turk writes about an analysis of 3,000 AI images which shows that generative AI systems have tendencies toward bias, stereotypes, and reductionism when it comes to national identities.

The Leverhulme Centre for the Future of Intelligence, University of Cambridge has put together a responsible journalism toolkit. The resource aims to empower journalists, communicators, and researchers to report responsibly on AI risks and capabilities.
