Can Chatbots Help You Build a Bioweapon?


Human extinction, mass unemployment, cheating on exams — these are just some of the far-ranging fears when it comes to the latest advances in artificial intelligence chatbot capabilities. Recently, however, concern has turned toward the possibility that a chatbot could do some serious damage in another area: making it easier to construct a biological weapon.

These fears are based in large part on a report from a group at the Massachusetts Institute of Technology, as well as testimony in the U.S. Congress from Dario Amodei, the CEO of AI company Anthropic. They argue that chatbots could provide users with step-by-step instructions to genetically engineer and produce pathogens, such as viruses or bacteria. Armed with such information, the thinking goes, a determined chatbot user could go as far as to develop and deploy a dangerous bioweapon without the need for any scientific training.

The threat here is serious — chatbots can indeed make it easier to find and interpret highly technical scientific information. At the same time, thinking of chatbots as the gatekeepers of information overstates how high the barrier to this information really is. As policymakers consider the United States’ broader biosecurity and biotechnology goals, it will be important to understand that scientific knowledge is already readily accessible with or without a chatbot.

Scientific knowledge, particularly online, is indeed plentiful for an interested learner. And usually for good reason: Open, transparent, and accessible science can push advances in biotechnology and medicine. Education and outreach can make a real difference when it comes to basic science literacy and increasing engagement in science, technology, engineering, and mathematics.

I learned just how helpful clear, accurate information is in a lab during my Ph.D. research in biochemistry. The training that I received, supplemented by information I found online, taught me everything from how to use basic laboratory equipment to how to keep different types of cells alive. I’ve also learned that it doesn’t take a chatbot — or even a graduate degree — to find this information.

Consider the fact that high school biology students, congressional staffers, and middle-school summer campers already have hands-on experience genetically engineering bacteria. A budding scientist can use the internet to find all-encompassing resources. Helpful YouTube video playlists cover everything from how to hold a pipette and balance a centrifuge to how to grow living cells and visually inspect samples for contamination. When experiments don’t go as planned, researchers can crowdsource troubleshooting help from message boards such as ResearchGate, a resource that I found to be a lifesaver in graduate school.

For those who dig a little deeper, online instructions go well beyond the basics. Scientists meticulously detail how they conduct their experiments in order to help other researchers repeat their work and verify their findings — a major underlying tenet of the scientific method. Showing your work is important since unreliable results can waste time and resources; for instance, a 2015 study estimates that U.S. companies and research institutions alone spend $28 billion per year on unreproducible preclinical research.

To be sure, finding the information to create and assemble a biological weapon is not as straightforward as in the examples above. Making or modifying a virus, for example, involves different steps, resources, and terminology than the bacterial genetic engineering that the high school students and congressional staffers performed. For some users, a foundational scientific education will impart enough technical skill, confidence, and scientific literacy to attempt these more difficult experiments. For others, a chatbot could help to overcome this initial learning curve.

In other words, a chatbot that lowers the information barrier should be seen as more like helping a user step over a curb than helping one scale an otherwise insurmountable wall. Even so, it’s reasonable to worry that this extra help might make the difference for some malicious actors. What’s more, the simple perception that a chatbot can act as a biological assistant may be enough to attract and engage new actors, regardless of how widespread the information was to begin with.

If the barrier to information is already low, what can we do to actually make things safer?

For starters, it’s still useful to come up with guardrails during chatbot development. Preventing a chatbot from detailing how to make anthrax or smallpox is a good first step, and some companies are already starting to implement safeguards. However, a comprehensive biosecurity strategy should account for the fact that users can jailbreak these safety measures, as well as the fact that the relevant information will still be available from other sources.

Second, we can think more critically about how to balance security with scientific openness for a small subset of scientific results. Some scientific publications contain dual-use research that was conducted for legitimate reasons and with proper oversight but could enable a malicious actor to do harm. While current policies that regulate so-called risky research include directives to responsibly communicate results, experts have identified a need for more clearly defined and uniform publication policies.

Finally, we can add or strengthen barriers in other places, including the acquisition of physical materials. For example, some scenarios for biological misuse involve designing and ordering custom strands of DNA from mail-order companies. Placing tighter requirements on companies to screen orders could keep potentially risky DNA sequences out of malicious actors’ hands, regardless of whether they obtained the information through a Google search, a chatbot, or an old-fashioned scientific journal.

The new executive order on AI signed by U.S. President Joe Biden on Oct. 30 is a big step in the right direction, as it includes a DNA screening requirement for federally funded research. To be truly effective, however, such screening requirements should apply to all custom DNA orders, not just those funded by U.S. agencies.

Furthermore, biosecurity is only one of several concerns to be balanced when it comes to equitable information access and biotechnology. Some experts expect the global bioeconomy to grow to $4 trillion annually by 2032, including creative solutions to climate change, food insecurity, and other global ills.

To achieve this, countries such as the United States need to engage the next generation of biological inventors and bolster the biomanufacturing workforce. Overemphasizing information security at the expense of innovation and economic advancement could have the unforeseen harmful side effect of derailing those efforts and their widespread benefits.

Future biosecurity policy should balance the need for broad dissemination of science with guardrails against misuse, recognizing that people can gain scientific knowledge from high school classes and YouTube — not just from ChatGPT.
