Council Post: Dr. OpenAI, Or How I Learned To Stop Worrying And Love AI



CEO and cofounder of SenseIP. Technologist and Intellectual Property expert who’s passionate about making business sense of IP.

An upcoming London stage production of Dr. Strangelove reminded me of the 1964 film — a favorite of mine — and made me ponder: Are we nearing the point of no return in the way we develop and use artificial intelligence (AI)? Are the somewhat apocalyptic prophecies and warnings — some made by industry leaders — something we should take into account? Should we use AI more, or hold back and reassess the potential risks versus the benefits?

One of the earliest pioneers of machine intelligence was Alan Turing, who in the 1950s proposed a test of whether a machine could imitate human intelligence convincingly enough to be indistinguishable from a person (the Turing Test). In the 1960s and 1970s, AI research focused on building expert systems, mostly driven by exhaustive sets of hand-coded rules. By the 1980s, the pitfalls of such systems had become evident, and research shifted back to machine learning, introducing neural networks to imitate how the human brain works. The 1990s brought great advancements in natural language processing, which improved further in the early 2000s with deep neural networks.

In the last few years, research around AI has exploded. The introduction of human-like assistants, starting with voice agents like Siri and Alexa, paved the way and readied the public for the introduction of ChatGPT.

It seems like all we talk about these days is AI. Millions of people have experienced AI firsthand by using ChatGPT — and that doesn't even count other AI-enabled technologies such as DALL-E and autonomous cars. The more glimpses the public gets of AI, the greater the demand. And this has ignited the imagination of inventors and entrepreneurs all over the world.

With advancements in reinforcement learning, quantum computing and neuromorphic computing, we see more and more areas and everyday use cases that implement AI capabilities. Most of these solutions take tedious tasks off human hands; some even improve on human expertise and create remarkable new opportunities for mankind (for example, the automatic detection of cancerous tissue in radiology images). But some might be a bit worrisome, playing on our deepest fears.

Keeping to the movie theme: Back in 1984, The Terminator showed us a scenario in which the machines become self-aware and retaliate against their creators. The "consciousness" of a machine — its ability to invent and think for itself — is closely tied to AI and, specifically, to innovation. Are we there yet?

The intellectual property world has been in an interesting debate in the last few years about whether AI can be considered an inventor. Can an AI system actually create and generate something completely novel that’s worth patenting?

In 2019, Dr. Stephen Thaler created the AI system DABUS (Device for the Autonomous Bootstrapping of Unified Sentience). In patent applications he filed, Thaler credited DABUS as the inventor. The U.S. Patent and Trademark Office (USPTO) rejected the applications for lack of a valid inventor. After several reviews and appeals, the case reached the Federal Circuit in 2022, which made it clear that the invented can't be the inventor.

This debate over AI inventorship continued in 2023, as the USPTO published a request for comments regarding AI and inventorship from the public. So, we’re likely to keep hearing news and rulings on this subject.

For now, I believe both the USPTO and the Federal Circuit took the right approach. For all their power and capability, AI engines are just another layer in a product or a framework. The process usually starts and ends with a human being. Even in intellectual property, where systems exist that leverage AI to generate patents automatically, the process must start with an idea — or the essence of an idea — generated by a human being.

Will humankind remain in control of AI, or will the machines rise to think for themselves? I asked GPT-4 that question and was given the following answer: "AI machines are not capable of thinking independently in the same way as humans. They cannot have original thoughts or creative ideas outside their pre-programmed capabilities. It’s important to note that while AI can mimic human-like decision-making and creative processes, it doesn’t truly understand these concepts as humans do. It’s still an active research area as to how far AI can evolve in terms of independent and innovative thinking."

So, at least for now, I’m not afraid.

Forbes Technology Council is an invitation-only community for world-class CIOs, CTOs and technology executives.
