ALBANY — State government is moving into a new frontier by confronting the extraordinary promise as well as the potential threats of artificial intelligence.
More than a dozen active bills and Gov. Kathy Hochul’s State of the State speech in January will seek to nurture the computer technology that can greatly advance health care, create more creative jobs for people now toiling in repetitive ones and perform mundane tasks such as household chores and driving.
At the same time, lawmakers also will be looking to guard against the potential dangers of artificial intelligence: a deeper gulf in income inequality, an erosion of privacy rights, and the broader threat of uncontrolled, self-aware computers.
“Government is probably behind on this, but that’s understandable,” said Steven Skiena, a distinguished professor of computer science and director of the Institute for AI-Driven Discovery and Innovation at Stony Brook University. “I think everybody is behind on this, and the people building the technologies are probably unaware of all the societal impacts of their technologies.
“These machine-learning-based systems are doing amazing things, and they will only get better and better at it and it’s clear that a lot of the things it can do will make the world a better place,” Skiena added. “On the other hand, it’s a potentially disruptive technology that will have consequences — some unintended, some probably unforeseeable — so it is important that government keeps an eye on this and is trying to regulate it. But it’s a tough thing to get right.”
Artificial intelligence, or A.I., uses computer systems to simulate human intelligence. A.I. can learn, make decisions and write and converse at a level so high that it spawned a burgeoning computer industry to detect abuses. A.I. already can perform repetitive or creative tasks that long had been done by people.
Proponents see the benefits of A.I. as including an emerging job market for people to develop and monitor the technology; collecting data without error to free up employees for higher-level tasks; executing quicker and more effective responses to disasters; and advancing health care, including overcoming disabilities and diagnosing diseases.
As for the potential jobs, “We’re working on it now,” Hochul said this month. “Working on how … can I bring those A.I. jobs to New York. That’s how I view the A.I. revolution, as a job-creating opportunity to fuel the energy in all of our universities and working with the private sector.”
The governor said she will propose artificial intelligence policy in her State of the State address in January.
But even advocates of expanding the use of the technology understand the potential hazards.
Critics worry A.I. will decimate the job market. Additional concerns include the lack of transparency and accountability. If a thinking machine creates a computer system that fails, such as a self-driving car that crashes, who is responsible? Privacy concerns are many, ranging from machines controlling sensitive personal data to data collected and overheard by at-home assistants such as Amazon’s Alexa app, researchers said.
Sen. Kristen Gonzales (D-Queens), chairwoman of the Senate Internet and Technology Committee and a former tech worker, has proposed the Legislative Oversight of Automated Decision Making in Government Act. The bill addresses potential biases and discrimination in decisions made by machines based on “race, color, ethnicity, religion, national origin, sex, gender, gender identity, sexual orientation, familial status, biometric information, lawful source of income, or disability.”
“There are some all-out red lines to protect privacy, to protect us from surveillance, and also protect our rights to our data,” Gonzales said of her goals. “We are trying to set up guardrails for implementing new technology.”
These concerns are behind more than a dozen active bills in Albany that may be debated in the 2024 legislative session beginning in January.
Government needs to act and “not wait until the damages are too great,” said Jason Zenor, a communications professor at SUNY Oswego.
Artificial intelligence “is not new, but this latest generation of A.I. is more powerful and now has the ability to create and to work independently once it is set in motion,” Zenor told Newsday. “This new generation of A.I. is also easier for all of us to access and to use. So, this means there will be more bad actors who can abuse it — [and] we certainly want to stop them. But probably more significantly, there will be unintended consequences because of the scale of its use, and that is why we need principles and guidelines.”
Other states and the federal government already have taken some action.
Fifteen states and Puerto Rico adopted legislation and resolutions concerning A.I., according to the National Conference of State Legislatures. Connecticut required state agencies to make sure A.I. isn’t resulting in “unlawful discrimination or disparate impact” on people; Maryland created a grant program to help small- and medium-sized manufacturers use A.I.; Texas, North Dakota, Puerto Rico and West Virginia created panels to study and monitor artificial intelligence used by agencies.
North Dakota passed a law that legally defines a person, “specifying that the term does not include environmental elements, artificial intelligence, an animal or an inanimate object,” according to the National Conference of State Legislatures.
In October, President Joe Biden signed an executive order creating restrictions on A.I. Biden said the order aims to protect national security and consumer rights to make sure use of artificial intelligence can be trusted. “To realize the promise of A.I. and avoid the risk, we need to govern this technology,” he said.
In Congress, House and Senate committees held hearings this fall to question top tech company executives and to explore legislative parameters for the development of artificial intelligence. The European Union also is trying to establish rules to nurture artificial intelligence as an industry while restricting abuses.
New York must catch up, said Assemb. Clyde Vanel (D-Queens), chairman of the Assembly’s Subcommittee on Internet and New Technology. He sponsors or co-sponsors most of the A.I. bills in the Legislature and is a member of the MIT AI Policy Forum.
He said he is wary of the potential threats of A.I., but said a widespread role for artificial intelligence in everyday life is inevitable, and that legislation to give it guardrails is essential.
“Since the first man grabbed a rock and used a rock as a tool, technology has replaced tasks and jobs,” Vanel told Newsday. “But before that, I don’t know how productive we were at night or when it was cold.”
“We have to make sure that we work with the technology,” he said. “What’s really important when it comes to this technology is, if we don’t get it right, the technology is not going to wait for us.”
But others are sounding the alarm.
“I see the threat of A.I. … as by far the biggest threat to humanity in history,” said Tam Hunt, an affiliate guest in psychology in the Memory, Emotion, Thought Awareness Lab at the University of California at Santa Barbara. “I see the rapid development and runaway A.I., and it’s extremely important that people are approaching this as a real threat.”