But the legislator leading the state's efforts on AI regulation hopes that a shifting national landscape will make a difference in the 2025 legislative session.
State Sen. James Maroney, D-Milford, co-chairman of the legislature's General Law Committee, has led efforts to put together a consortium of representatives from all states to discuss the issue and prevent a patchwork approach if the federal government won't take the lead on regulations.
Maroney said that, as of this week, only North Dakota, Alabama and Indiana are missing from that effort and he has potential interest from lawmakers in two of those states.
"We have been meeting twice a month with legislators from around the country to try to come up with a common framework," he said.
A spokeswoman for Lamont's office said his position has not changed since the spring and he remains in support of federal regulation as opposed to "a patchwork set of laws in the states."
Lamont said in May he was "just not convinced that you want 50 states each doing their own thing." He also said he did not want to see the state curb AI's potential before fully recognizing what it could be. He said that if the issue is not addressed at the federal level, he would like to see state legislatures cooperate on developing regulations.
"If you don't think the feds are going to take the lead on this, maybe the states should, but you shouldn't have one state doing it. You should have us do this as a collective," he said.
Maroney said the consortium of lawmakers is hopeful that more than a dozen states will pass regulatory bills in their upcoming legislative sessions, adding to a national consensus around consistent guidelines. Four states — California, Colorado, Minnesota and Illinois — passed laws this year that include some component of the regulations in the 2024 Connecticut bill, he said.
Although California Gov. Gavin Newsom vetoed that state's own AI safety bill, calling the measures included in the bill too stringent and restrictive of innovation, he signed a different bill requiring AI developers to disclose information about the data used to train their models. Maroney called it a "first step" to ensuring data privacy protections.
"The saying is 'garbage in, garbage out,'" he said. "The most important first step is understanding what data your model was trained on."
The bill vetoed by Newsom, Maroney said, concerned artificial intelligence in high-risk situations, whereas the ambitions of the consortium are focused on "known harms," such as algorithms being used to discriminate in housing and hiring.
In Connecticut, new provisions increasing data privacy protections for minors went into effect on Oct. 1. They were included in a state statute passed in 2023. Maroney said it's one area where Connecticut has been a leader for other states, which are seeking to adopt similar provisions.
Colleen Bielitz, associate vice president for strategic initiatives and outreach at Southern Connecticut State University, said the state has done a "good job" in taking some measures, such as Lamont signing a bill in 2023 that establishes some artificial intelligence oversight.
However, she said she believes that if the state and nation do not go further in enacting other regulations and oversight, they risk further erosion of data privacy.
"There's a number of arguments for more regulation, and they're all the same things you hear about when people talk about ethical biases," she said.
Bielitz said artificial intelligence developers have mostly been shielded from accountability by Section 230 of the Communications Decency Act, passed in 1996, before the field looked the way it does today. Because the models are trained on data from the internet, they often have the biases expressed there baked in, she said.
"You can't go back now that the model is already built," she said.
Bielitz said she recently learned from a friend that she had been automatically opted in to sharing her data with a popular social media site, including her search queries and hashtags, and that it was incumbent upon her as a user to opt out.
"It's my data and I should be able to say whether my data is being used to make these companies even more money," she said. "There are no requirements that these companies have to be accountable or liable."
Bielitz said a lack of oversight and accountability can also mean that proprietary and personal information can be revealed through artificial intelligence if someone poses a question to an online algorithm that has trained itself on conversations among industry competitors. She said the state of the industry is an "arms race" between nations and "it's based on speed and scale, not accountability."
With data being collected in many places, including through facial recognition software, transcription of virtual meetings and tracking of social media activity, Bielitz said the trend raises concerns about monopolistic practices.
"If we required bias audits or tests to determine whether AI is discriminatory or not and then make the results public, that way we hold the company accountable. But what are they doing to protect your data right now?" she said. "If there were accountability questions in place and companies were forced to think about their liability, they wouldn't be so fast. How are these companies going to help guard us against the threat they unleashed?"
(c)2024 The Middletown Press, Conn. Distributed by Tribune Content Agency, LLC.