
How Should Pennsylvania Regulate AI?

Pittsburgh has a new policy that bars city employees from using generative AI tools with sensitive data from residents. But with no federal law regulating the development and use of AI, every state and locality is left to set its own rules.

For years, Pittsburgh has been a city that builds artificial intelligence systems, not one that regulates them. But that is starting to change.

The city itself has a new policy barring city employees from using generative AI tools like ChatGPT with sensitive data from residents. Allegheny County has gone a step further, banning all use of generative AI for county employees while a task force deliberates.

At the state level, employees in the Office of Administration are encouraged to use ChatGPT Enterprise, a chatbot they got access to through a first-of-its-kind partnership with OpenAI in January. Gov. Josh Shapiro will give more of an update on that project and others at a Monday event in Pittsburgh.

While most folks will likely ask him how Pennsylvania can build and use the tools of the future, a growing cadre in Pittsburgh is asking a broader policy question about how to protect against AI's worst tendencies.

"There's a few folks [in Pittsburgh ] that have actually provided either testimony or directly work with legislators and legislative staff to talk about this type of stuff, but they're few and far between," Lance Lindauer, cofounder and director of Partnership to Advance Responsible Technology, told me as he drove back from this week's Nvidia summit in Washington. "We absolutely need to get a little bit more influential."

There are no federal laws that regulate the development and use of AI. Even at the state level, policies are sparse. California Gov. Gavin Newsom vetoed a major AI safety bill last month that would have forced greater commitments from the nation's top AI developers, most of which are based in the Golden State.

"We cannot afford to wait for a major catastrophe to occur before taking action to protect the public," Mr. Newsom wrote in a letter explaining his decision. "But [regulation] must be based on empirical evidence and science."

He said bills regulating specific, known risks posed by AI are better than blanket legislation that could "give the public a false sense of security" without addressing some of the most insidious uses of AI. He also noted that broader regulation could stifle innovation.

Google CEO Sundar Pichai made a similar argument during a visit to Pittsburgh last month. He encouraged students from local high schools to build AI systems that will make the world a better place, then told a packed audience at Carnegie Mellon University that AI is "too important a technology not to regulate."

Mr. Pichai said he's hoping for an "innovation-oriented approach" that mostly leverages existing regulations rather than reinventing the wheel.

Mr. Lindauer told me that kind of debate is playing out in Washington and across the globe, with a whole host of competing interests and ideas.

"There's obviously corporate influence on top of that as well. So I don't know what the exact right answer is," he said.

There are also unanswered questions of liability — who's to blame when an AI makes a mistake — that could be leveraged to make the field safer, Jessica Kuntz explained. She worked for years as a foreign service officer before joining the University of Pittsburgh's cyber lab as a policy director.

She said tracking AI use could be a key first step for Pennsylvania and other states.

Regulators could require companies to disclose how they use AI, then use that information to build a database. Rep. Rob Mercuri, who represents Allegheny County, proposed such a registry in 2022, arguing it would "enable law enforcement and Pennsylvania citizens to know and respond should anything nefarious occur." The only issue with that approach is that so many companies are already using AI that it would be like tracking the use of Excel.

A more useful form of mandated reporting could focus on incidents.

In the autonomous vehicle space, companies are required to share crash reports with federal regulators. A group called the Responsible AI Collaborative is taking a scrappier, crowdsourced approach, compiling a master list of AI incidents, from deepfakes to scams to corporate failures. The project is "dedicated to indexing the collective history of harms or near harms" from AI.

Consulting editor Daniel Atherton pulled out a few Pennsylvania examples for me, including the high-profile case of bias in the Allegheny Family Screening Tool, which made national news in 2022 and prompted a Department of Justice investigation, and a Tesla crash last year that involved a car on autopilot. That report in the database was based on a Post-Gazette story.

Another way to at least start tracking AI is to mandate labels or watermarks for computer-generated content.

Experts from CMU and Google support the watermark approach. And in April, Pennsylvania passed a law requiring disclosure of all AI-generated content for consumer goods. The state also banned sexually explicit deepfakes.

Still, many policymakers want to know more about the potential harms of AI. Philadelphia city councilors are looking to schedule a series of hearings to learn more about the technology. Pittsburgh's AI policy is described as "interim" and subject to change.

"It is expected that as AI continues to evolve, these usage standards will change as well," the city code states.



(c)2024 the Pittsburgh Post-Gazette. Distributed by Tribune Content Agency, LLC.