California tech industry divided over proposed Bill to regulate AI

CALIFORNIA: California is considering a new law to regulate artificial intelligence, seeking to rein in what have been described as the potentially catastrophic consequences of the powerful technology.
At a recent AI conference, California Senator Scott Wiener, who sponsored the Bill, made the case that now is the time for regulation. 
Explaining why such laws are needed, he used the example of social media, saying that society gave up on regulating it, leading to data privacy issues.
The proposed law’s stated goal is to “mitigate the risk of catastrophic harms from AI models so advanced that they are not yet known to exist”.
Wiener laid out scenarios that could arise from misuse of AI, including shutting down the electric grid and facilitating the creation of chemical, biological and nuclear weapons.
“If that risk is there and there are reasonable steps you can take to reduce the risk, you should do so,” he said.
The proposed law would apply only to very large AI models – those that, for example, cost over US$100 million to train. It would require safety testing and plans for a so-called kill switch if a model gets out of hand.
The Bill has already cleared the state’s legislature, and California Governor Gavin Newsom has until the end of the month to make a high-stakes decision that could play a major role in determining the future of AI.
If the proposed law – called SB 1047 – goes into effect, it would be the US’ strongest AI regulation to date and could set a template for other states. The country has yet to see the type of broad AI laws that have gone into effect in places like the European Union and China.
However, many in California’s powerful tech industry are concerned. They have warned of unintended consequences that could slow down the state’s thriving AI ecosystem, which includes companies like ChatGPT-maker OpenAI and tech giant Google.
“In the case of 1047, the companies that create the models are liable for what you do with the models. And so a big part of the debate has been, is that an appropriate form of liability, and what impact will that have on the … AI industry,” said Jeremy Nixon, CEO of AI hackathon society AGI House.
The Bill’s opponents believe that if it is signed, it would have a chilling effect on AI investment and development in California, opening the door for countries like China to overtake the US in the technology.
California officials would have a lot more say in how AI is developed, with the legal right to sue companies if they are not in compliance. 
Open source models, like Meta’s Llama – with code made freely available – may become particularly risky to release, since the developer could be held liable for criminal activity stemming from modifications others make to its models. Some believe that could stifle competition.
“As a hacker, you want to make sure you have access to the best tools. There’s some sense that the best tools may live longer (and) be open source,” said Nixon.
“So closed companies like OpenAI and Anthropic, which don’t give you access to their model, would dominate the world of state-of-the-art models.” 
San Francisco-based Anthropic is a rival to OpenAI and is backed by Amazon and Alphabet.
Still, the Bill has diverse supporters including Tesla CEO Elon Musk. The proposed law is also getting a boost on social media from celebrities like Hollywood actor Joseph Gordon-Levitt.
