Silicon Valley Fights 'Doomer' Bill, Taylor Swift Actually Didn't Endorse Trump

Though we’ve heard AI-company chiefs from OpenAI’s Sam Altman to xAI founder Elon Musk, whose company makes the Grok chatbot, say there should be guardrails around generative AI and that they welcome government regulation, the devil is always in the details.


That’s why Silicon Valley companies, and some lawmakers including Rep. Nancy Pelosi, have voiced opposition to a California bill that aims to regulate how gen AI technology is developed and deployed. SB 1047 has been proposed by state Sen. Scott Wiener, a Democrat who represents San Francisco, where high-profile AI companies including OpenAI and Anthropic are based. The bill includes provisions for having third-party auditors review risks and safety procedures at AI companies doing business in California, and it offers whistleblower protections to employees and contractors who call out AI abuses by their employers.   

The bill, formally called the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, was proposed in February. Wiener has said the “legislation is necessary to protect the public before advances in AI become either unwieldy or uncontrollable,” according to Reuters. And — using a term some AI insiders apply to peers who fear the tech could go horribly wrong — he told The Atlantic that he objects to anyone describing SB 1047 as “doomer” legislation. Like “any powerful technology,” he told the publication, AI “brings benefits and also risks.”

The bill would “mandate safety testing for many of the most advanced AI models that cost more than $100 million to develop or those that require a defined amount of computing power,” Reuters reported. “Developers of AI software operating in the state would also need to outline methods for turning off the AI models if they go awry, effectively a kill switch.”
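SB 1047 doesn’t spell out how a shutdown capability would have to be built. As a purely illustrative sketch, assuming nothing about any real company’s systems, a kill switch for a model-serving process might be as simple as a process-wide flag that operators can trip to refuse all new inference requests:

```python
import threading

# Hypothetical sketch of a "kill switch" for a model-serving process.
# SB 1047 doesn't prescribe an implementation; every name here
# (KillSwitch, serve_request) is illustrative, not from the bill.

class KillSwitch:
    """Process-wide flag an operator can trip to halt inference."""

    def __init__(self):
        self._tripped = threading.Event()

    def trip(self):
        self._tripped.set()

    def is_tripped(self) -> bool:
        return self._tripped.is_set()

switch = KillSwitch()

def serve_request(prompt: str) -> str:
    # Refuse all new work once the switch has been tripped.
    if switch.is_tripped():
        raise RuntimeError("model shut down by operator")
    return f"(model output for: {prompt})"

if __name__ == "__main__":
    print(serve_request("hello"))  # served normally
    switch.trip()                  # operator halts the model
    try:
        serve_request("hello again")
    except RuntimeError as err:
        print(err)                 # model shut down by operator
```

In practice, of course, “turning off” a frontier model would involve far more than one process flag (revoking API access, halting training runs, isolating copies of the weights), which is part of why critics and supporters disagree about how burdensome the requirement would be.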

Pelosi, citing opposition by other California legislators, said that SB 1047 is “well-intentioned but ill informed” because it would have “significant unintended consequences that would stifle innovation.” Stanford University computer scientist Fei-Fei Li, often described as the “godmother of AI,” said that while she endorses “governance that minimizes potential harm and shapes a safe, human-centered AI-empowered society,” SB 1047 “falls short” and could harm the ecosystem around AI innovation in the US. (Li’s AI startup, World Labs, reached a valuation of $1 billion within four months of its founding, according to the Financial Times.)

Venture capital firms such as Andreessen Horowitz, which has invested over a billion dollars in AI startups, also oppose the bill.

And OpenAI, maker of ChatGPT, said in an Aug. 21 letter to Wiener that it opposes SB 1047 because it believes AI regulation should, with a few exceptions, happen at the federal level rather than come from the states.

“A federally-driven set of AI policies, rather than a patchwork of state laws, will foster innovation and position the U.S. to lead the development of global standards,” OpenAI’s chief strategy officer, Jason Kwon, wrote. “States can develop targeted AI policies to address issues like potential bias in hiring, deepfakes, and help build essential AI infrastructure, such as data centers and power plants, to drive economic growth and job creation. OpenAI is ready to engage with state lawmakers in California and elsewhere in the country who are working to craft this kind of AI-specific legislation and regulation.”  

Wiener responded to OpenAI’s letter last week.

“Instead of criticizing what the bill actually does, OpenAI argues this issue should be left to Congress. As I’ve stated repeatedly, I agree that ideally Congress would handle this. However, Congress has not done so, and we are skeptical Congress will do so,” he wrote. 

“OpenAI claims that companies will leave California if the bill passes. This tired argument — which the tech industry also made when California passed its data privacy law, with that fear never materializing — makes no sense given that SB 1047 is not limited to companies headquartered in California,” he added. “Rather, the bill applies to companies doing business in California. As a result, locating outside of California does not avoid compliance with the bill.”

The California Senate passed the bill in May. Lawmakers in the California Assembly are expected to vote on SB 1047 by the end of the month.  

Here are the other doings in AI worth your attention.

AI spending to top $600 billion, AI continues to prompt job cuts

Market research firm IDC predicts that global spending on AI, including software, infrastructure and business services, will more than double, to $632 billion, by 2028, according to a new analysis. Spending on gen AI — which is broken out as distinct from machine learning, deep learning and automatic speech recognition technology — will account for 32%, or $202 billion, of overall AI spending in four years, IDC said.
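Those two figures are consistent with each other; a quick back-of-the-envelope check, using only the numbers as IDC reported them:

```python
# Sanity check of the IDC figures as reported (billions of US dollars).
total_ai_spend_2028 = 632   # projected overall AI spending in 2028
gen_ai_share = 0.32         # gen AI's projected share of that total

print(round(total_ai_spend_2028 * gen_ai_share))  # -> 202, matching the $202B cited
```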

The financial services industry will spend the most on AI solutions between 2024 and 2028, followed by software and information services, and retail, IDC said. As for the top AI use cases, the firm said No. 1 is augmented claims processing, followed by digital commerce, automated sales planning and investments in smart factory floors.

In another report, Stocklytics predicts, based on data it compiled, that gen AI “will make up over 40% of the total AI industry market size by 2030, or twice as much as this year.”

Citing data from research firm Statista, Stocklytics noted that “while ChatGPT is still the most popular showcase of generative artificial intelligence, other AI tools like Character.ai, DeepL, Quillbot, Midjourney, and Capcut have also played a significant role in user growth in the AI industry. Statista expects close to 315 million people to use AI tools in 2024, or 60 million more than last year, and generative AI has a huge part in this massive user base. With an average of 65 million people embracing AI solutions and tools annually, the industry will reach close to 730 million users by 2030.”

Meanwhile, tech companies continue to cut jobs as they shift their investments into technology including AI and as they wait to capitalize on AI trends.

Cisco Systems said it was cutting 7% of its employees in its second round of cutbacks this year, according to the Associated Press. Earlier in the month, chipmaker Intel said it would cut about 15,000 jobs — or 15% of its workforce — to save money as it works to compete with rivals including Nvidia and AMD. “Our revenues have not grown as expected — and we’ve yet to fully benefit from powerful trends, like AI,” Intel CEO Pat Gelsinger wrote in a blog post.  

Talking to chatbots via voice queries may be the next thing 

After playing with a trial version of ChatGPT’s new and controversial voice mode (actress Scarlett Johansson wasn’t a fan), CNET’s AI reporter, Lisa Lacy, found that the chatbot seemed to engage in more-natural conversations and interactions. Which may or may not be a good thing.

“OpenAI acknowledges that the more natural interaction could lead to anthropomorphization — that is, users feeling the urge to start treating AI chatbots more like actual people,” Lacy reported. “In a report this month, OpenAI found that content delivered with a humanlike voice may make us more likely to believe hallucinations, or when an AI model delivers false or misleading information.”


How far down the rabbit hole did Lacy get with OpenAI’s new GPT-4o AI model? “I felt the impulse to treat ChatGPT more like a person — especially since it has a voice from a human actor. When ChatGPT froze up at one point, I asked if it was OK. And this isn’t one-sided. When I sneezed, the AI said, ‘Bless you.’”

We can expect that AI users will have more voice conversations with chatbots as OpenAI makes voice mode more widely available. In addition, Google this month announced support for voice queries and long conversations in its Gemini gen AI system as it works to make its Google Assistant more compelling than Apple’s Siri and Amazon’s Alexa.

“The rise of generative AI, or AI models that can generate content and provide conversational responses to prompts, has resulted in a virtual assistant renaissance,” said CNET reviewer Lisa Eadicicco. “Google is determined to show that Gemini isn’t just Google Assistant 2.0 but an entirely new type of helper that can hold lengthy conversations, understand context and interpret multiple types of inputs like sights, sounds and text.” 

Trump fakes Taylor Swift endorsement with AI deepfakes

As of this writing, Taylor Swift hasn’t endorsed any US presidential candidate. But that didn’t stop former President Donald Trump from claiming, on his Truth Social platform, that he accepted her support, posting AI-generated deepfake images of Swift and her fans, known as Swifties, that seemingly showed support for the Republican presidential nominee.

What’s the big deal? First, Swift is an international star with millions of fans, and any endorsement from her could sway some voters. The fake endorsement and fake AI images could fool them into thinking the singer has made her pick.

“Swift, who was not political for the first decade of her career, has been vocally supportive of Democratic candidates and policies in recent years. She is an advocate of the LGBTQ community and has spoken frequently about women’s rights and reproductive health,” CNN noted, adding that the singer has been critical of Trump in the past. 

Neither the Trump campaign nor Swift responded to requests for comment. In an interview with Fox News, Trump said he didn’t know how the images were created. “I don’t know anything about them other than somebody else generated them. I didn’t generate them,” Trump told Fox News. “These were all made up by other people.” Trump didn’t say who had shared the images with him. He told Fox that using AI to impersonate people is “always very dangerous.” 

The bigger deal is that the use of deepfakes by high-profile politicians may be an “easy way to sow doubt about basic facts,” Politico wrote. The AP said Trump has been using AI fakes as part of a trend to “create illusions of support around his own campaign … to score political points and satisfy his base by prompting alternate realities.”

In August, Trump falsely claimed that his rival, Vice President Kamala Harris, used AI to inflate crowd sizes in photos from her rallies. He also posted an AI-generated fake image of Harris with a communist banner via social media.

“It’s been argued that all this stuff is more like meme-generation than a genuine attempt to distort the record. Even so, it has a lot of power in the hands of a figure like Trump, who has long blurred truth and fiction in a way that gets dizzying to untangle,” Politico added.

The use of AI-generated fake images in politics highlights the fact that neither Republicans nor Democrats have issued guidelines on how candidates should or shouldn’t use AI. There are no federal laws or regulations concerning AI-generated political content (though the Federal Communications Commission did ban AI-generated robocalls after fraudsters faked President Joe Biden’s voice around the New Hampshire primary earlier this year).

Added Politico: “Proposals to put guardrails on such content have gathered steam in Congress, at the Federal Communications Commission and at the Federal Elections Commission — but they all face stiff Republican pushback.” 

AI bot loses race for mayor in Wyoming — by a lot

Though AI might be working its way into some campaigns, voters in Wyoming rejected a mayoral candidate who’d proposed having an AI bot run the local government of Cheyenne.

Candidate Victor Miller “quickly made headlines after he decided to run with his customized ChatGPT bot, named Vic (Virtual Integrated Citizen), and declared his intention to govern in a hybrid format, in what experts say was a first for US political campaigns,” The Guardian reported.

Miller had proposed that Vic AI could be used to come up with data insights and plans to help solve problems in Cheyenne, which has a population of about 65,000 people. Miller would serve as the official mayor and make sure that “all actions are legally and practically executed,” The Guardian said.

Cheyenne residents were having none of that and Miller lost — by a lot. He received 327 votes out of the 11,036 cast, according to a tally released by Laramie County.

Miller conceded the outcome, and in a post on X said he was happy to be “the first person to put artificial intelligence directly on the ballot, offering voters the novel choice of AI governance” and that “our campaign has marked a historic moment in politics and technology.” He’s now working on a new effort to “create a framework where AI can take on the full responsibility of decision-making in public office” and to find more “Rationally Bound Delegates,” like himself, who will run for office and “commit to deferring 100% of their decision-making to AI-systems.”

Miller wasn’t actually the first person to put an AI on a ballot. In the UK, businessman Steve Endacott ran “AI Steve” in the general election in Brighton in July. The public was even less impressed than in Wyoming, with AI Steve garnering less than 1 percent of the total votes cast — or about 179 votes.


