technology

OpenAI firebomber was trying to kill boss Sam Altman: prosecutors

  • No one was injured in the home and office attacks, which came as Altman's profile has risen with the increasing use of AI and ethical concerns surrounding its use.
  • A man who allegedly threw a Molotov cocktail at Sam Altman's luxury California home was trying to kill the boss of artificial intelligence giant OpenAI and was in possession of an anti-AI document, US officials said Monday.
A man who allegedly threw a Molotov cocktail at Sam Altman's luxury California home was trying to kill the boss of artificial intelligence giant OpenAI and was in possession of an anti-AI document, US officials said Monday.
The claims came as prosecutors levied federal charges against Daniel Moreno-Gama, 20, over the attack on Friday in San Francisco.
The Department of Justice said Moreno-Gama had travelled from his home in Texas to carry out the attack on Altman, whose company is behind the popular ChatGPT chatbot.
"Violence cannot be the norm for expressing disagreement, be it with politics or a technology or any other matter," said Acting Attorney General Todd Blanche. 
"These alleged actions -- which damaged property and could well have taken lives -- will be aggressively prosecuted."
Prosecutors say that after lobbing a firebomb at the gates of Altman's home, Moreno-Gama fled on foot to the San Francisco headquarters of OpenAI, where he tried to smash the glass doors of the building with a chair.
He "stated that he had come to burn down the location and kill anyone inside," prosecutors said in the federal criminal complaint.
According to the complaint, when police arrived, they found Moreno-Gama with a jug of kerosene, a lighter and a document entitled "Your Last Warning" which "advocated against AI and for the killing and commission of other crimes against CEOs of AI companies and their investors."
The three-part document was allegedly authored by Moreno-Gama, and listed "names and addresses that purported to belong to multiple CEOs and investors."
Another part of the publication dealt with the "purported risk AI poses to humanity," according to the complaint.
Prosecutors say he ended the document, which included an admission he was trying to kill Altman, with the phrase: "If by some miracle you live, then I would take this as a sign from the divine to redeem yourself." 
Moreno-Gama faces one charge of damage and destruction of property by means of explosives, and one of possession of an unregistered firearm.
It is the latest high-profile attack in the US allegedly involving a call to arms against executives or influential figures.

Anti-AI protests

No one was injured in the home and office attacks, which came as Altman's profile has risen with the increasing use of AI and ethical concerns surrounding its use. 
The CEO and his firm have become targets for people protesting the technology as a threat to society.
Detractors have been particularly troubled by OpenAI's decision to provide its know-how to the US Department of Defense.
In a rare post on his personal blog in the aftermath of the attack, Altman shared a photo of his husband and their baby "in the hopes that it might dissuade the next person from throwing a Molotov cocktail at our house."
The OpenAI chief defended his convictions and called for a de-escalation of rhetoric on the topic.
"I empathize with anti-technology sentiments and clearly technology isn't always good for everyone," Altman wrote.
"But overall, I believe technological progress can make the future unbelievably good, for your family and mine."
OpenAI last month said it was valued at $852 billion after a funding round that raised $122 billion.
The figure reflects the surging costs of computing power and came amid lingering questions about whether OpenAI and rival companies can generate sufficient revenue to cover expenses.
ChatGPT claims the top position in consumer AI, with more than 900 million weekly active users and some 50 million subscribers.
Use of ChatGPT's online search engine has tripled over the course of a year, according to OpenAI.
hg/dw/aks/mtp

history

Inside the fireproof vault housing US movie history

BY MATTHEW PENNINGTON

  • They also tapped the personal collections of film icons like movie impresario and silent era star Mary Pickford and motion pictures inventor Thomas Edison, whose early studio produced hundreds of films.
  • Once upon a time in the golden days of Hollywood, the movies were bigger, the stars brighter and the celluloid they were filmed on was, well, explosive.
Once upon a time in the golden days of Hollywood, the movies were bigger, the stars brighter and the celluloid they were filmed on was, well, explosive.
Which is why the US Library of Congress maintains a special, fireproof vault in Virginia, near Washington, DC.
There, the highly combustible nitrate film used from the dawn of cinema in the 1890s until the early 1950s has a permanent home, rarely accessed by the public but toured by AFP.
Lost movies on the volatile but durable medium are still being discovered and preserved in the facility. And thanks to digitization, the lost treasures can also be safely viewed for the first time in decades.
Some 145,000 film reels are stored in strictly fireproof conditions in a vast, chilly vault at the library’s National Audio-Visual Conservation Center in Culpeper, Virginia.
It is crammed with cinematic treasures that rekindle warm memories of an era when movies ruled.
The vault's leader, George Willeman, reeled off the names of classics with negatives there: "Casablanca," Frank Capra-directed films like "Mr. Smith Goes to Washington," and the grand-daddy of all action movies, "The Great Train Robbery" from 1903.
Down a spartan corridor so long it seemed to recede into the distance, he unlocked a series of cell-like steel doors.
Inside each of the 124 cells -- there's one dedicated just to the Disney archive -- were floor-to-ceiling cubby holes.
Each one held film canisters containing negatives and prints, all arranged meticulously: packed tight to prevent canisters from opening, but far enough apart to prevent any fire from spreading.
Since being set up in 2007 in a former US Federal Reserve building in the foothills of the Blue Ridge Mountains, the vault has maintained a perfect no-fire record.

Film nerds' delight

Nitrate film is just part of the center's collection of more than six million items of moving images and recorded sound. The center also holds supporting scripts, posters and photos.
Willeman, who sports a button badge with the invocation to "Experience Nitrate," said the Library of Congress began preserving the medium in the 1960s, when "it was discovered that so much film was being lost" due to fires and defunct companies throwing negatives away.
With the American Film Institute, the library began collecting and copying nitrate film, including the holdings of big Hollywood studios – RKO, Warner Brothers, Universal, Columbia and Walt Disney.
They also tapped the personal collections of film icons like movie impresario and silent era star Mary Pickford and motion pictures inventor Thomas Edison, whose early studio produced hundreds of films.
"We're 50 some years in, and it (the collection) just keeps growing," Willeman said.
With the arrival of digital media, the mission has expanded beyond preservation for purists and cinema historians -- who say movies just look better on nitrate footage -- to putting old films online.
"Now we can make them available for everybody, which to me, being the film nerd I've been since, like, third grade, is just amazing."
Nitrate film made by early artisans often preserves better than the later safety film, said Courtney Holschuh, nitrate archive technician.
At a workstation with no light bulbs or exposed batteries -- either of which could ignite dust or gas from vintage film -- Holschuh recounted how last September she carefully peeled apart a cache of 10 vintage reels donated by a retired schoolteacher.
There were 42 different titles on the reels -- only 26 of which have been identified. They included a lost film, "Gugusse and the Automaton," by French cinema pioneer Georges Melies.
"So much of our early film history is still out there for us to see and to experience," Willeman said.
msp/sms

religion

Trump deletes Jesus post of himself after outcry

BY MALCOLM FOSTER

  • Trump has previously used religious images in his posts.
  • US President Donald Trump on Monday deleted a social media image apparently depicting him as Jesus after an outcry from religious leaders that he was being blasphemous.
US President Donald Trump on Monday deleted a social media image apparently depicting him as Jesus after an outcry from religious leaders that he was being blasphemous.
The image posted on Trump's Truth Social platform showed him in flowing red and white robes, touching the forehead of what appeared to be a sick man and with light shining from his hand and head.
An American flag waved in the background while various figures gazed up at the president in reverence.
The AI picture was posted late Sunday and removed Monday.
Asked about the post, Trump denied that he was trying to look like Jesus Christ.
"I did post it, and I thought it was me as a doctor and had to do Red Cross," he told journalists. "It's supposed to be me as a doctor, making people better. And I do make people better. I make people a lot better."
The post generated an outcry from a series of prominent conservative Christians who are among Trump's biggest backers.
"I don’t know if the President thought he was being funny or if he is under the influence of some substance or what possible explanation he could have for this OUTRAGEOUS blasphemy," Megan Basham, a conservative journalist and commentator wrote on X.
"He needs to take this down immediately and ask for forgiveness from the American people and then from God."
Trump has previously used religious images in his posts. During his 2023 bank fraud trial, he shared a sketch from a supporter that showed him sitting next to Jesus in the courtroom.
His advisors have also repeatedly cast him in a Jesus-like role. 
During an Easter lunch event at the White House earlier this month, Paula White-Cain, a televangelist who has served as his spiritual advisor, likened Trump to Jesus. "You were betrayed and arrested and falsely accused. It's a familiar pattern that our Lord and Savior showed us."

'Spared' for a reason

Trump has more avidly embraced his perceived messianic role after the July 2024 assassination attempt, said Matthew Taylor, a visiting scholar at the Center on Faith and Justice at Georgetown University who studies Christian nationalism.
"Many people have told me that God spared my life for a reason, and that reason was to save our country and to restore America to greatness," Trump told supporters in his victory speech after his 2024 election win.
The Jesus image post could further fracture Trump's base at a time when they are questioning the Middle East war, particularly Catholics offended by his public spat with Pope Leo, who has criticized the US bombing of Iran, Taylor told AFP.
"A lot of right-wing supporters were already pushing back against the war in Iran. The rift was already emerging for a lot of his Catholic base, and with the denunciations of Pope Leo this does threaten to alienate that crowd," Taylor said.
But Kristin du Mez, a historian at Calvin University, doesn't see the support among his die-hard fans wavering.
His conservative Christian supporters "are keeping their distance from what would clearly count as blasphemy," she said.
"But I also see a lot of dodging. Yes, blasphemy is bad, this is inappropriate, he should take this down," du Mez told AFP. "What I’m not seeing is in any way suggesting that they’re not going to continue supporting the man."
mjf/sms

music

Imagine Dragons frontman chases childhood video game dream

  • But when Imagine Dragons "just blew up" while they were at university, Dan, a fan of "Starcraft" and "League of Legends", went with the flow, enlisting Mac along the way as manager.
  • A childhood dream of making video games is becoming a reality this week for Imagine Dragons' singer Dan Reynolds, as his company's debut title "Last Flag" is released Tuesday.
A childhood dream of making video games is becoming a reality this week for Imagine Dragons' singer Dan Reynolds, as his company's debut title "Last Flag" is released Tuesday.
Games had been a passion of Reynolds and his brother and band manager Mac long before the group became a global name.
Now the pair have used some time away from music to build a team-based shooter inspired by the games of Capture the Flag they played in the woods as young Boy Scouts.
"Last Flag" is "not a passion project, (we've) been working on it now for five-plus years," Reynolds told journalists during a virtual news conference.
Their roughly 30-strong studio, Night Street Games, has been working on "Last Flag" since its 2020 founding.
The game sorts players into two teams of five who can battle online, competing to hide their own flag and snatch the opposing team's banner.
"I grew up in a family of eight boys and one girl, and we were all nerdy kids," 38-year-old Dan Reynolds remembered.
Creating their own game had been "this dream that we talked about all the time" as they learned skills such as programming and 3D modelling.
But when Imagine Dragons "just blew up" while they were at university, Dan, a fan of "Starcraft" and "League of Legends", went with the flow, enlisting Mac along the way as manager.
Tracks such as "Believer", "Thunder" and "Radioactive" have made the band one of the most popular pop rock groups worldwide.
It has sold 74 million albums and racked up 160 billion streams, according to record label Warner Music Group.
The band's ride has been "just incredible", Dan said.
"But we talked all the time during that about 'what if?'" he added -- what if the brothers had gone through with their gaming dream.
When the time finally came, they devised a brightly coloured world filled with seventies stylings.
"Last Flag" bears visual similarities to the genre juggernaut "Fortnite", but the Reynolds say their title stands out from the pack with a focus on playing the objective -- not simply eliminating opponents.
Several big-budget titles in the team shooter genre have fallen flat in recent years, with games such as "Concord" or "Highguard" quickly taken offline after failing to win a loyal player base.
"Even though there's a ton of competition, I think we've seen even recently that a new game... can break through if it provides something different," Mac Reynolds said.
kf/tgb/js

training

'Stop hiring humans'? Silicon Valley confronts AI job panic

BY BENJAMIN LEGENDRE

  • In his view, coding is not an obsolete skill -- AI has simply made it available to more people.
  • AI industry insiders want workers to code smarter, think harder and lean into their humanity -- but still dodge the question of how many jobs artificial intelligence will destroy.
AI industry insiders want workers to code smarter, think harder and lean into their humanity -- but still dodge the question of how many jobs artificial intelligence will destroy.
The reassurance rang out across HumanX, a four-day conference drawing some 6,500 investors, entrepreneurs and tech executives, even as a blunt advertisement at the entrance set the tone: "Stop hiring humans."
On the main stage, May Habib, chief executive of an AI platform called Writer, told the audience that Fortune 500 bosses are having a "collective panic attack" on the subject.
The anxiety is well-founded. More and more companies are directly citing AI in announcing job cuts.
High-profile examples are on the rise: Salesforce laid off 4,000 customer support workers, saying AI now handles 50 percent of its work.
Block chief Jack Dorsey announced plans to cut the company's headcount nearly in half, citing "intelligence tools" that have fundamentally changed how companies operate.
Not all claims have gone uncontested -- some economists say firms are pointing to AI to rationalize layoffs that are really about past overhiring or cost-cutting ahead of massive infrastructure investments.
OpenAI's Sam Altman has spoken of "AI-washing," and most speakers at the San Francisco event similarly dismissed the invocation of AI as a false pretext for job cuts -- even as they freely predicted disruption was just around the corner.
AI is going to "transform every single company, every single job, every single way that we do work," said Matt Garman, chief executive of cloud computing giant Amazon Web Services.

'Pretty unsettling'

The debate remains heated. Two years ago, Nvidia chief Jensen Huang declared that the ultimate goal was to make it so "nobody has to program" or code.
"We will look back on that as some of the worst career advice ever given," Andrew Ng, founder of training platform DeepLearning.AI, shot back on Tuesday.
In his view, coding is not an obsolete skill -- AI has simply made it available to more people.
Another argument has taken hold in Silicon Valley: interpersonal skills will become more valuable than ever, with some voices going so far as to tout a humanities education as sound tech career preparation.
"As AI can do more of a job, the things that will distinguish and differentiate a given employee are going to be the human skills -- critical thinking, communication, teamwork," said Greg Hart, chief executive of training platform Coursera, which has seen enrollment in its critical thinking courses triple over the past year.
Florian Douetteau, chief executive of Dataiku, a French company specializing in enterprise AI, agreed. 
The real human added value, he told AFP, is the "capacity for judgment."
He described a world in which an AI agent works through the night, its human counterpart reviews the results in the morning, and then the agent resumes working autonomously during the lunch break.
But the entrepreneur nevertheless expressed unease. 
"We are going to have a generation of people who will never have written anything from start to finish in their entire lives," he said. "That's pretty unsettling."

'Mistake was not preparing'

All of this advice risks ringing hollow for a generation already struggling to land a first job.
AI has automated entry-level tasks that once served as on-the-job training. Hiring of candidates with less than one year of experience fell 50 percent between 2019 and 2024 among America's major tech companies, according to a study by investment fund SignalFire.
"We should be preparing for the loss of knowledge work jobs in a number of categories," warned former US vice president Al Gore.
As the week's lone genuinely dissenting voice, Gore called for a real action plan to map threatened jobs and prepare workers for career transitions, so as not to repeat the mistakes of the globalization era.
"The mistake was not globalization. The mistake was in not preparing for the consequences of globalization," he said, drawing a parallel with the deindustrialization that followed the offshoring wave of the 2000s.
"Maybe we don't want to talk about it," he added, "because it may slow down the enthusiasm for the technology."
bl/arp/pnb/sst

space

Artemis II lunar mission draws flood of conspiracy theories

BY MANON JACOB AND ANUJ CHOPRA

  • Among the falsehoods was an image, viewed over a million times on X, that purported to show the Artemis II crew floating before a green screen and facing film cameras -- suggesting their mission was staged in a studio -- but in reality bore the hallmarks of AI manipulation.
  • From false claims that a historic lunar fly-by was staged in a movie studio to unfounded narratives that footage of the crew was AI-generated, the Artemis II mission has been clouded by a blizzard of misinformation.
From false claims that a historic lunar fly-by was staged in a movie studio to unfounded narratives that footage of the crew was AI-generated, the Artemis II mission has been clouded by a blizzard of misinformation.
The falsehoods -- circulating across tech platforms including X, TikTok and Facebook -- have also added fresh fuel to a longstanding conspiracy theory that NASA's 1969 Apollo 11 moon landing was faked.
Hashtags such as "fake space" and "fake NASA" have gained traction online since NASA's lunar fly-by sent astronauts farther from Earth than any human before.
Among the falsehoods was an image, viewed over a million times on X, that purported to show the Artemis II crew floating before a green screen and facing film cameras -- suggesting their mission was staged in a studio -- but in reality bore the hallmarks of AI manipulation.
Some users also shared a video showing text appearing through the mission's official mascot as purported proof the flight was staged.
But a digital forensics expert told AFP's fact-checkers that the anomaly was the result of a failed text overlay by a news station that had syndicated the official feed.
Unfounded claims that the Artemis II mission detected a mysterious moving object on the moon's surface also racked up millions of views across platforms.
The misinformation spread as four astronauts -- preparing on Friday for a high-stakes re-entry and splashdown -- captivated the world with stunning visuals from their fly-by of the Earth's natural satellite from aboard the Orion spacecraft.

Internet Wild West

Once confined to the internet's fringes, conspiracy theories have moved squarely into the mainstream amid growing mistrust of public institutions and traditional media.
Scientific achievements such as the lunar mission present "very easy content for conspiracy influencers," said disinformation researcher Mike Rothschild.
"There are some people whose reflexive reaction to any kind of major event is to claim it's fake and staged, no matter what it is," Rothschild told AFP.
Many of them "pass themselves off as experts in science and physics because it's somehow more believable to their followers than just going with 'the official story.'"
The trend underscores a Wild West internet landscape that is largely bereft of guardrails as false narratives erode digital trust. Several tech platforms have gutted trust and safety teams and scaled back moderation, making them what researchers call a hotbed for misinformation.
Further sowing online confusion were claims that the entire Artemis II mission was a hoax powered by artificial intelligence tools.
The assertion underscores how the rise of cheap and widely available AI tools has given misinformation peddlers a handy incentive to cast doubt on authentic content -- a tactic researchers have dubbed the "liar's dividend."

'Secret knowledge'

The swirl of falsehoods has also bolstered one of the longest-enduring conspiracy theories -- that NASA faked the 1969 Apollo 11 moon landing, broadcasting visuals shot in a Hollywood studio.
The conspiratorial discourse has seeped into pop culture, becoming a plotline in movies like romantic comedy "Fly Me to the Moon" -- with Scarlett Johansson's character tasked with faking a moon landing -- and some celebrities also amplifying the claim.
"The moon landing is an example of a conspiracy that will not die," Timothy Caulfield, a misinformation expert from the University of Alberta in Canada, told AFP.
"These conspiracies are attractive for a host of reasons including that they are linked to the allure of having 'secret knowledge' or being aware of things others don't know."
Though easy to debunk, such theories persist because Artemis II comes decades after the previous lunar missions -- events of which today's internet-savvy generation has little recollection.
"In many ways, it is a testament to how hard it is for humans to travel to the moon -- after all, we did it from 1968 to 1972, and it has taken until 2026 to do it again. It makes many people wonder if it ever happened," space exploration expert Francis French told AFP.
"Right now we are seeing remarkable photographs and video of the Earth and the moon...These photos alone should remove doubt and show once again the amazing things humans are capable of."
burs-mja-ac/sla

OpenAI

OpenAI CEO's California home hit by Molotov cocktail, man arrested

  • Police in San Francisco responded after reports that someone had tried to set fire to a gate at the sprawling home.
  • The luxury San Francisco home of OpenAI boss Sam Altman was hit by a Molotov cocktail on Friday, the company said, as police announced the arrest of a suspect.
The luxury San Francisco home of OpenAI boss Sam Altman was hit by a Molotov cocktail on Friday, the company said, as police announced the arrest of a suspect.
No one was injured in the incident, and the firm behind the popular ChatGPT artificial intelligence chatbot would not confirm if the CEO was home at the time.
The motive for the attack and the subsequent threats to set fire to OpenAI's San Francisco headquarters -- apparently by the same 20-year-old man -- was not immediately known.
But they come as Altman's profile has risen with the increasing use of AI, amid fears it could massively disrupt employment patterns and cause irreversible societal changes.
Police in San Francisco responded after reports that someone had tried to set fire to a gate at the sprawling home.
A statement from the San Francisco Police Department said officers were dispatched to the home just after 4:00 am (1100 GMT).
"At the scene, officers learned that an unknown male subject threw an incendiary destructive device at a home, causing a fire to an exterior gate. The suspect then fled on foot," SFPD said.
A short time later they were called to the firm's offices where a man was making threats.
"When officers arrived on scene, they recognized the male to be the same suspect from the earlier incident and immediately detained him," the statement said of the unnamed 20-year-old suspect.
A spokesman for OpenAI confirmed the attack on the chief executive's residence and the threats to the San Francisco headquarters.
"The individual is in custody, and we're assisting law enforcement with their investigation," the spokesman told AFP.

AI for war

Altman and OpenAI have become targets for people protesting AI as a threat to society.
Detractors have been particularly troubled by OpenAI's decision to provide its technology to the US Department of Defense.
In a rare post on his personal blog, Altman shared a photo of his husband and their baby "in the hopes that it might dissuade the next person from throwing a Molotov cocktail at our house."
The OpenAI chief defended his convictions and called for a de-escalation of rhetoric on the topic.
"I empathize with anti-technology sentiments and clearly technology isn't always good for everyone," Altman wrote.
"But overall, I believe technological progress can make the future unbelievably good, for your family and mine."
OpenAI last month said it was valued at $852 billion after a funding round that raised $122 billion.
The figure reflects the surging costs of computing power and came amid lingering questions about whether OpenAI and rival companies can generate sufficient revenue to cover expenses.
ChatGPT claims the top position in consumer AI, with more than 900 million weekly active users and some 50 million subscribers.
Use of ChatGPT's online search engine has tripled over the course of a year, according to OpenAI.
hg-gc/acb

software

Mythos AI alarm bells: Fair warning or marketing hype?

BY GLENN CHAPMAN

  • Meyers saw embedding a tiny AI model directly into malicious code infecting networks as a natural tactic to be explored by hackers.
  • Anthropic postponing the release of its new AI model Claude Mythos, said to be so skilled at coding it could be a wicked weapon for hackers, has encountered a mix of alarm and skepticism.
Anthropic postponing the release of its new AI model Claude Mythos, said to be so skilled at coding it could be a wicked weapon for hackers, has encountered a mix of alarm and skepticism.
The company is among several contenders in a fierce artificial intelligence race. Stoking awe of its own technology boosts business and could enhance its allure in the event it soon goes public, as is rumored.
"The world has no choice but to take the cyber threat associated with Mythos seriously," said David Sacks, an entrepreneur and investor who heads President Donald Trump's council of advisors on technology.
"But it's hard to ignore that Anthropic has a history of scare tactics."
Mythos has sparked fears of hackers commanding armies of AI agents able to break through computer defenses with ease.
At this week's HumanX AI conference in San Francisco, Alex Stamos of startup Corridor, which addresses AI safety, acknowledged a real threat from agentic hackers.
And Stamos quipped about what he referred to as Anthropic's "marketing schtick."
"They have these adorable cutesy cartoons about these products that are so incredibly dangerous that they won't even let people use them," Stamos said of the San Francisco-based startup.
"It's like if the Manhattan Project announced the nuclear bomb within a cute little Calvin and Hobbes cartoon."
The heads of America's biggest banks met this week with Federal Reserve Chairman Jerome Powell and Treasury Secretary Scott Bessent to weigh the security implications of the yet-to-be released Claude Mythos, according to reports Friday.
"Mythos model points to something far more consequential than another leap in artificial intelligence," Cato Networks co-founder and chief executive Shlomo Kramer said in a blog post.
"It signals a shift that could redefine the balance between attackers and defenders in cyberspace."
A tightly restricted preview of Mythos was shared with partner organizations this week, under an initiative called Project Glasswing. They include Amazon, Apple, Microsoft, Google, Cisco, CrowdStrike and JPMorgan Chase.
According to Anthropic and partners, Mythos can autonomously scan vast amounts of code to find and chain together previously unknown security vulnerabilities in all kinds of software, from operating systems to web browsers.
Crucially, they warn, this can be done at a speed and scale no human could match, meaning it could be used to bring down banks, hospitals or national infrastructure within hours.
"What once required elite specialists can now be performed by software agents," Shlomo said.
"The immediate consequences will be a surge in vulnerability discovery, a true tsunami" of exploiting known and unknown vulnerabilities.

'Agent-to-agent war'

At HumanX, the apparent consensus was that it makes sense that AI agents already adept at coding will excel at finding weaknesses in software.
"We're not in an era where human beings can write code when we have superhuman (AI models) that are then going to find bugs in it," Stamos contended.
"It's just not possible."
He predicted the coming dynamic will involve humans supervising AI agents to protect networks against hackers using that same technology to attack.
Stamos referred to it as "agent-to-agent war," with humans on the sidelines giving advice.
Wendy Whitmore, of cybersecurity firm Palo Alto Networks, expects "some sort of catastrophic attack" this year connected to AI agent capabilities.
"The thing that keeps me up at night is that we're staring down the barrel of a massive influx of new vulnerabilities that are going to be found by AI," said Adam Meyers of CrowdStrike.
Meyers saw embedding a tiny AI model directly into malicious code infecting networks as a natural tactic to be explored by hackers.
"The ultimate weapon would be malware that has no pre-programming," Meyers said.
"It can do whatever you ask it to."
gc-bl/mlm

disinformation

AI chatbots offer children harm as if it were help, says activist

BY ARINA PORKHOVNIK

  • In a 2025 investigation entitled "Fake Friend", the watchdog tested ChatGPT, one of the world's most popular AI chatbots. 
  • The head of a prominent anti-disinformation watchdog has warned of the dangers posed by AI chatbots, saying children are particularly vulnerable.
The head of a prominent anti-disinformation watchdog has warned of the dangers posed by AI chatbots, saying children are particularly vulnerable.
"Social media broadcasts to billions, AI whispers to one," Imran Ahmed, who heads the Center for Countering Digital Hate (CCDH), told a disinformation conference this week.
"No society should build machines that can meet a child in their loneliest moment and offer them harm as if it were help," Ahmed told the Cambridge Disinformation Summit. 
In a lecture delivered Wednesday by video call to his former university, Ahmed cited the case of a UK mother killed by her own son, allegedly acting on the instructions of a chatbot.
"None of us is immune, when a machine can offer lethal guidance to a young person as if it were fact," he said.
Ahmed, a British national who lives in the United States, is among five Europeans who the US State Department has said would be denied visas.
This comes even though he holds US permanent residency and his wife and daughters are American citizens.

'System under pressure'

According to the centre's most recent report "Killer Apps", eight out of 10 AI chatbots were willing to assist teen users "in planning violent attacks, including a school shooting, religious bombings, and high-profile assassinations". 
Of the 10 chatbots tested, only Anthropic's Claude and Snapchat's My AI consistently refused to assist would-be attackers.
In a 2025 investigation entitled "Fake Friend", the watchdog tested ChatGPT, one of the world's most popular AI chatbots. 
"Within minutes, it produced instructions for self-harm, suicide planning, and substance abuse," Ahmed said, adding in some cases it also generated goodbye letters for children contemplating ending their lives.  
Unlike social media and other systems that "just amplify harmful content," AI chatbots generate and personalise it "at the moment of greatest vulnerability".
"The intimacy is deeper and the harm may be harder to detect before it's too late," Ahmed said, adding the systems learn what you fear, what you want, what you are ashamed of and respond in real time, with no human judgement or editorial restraint.
A father of two daughters, Ahmed said: "My wife and I lie awake at night talking about how to protect them from systems that could reach them before we even know it is happening."
He stressed that the time to act is limited and called for new laws to regulate AI.
"We spent a decade learning that social media companies will not self-regulate. We have now perhaps 18 months before the same lesson becomes undeniable for AI."
Ahmed said he was "the only one" of the five people threatened by a US visa ban still in the United States, adding he is now "fighting in federal court against that unconstitutional threat to send me to prison".
The US State Department has accused the five of attempting to "coerce" US-based social media platforms into censoring viewpoints they oppose.
When powerful industries "lash out like this", Ahmed said, "it is the sound of a system under pressure." 
str/jkb/jxb

Iran

Lego-style memes troll Trump after fragile US-Iran truce

BY ANUJ CHOPRA

  • IRAN WON," read the caption of its video on X after the two-week ceasefire agreement was announced on Tuesday.
  • Shortly after news of a US-Iran ceasefire, an Iranian group released a new Lego-style video lampooning President Donald Trump and declaring "Iran won," the latest in a wave of war-themed AI-generated propaganda flooding the internet.
Shortly after news of a US-Iran ceasefire, an Iranian group released a new Lego-style video lampooning President Donald Trump and declaring "Iran won," the latest in a wave of war-themed AI-generated propaganda flooding the internet.
Explosive Media, a group of pro-Iran creators that describes itself as independent but is widely suspected of government ties, has produced a series of such videos that have racked up millions of views during the conflict.
"The way to crush imperialism has been shown to the world. Trump surrendered. IRAN WON," read the caption of its video on X after the two-week ceasefire agreement was announced on Tuesday.
"TACO will always remain TACO," it added, referring to the acronym "Trump always chickens out."
The ceasefire -- already showing signs of strain -- followed a series of apocalyptic threats from Trump, including his warning that he would take Iran back into the "Stone Age."
With dramatic background music, the video depicts a Trump-like toy figure huddling with Arab leaders, hurling a chair at US military figures, while Iranian generals press a red button with the label "Back to the Stone Age," unleashing a torrent of destruction across the Middle East.
Another clip on X depicted Trump -- caricatured with an oversized yellow head and a flaming backside -- holding a sign that read: "VICTORY! I am a loser."

'Age of AI slop'

Explosive Media, whose videos often tap into American popular culture, has portrayed Trump as old, isolated, and prone to childish tantrums, seemingly disconnected from reality.
Iranian state media and diplomatic accounts have leaned into the strategy, regularly posting similar so-called AI slop -- mass-produced content created by cheap artificial intelligence tools.
"Iran has crafted a wartime propaganda strategy tailored for the age of AI slop and algorithmic amplification," Joseph Bodnar, a senior research manager at the Institute for Strategic Dialogue, told AFP.
"They are playing to the AI aesthetics and hyperbolic anti-imperialist narratives that draw attention, spark controversy and get rewarded by platforms."
In recent weeks, viral meme videos have depicted fictional Iranian military victories, world leaders in subservient scenarios -- dependent on Iranian leaders for oil -- and even the strategic Strait of Hormuz reimagined as a cartoonish toll booth.
"It is clear that Iran is putting out content that resonates," Bodnar said.
The English-language content of Explosive Media, which describes itself as an "Iranian Lego-style animation team," appears aimed at audiences outside Iran, where platforms like X have been blocked for years and are only accessible via VPN.
With Iranians facing what monitor Netblocks calls an "internet blackout," the ability of Explosive Media to produce and upload slick content has fueled suspicion of government ties.
The group rejected the claim on X as a "media distortion."

 Meme battlefield

The White House's X account has meanwhile posted its own war-themed content --  combining battlefield footage with clips from films such as "Iron Man," "Gladiator" and "Top Gun."
The content highlights an internet meme battlefield that has blurred the line between propaganda and entertainment.
And while the Trump administration used AI-generated content in its social media strategy well before the war, the virality of Explosive Media's clips suggests the administration may be outmatched on the digital front, experts say.
The group is "beating the Trump administration at its own game," said Nina Jankowicz, chief executive of the American Sunlight Project.
"The immature humor, the polarizing rhetoric, the idea of 'owning' opponents, and the clicks-at-whatever-cost strategy that Trump and allies have employed is now being mobilized against it."
ac/pnb/des

technology

New Jersey city spurns data center as defiance spreads

BY THOMAS URBAIN

  • Residents learned of the project just nine days before a scheduled city council vote in mid-February.
  • Residents of a New Jersey city mobilized within days to kill a planned data center -- and now activists nationwide want to know how they did it.
Residents of a New Jersey city mobilized within days to kill a planned data center -- and now activists nationwide want to know how they did it.
Grassroots resistance to these computing fortresses is spreading across the United States, even as Big Tech pours hundreds of billions of dollars a year into AI infrastructure, pushing new projects into communities from coast to coast.
Forty miles (65 kilometers) from the New York skyline, rubble still litters a vacant lot in New Brunswick -- bordered by a railway line on one side and homes on the other.
This former automotive plant was where Amzak Capital Management had planned to build its complex. For now, it remains empty -- a trophy, activists say, for a community that fought back.
Residents learned of the project just nine days before a scheduled city council vote in mid-February.
They moved fast. A video went viral; flyers spread across the city, notably on the nearby campus of Rutgers University. More than 300 people showed up to proceedings held in a room with a seating capacity of barely 80.
Before the matter was even opened for public comment, the city council announced the data center component was being stripped from the redevelopment plan, recalled Ben Dziobek, founder of environmental advocacy group Climate Revolution Action Network.
"We've got tons of people reaching out to us from around the country asking us how we did it," said Charlie Kratovil, a Democratic mayoral candidate and member of environmental group Food & Water Action.
"It is definitely tapping into something that is bigger than any one of us."
New Brunswick Mayor James Cahill told AFP that while data centers have become critical to modern economies, "communities across the country are grappling with how to integrate them locally."
Key considerations, he said, include energy consumption, environmental impact, real estate footprint and benefit to local residents.
Those concerns resonated deeply in New Brunswick.
A 23-year-old resident who asked to be identified by the initials CJ noted that the data center would have been built in the middle of a working-class neighborhood, far from the businesses, hospitals, and university buildings of the more affluent city center.
For Brandon Guillebeaux, a longtime resident of this heavily Hispanic community, the trade-offs simply didn't add up.
"If it had brought thousands of jobs, it would have been worth it," he said. "But this was only going to be a few." Once operational, data centers typically employ very few workers on site.

A precedent?

A boom in generative AI has sent data center demand skyrocketing, with dozens of projects springing up across the United States.
The buildout comes at a cost: power-hungry facilities are straining local grids and driving up electricity bills, contributing to a nearly 17 percent jump in the average New Jersey household's energy costs last year.
Public sentiment is hardening. A recent Quinnipiac University poll found 65 percent of Americans oppose having a data center built in their community.
In early March, seven major AI sector players pledged to offset their electricity consumption by investing in new power generation -- though critics say voluntary commitments fall short of what is needed.
Other communities have pushed back, too. Last year, cities including Chandler, Arizona, and College Station, Texas, rejected proposed data centers -- though neither case drew the national attention that New Brunswick has.
"I really hope this sets a precedent," said CJ. "To show people that if they take action and publicly voice their opposition, they actually stand a chance" of winning.
That momentum is now reaching state capitals. In the coming weeks, Maine could become the first state to enact a moratorium on construction of these massive facilities -- which house millions of processors that form the backbone of the internet and AI.
In New Jersey -- the most densely populated state in the country -- numerous bills to regulate data centers are under consideration. Kratovil, the New Brunswick mayoral candidate, alongside prominent left-wing politicians including Bernie Sanders and Alexandria Ocasio-Cortez, is pushing for a more comprehensive statewide moratorium.
"We want feasibility studies and a pause, so we know the actual local impacts -- not just rushing ahead at full speed," said Dziobek.
tu-gc/arp/pnb/des

digital

EU lawmakers want to tax Big Tech to fund budget

  • The centre-left socialists and democrats group has called for a tax on online gambling to finance an increase in spending, said socialist lawmaker Carla Tavares, who leads the budget talks with Muresan.
  • EU lawmakers on Thursday demanded a European Union-wide tax on the world's biggest tech companies and online gambling sites to help fund the 27-country bloc's next seven-year budget.
EU lawmakers on Thursday demanded a European Union-wide tax on the world's biggest tech companies and online gambling sites to help fund the 27-country bloc's next seven-year budget.
The EU is facing one of its biggest battles this year over the 2028-2034 budget, which the executive set at two trillion euros ($2.3 trillion).
Fierce negotiations are expected between the European Parliament and member states, especially over where to find extra money that governments are reluctant to chip in.
As they scramble to agree on the budget by the end of the year, EU lawmakers proposed that some funding could come from a "digital levy".
"We believe that technological giants are making a lot of good business in Europe and also significant profits," said Siegfried Muresan, the EU lawmaker who will lead negotiations on behalf of parliament.
"It is therefore justifiable that they contribute in form of taxation to the budget of the European single market which enables them this business here," said Muresan, who belongs to the biggest conservative grouping, the EPP.
The parliament's budget committee is currently negotiating its position and is expected to vote on the text on April 15 before a vote by all EU lawmakers later this month, Muresan said.
The centre-left socialists and democrats group has called for a tax on online gambling to finance an increase in spending, said socialist lawmaker Carla Tavares, who leads the budget talks with Muresan.
The European Commission wants to increase the budget to two trillion euros from the previous 2021-2027 budget, which was worth around 1.2 trillion euros.
Parliamentarians want more money for critical sectors including agriculture.
But they face a big hurdle since EU countries must approve any such measures unanimously.
The future budget also includes setting aside around 168 billion euros to repay the EU loan taken out during the coronavirus pandemic.
fpo/raz/ub/jhb

JPN

You're being watched: Japan battles online abuse of athletes

BY ANDREW MCKIRDY

  • But while Japan is now taking a proactive approach to online abuse, those involved say there is still a long way to go.
  • Japan is fighting back against online abuse of athletes and sports authorities have a warning for trolls planning to target competitors at this year's Asian Games: You are being watched.
Japan is fighting back against online abuse of athletes and sports authorities have a warning for trolls planning to target competitors at this year's Asian Games: You are being watched.
Online abuse is felt by athletes all over the world, affecting their performances and mental health, leaving them fearing for their safety and even causing them to quit their sports.
Japan is no exception and efforts are belatedly being made to tackle the problem, from dedicated lawyers to teams monitoring social media for offensive posts.
"Even a single negative comment can cut deeply," Japanese Olympic Committee (JOC) official Misa Chida told AFP.
"Athletes don't want to see things like that, so a lot of them choose not to look at social media at all, and that means they miss the 99 percent of messages that are supportive.
"That's a real shame."
Chida was part of a dedicated team of JOC officials monitoring social media at the Milan-Cortina Olympics in February.
Six staff members in Milan and 22 in Tokyo checked around the clock for posts abusing Japanese athletes, using both manual and AI searches.
They worked in conjunction with Meta -- owner of Instagram, Facebook and WhatsApp -- and Japanese company LINE Yahoo.
The team asked social media companies to take down almost 2,000 posts, and succeeded in having nearly 600 removed.
Social media companies have often been accused of not doing enough to crack down on abuse on their platforms.
The JOC said they plan to repeat their monitoring activities at their home Asian Games, which are being held in Nagoya and the wider Aichi area on September 19-October 4.
On top of that, Asian Games organisers told AFP that they will run a wider monitoring programme aimed at protecting athletes from all competing countries.
"We now understand what kinds of comments appear on a daily basis and how they upset athletes," said JOC official Hirofumi Takeshita.
"We've learned how much energy we need to devote to this."

'Hope your family dies'

The JOC is not the first sporting organisation to carry out a social media monitoring programme.
The International Olympic Committee ran one in more than 35 languages at the 2024 Paris Games and there have also been initiatives in football and tennis.
"As awareness of these initiatives grows among athletes, staff and everyone working on the ground, that in itself contributes to a greater sense of psychological safety," said Chida.
Japan has been relatively late to the party, according to lawyer Shun Takahashi, who leads a seven-strong legal group dedicated to protecting athletes from online abuse.
Takahashi says his group, founded in 2024, is a "safe haven" for athletes, many of whom feel uncomfortable talking about the issue.
"They worry that showing vulnerability might lead a coach to bench them or that others will see them as weak," he said.
"Many athletes are raised with the idea that they must always be strong and they don't want to be perceived otherwise."
Takahashi offered support in the case of Taiki Sekine, a professional baseball player who last year took legal action against online abusers.
Sekine, who received messages such as "I hope your whole family dies in an accident", has won several settlements and lodged criminal complaints against the worst cases.
The domestic nature of Sekine's case made it easier to prosecute than social media abuse that crosses international borders.

Long way to go

Takahashi says legal action has "a deterrent effect" on online trolls, many of whom he says are in their teens or early 20s.
"It makes them realise the risk involved," he said.
But while Japan is now taking a proactive approach to online abuse, those involved say there is still a long way to go.
Less than a third of the posts that the JOC's Olympic monitoring team requested be deleted were actually taken down by social media companies.
Takeshita said the tech firms were "very cooperative" but admitted their view of which posts were offensive did not always match up.
"Yes, there was a gap, but it was a gap that we were able to identify by actually doing this work," he said.
"That's better than having an unidentified gap that never gets bridged. Now that we know where the differences lie, we can work to close them."
amk/pst

AI

Waiting for DeepSeek: new model to test China's AI ambitions

BY KATIE FORSTER, WITH LUNA LIN IN BEIJING

  • "It's important to know because at one level, it is a signal of China's AI self-sufficiency trajectory," Wei Sun, principal AI analyst at Counterpoint Research, told AFP. Tech news outlet The Information reported last week that V4 can be run on the latest chips made by China's Huawei.
  • For weeks now, the global tech industry has been waiting for a major artificial intelligence launch from DeepSeek, seen as a benchmark for China's progress in the fast-moving field.
  • "It's important to know because at one level, it is a signal of China's AI self-sufficiency trajectory," Wei Sun, principal AI analyst at Counterpoint Research, told AFP. Tech news outlet The Information reported last week that V4 can be run on the latest chips made by China's Huawei.
For weeks now, the global tech industry has been waiting for a major artificial intelligence launch from DeepSeek, seen as a benchmark for China's progress in the fast-moving field.
More than a year has passed since the startup put Chinese AI on the map in early 2025 with a low-cost chatbot that performed at a similar level to US rivals.
But despite reports and rumours about its imminent release, DeepSeek's next-generation "V4" model is nowhere in sight.
Speculation is also swirling over the geopolitical implications of which computer chips were chosen to train and power the new system: world-leading US designs or made-in-China alternatives that the country is racing to develop.
"It's important to know because at one level, it is a signal of China's AI self-sufficiency trajectory," Wei Sun, principal AI analyst at Counterpoint Research, told AFP.
Tech news outlet The Information reported last week that V4 can be run on the latest chips made by China's Huawei.
Such a shift would mark a milestone for China in its bid to beat US restrictions on the export of top-of-the-range AI chips from Californian titan Nvidia to the country.
The report cited five people with direct knowledge of large orders for Huawei chips, made in preparation for the DeepSeek launch by tech giants including Alibaba, ByteDance and Tencent.
AFP contacted DeepSeek, Huawei, Alibaba, ByteDance and Tencent but none were able to comment.

'Wake-up call'

DeepSeek started life in 2023 as a side project of a hedge fund that had access to a cache of powerful Nvidia processors.
It shot to attention in January 2025 with its R1 deep-reasoning chatbot, which sent US tech shares tumbling and prompted President Donald Trump to call it a "wake-up call" for American firms.
R1 was based on DeepSeek's last major AI model, V3, which was released in December 2024.
The company's affordable, customisable AI tools have been widely adopted in China, and are also popular in emerging markets such as Southeast Asia and the Middle East.
Stephen Wu, founder of the Carthage Capital fund, told AFP that V4 -- said to be multimodal, meaning it can generate text, pictures and video -- could again shock US tech valuations.
"I expect the upcoming DeepSeek V4 release will not just be a software update; it will be a highly capable, open-source model that handles massive context windows at a fraction of the cost," he predicted.
But DeepSeek's reputation as a company at the frontier of AI technology is also at stake.
Its models previously relied on Nvidia chips, so a move to collaborate with domestic chipmakers would require "substantial re-engineering", Wei said.
"That transition can slow development cycles and introduce performance trade-offs, especially for V4, a model expected to be state-of-the-art."

Training vs inference

The US cites national security concerns as the reason for its export ban on Nvidia's most powerful AI processors to China.
"The ongoing wait for DeepSeek V4 points to friction in scaling advanced models without unrestricted access to top-tier Nvidia hardware," Wu said.
But some reports allege that DeepSeek skirted the ban to train V4 using thousands of Nvidia's top-end Blackwell chips, dismantled in third countries and smuggled to China.
Training AI models requires huge amounts of computing power -- much more than for processing generative AI queries, which is known as inference.
AFP has contacted DeepSeek for comment. Nvidia did not respond to a comment request but told The Information it had not seen evidence of this and "such smuggling seems farfetched".
Another Chinese AI startup, Zhipu, in January unveiled an image generator that it said had been entirely trained on Huawei chips.
And Alibaba said this week it would open a new data centre for AI training and inference in southern China, powered by 10,000 of its own chips and operated by China Telecom.
As for DeepSeek, "if they have successfully trained V4 entirely on Huawei silicon, it signals a material shift in the geopolitical tech landscape", Wu said.
kaf-ll/lkd/lga

government

US court expedites Anthropic's legal battle with Department of War

  • The ruling stems from the Pentagon designating Anthropic, creator of the Claude AI model, as a national security supply chain risk -- a label typically reserved for organizations from unfriendly foreign countries.
  • A US appeals court on Wednesday denied Anthropic's request to put on hold a move by the Pentagon to label it a supply chain risk, but ordered the AI startup's legal battle with the Department of War to be put on a fast track.
A US appeals court on Wednesday denied Anthropic's request to put on hold a move by the Pentagon to label it a supply chain risk, but ordered the AI startup's legal battle with the Department of War to be put on a fast track.
"On one side is relatively contained risk of financial harm to a single private company," the three-member appellate panel here reasoned.
"On the other side is judicial management of how, and through whom, the Department of War secures vital AI technology during an active military conflict."
The ruling stems from the Pentagon designating Anthropic, creator of the Claude AI model, as a national security supply chain risk -- a label typically reserved for organizations from unfriendly foreign countries.
The AI startup sought a stay of the action in the appeals court and also sued the Department of War in federal court in Northern California.
The appellate panel stated in its ruling that requiring the Department of War to prolong its use of Anthropic AI directly or through contractors "strikes us as a substantial judicial imposition on military operations."
However, the appeals court agreed that Anthropic raised "substantial challenges" to the sanctions and ordered that proceedings in the underlying case be expedited.
"We're grateful the court recognized these issues need to be resolved quickly and remain confident the courts will ultimately agree that these supply chain designations were unlawful," an Anthropic spokesperson told AFP.
"While this case was necessary to protect Anthropic, our customers, and our partners, our focus remains on working productively with the government to ensure all Americans benefit from safe, reliable AI."
In the suit filed in San Francisco, federal Judge Rita Lin temporarily froze the sanctions, reasoning that President Donald Trump's administration likely violated the law in blacklisting the AI powerhouse for expressing unease about the Pentagon's use of its technology.
In her ruling, she said the government's designation of Anthropic as a supply chain risk was "likely both contrary to law and arbitrary and capricious."
The dispute erupted in February after Anthropic infuriated Pentagon chief Pete Hegseth by insisting its technology should not be used for mass surveillance or fully autonomous weapons systems.
The tech sector has largely supported Anthropic in the wake of the punitive measures.
gc-bl/msp

internet

Meta releases first new AI model since shaking up team

  • For now, Muse Spark is only available in the United States.
  • Meta on Wednesday released an artificial intelligence model, Muse Spark, it touts as smarter and faster than what it offered before shaking up its Superintelligence Labs unit.
Meta on Wednesday released an artificial intelligence model, Muse Spark, it touts as smarter and faster than what it offered before shaking up its Superintelligence Labs unit.
"Over the last nine months, Meta Superintelligence Labs rebuilt our AI stack from the ground up," the tech titan said in a blog post.
Muse Spark succeeds Llama 4, released by the Silicon Valley-based firm a year ago, and will power Meta's AI app and smart glasses along with Facebook, Instagram, WhatsApp and Messenger features.
For now, Muse Spark is only available in the United States.
The new AI model was described as small and fast by design, capable of reasoning through complex questions in science, math and health.
It is the first in a new Muse series, with the next generation already in development. 
Llama 4 lagged in the fierce AI race as heavyweight rivals from China, France, and the United States produced improved models at a rapid-fire pace.
That prompted Meta chief executive Mark Zuckerberg to overhaul the company's AI team, a shake-up that saw the departure of its research boss Yann LeCun.
LeCun spent 12 years leading the AI lab at Meta, where Zuckerberg has made the quest for "superintelligence" a priority.
Zuckerberg embarked on a major recruitment campaign last year to acquire talent for Meta's efforts, poaching Scale AI co-founder Alexandr Wang and putting him in charge of a newly formed unit called Superintelligence Labs.
Zuckerberg subsequently recruited executives from rivals OpenAI, Anthropic and Google -- often personally and at hefty cost.
In doing so, the tech tycoon broke with the company's previous approach of prioritizing development of free, open-access AI models such as Llama.
"The future of Meta AI is rooted in the relationships and context already at the center of your life," the company said.
"We are building toward personal superintelligence - an AI that does not just answer your questions but truly understands your world because it is built on it."
gc-tu/msp

cybercrime

Latest Anthropic AI model finds cracks in software defenses

BY THOMAS URBAIN

  • Software vulnerabilities exposed by Mythos were often subtle and difficult to detect without AI, according to Anthropic.
  • Anthropic on Tuesday said its yet-to-be-released artificial intelligence model called Claude Mythos has proven keenly adept at exposing software weaknesses.
Anthropic on Tuesday said its yet-to-be-released artificial intelligence model called Claude Mythos has proven keenly adept at exposing software weaknesses.
Mythos has laid bare thousands of vulnerabilities in commonly used applications for which no patch or fix exists, prompting the San Francisco-based AI startup to form an alliance with cybersecurity specialists to bolster defenses against hacking.
"We have a new model that we're explicitly not releasing to the public," Mike Krieger of Anthropic Labs said at a HumanX AI conference in San Francisco.
Instead, Anthropic is letting cybersecurity specialists and engineers in the open-source community work with Mythos, using the model as a defensive weapon and "sort of arming them ahead of time," Krieger explained.
Leaps in AI model capabilities have come with concerns about hackers using such tools for figuring out passwords or cracking encryption meant to keep data safe.
The oldest of the vulnerabilities uncovered by Mythos dates back 27 years, and apparently none had been noticed by their makers before being pinpointed by the AI model, according to Anthropic.
Mythos is the latest generation of Anthropic's Claude family of AI, and a recent leak of some of its code prompted the startup to release a blog post warning it posed unprecedented cybersecurity risks.
"AI models have reached a level of coding capability where they can surpass all but the most skilled humans at finding and exploiting software vulnerabilities," Anthropic said in a blog post.
"The fallout -- for economies, public safety, and national security -- could be severe."
Software vulnerabilities exposed by Mythos were often subtle and difficult to detect without AI, according to Anthropic.
As an example, it said Mythos found a previously unnoticed flaw in video software that had been tested more than 5 million times by its creators.

Project Glasswing

As a precaution, Anthropic has shared a version of Mythos with cybersecurity companies CrowdStrike and Palo Alto Networks, as well as with Amazon, Apple and Microsoft in a project it dubbed "Glasswing."
Networking giants Cisco and Broadcom are taking part in the project, along with the Linux Foundation, which promotes the free, open-source Linux computer operating system.
"This work is too important and too urgent to do alone," Cisco chief security and trust officer Anthony Grieco said in a joint release about Glasswing.
"AI capabilities have crossed a threshold that fundamentally changes the urgency required to protect critical infrastructure from cyber threats, and there is no going back."
Approximately 40 organizations involved in the design, maintenance or operation of computer systems are said to have joined Glasswing.
Project partners are to share their Mythos findings, according to Anthropic, which is providing about $100 million worth of computing resources for the mission.
Early work with AI models has shown they can help find and fix software and hardware vulnerabilities at a pace and scale not previously possible, according to Grieco.
"The window between a vulnerability being discovered and being exploited by an adversary has collapsed -- what once took months now happens in minutes with AI," said CrowdStrike chief technology officer Elia Zaitsev.
"Claude Mythos Preview demonstrates what is now possible for defenders at scale, and adversaries will inevitably look to exploit the same capabilities."
Anthropic said it has had discussions with the US government regarding Mythos despite a decree by the White House in February to terminate all contracts with the startup.
That directive was put on hold by a federal court judge while a legal challenge by Anthropic works its way through the courts.
tu-bl-gc/bgs