Britain ditches AI declaration, pitches for cash

A daily download of the topics driving the tech policy agenda, from Brussels to London to Silicon Valley.

Pro Morning Technology UK

By TOM BRISTOW

with OCÉANE HERRERO, PIETER HAECK, and JOSEPH BAMBRIDGE

SNEAK PEEK

— As the AI Action Summit kicks off, the U.K. is leaning against signing its main declaration.

— Peter Kyle takes his investment pitch to Paris.

— Back at home, there’s a backlash against the Home Office over encryption.

Good Monday morning,

This is Tom in Paris.

You can get in touch with your news, tips and views by emailing Tom Bristow and Laurie Clarke. You can also follow us on Twitter @TomSBristow and @llaurieclarke.

DRIVING THE DAY

HERE WE DON’T GO: The U.K. will not sign the diplomatic declaration which will wrap up the Paris AI Action Summit on Tuesday, according to people familiar with discussions. The declaration, which pledges to work towards “inclusive and sustainable AI”, has not got U.S. backing, and without that, the Brits also have cold feet about giving it their stamp of approval. 

It’s not you: The problem is not so much the wording but the geopolitics. Britain is trying to tread a line between Europe and the U.S. and is wary of anything that will annoy the Trump administration, though negotiations are ongoing. DSIT declined to comment.

Not fussed: One French official played down the importance of signatures, saying: “The big diplomatic declarations, they’re not what matters.” They emphasized that France would rather “shift the perspective and the conversation” on AI. A second just said “we’ll see” when asked about the U.K. signing.

Get serious: MTUK has seen an “alternative Paris Declaration” circulating on private channels. It describes the latest draft of the official statement as showing “deep-rooted unseriousness” with “meaningless commitments.” It wants action from the West to “secure its primacy in AI”, more military applications, and for frontier labs to be “publicly committing to Western victory” over China.

Sovereign rules: And it’s not just online edgelords. Much of the pre-summit talk yesterday was of AI “sovereignty.” “The future of AI is a political issue and an issue of sovereignty and strategic dependence,” French President Emmanuel Macron wrote on LinkedIn.

French play: The hosts are looking to use the summit to reassert France and Europe as serious global AI powers. France will announce €109 billion of AI investment around this week’s summit, Macron said on Sunday, including up to €50 billion from the UAE for new data centers. Its AI darling, Mistral, announced yesterday that its first AI cluster outside of Paris will be up and running in months.

Listen up: In an interview with POLITICO, Mistral co-founder Arthur Mensch said he intends to prove that France and Europe “once again have something to say” on AI.

Happening today: The official part of the summit takes place today and tomorrow at the Grand Palais and will be attended by world leaders including U.S. Vice President JD Vance, Indian Prime Minister Narendra Modi, Chinese Vice Premier Zhang Guoqing and European Commission President Ursula von der Leyen, as well as tech bosses like OpenAI’s Sam Altman and Google’s Sundar Pichai.

On the agenda: The action starts from 9:30 a.m. with panels on AI security with Microsoft’s Brad Smith and Signal’s Meredith Whittaker, AI safety with Yoshua Bengio and Anthropic’s Dario Amodei and AI in the public interest with LinkedIn’s Reid Hoffman and Mozilla’s Nabiha Syed. A side conference this afternoon is dedicated to military AI.

VIPs only: But the real excitement is happening behind closed doors. Macron is hosting a dinner for world leaders and tech execs tonight ahead of the leaders plenary session on Tuesday.

Thrifty French: While that dinner promises to be haute cuisine, organization for the rest of the summit has been rather last minute (less polite adjectives available). Our Paris colleagues report that the summit is being done at around a third of the price of its Bletchley predecessor, coming in at €13 million compared to Bletchley’s €33 million.

We got a framework: While AI safety has slipped firmly down the pecking order of priorities this week, the summit has already yielded some results. On Friday, the OECD launched a reporting framework to monitor adoption of the code of conduct for organizations developing advanced AI established under the G7 “Hiroshima Process.”

Rocky start: But in less good news for the organizers, one of the summit’s main announcements — a “Public Interest Platform” with $400 million of funding — was leaked in the French press yesterday. The initiative, called CurrentAI, was meant to be unveiled on Tuesday afternoon. Nine countries are supporting it, including France, Germany and Nigeria.

**The AI Fringe by Milltown Partners is back. We’re bringing a diverse set of voices together across 11 and 12 February at the British Library to delve into the themes and outcomes from the AI Action Summit in Paris and what they mean for policymakers, businesses and citizens. More here.**

AGENDA

LORDS: For those not in Paris, Conservative peer Christopher Holmes should elicit an update on the government’s approach to AI with a question in the Lords on AI legislation related to intellectual property, automated decision-making and data labeling.

AROUND THE WORLD

STILL BULLISH: OpenAI is closing in on a $260 billion valuation that would involve a $40 billion investment from SoftBank, CNBC reports. It comes as Big Tech firms plan to invest $300 billion in AI infrastructure this year, the NYT notes.

LAST DITCH: Taiwanese officials have headed to Washington to discuss possible U.S. tariffs that could hit chip exports, Reuters reports.

INCOMING: Meta will begin making global layoffs today, Reuters reports.

X PROBES: After a German court on Thursday ruled that X must immediately provide researchers with access to data on political content ahead of the country’s election, French investigators confirmed that they’ve also opened a probe into X.

DOGE WATCH: A U.S. federal judge issued a sweeping block on most Trump administration officials — including Elon Musk — from accessing sensitive Treasury records.

HANDS-ON: The WSJ reports that President Trump has tasked JD Vance with finding a buyer for TikTok’s U.S. operations. (Musk isn’t interested.)

LEGAL CHALLENGE: Four British families are suing TikTok in a U.S. court, claiming the platform contributed to their children’s deaths through its addictive design and failure to block physically harmful content. The BBC has more.

ARTIFICIAL INTELLIGENCE

TWO CAN PLAY THAT GAME: France has switched the focus of its AI summit to showcasing the best of French and European tech. Today, the U.K. will make a pitch for some of the investment to go its way, using the summit to invite interest in AI Growth Zones. It follows an “AI for Growth” event at the British Embassy in Paris on Thursday.

Pitch in Paris: Technology Secretary Peter Kyle, who’s in the French capital, said: “We’re leaving no stone unturned in how we can harness expertise from all over the U.K. to deliver new opportunities, fresh growth, better public services and cement our position as an AI pioneer, and that’s the message I will be sending to international partners and AI companies.”

Apply please: The first AI Growth Zone is already slated for Culham, Oxfordshire. DSIT is now inviting local and regional authorities, data center firms and energy companies to submit expressions of interest to host other zones. It says “exploratory work” will begin immediately, with the selection process starting “in the spring” and confirmation of sites by the summer.

The criteria: The zones, which will benefit from fast-tracked planning permission, must either have existing energy connections or the ability to scale to 500 MW of power, meaning former industrial sites and locations close to major energy projects are the target. Authorities in Scotland, Wales and northern England are all being encouraged to apply, after regional mayors were invited to an AI drinks reception in No. 10 last week.

Game plan: In an indication of how far safety has slipped down the priority list, DSIT said Kyle will “bang the drum” for inward investment to deliver on the government’s AI Opportunities Action Plan while in Paris. Matt Clifford, who’s advising the U.K. government on delivery of the plan, told a Tony Blair Institute event last night his focus is now on identifying where the U.K. can “play and win.”

Find your niche: On the same panel, investor Teresa Carlson from General Catalyst said countries were becoming more nationalist on AI, but Singapore’s Digital Minister Josephine Teo called for nations to cooperate while playing to their strengths.

Other takeaways: Marc Warner, founder of Faculty AI, slammed the U.K. government’s lack of digital skills and questioned how much any country could steer the AI revolution, comparing it to the industrial revolution.

Deep who? At a separate event at its Paris AI lab, Google DeepMind’s Demis Hassabis said “we’re close” to AGI, giving a timeline of around five years. He also offered a reality check on the DeepSeek hype. “There’s no actual new scientific advance here. It’s using known techniques. Actually many of the techniques we invented at Google,” he quipped.

ENCRYPTION

UNITE FOR ENCRYPTION: The Home Office has managed to unite the tech industry and privacy groups in anger after reportedly ordering Apple to give it access to users’ encrypted data under the Investigatory Powers Act.

On notice: The act, which was amended last year, allows the U.K. government to order firms to provide access to encrypted data under a “Technical Capability Notice.” The notices are not made public, but The Washington Post reported on Friday that Apple was issued such a notice last month. Apple can appeal the notice, but has to comply in the meantime.

Backdoor worries: The government has previously argued that criminals and child abusers use encryption to escape prosecution. But Jurgita Miseviciute, head of public policy at Proton, said creating a backdoor for Apple could be exploited by others. “Compliance from Apple would create a dangerous precedent,” she added.

Sure this is a good idea? “The reported details suggest the U.K. is seeking the ability to access encrypted information Apple users store on iCloud, no matter their location,” said Privacy International’s legal director, Caroline Wilson Palow. “This overreach sets a hugely damaging precedent and will embolden abusive regimes the world over.” Rebecca Vincent, from Big Brother Watch, described it as an “unprecedented attack on privacy rights that has no place in any democracy.”

Gone too far: Matthew Sinclair, U.K. senior director at tech lobby CCIA, whose members include Apple, said: “The Government should work urgently to reassure the public, companies operating vital digital services here in the U.K. and our international partners that this kind of overreach will not be permitted to stand.” U.S. lawmakers want the U.S. government to intervene, the Post reports.

Terrified: Matthew Hodgson, chief executive of Element, a secure communications platform used by governments, said it marked a “terrifying escalation in the fight to protect users from blanket surveillance.” He described surveillance backdoors as a “catastrophically flawed idea” as they could be exploited by criminals or hostile states.

Just get out: “Apple should withdraw from the UK rather than comply with this order,” he added. An Apple spokesperson said they were unable to comment, but the company has previously said it would rather remove services from the U.K. than weaken encryption.

BEFORE YOU GO

BETTER LATE THAN NEVER: Peter Kyle has belatedly opened a LinkedIn account. “I want to use this platform to exchange ideas with you — leaders from industry and civil society — because we all have a stake in the future of science and tech in the U.K.,” his first post reads.

FURTHER READING: As Kyle chases cash for AI infrastructure, a new report from the Ada Lovelace Institute warns that “without careful policy design, public compute investments risk entrenching existing market concentration in AI development and contributing to the environmental harms.”

Speaking of: On that note, a separate paper from the Royal Academy of Engineering calls on the government to impose environmental reporting mandates and sustainability conditions as part of its data center push, with the BBC reporting that Thames Water is talking to the government about managing the water demand from data centers.

Step back: In its own report, the IPPR think tank says we need “a new politics of AI” that doesn’t just consider how it can be rolled out quickly and safely, but also the societal purposes. It suggests linking the government’s missions with incentives for developing AI.

CONGRATS: Three professors have been awarded Turing AI World-Leading Researcher Fellowships, supporting them to move to or remain in the U.K.

FOR GOOD MEASURE: Ousted former CMA chair Marcus Bokkerink has returned with another five-pager detailing changes he implemented at the regulator that “resulted in a significantly different CMA to the one I found in 2022.”

TRACKING CONCERN: The Observer says dozens of gambling websites are sharing tracking data with Meta without explicit consent in apparent breach of data protection rules.

GRIM: Coroners have flagged nine cases of online-related death by suicide to DSIT in the past 12 months, according to a written answer.

Morning Technology UK wouldn’t happen without the production team.
