How lucky we are to live in interesting times. Over the past few weeks, we have witnessed the beginning of a historic transformation in world politics.
Like everything else in global affairs, the art of the possible in AI policy is being redefined as the ground shifts beneath our feet. Stories matter, and the way we talk about things matters. Everyone wants the upcoming transformation into a world with AI to be “good”. But the definition and justifications of that “good” are changing.
We need to think carefully about paths forward, now that the political order is shifting and the very legitimacy of multilateral institutions is in question.
The old world is dying . . .
Since the Bletchley Park summit in 2023, the international community has approached artificial intelligence from what we could call a “safetyist” viewpoint. The Bletchley Park and Seoul declarations highlighted the need for global cooperation and the importance of safety research; meanwhile, the International AI Safety Report was a global effort, uniting the best scientific understanding from around the world and pointing the way forward for risk-mitigating policy.
All that multilateralism seems quaint and old-fashioned now, in the face of the USA’s astonishingly muscular reappearance on the world stage.
In his first international speech, JD Vance dropped the mic on a full-on accelerationist stance for the USA (just before going on to urge Europe to embrace the far right in Munich). This was a shocking break for the former guardians of the free world.
In a recent post, Anton Leicht analysed what this means for the future of AI policy. Two points from Anton’s post stood out to me particularly strongly:
The notion of AI Safety has become “Dem-coded enough to suffer political retaliation, but not Dem-coded enough to be enacted on the coattails of general Democratic victories.”
“The easiest trap to run into is reframing the movement without reconsidering the agenda. There’s been a growing consensus that ‘learning to speak Republican’ is advisable, and that maybe reframing AI safety, e.g. as AI security, could do the trick.”
I think this last one is a really important point. The policies AI safety folks used to advocate for – pauses, licensing, restrictions on who could develop AI, requirements for safety testing, MAGIC – all require adherence to multilateral institutions and multilateralist values to work.
All those policies could be seen as “Dem-coded”.
But I think that’s seeing things the wrong way.
I think they’re more accurately described as “before-times-coded”. I am not alone in thinking that we’re seeing a shift away from the postwar multilateralist order, and not just from the USA.
Vance may have shown the way forward with great audacity, but he is hardly treading the path alone. At the Paris AI Summit, Emmanuel Macron was absolutely not talking about slowing anything down. He announced €109 billion of funding for AI infrastructure to make France an “AI powerhouse” – part of the ‘third way’ for AI that Macron has been championing for years.
Conversations I’ve had with some contacts in the Swedish government support this. “Don’t tell me this is a global problem requiring global solutions,” one told me, waving his hand in frustration. With Russia just across the Baltic, meddling daily in the political process, there’s no time to discuss slowing down. If you want a hearing in the halls of power, talk about unilateral policies that drive unilateral advantage, he said.
So, where does this leave us when it comes to narratives around AI policy?
. . . And the new world struggles to be born.
This is our communications challenge. How do we express the concrete requirements of safe AI in a paradigm acceptable to the post-Paris system? We shouldn’t be clutching our pearls wishing for a return to the old world before 2025. We should be figuring out what we can do in the new one. And in the new world, despite everything we’ve seen in the last two months, I think there is a way for us to proceed.
For example, AI Control is by definition a good thing. No one anywhere would prefer “out of control AI” to “AI we control”. This is as true in Washington as in Beijing – only the “we” is different. This is as true for business as it is for government. Banks, healthcare companies, and many other critical and regulated industries need AI they can trust to come to the right conclusions and not lead them into blind alleys or over cliff edges where they’ll hurt their customers – and perhaps get mired in lawsuits.
Protecting US AI companies from cyber threats is good for controllability, and consistent with the American POV.
Reskilling programmes, such as those proposed by the International AI Safety Report, can still be acceptable, if they take on an economic nationalist tone. Not even authoritarian governments want large numbers of unemployed people – it is always and everywhere a recipe for unstable politics. (See: (1) COVID pandemic → (2) Widespread joblessness → (3) Biden elected – an example that should be especially salient for US Republicans.)
This last point offers a real opportunity for content. Joblessness will become even more relevant, and quickly, if and when people start being laid off through the application of AI in the US workforce. Telling the stories of the displaced is an avenue I would really like to explore.
I agree that AI safety policy can’t go on like this. But it’s far from over.
Now is the time of monsters.
‘When the way comes to an end, Then change— Having changed, You pass through.’
I Ching
The rhetoric, especially from the USA, is essentially accelerationist today. But as AI spreads through the economy and society, there will be pressure for that to change. That drives me to two conclusions.
First, we need to be ready to tell and spread the right stories as AI’s risks become more salient and materialise as real damage.
By analogy, climate change remained a difficult, systemic abstraction until people in the rich world woke up to find their houses under water, or burning in a wildfire. Now that the more damaging consequences of the global temperature rise are plainly visible, political action rests on a swell of public grievance. It is difficult to photograph a global temperature rise of 1.5 degrees, but a video of someone crying as their house burns tends to concentrate the mind, and creates the emotional impact that can drive action.
Grievance is a powerful emotion, and it’s increasingly common. Recent research shows that almost two thirds of people worldwide feel this way (see slide 17 of this report) – that they have been wronged by the establishment. We can work with that.
The accelerationist anti-establishment approach we’re seeing from Elon Musk in the USA might actually make this easier. There will likely be some thousands of jobless former federal workers by the end of the year. Among their stories will be some that we can tell and employ to build a movement. This is how we build public pressure for change, story by story, brick by brick.
That’s why, as the consequences of misaligned AI start appearing, I think we will need to be at the ready to tell those stories with impact.
My second conclusion is that we will still need to position our safety discourse within the dominant political paradigm, or it won’t go as far as it could.
Even faced with significant consequences from AI, there are limits to how far the Trump administration can change their rhetoric. They can’t compromise their core values, or they lose all legitimacy. Eschewing international institutions in favour of a unilateral focus on America First is one of those core values.
And remember that the political shift we’re seeing isn’t only an American one. It’s global, as we saw in Germany a few weeks ago. Even NATO is in doubt – the man who used to run it says he is “struggling to comprehend a transatlantic relationship that is crumbling before our eyes.”
This is a historic political shift in the balance of power, and we must take it into account. Previous AI Safety planning included things like an “IAEA for AI” under the UN, or the even more restrictive MAGIC proposal. I don’t think it’s realistic to push for those things now – or it has, at the very least, become much, much harder. Perhaps we will see something like MAGIC – but only if it is a Manhattan-project-like initiative at a secure location in the USA, aimed specifically at creating controllable superintelligence before China does.
My point here is that we need to be smart about the way we speak truth to power. It needs to be a truth they can fit into their narrative of power, or it’ll get rejected and we miss an opportunity for impact.
So – I think the time of AI Safety may be passing. I think the time of Beneficial AI is beginning.