The UK between Europe and America: fitting artificial intelligence into a unified foreign policy

by André Petheram, Kathryn Riordan, Colene Short and Thomas Westgarth

Introduction

With the release of the UK Government’s Integrated Review of Security, Defence, Development and Foreign Policy due tomorrow, and after months of reports and events reflecting on the UK’s role in the world following Brexit, it is a good time to consider how a unified approach to the UK’s international activity should treat artificial intelligence. That AI policy has multiple international dimensions is almost self-evident. Exactly which of AI’s affordances and capacities should be emphasised in a coherent foreign policy is less obvious. 

While it is common to refer to AI development as a ‘race’ or even an ‘arms race’, it is not always clear what the race is for. In a frequently-cited 2018 essay, Ian Hogarth writes that ‘economic, military and technological supremacy have always been extremely powerful motivators for countries.’ The AI arms race could be for one or all three of these. But this provides a rather vague guide to international action in the coming years and decades.

It is one thing to aim to be the ‘global leader in AI’, as China does. In an arms race, ‘the value [actors] are creating is partly a function of their relative strength over a competitor.’ While relative strength in any field has a useful political function, in that it allows politicians to demonstrate success to their citizens and instil national pride (and this is a key unstated part of Hogarth’s essay), beyond this it does not achieve much on its own.

It is another thing to explain why supremacy matters and what the consequences will be. Who should benefit from a country’s success in any field? What are we willing to sacrifice on the way? These are questions of values, politics and ideologies. They will inevitably guide how we think about AI. It may also be that there is nothing especially interesting about AI: it is simply a means to those ends. 

How does the UK now see itself internationally?

The Foreign Affairs Committee concluded in 2020 that the upcoming ‘Integrated Review must address a lack of clear strategic vision, a lack of confidence, and lack of coherent implementation that has undermined recent international policy by the UK.’ That ‘clear strategic vision’ is what will guide AI’s deployment.

The Foreign Policy Centre offers a useful starting point to what the UK’s strategic vision might look like: ‘the UK’s evolving foreign policy needs to have at its centre a coherent and strategic response to the sustained global erosion of liberal democracy and the buckling of the post-war (both WWII and Cold) architecture of the rule-based world order’. The committee report, among a range of recommendations, stresses that ‘the UK should use its convening power and thought leadership to bring together nimble networks of like-minded nations by agreeing a baseline for cooperation between them’: a fluid ‘vari-lateral’ system running parallel to formal multilateral institutions, which the UK also has a role in supporting and reforming. 

Recent think-tank events give a further guide to what we might (ambitiously) call an emerging foreign policy consensus, and provide some intriguing initial guidance on how AI might fit into this. At a UK In A Changing Europe webinar, the dominant arguments among the speakers (Lisa Nandy MP, Tom Tugendhat MP, Sir Lawrence Freedman, and Professor Rana Mitter) were for strategic alliances with liberal democracies to form a framework for responding to global challenges such as democratic backsliding and a shifting geopolitical landscape. ‘Global Britain’ was depicted as a leader in promoting democratic values and in re-establishing a global order that recognises a rising Asia, and China in particular.

Importantly, Tom Tugendhat also stressed the significance of the so-called D10 (or, the ‘Democratic 10’) group of nations cooperating in ‘regulations, internet freedom, digital nationalism and digital globalism.’ In his words, the D10 needs, ‘a way of working together to create a large enough digital pool so we can actually have a counter-balance to China’s push on regulations and norms.’ He called this a General Agreement on Data or GAD. In a Chatham House event with former Foreign Secretary Jeremy Hunt MP, former Prime Minister Tony Blair also emphasised that Britain needs to play an active role in establishing closer international cooperation, including by helping establish a regulatory framework for big technology. Will the Integrated Review set out any paths towards establishing cooperative instruments of this kind? This might be seen as a key test of how seriously it takes the intersection of democracy and digital technology.

But what about AI?

Assuming that this does reflect a good degree of agreement among policymakers (and tomorrow’s release of the Review may fatally undermine this assumption), we can tentatively set out some directions, if not clear policies, for a UK international AI strategy.

  1. The UK needs to be an international leader in setting out frameworks for AI’s development that are consistent with democracy and human rights;

  2. Within this framework, the UK needs to make sure that British companies and researchers are at the forefront of AI innovation, protecting the UK’s economic competitiveness over the long term;

  3. The UK’s AI R&D will need to support its increasing emphasis on and investment in cyber security (we know already that this is a major theme of the Integrated Review: we’ll pass over it here).

Finally, these directions all have implications for the UK’s China policy, given both China’s enormous domestic AI industry and the international criticism it is currently facing for its poor record on human rights. The Integrated Review will certainly pay extensive attention to China; we will be quick to examine how far it will frame this as a question of managing disruptive new technologies in general and AI in particular. 

AI, Democracy and Human Rights

We expect that the Integrated Review will emphasise the UK’s role as a promoter of democracy abroad. This much is easy. Following through – and accounting for the huge strains that artificial intelligence is already placing on our ideas of the role of the state – will be exceptionally difficult.

Recent contributions on AI ethics fit well, in a general sense, with the elements of a post-Brexit foreign policy outlined above, but they are arguably thin on specifics and choices. The AI Council, an independent advisory committee, published a paper in January 2021 calling for a National AI Strategy, and recommending that the UK ‘thoughtfully position [itself] with respect to other major AI nations’ as it seeks to ‘become a global lead in good governance, standards and frameworks for AI’. The paper recommends ‘strengthen[ing] existing partnerships with other like-minded countries like France and Canada’ and ‘increas[ing] bilateral cooperation with countries like the US and China’. Broadly, the UK needs to ‘enable effective strategic action to shape a global AI landscape’.

This is a rather obvious good to be sought, but evades the dilemmas that international leadership will present. What if among the ‘biggest challenges [that] AI presents’ are questions of surveillance and repression in the countries with whom the UK is supposed to be cooperating? 

A December 2020 House of Lords committee report on UK AI policy is more forthright: the government ‘should not be afraid to challenge the unethical use of AI by other governments or organisations.’ To support this, the UK ‘should establish and publish national standards for the ethical development and deployment of AI.’ This is a good starting-point, but still leaves open the question of how to challenge unethical AI. 

In our own recent report on the contested uses of AI in protests and revolutions, we emphasise that an important start is for the UK to make sure that its own use of AI in domestic security is legitimate, consensual and rights-based (assuming, of course, that this is possible). Moreover, it will be crucial to develop ways to measure the extent of this legitimacy – or its lack. And, while alignment with the EU may be seen to miss the point of a ‘post-Brexit’ foreign policy, the UK should take note of the agreement between the European Council and the European Parliament to introduce export controls on cyber surveillance technologies, where the systems in question may violate human rights. Notably, the ‘definition of “exporter” [will be updated] to include natural persons and researchers involved in dual-use technology transfer.’

This raises another issue. Universities are now seen as a focal point for exercising soft power. The production of new knowledge itself is an endeavour laden with political implications, but this isn’t the only purpose of modern universities. Scholarly institutions also act as ambassadors for their nation, attracting international talent, cultivating new ties, relationships and status in a globalised world.

This issue is particularly salient when considering relations with China. There has been some recent anxiety about the links between UK academic bodies and Chinese military-linked manufacturers and universities. At a time when academic research is becoming increasingly enmeshed with ‘highest-bidder markets’ and geopolitical fragmentation, legacies of funding dependency on geopolitical rivals pose a challenge to the pursuit of knowledge.

In a world where the funding of cutting-edge AI research is concentrated in the hands of a small group of organisations, challenging such organisations at any point poses a huge risk for the careers of ambitious AI researchers. This was recently shown at Google, where senior AI ethics researchers Timnit Gebru and Margaret Mitchell were fired for producing work that criticised Google’s own natural language generators.

Although universities proclaim themselves paragons of free expression, they are not free from such dilemmas either. Sino-British academic relations may yet prove to be a particularly problematic pressure point, especially if pressures to restrict certain uses of IP come into conflict with the priorities of funders and researchers abroad.

Ultimately, however, these incentives are arguably driven by wider shifts in critical economic markets, meaning that relative economic power is a crucial concern when we consider the UK’s future powers and influence. Questions that might seem like they belong to industrial strategy or trade policy rather than foreign policy per se are crucial here, even if tomorrow’s Review neglects them. 

AI and Economic Competitiveness

In the same way that electricity and the steam engine were preconditions of economic production in previous centuries, quality data is going to transform economic production in this one.

As a result, we may end up in a situation wherein most nations will depend on a handful of AI-based goods and services. Those who command the supply chains of these products will possess immense negotiating capital when it comes to navigating global economic and diplomatic problems. In this sense, economic strength serves as a political tool.

Semiconductors may illustrate this economic dilemma in our technosphere. Although not strictly AI themselves, semiconductor chips are important for increasingly complex computational and AI-based tasks, where higher compute power is necessary. Georgetown’s Center for Security and Emerging Technology recently highlighted that because state-of-the-art AI chips are necessary for much of the modern economy, acquiring a competitive advantage by reducing manufacturing production costs is of utmost geopolitical importance.

In this vein, one of the key geopolitical stories of 2021 will focus on Taiwan, China, the US and semiconductors. With Taiwan being a key player in the chip market, any further tension between the democratic state and China could pose a huge threat to economic supply chains worldwide. Even the European Union has recently stated its ambition to become a semiconductor superpower.

Producing these key technologies will yield a crucial diplomatic edge – the UK could follow suit in the AI space. After all, less dependence on countries with antithetical approaches to liberal democracy means that there is a greater chance that trade can shift global attitudes on issues such as human rights.

Britain is already tightening its grip here. The Competition and Markets Authority is currently examining the security implications of NVIDIA’s proposed acquisition of semiconductor giant Arm. They seem keen not to have another DeepMind scenario on their hands. With the National Security and Investment Bill set to pass shortly in Britain, and both the German and Japanese governments passing laws to allow further scrutiny of tech-based foreign takeovers, cross-border M&A is set to become increasingly politicised.

Beyond the national security standpoint, the impact that AI will have on the future of work, culture and social relationships may well be unfathomable. This will open up a series of new paths for different societies to embark upon. Will we adopt a shorter working week, as John Maynard Keynes predicted, or will we further blend our work and personal lives? Will we accept the exorbitant levels of inequality that would come about from general purpose technologies being developed by a small elite, or will new mechanisms be put in place to distribute these windfalls in a more egalitarian manner?

Some governments, at both national and local levels, are already considering these questions: IAIDL has written the AI strategy for Stockton, California, a city that has put itself on the global policy map by trialling a basic income (or ‘freedom dividend’). Altogether, this new age presents an opportunity for Britain to reimagine the structure of its economy in a highly non-deterministic manner.

Britain is currently a world leader in both AI talent and research. That means the skills and ideas are ready to be cultivated for the nation to construct a new economy. This is important because, for decades, policy-makers have not quite got to grips with what Britain’s comparative advantage looks like. The omnipresent nature of AI across the digital economy means that this can be more clearly articulated. Will it come in the realm of synthetic biology, as Hogarth and Benaich suggest? Or will computer vision revolutionise the way we approach modern problems? While many path-dependent legacies remain, the AI space offers a new canvas on which nations can rescript much of their economic production function. Whether the Integrated Review takes on these wider questions or not, they will remain inescapable for anyone asking what the UK’s international role should be.

Conclusions

The Integrated Review comes at a time when Britain is at a crossroads. Brexit acts as a clarifying moment for the nation. The coronavirus pandemic has radically disrupted our economic relationships and hopes for prosperity. Throw climate-systems breakdown into the mix and questions of international strategy begin to radically exceed the boundaries of ‘Security, Defence, Development and Foreign Policy’: and, indeed, AI.

And yet, Britain is not the only state facing serious existential questions. Joe Biden’s new administration faces the task of repairing a damaged and uncertain America. The European Union is dealing with its own crisis of legitimacy, following its comparatively slow vaccine roll-out. Both wish to use AI to underpin their global leadership plans, albeit each with its own approach to China.

It is this divergence which is important to understanding Britain’s own position on AI. Fundamentally, any assessment of the UK’s AI strategy has to be situated within the context of international relations. Its approach to dealing with telecommunications companies such as Huawei is largely dependent upon what positions the US and Europe take up. Britain will be able to act as a thought leader, but its lack of economic and military heft compared to the global superpowers means that it will have to learn to take on a supporting role. It remains to be seen whether the Integrated Review will reflect this.
