Sir Keith Burnett: what role should physicists play in AI safety?

A computer chip with "AI" printed on it.

Today and tomorrow, a mansion on London’s leafy outskirts is, for the second time in its history, the focus of an international endeavour to harness technological advances for global security. Where Alan Turing and his team of code breakers once struggled to defeat fascism during the Second World War, Bletchley Park is now hosting the world’s first major AI Safety Summit.

The need for international accord could scarcely be more obvious. In times of war and conflict, as Mark Twain said, “A lie can travel halfway around the world while the truth is still putting on its shoes.” But now it isn’t just a lie. Misinformation has grown complex: deepfakes and the manipulation of images and data carry live risks. We worry that we have opened our own equivalent of Pandora’s box.

But there is so much more than politics at stake. AI brings with it many potential benefits for humanity. It could help us deliver healthcare and improved agriculture, understand climate change and enhance our cities. It could be a force for a better and more prosperous society, although we must ask how, and for whom, this might all work.

A changed world

There are now very few of us who haven’t tried one AI app or another and been shocked by how it responds to the tasks we set it. Most of us have also heard how it may take jobs away from people, and wondered whether it will be our own role that is gone with the wind of AI.

This is not the first new technology to be greeted with a mix of celebration and fear. Physics-powered technologies have driven successive economic revolutions – mechanical, electrical, atomic – which have brought great opportunity and prosperity, but also significant risks, change and disruption – some of which I have experienced first-hand.

So the phenomenon of AI is rightly exercising the minds of scientists and non-scientists alike, here in the UK and around the world. In the US, Dr Fei-Fei Li – sometimes called “the godmother of AI” – has long championed human-centred computing and the use of AI for public benefit. Closer to home, the Ada Lovelace Institute is working with industry, public institutions and academia to ensure the ethical implications of AI are properly considered.

But physicists have a special contribution to make to this moment of change. For while AI is grounded in computer science, physics has been central to its development and applications. For decades, physicists have been at the vanguard of using AI to improve models and make discoveries – often in very sensitive areas, from defence to materials science to nuclear fission and fusion.

So physicists have real-world experience of some of the crucial questions the Summit is considering. It’s all very well talking about algorithmic transparency, explicability, and bias in the abstract, but when that algorithm is helping you design a fusion reactor, it focuses minds!

In a recent survey of 2086 adults in the UK, the polling firm YouGov found that 74% felt that “preventing AI from quickly reaching superhuman capabilities” was an important goal of AI policy, with only 13% disagreeing. But the fears and biases associated with AI often lie in what we as human beings have provided as the sources of information to be amplified, and the dilemmas about its application have as much to do with the political or commercial instincts of humanity as with technological change.

The truth is that AI amplifies possibilities and speeds processes for good and ill. It anticipates based on what it knows of what has gone before, and so we are now faced with both the potential to do enormous good and the potential to replicate our own social failings at previously unimaginable speed and scale.

Where we think we might help is in bringing the physics community, in all its expertise and diversity, together – putting political, financial and national interests aside to create change that is good for all. But we also want to listen and learn as the questions and possibilities change. This year at the Institute of Physics, we’re preparing to embark on a new “impact project”, working with our membership and the wider community to explore the opportunities and risks of AI in a physics context – to explore cross-domain opportunities, support policymakers and ultimately leverage this technology for positive impact.

What next?

I think we have to admit that the path ahead is uncertain, to say the least. How should we respond to the changes ahead when we know so little about what their impact will be? We need to equip our children, and the wider world, to build with the new technology. And I am genuinely concerned that we give all our children access to this new world.

I do think that as many people as possible need to see how the latest technology will affect their lives, and so I believe giving access to the new techniques to as many of our citizens as possible is crucial. We have by no means settled on what or who will win this race, but the clear danger is that it becomes a domain for those who can afford it. Only if we make this new technology available to the widest range of people, along with any training that is needed, will we give the best AI future to our children.

Bletchley was a place where experts helped win a war. Our aim is to help secure the peace, and its benefits, for all. We trust that the big technology companies and politicians gathering at Bletchley – and those who will pick up the hard work of development once the politicians return to their desks – plan on doing the same. We are ready to play our part in a technological revolution that is bound to affect all of our lives.

This article was originally published in Physics World.