A global pause in AI development will not work and will play into the hands of rogue regimes and organisations, IT professionals have warned.

Attempts at a world consensus on holding back AI will produce an ‘asymmetrical pause’, in which bad actors seize the advantage, said BCS, the Chartered Institute for IT. An open letter published in March by the Future of Life Institute called for an immediate halt to the training of systems ‘more powerful than GPT-4’ for at least six months; the pause, it said, must be public, verifiable and include all key actors.

Is it possible to ‘pause’ AI?

Even if a go-slow on AI could be achieved, it would still harm humanity by delaying advances in medical diagnosis, climate science, and productivity, BCS said. In a policy position paper, BCS, the professional body for computing, argues that putting ethical guardrails around AI ‘as it grows up’ will be far better than a dangerously uneven pause.

To make sure humanity gets the benefits of AI as early and safely as possible, it should be clearly labelled, backed by public education, supported by professional standards, and developed within ‘AI sandboxes’ – safe spaces for early testing.

In the policy paper, entitled ‘Helping AI grow up – without pressing pause’, BCS said a halt in AI development would:

  • Delay AI research in areas fundamental to society’s survival, such as climate change and health.
  • Be ‘asymmetric’: it is not possible to ensure that all governments and organisations would respect such an agreement, so bad actors would gain an advantage in the AI race.

The paper concluded that AI can instead continue to ‘grow up’ safely if:

  • Organisations are more transparent about their development and deployment of AI, comply with data privacy regulations, and submit to audit of processes and systems.
  • AI systems are developed by communities of competent, ethical and inclusive information technology professionals, supported by professional registration.
  • There are clear health warnings, labelling and opportunities to give informed consent around AI products and services.
  • AI is supported by a programme of increased emphasis on computing education and adult digital skills, driven by government and industry, to help the public understand and use AI.
  • AI is tested safely within established regulatory ‘sandboxes’, as proposed in the white paper on AI regulation published by the UK government in March this year.

Rashik Parmar MBE, Chief Executive of BCS, The Chartered Institute for IT, said:

“We can’t be certain every country and company with the power to develop AI would obey a pause, when the rewards for breaking an embargo are so rich. So, instead of trying to smother AI, only to see it revived in secret by bad actors, we need to help it grow up in a responsible way. That means working hard together to agree standards of transparency and ethical guardrails designed and deployed by AI professionals.

“We’ve got a generational opportunity to make something that, pretty soon, can solve a huge number of the world’s problems and be a trusted partner in our life and work; let’s take it.”

About the authors

The paper, ‘Helping AI grow up – without pressing pause’, was edited by Adam Leon Smith FBCS, Chair of BCS’ Fellows Technical Advisory Group (F-TAG) and CTO of Dragonfly, and by Dr Sam de Silva, Chair of BCS’ Law Specialist Group and a Partner with CMS.

Read the BCS policy position paper now.

A PDF version of this paper is available for download.