Training artificial intelligences to identify faces or digitise text involves thousands or millions of iterations of a two-stage process known as back-propagation, but a new approach could save time, energy and computing power
Artificial intelligence is growing ever more capable of handling increasingly complex tasks, but it requires vast amounts of computing power to develop. A more efficient technique could save up to half the time, energy and computing power needed to train an AI model.
Deep learning models are typically composed of a huge grid of artificial neurons linked by “weights” – adjustable numbers that take an input signal and pass on a scaled output – which represent the synapses linking real neurons.
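To make both ideas concrete – the grid of weighted connections described above, and the two-stage back-propagation loop mentioned in the opening – here is a minimal sketch in Python with NumPy. Everything in it (the toy data, the single layer of weights, the learning rate) is an illustrative assumption, not the method from the research described in this article.

```python
# A minimal sketch, not the researchers' code: a tiny one-layer network.
# The array W holds the "weights" standing in for synapses, and each
# training iteration runs the two stages of back-propagation: a forward
# pass to compute an output and its error, then a backward pass that
# nudges every weight in the direction that shrinks the error.
import numpy as np

rng = np.random.default_rng(0)

# Toy data (made up here): 4 examples, 3 input features, 2 output values.
x = rng.normal(size=(4, 3))
y_true = rng.normal(size=(4, 2))

W = rng.normal(size=(3, 2)) * 0.1  # weights linking input to output neurons
lr = 0.1                           # learning rate (illustrative choice)

for step in range(100):
    # Stage 1 (forward pass): each output neuron sums its weighted inputs.
    y_pred = x @ W
    error = y_pred - y_true
    loss = (error ** 2).mean()

    # Stage 2 (backward pass): propagate the error back through the
    # weights and adjust each one to reduce the loss slightly.
    grad_W = x.T @ error / len(x)
    W -= lr * grad_W

print(f"final loss: {loss:.4f}")
```

A real model repeats this loop thousands or millions of times over far larger grids of weights, which is why the two-stage process dominates the time and energy cost of training.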