Steve Furber

Professor
Room number: IT208
email: steve.furber@manchester.ac.uk
Tel.: +44 161 275 6129
Other contact details
New
Honorary DSc from Queen's University, Belfast (2015)
BCS Lovelace Medal 2014 and Distinguished Fellow
Science Council 100 leading UK practising scientists

Biography
Steve Furber is the ICL Professor of Computer Engineering in the School of Computer Science at the University of Manchester. He received his B.A. degree in Mathematics in 1974 and his Ph.D. in Aerodynamics in 1980 from the University of Cambridge, England. From 1981 to 1990 he worked in the hardware development group within the R&D department at Acorn Computers Ltd, and was a principal designer of the BBC Microcomputer and the ARM 32-bit RISC microprocessor, both of which earned Acorn Computers a Queen's Award for Technology. Upon moving to the University of Manchester in 1990 he established the Amulet research group which has interests in asynchronous logic design and power-efficient computing, and which merged with the Parallel Architectures and Languages group in 2000 to form the Advanced Processor Technologies group. From 2003 to 2008 the APT group was supported by an EPSRC Portfolio Partnership Award.
Steve served as Head of the Department of Computer Science in the Victoria University of Manchester from 2001 up to the merger with UMIST in 2004.
Fellowships and Awards

Steve is a Fellow of the Royal Society, the Royal Academy of Engineering, the British Computer Society, the Institution of Engineering and Technology and the IEEE, a member of Academia Europaea and a Chartered Engineer. In 2003 he was awarded a Royal Academy of Engineering Silver Medal for "an outstanding and demonstrated personal contribution to British engineering, which has led to market exploitation". He held a Royal Society Wolfson Research Merit Award from 2004 to 2009. In 2007 he was awarded the IET Faraday Medal, "...the most prestigious of the IET Achievement Medals." Steve was awarded a CBE in the 2008 New Year Honours list "for services to computer science". He was a 2010 Millennium Technology Prize Laureate. In 2012 he was made a Fellow of the Computer History Museum, Mountain View, CA, USA. He has Honorary DScs from the University of Edinburgh (2010), Anglia Ruskin University (2012) and Queen's University, Belfast (2015). He was a recipient of a 2013 IEEE Computer Society Computer Pioneer Award. He received, with Sophie Wilson, the 2013 Economist Innovation Award for Computing and Telecommunications for co-creating the ARM. In January 2014 he was included in the Science Council list of 100 leading UK practising scientists. He was made a Distinguished Fellow of the BCS in 2014, and also received the 2014 BCS Lovelace Medal.
Public and professional service
Steve has served as a member and chair of the UKCRC executive, and as Vice-President Learned Society and Knowledge Services and a Trustee of the BCS.
In 2002 Steve served as Specialist Adviser to the House of Lords Science and Technology Select Committee inquiry into 'Innovations in Microprocessing', which resulted in the report "Chips for Everything: Britain's opportunities in a key global market".
Steve chaired the Royal Society study into Computing in Schools, which resulted in the report Shut down or restart? in January 2012.
Steve is chair of sub-panel B11 Computer Science and Informatics for REF2014.
Technology Exploitation
In December 2003 Silistix Limited was formed to commercialise self-timed Network-on-Chip technology developed within the APT group. Steve was an observer on the Silistix Board and chaired the Technical Advisory Committee. The company ceased trading in 2009.
Cogniscience Limited is a University company established to exploit IP arising from the APT research into neural network systems.
Steve's book
An overview of the world's leading 32-bit embedded processor: ARM System-on-Chip Architecture
Building a Common Vision for the UK Microelectronic Design Research Community
This community-building initiative began with a workshop hosted by the IEE at Savoy Place, London, on 15 November 2004. A series of further workshops and meetings led to a Network Grant proposal funded by EPSRC, which culminated in a set of Grand Challenge proposals. These were subsequently signposted by EPSRC, and have evolved into the current eFutures network.
Research interests
- Neural systems engineering
The classical computational paradigm performs impressive feats of calculation but fails in some of the simplest tasks that we humans undertake with ease and from a very early age. Biological neural networks are proof that there are alternative computational architectures that can outperform our fastest systems in tasks such as face recognition, speech processing, and the use of natural language. Brains are complex highly-parallel systems that employ imperfect and slow (though exceedingly power-efficient) components in asynchronous dynamical configurations to carry out sophisticated information processing functions. Note the word asynchronous in the previous sentence! Many aspects of brain function are little-understood, but we hope that our deep understanding of the engineering of complex asynchronous systems may be of use in the Grand Challenge of understanding the architecture of brain and mind.
At present our major activity here is the SpiNNaker project, where we are building a massively-parallel chip multiprocessor system for modelling large systems of spiking neurons in real time. The ultimate goal here is to build a machine that incorporates a million ARM processors linked together by a communications system that can achieve the very high levels of connectivity observed in biological neural systems. Such a machine would be capable of modelling a billion neurons in real time (which is still only around 1% of the human brain). A toy sketch of this kind of neuron update is given at the end of this entry.
We were also involved in the COLAMN project, which was a large EPSRC-funded investigation into novel computing architectures based on the laminar microcircuitry of the neocortex, and the NanoCMOS project where SpiNNaker was used to research the impact of increasing process variability on multiprocessor Systems-on-Chip.
You can find a simulation of the effects of evolution on the performance of a simple neural network here.
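To make "modelling spiking neurons in real time" concrete, the sketch below steps a population of leaky integrate-and-fire neurons forward in one-millisecond ticks. It is a minimal NumPy illustration, not SpiNNaker code or the SpiNNaker software stack, and all parameter values and the random input current are illustrative assumptions; a real model would take its input from synaptic events routed between neurons.

    # Minimal leaky integrate-and-fire (LIF) sketch: the kind of per-timestep
    # neuron update that a large spiking-neuron machine repeats, in parallel,
    # for very large populations. Parameters and input are illustrative only.
    import numpy as np

    def simulate_lif(n_neurons=1000, n_steps=1000, dt=1.0):
        """Simulate n_neurons LIF neurons for n_steps steps of dt milliseconds."""
        tau_m = 20.0       # membrane time constant (ms)
        v_rest = -65.0     # resting potential (mV)
        v_reset = -70.0    # potential after a spike (mV)
        v_thresh = -50.0   # firing threshold (mV)

        v = np.full(n_neurons, v_rest)            # membrane potentials
        spike_counts = np.zeros(n_neurons, int)   # spikes fired per neuron
        rng = np.random.default_rng(42)

        for _ in range(n_steps):
            # Random input current stands in for synaptic input from other neurons.
            i_in = rng.normal(loc=1.0, scale=0.5, size=n_neurons)
            # Leaky integration: decay towards rest, driven by the input current.
            v += dt * (-(v - v_rest) + 20.0 * i_in) / tau_m
            # Neurons crossing threshold emit a spike and are reset.
            fired = v >= v_thresh
            spike_counts[fired] += 1
            v[fired] = v_reset

        return spike_counts

    if __name__ == "__main__":
        counts = simulate_lif()   # 1000 steps of 1 ms = 1 second of model time
        print(f"mean firing rate: {counts.mean():.1f} spikes/second")

On the real machine the corresponding update runs on many ARM cores in parallel, with spikes carried as small packets by the communications fabric rather than read from a shared array.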
- On-chip interconnect and GALS
As Systems-on-Chip become ever more complex, the problem of linking together the various modules that make up the complete system - the processor, memory, peripherals, signal processing hardware, and so on - becomes ever more challenging. The solution for simpler SoCs was to use buses, but high-end SoCs already require hierarchies of buses to meet their performance targets. Ultimately this will lead to on-chip interconnect that is better viewed as a network than as a bus. Networks-on-Chip are at their most flexible if they are self-timed, and a GALS - Globally Asynchronous Locally Synchronous - architecture emerges that allows each module to operate with its own independent clock (or, for the more adventurous, no clock at all!). Our past work focussed on the GALS network fabric (e.g. CHAIN) and on Quality-of-Service issues. Most of this activity moved out into Silistix Ltd, although there is a lot of NoC work (including use of Silistix tools) within the SpiNNaker project described above.
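To make "self-timed" concrete, here is a toy simulation of the four-phase request/acknowledge handshake that underlies this style of interconnect, with a sender and a receiver each running at its own unrelated rate. It is a sketch only - plain Python threads, arbitrary timing values, and no relation to the CHAIN fabric or Silistix tooling - but it shows how data can cross between independently timed domains without a shared clock.

    # Toy four-phase (return-to-zero) request/acknowledge handshake.
    # Each side runs at its own rate; data moves only when both sides
    # have completed the handshake. Illustration only.
    import threading
    import time

    req = threading.Event()   # request wire: sender -> receiver
    ack = threading.Event()   # acknowledge wire: receiver -> sender
    data = [None]             # single-place "bundled data" channel
    received = []

    def sender(values, period):
        for v in values:
            data[0] = v              # drive the data first (bundled-data style)
            req.set()                # phase 1: raise request
            while not ack.is_set():
                time.sleep(0)        # wait for acknowledge
            req.clear()              # phase 3: drop request
            while ack.is_set():
                time.sleep(0)        # wait for acknowledge to return to zero
            time.sleep(period)       # sender's own, independent "clock"

    def receiver(n, period):
        for _ in range(n):
            while not req.is_set():
                time.sleep(0)        # wait for a request
            received.append(data[0]) # latch the data
            ack.set()                # phase 2: raise acknowledge
            while req.is_set():
                time.sleep(0)
            ack.clear()              # phase 4: drop acknowledge
            time.sleep(period)       # receiver's own, independent "clock"

    values = list(range(8))
    t_tx = threading.Thread(target=sender, args=(values, 0.003))
    t_rx = threading.Thread(target=receiver, args=(len(values), 0.005))
    t_tx.start(); t_rx.start()
    t_tx.join(); t_rx.join()
    print("received:", received)     # data arrives intact despite unrelated rates

The key property is that each transfer completes at whatever rate both ends can sustain: slow one side down and the other simply waits, with no shared clock to distribute and no clock-domain-crossing hazards to manage.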
- Asynchronous systems
Since 1990 I have been building large-scale asynchronous VLSI systems, most notably the Amulet processor series detailed elsewhere on these web pages.
Research funding

The design and construction of the SpiNNaker machine was funded by EPSRC. The ongoing support and software development, with provision of internet access to the machine, is being supported by the EU through the ICT Flagship Human Brain Project. Research using the machine is being supported by the European Research Council under the European Union's Seventh Framework Programme (FP/2007-2013) ERC Grant Agreement no. 320689 BIMPC - "Biologically-Inspired Massively-Parallel Computation". The research has also received support from ARM Ltd, and from Samsung through their GRO programme. We are grateful to all these funding bodies and companies for their support.
Publications
Details of Steve's publications can be found on the APT group publication pages.