
Simon Davidson


Research Fellow
Room number: IT-209

email: simon.davidson@manchester.ac.uk

Tel: +44 161 275 3547

Biography

Simon joined the APT group in the School of Computer Science at The University of Manchester in 2009 as a research fellow.

He received a BEng(Hons) in Electronic Engineering from the University of Sheffield in 1989 and completed a PhD in the same department in 1999. His research topic was the development of neural network architectures for memory and symbolic computation.

In 1990 he became a research assistant in the Electronic Engineering department at Sheffield, working on a number of ESPRIT-funded integrated circuit design projects. This led in 1993 to his joining SGS-Thomson (formerly INMOS) in Bristol to contribute to the CHAMELEON project - a 64-bit superscalar microprocessor aimed at MPEG decoding and multimedia applications. As part of the core processor pipeline design team, he created pipeline micro-architecture for an out-of-order superscalar pipeline in VHDL, performing simulation, synthesis and place-and-route using (mostly) Cadence and Synopsys tools.

In 1996 he joined Hewlett-Packard's Integrated Circuit Business Division (ICBD) in Grenoble, France, working with customers in several European design centres on the backend design (place-and-route and layout) of a range of chips for HP peripherals.

Early in 2000 he returned to the UK to join ARC International - a configurable microprocessor IP design company in north London - as principal engineer in the core processor group. Notable roles included technical lead for the development of their new dense instruction set - ARCompact - and co-technical lead on their deeper pipeline processor line, beginning with the ARC700, which provided support for precise exceptions and virtual memory.

By 2005 Simon had decided to return to academia to pursue more fundamental research in neural computation, picking up many unfinished threads from his doctoral work. After a time in the Computer Science Department at The University of York, he moved in 2009 to the APT group in the Computer Science Department at The University of Manchester to join the SpiNNaker project and its follow-up project - BIMPA - to develop neural architectures for the massively parallel SpiNNaker machine.

Research Interests

  • Neural Memory

Real brains seem to store knowledge by changing the synaptic connections between individual neurons. Individual memories are distributed across thousands or even millions of neurons, making them resilient to noise and damage. This is very different from how today's computers store information - can we learn anything from the biology?

My own work on artificial neural memory has focused on a coding technique called N-of-M coding, in which the neural population is divided into groups of size M. To represent a given memory, approximately N of the neurons in each group are set to 'fire' while the rest stay silent, so that each memory is represented by a unique code. Connections between firing neurons are modified in such a way as to store the new memory and to allow it to be recalled from a triggering stimulus at a later time.
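
To make the scheme concrete, below is a minimal Python sketch of an N-of-M associative memory. The Willshaw-style binary synaptic matrix, the N-winners-take-all recall step, and all parameter values are illustrative assumptions for this example, not the published design.

    import numpy as np

    rng = np.random.default_rng(0)

    # Assumed parameters: 8 groups of M = 32 neurons, N = 4 firing per group.
    M, N, GROUPS = 32, 4, 8
    SIZE = GROUPS * M

    def random_nofm_code(rng):
        """Draw an N-of-M code: in each group of M neurons, exactly N fire."""
        code = np.zeros(SIZE, dtype=bool)
        for g in range(GROUPS):
            code[g * M + rng.choice(M, size=N, replace=False)] = True
        return code

    def store(W, cue, memory):
        """Hebbian storage: switch on every synapse joining a firing cue
        neuron to a firing memory neuron (a binary correlation matrix)."""
        W |= np.outer(memory, cue)

    def recall(W, cue):
        """Recall: count active inputs per neuron, then keep the N most
        strongly driven neurons in each group (N-winners-take-all)."""
        drive = (W & cue).sum(axis=1)
        out = np.zeros(SIZE, dtype=bool)
        for g in range(GROUPS):
            seg = drive[g * M:(g + 1) * M]
            out[g * M + np.argsort(seg)[-N:]] = True
        return out

    # Store a few cue -> memory associations, then recall one from its cue.
    W = np.zeros((SIZE, SIZE), dtype=bool)
    pairs = [(random_nofm_code(rng), random_nofm_code(rng)) for _ in range(5)]
    for cue, memory in pairs:
        store(W, cue, memory)
    cue0, memory0 = pairs[0]
    print("recalled intact:", np.array_equal(recall(W, cue0), memory0))

Because every stored memory is itself a valid N-of-M code, the recall step can always re-impose the coding constraint, which is what gives the scheme its resilience to noisy or partial cues.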

I'm particularly interested in developing this basic technique to capture the notion of data assimilation: modifying the internal coding for a memory over time to make it easier to store. The neural memory combines reliable short-term memory with optimised long-term memory in the same synaptic connections, with novel learning processes used to assimilate information from short- to long-term storage only when required. This organising principle is intended to protect valuable information in long-term memory from corruption by short-lived data, while keeping the benefits of co-locating the two types of data in a single network.

  • Neuro-Symbolic Computation

One of the great and longstanding challenges in computer science is the development of a general framework to harness the power of many processors working together on a single problem. The brain seems to have a good solution for this: the average human brain consists of tens of billions of neurons that, individually, can send and receive only very simple signals. Yet when working in concert they can represent, store and reason with all of the knowledge that we acquire over the course of a lifetime. The fact that a brain can respond in less than a second to a complex visual stimulus is testament to how well it can use its parallel resources.

My own interest in this area is in understanding how we can use a large parallel array of simple neuron-like computing elements to represent and reason with the complex data structures that occur in everyday computer processing. Representing knowledge as symbol structures allows a computer of limited processing power to work in a world of unbounded size (like the real world!) by focusing on only a small part of the problem at any given time. Symbols can be used to represent each part of a problem, and the symbol structure captures information on how the parts are combined (or could be combined). Manipulating the symbol structure allows us to re-arrange the parts 'in the mind' to find new and useful combinations, and to make sense of input that may correspond to familiar parts in unexpected combinations.

This work, like the neural memory work described above, is based around N-of-M codes. Here, N-of-M codes are interpreted as symbols that can either be combined with others to form new structures or opened up to reveal their own internal structure. Manipulations are performed by applying control codes (also N-of-M based) from other networks, and several networks can work together, each sending control codes to the others.
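
As a loose illustration of combining and unpacking such codes, the Python sketch below binds two N-of-M symbol codes through fixed random projections, re-imposes the N-of-M constraint with an N-winners-take-all step, and then 'opens up' the composite by matching it against candidate bindings. The projection matrices, the matching-based decode, and all names and parameters are assumptions made for the example; the published work's binding networks and control-code mechanics are not reproduced here.

    import numpy as np

    rng = np.random.default_rng(1)

    # Same assumed N-of-M parameters as the memory sketch above.
    M, N, GROUPS = 32, 4, 8
    SIZE = GROUPS * M

    def random_nofm_code(rng):
        """Draw an N-of-M code: in each group of M neurons, exactly N fire."""
        code = np.zeros(SIZE, dtype=bool)
        for g in range(GROUPS):
            code[g * M + rng.choice(M, size=N, replace=False)] = True
        return code

    def n_winners(drive):
        """Re-impose the N-of-M constraint: keep the N strongest per group."""
        out = np.zeros(SIZE, dtype=bool)
        for g in range(GROUPS):
            out[g * M + np.argsort(drive[g * M:(g + 1) * M])[-N:]] = True
        return out

    # Fixed random projections stand in for the binding networks; a distinct
    # matrix per role keeps bind(a, b) different from bind(b, a).
    P_left = rng.normal(size=(SIZE, SIZE))
    P_right = rng.normal(size=(SIZE, SIZE))

    def bind(a, b):
        """Combine two symbol codes into a new, valid N-of-M code."""
        return n_winners(P_left @ a + P_right @ b)

    # A tiny symbol vocabulary, and a composite structure built from it...
    vocab = {name: random_nofm_code(rng) for name in ("cat", "sat", "mat", "dog")}
    composite = bind(vocab["cat"], vocab["mat"])

    # ...'opened up' by matching the composite against candidate bindings.
    candidates = [(l, r) for l in vocab for r in vocab]
    decoded = max(candidates,
                  key=lambda p: np.sum(composite & bind(vocab[p[0]], vocab[p[1]])))
    print("decoded structure:", decoded)   # expected: ('cat', 'mat')

The design point the sketch tries to capture is that the composite is itself a well-formed N-of-M code, so it can be stored, recalled and combined again just like any atomic symbol.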

This work has several aims. The principal aim is to address the grand challenge of how to perform general-purpose computation efficiently on a massively parallel array of simple processors, with applications across computer science. In the area of intelligent systems and robotics, a symbol-processing neural system will be of interest for developing the advanced intelligence needed to drive flexible, robust robots and autonomous vehicles. It's also possible that by solving this computer architecture problem we may gain some insight into how the brain works - it has faced the same set of challenges and may have solved them in the same way!