In this tutorial we introduce the SLAMBench benchmarking framework for robotics vision. Our goal is to ensure that attendees can install and run the framework on their own machines during the tutorial. No previous knowledge of computer vision is required. Invited speakers will talk about their experience with SLAMBench.
The increased processing capability of mobile and embedded platforms is enabling more and more ambitious machine vision applications. Google’s Project Tango, Movidius, with their Myriad processors, and Qualcomm, with the Vuforia framework, are examples of a dramatic level of activity in the entertainment, automotive and robotics domains. Vision applications combine image acquisition, computationally intensive algorithms and high-resolution rendering, all in real time, whilst the mobile domain places them in power/energy-constrained systems. This makes mobile vision systems an ideal platform on which to evaluate reconfigurable systems, embedded processors, coprocessors, design automation methods, compilers, runtime systems and tools.
Real-time 3D scene understanding is essentially characterised as the Simultaneous Localisation And Mapping (SLAM) problem: SLAM systems aim to localise a camera moving through an unknown environment and build a 3D map of that environment simultaneously, in real time. The SLAMBench benchmarking framework was built for this purpose. It is a publicly available software framework that provides a starting point for quantitative, comparable and validatable experimental research into trade-offs between performance, accuracy and energy consumption in SLAM algorithms. It provides several implementations of the same kernels in popular languages and frameworks, e.g. C++, OpenMP, OpenCL and CUDA, and includes techniques and tools to explore accuracy and performance trade-offs. Industry and academia have embraced SLAMBench as a means to design and evaluate new computing systems on the applications of tomorrow. Meanwhile, computer vision scientists use the framework to improve the state of the art in SLAM research, keeping the framework up to date with recent improvements in 3D robot vision pipelines.
is a Research Associate at the University of Edinburgh, investigating hardware mapping in the context of computer vision for embedded systems. He previously completed a PhD in Computer Science at Université Pierre et Marie Curie in 2013, in collaboration with Kalray, a fabless semiconductor company. His research interests include dataflow modelling and parallelisation.
Luigi Nardi is a postdoctoral Research Associate at Imperial College London in the Software Performance Optimisation group. Luigi's primary role is the co-design of high-performance, low-power computer vision systems, where performance, power and accuracy are part of the same optimisation space. Prior to his current position, Luigi earned his Ph.D. in computer science at Université Pierre et Marie Curie and was a permanent researcher leading the high-performance computing effort at Murex S.A.S.
Harry Wagstaff is working with the PAMELA project at the University of Edinburgh, investigating simulation in the context of computer vision workloads. His research interests include high-speed simulation and architecture description languages.