This website is no longer updated.
Currently I work on the TeraFlux project as a Research Associate, investigating parallel programming models with data-flow and transactional memory support for multi-/many-core architectures.
An off-the-shelf computer nowadays ships with a multi-core chip (dual-core, quad-core, and so on). To exploit such architectures, programmers have to switch to writing parallel applications. Parallel programming is notoriously complex, which makes automatic parallelization tools appealing as a way to lift this burden from the programmer. Although such tools produce adequate results, they are not a panacea.
I am interested in the engineering and optimization of automatic parallelization tools, as well as language constructs that make parallelization more accessible to the programmer. See my research interests section for further information.
During my PhD I was part of the iTLS (intelligent Thread-Level Speculation) project, which brought together the areas of Machine Learning and Thread-Level Speculation to facilitate automatic run-time parallelization of sequential applications.
I spent most of 2012 as a Research Assistant intern at Oracle Labs (formerly Sun Microsystems Labs) in the California Bay Area, as part of the Simulation & Optimization group.