I have professional experience with digital LCOS microdisplays, VAN and TN liquid crystal modes, colorimetry, machine vision, the human visual system, and robotics. I also have an extensive background in software control of hardware systems and measurement equipment.
I began programming computers and embedded software in the mid-1990s and have used a wide range of languages and platforms over the years. My current software projects are primarily C# and C++ applications, along with PIC microcontroller code written in C or assembly.
I developed gamma and white point calibration for multiple generations of LCOS microdisplay products and built Windows applications that automate the calibration process.
Visual Studio, C#, CA-210 color analyzer, and CL-500A spectrophotometer
PIC I2C and SPI
I wrote C code implementing a Microchip PIC24FJ64GB004 as an I2C and SPI master with a WinUSB connection to the host PC. I defined a USB packet protocol encoding the ASIC register and SPI flash I/O, and programmed a Windows C# application to control the system board.
Microchip MPLABX, Microchip Code Configurator, Visual Studio, C, C#
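To give a flavor of that kind of protocol: register and flash transactions can be framed as fixed-size USB packets with a command byte, an address, and a payload. The layout below is a hypothetical illustration (command codes, field sizes, and names are all invented), not the actual protocol used on the board.

```c
#include <stdint.h>
#include <string.h>
#include <assert.h>

/* Hypothetical 64-byte USB packet framing I2C register and SPI flash
 * transactions. Illustrative layout only -- not the real protocol. */
enum cmd { CMD_I2C_WRITE = 0x01, CMD_I2C_READ = 0x02,
           CMD_SPI_WRITE = 0x03, CMD_SPI_READ = 0x04 };

typedef struct {
    uint8_t  cmd;        /* one of enum cmd */
    uint8_t  dev_addr;   /* I2C slave address (unused for SPI) */
    uint16_t reg_addr;   /* ASIC register or flash offset */
    uint8_t  len;        /* payload bytes, up to 59 */
    uint8_t  data[59];   /* payload, zero-padded */
} usb_pkt;               /* 64 bytes total */

/* Pack an I2C register write into a raw 64-byte transfer buffer. */
size_t pack_i2c_write(uint8_t buf[64], uint8_t dev, uint16_t reg,
                      const uint8_t *payload, uint8_t len)
{
    assert(len <= 59);
    buf[0] = CMD_I2C_WRITE;
    buf[1] = dev;
    buf[2] = (uint8_t)(reg & 0xFF);      /* little-endian address */
    buf[3] = (uint8_t)(reg >> 8);
    buf[4] = len;
    memcpy(&buf[5], payload, len);
    memset(&buf[5 + len], 0, 59 - len);  /* zero-pad to a full packet */
    return 64;
}
```

Fixed-size packets keep the firmware's USB endpoint handling simple: the PIC parses the command byte and dispatches to its I2C or SPI driver, and the C# host side mirrors the same struct layout.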
CES 2013 Concept Product
Starting two months before CES, I learned Objective-C and created an iOS app to demo a concept iPad dock with an integrated pico projector. The iPad drove the projector as an external monitor while the rear camera enabled real-time interaction with spheres cascading from the top of the screen. The most challenging part of the application was developing a robust machine vision algorithm to detect any user under the unknown lighting and background conditions of a hotel demo suite.
To achieve near real-time interaction, I pushed many of the machine vision algorithms onto the iPad GPU. An automated calibration process mapped the projected image into camera coordinates. I used a mix of OpenCV and GPUImage for machine vision, the cocos2d game engine, and Box2D for physics.
Xcode, Objective-C, OpenCV, GPUImage, cocos2d, Box2D
Informatics MSc Thesis – University of Edinburgh (2004)
Saliency maps provide a biologically plausible model of visual attention based on parallel preattentive features. Past research with saliency maps has often aimed to find regions of interest in a scene under various conditions or top-down effects. Recent publications suggest learning the significance of preattentive features from visual scanpaths. Our research implements a computational model of saliency maps based on dynamical systems and then proposes a method of recovering feature weights from points of focal attention. Performance of the learning model is evaluated by comparing learnt focal attention to the training data. Finally, suggestions are made for improving the learning system in future research.
C, Java, FFTW, Video4Linux
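In the standard formulation of this kind of model (the notation here is illustrative, not copied from the thesis), the saliency map is a weighted combination of preattentive feature maps:

```latex
S(x, y) = \sum_{i} w_i \, F_i(x, y)
```

where each $F_i$ is a feature map such as intensity, colour, or orientation contrast, and the weights $w_i$ are what the learning stage recovers from recorded points of focal attention.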