Industry Experience

VP - Research and Development
Star Navigation Systems Group Ltd.
2013

Designed and began developing a military version of the ISMS product by re-architecting the commercial ISMS system to use SQLite and a MIL-STD-1553 interface. Led new-technology development studies for the next-generation ISMS system.

Software Systems Lead, EV9 Program Manager
COM DEV Ltd
2007-2013

Managed and developed Ground Support Equipment (GSE) and satellite payload operations center software for several satellite missions.

MSS L&SE Systems Engineering
MDA
2000-2007

Performed several engineering activities on the Mobile Servicing System (Canadarm2). This included, but was not limited to, writing software, performing hardware/software integration, supporting real-time operations, leading technical projects, supporting astronaut training, and performing flight software safety assessments.

Education

University of Guelph
PhD -- Computing and Information Science

A Method for Removing and Replacing Lighting Effects using Image Sequences

Image sequence analysis has been used extensively for tasks such as the detection of moving objects, tracking, segmentation, automation, and generation of 3D models. There has been much research on normalizing lighting, removing shadows, and enhancing images and their contrast. Some image sequence algorithms try to reduce variation in lighting conditions using simple background subtraction, but using the content of the image sequences themselves to remove and replace areas degraded by these lighting conditions has not been attempted.
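For illustration only, a minimal Python sketch of the simple background subtraction such algorithms rely on, assuming grayscale frames from a fixed camera (the function name and threshold value are illustrative, not taken from the thesis):

    import numpy as np

    def subtract_background(frame, background, threshold=30):
        # Flag pixels that differ from a fixed reference background by
        # more than a threshold; the mask marks candidate foreground or
        # lighting-change regions. Inputs are uint8 grayscale arrays.
        diff = np.abs(frame.astype(np.int16) - background.astype(np.int16))
        return diff > threshold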

This thesis presents a new method for automated image reconstruction given a same-scene (fixed-camera) set of degraded input images. The method selects appropriate technique(s) based on the reconstruction goals and the characteristics of the degraded content, and creates a reconstructed image of the scene for each technique selected. It then ranks the reconstructed images to determine which one is most likely to represent the actual scene. A series of experiments comparing the method's ranked images to the best captured images illustrates the strengths of this new method.
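As a hedged sketch of the reconstruct-then-rank idea, the median reconstruction and gradient-energy score below are generic stand-ins, not the techniques or ranking measure defined in the thesis:

    import numpy as np

    def median_reconstruct(frames):
        # Per-pixel temporal median over a fixed-camera image stack;
        # suppresses degradations appearing in only a minority of frames.
        return np.median(np.stack(frames).astype(np.float32), axis=0)

    def gradient_energy(image):
        # A generic sharpness proxy standing in for a ranking score.
        gy, gx = np.gradient(image)
        return float(np.mean(gx ** 2 + gy ** 2))

    def best_reconstruction(candidates):
        # Rank candidate reconstructions; keep the highest-scoring one.
        return max(candidates, key=gradient_energy)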

University of Guelph
MSc -- Computing and Information Science

Spherical Panoramic Video "The Space Ball"

Techniques for synthesizing panoramic scenes are widespread. Such a scene can be automatically created from multiple displaced images by aligning and overlapping them using an image registration technique. The ability to generate panoramic scenes has many applications, including the generation of virtual reality backgrounds, model-based video compression, and object recognition. These techniques--and consequently their associated applications--share the restriction that all scenes are limited to a 360-degree view of the horizontal plane at the particular moment in time the images were taken.
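A minimal sketch of such feature-based registration and overlap, using present-day OpenCV rather than the thesis's implementation (function names and parameters are illustrative):

    import cv2
    import numpy as np

    def stitch_pair(img_a, img_b):
        # Register img_b onto img_a with ORB features, then overlap the
        # warped image with the reference image on a wider canvas.
        orb = cv2.ORB_create()
        kp_a, des_a = orb.detectAndCompute(img_a, None)
        kp_b, des_b = orb.detectAndCompute(img_b, None)
        matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
        matches = sorted(matcher.match(des_b, des_a), key=lambda m: m.distance)
        src = np.float32([kp_b[m.queryIdx].pt for m in matches[:50]]).reshape(-1, 1, 2)
        dst = np.float32([kp_a[m.trainIdx].pt for m in matches[:50]]).reshape(-1, 1, 2)
        H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
        h, w = img_a.shape[:2]
        canvas = cv2.warpPerspective(img_b, H, (w * 2, h))
        canvas[0:h, 0:w] = img_a  # reference image takes precedence in the overlap
        return canvas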

Until recently, there has been little motivation to develop techniques for the presentation of complete spherical views in real time--scenes that present the entire potential visible field of view, through time. With the advent of space exploration and its associated micro-gravity environments, "up" and "down" are relative terms and locally fixed points of reference are difficult to come by. It may be useful to rethink how video is captured and presented to a user working in such an environment, employing extended notions of what a panorama is.

This thesis presents a system that allows a user to view and pan/tilt through arbitrary angles of view, including elevation and declination, in real time. The view is generated from a network of seven synchronized CCD video cameras whose video outputs are selectively "stitched" together to provide a smooth transition between different camera fields of view. In this way, the user can smoothly pan/tilt through all the fields of view generated by the system. All video processing is done in software--there are no moving parts.
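How a view might be selected from such a camera network is sketched below; this is a hypothetical illustration (the camera-axis mapping and nearest-camera rule are assumptions, whereas the actual system stitches neighbouring views rather than switching between them):

    import numpy as np

    def view_vector(pan, tilt):
        # Unit view direction from pan/tilt angles given in radians.
        return np.array([np.cos(tilt) * np.cos(pan),
                         np.cos(tilt) * np.sin(pan),
                         np.sin(tilt)])

    def nearest_camera(pan, tilt, camera_axes):
        # camera_axes maps a camera id to its unit optical-axis vector.
        # Returns the camera best aligned with the requested view; a
        # stitched system would blend this camera with its neighbours.
        view = view_vector(pan, tilt)
        return max(camera_axes, key=lambda cam: float(np.dot(camera_axes[cam], view)))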

Space Ball Poster

Ryerson Polytechnic University
BSc -- Applied Computer Science

Autonomous Navigation for Indoor Mobile Robotics

Navigation of autonomous robotic devices has traditionally been accomplished by means of a fixed-path navigation system. The use of such vehicles in unstructured indoor environments has led to a requirement for more sensory input. The objective of these experiments was to identify and solve the problems encountered when maneuvering a robotic device around an unstructured environment. This paper discusses the use of, and the problems encountered with, video imaging and sonar sensors for collision avoidance, and microswitches and digital inputs for collision detection, to successfully navigate a tele-operated and autonomous robot around an unstructured indoor environment. Local position in the environment is also a major issue for autonomous vehicles. "Where am I?" is a question an autonomous robot must be able to answer to navigate successfully. This paper also discusses why an internal local coordinate system alone is not accurate enough for dead-reckoning purposes and why external sources are needed.
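To illustrate why dead reckoning alone drifts, here is a minimal differential-drive odometry sketch (a generic model, not the robot described in the paper):

    import math

    class DeadReckoning:
        # Integrate wheel odometry into a local (x, y, heading) estimate.
        # Wheel slip and measurement error accumulate without bound, which
        # is why external position references are needed.
        def __init__(self):
            self.x = self.y = self.theta = 0.0

        def update(self, d_left, d_right, wheel_base):
            # d_left / d_right: wheel travel since the last update.
            d_center = (d_left + d_right) / 2.0
            self.theta += (d_right - d_left) / wheel_base
            self.x += d_center * math.cos(self.theta)
            self.y += d_center * math.sin(self.theta)
            return self.x, self.y, self.theta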