We aim to use 3D stereo display to visualize a data bank of characteristics common to spiral galaxies. We plan to use an immersive environment at the Industrial Technology Centre's Virtual Reality Centre (VRC), i.e. a "half-cave" in which the user walks around and interacts with the data. Our approach will use advanced computer algorithms, supplied by nQube, working in combination with the human visual system to select colours optimized for emphasizing relationships between galaxy characteristics.
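As a minimal sketch of the colour-selection idea (not the nQube algorithms, which are not described here), one galaxy characteristic can be mapped onto a perceptually uniform colormap so that differences in the data read as comparable differences in colour; the column name used below is hypothetical.

```python
# Minimal sketch: map one galaxy characteristic onto a perceptually uniform
# colormap (viridis) so equal data differences appear as similar colour
# differences. The 'arm pitch angle' measurement is a hypothetical example.
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.colors as mcolors

def characteristic_to_rgb(values, cmap_name="viridis"):
    """Normalize a 1-D array of galaxy measurements and return RGB triples."""
    values = np.asarray(values, dtype=float)
    norm = mcolors.Normalize(vmin=np.nanmin(values), vmax=np.nanmax(values))
    return plt.get_cmap(cmap_name)(norm(values))[:, :3]

pitch_angles = np.array([8.0, 12.5, 17.0, 24.0, 31.5])  # degrees (illustrative)
print(characteristic_to_rgb(pitch_angles))
```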
Autonomous surveillance based on unmanned robotic networks is in demand for search & rescue in hazardous environments and for continuous remote monitoring. For autonomous robotic surveillance, the robots should be able to gather information and analyze patterns in object behavior. Effective surveillance rests on detecting point and pattern changes from normal behavior and thereby assessing the situation. The robotic network can then reconfigure to deliberate for additional information or to optimize surveillance performance.
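A generic way to detect a point change from normal behavior is a one-sided CUSUM test on a scalar behaviour statistic; the sketch below is illustrative only, and the statistic, drift, and threshold are assumptions rather than this project's actual method.

```python
# One-sided CUSUM change detector on a scalar "behaviour statistic"
# (e.g. an observed object's speed). Drift and threshold are illustrative.
def cusum_detect(samples, baseline_mean, drift=0.5, threshold=4.0):
    """Return the index at which an upward change is first flagged, or None."""
    s = 0.0
    for i, x in enumerate(samples):
        s = max(0.0, s + (x - baseline_mean - drift))
        if s > threshold:
            return i
    return None

# Speeds hover around 1.0, then jump to ~4.0 at index 5.
speeds = [1.1, 0.9, 1.0, 1.2, 0.8, 4.1, 3.9, 4.2]
print(cusum_detect(speeds, baseline_mean=1.0))  # -> 6, shortly after the jump
```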
There are two fundamental wave modes that travel through the earth: p-waves and s-waves. Currently, most seismic exploration uses p-waves only. P-waves that convert into s-waves at a discontinuity in the material parameters of the earth are called converted waves. Converted waves offer the ability to detect fractures in the earth, which are linked to areas of high oil and gas production. Reverse time migration is a tool for processing seismic data that handles multipathing and has no dip limitation.
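For reference, a standard zero-lag cross-correlation imaging condition used in reverse time migration is shown below; the notation is illustrative and not taken from this project's workflow.

```latex
% Zero-lag cross-correlation imaging condition commonly used in reverse time
% migration: S(x,t) is the source wavefield extrapolated forward in time,
% R(x,t) is the receiver wavefield extrapolated backward in time,
% and I(x) is the migrated image.
\[
  I(\mathbf{x}) \;=\; \sum_{t=0}^{T} S(\mathbf{x},t)\, R(\mathbf{x},t)
\]
```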
Autonomic computing systems are capable of self-configuring, self-healing, self-optimizing and self-protecting: they constantly monitor the current state of the system, determine whether and how that state must change, and finally take appropriate action to bring the system to the desired state. The intern will build on previous work in autonomic computing by analyzing the impact of making changes to a control loop at runtime.
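A minimal sketch of such a control loop (monitor, analyze, plan, execute) is given below; the sensed metric, threshold, and corrective action are hypothetical placeholders, not the project's actual managed system.

```python
# Minimal autonomic control loop sketch: monitor -> analyze -> plan -> execute.
# The CPU-load metric, target, and scaling action are hypothetical.
import random
import time

def monitor():
    """Sense the current state (here: a fake CPU-load reading)."""
    return {"cpu_load": random.uniform(0.0, 1.0)}

def analyze(state, target=0.7):
    """Decide whether the state must change."""
    return state["cpu_load"] > target

def plan(state):
    """Decide how the state should change."""
    return {"action": "add_replica", "reason": f"load={state['cpu_load']:.2f}"}

def execute(change):
    """Apply the change to move the system toward the desired state."""
    print("executing:", change)

def control_loop(iterations=3, period_s=0.1):
    for _ in range(iterations):
        state = monitor()
        if analyze(state):
            execute(plan(state))
        time.sleep(period_s)

control_loop()
```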
This internship proposes to create a support vector machine (SVM) based classifier for image aesthetics, ranking images on a scale from 1 to 7. An existing classifier has been developed that classifies professional-quality photographs. To be of use to Vidigami, a classification method needs to be developed that can provide an aesthetics score for average-quality photographs.
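A small sketch of an SVM-based aesthetics classifier is shown below using scikit-learn; the feature vectors are random placeholders, whereas a real system would first extract image features (colour, composition, sharpness, etc.).

```python
# Sketch of an SVM classifier predicting aesthetics scores 1..7 from
# precomputed image features. Features and labels here are random placeholders.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, 16))    # 200 images, 16 features each
y_train = rng.integers(1, 8, size=200)  # aesthetics scores 1..7

model = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
model.fit(X_train, y_train)

X_new = rng.normal(size=(3, 16))
print(model.predict(X_new))             # predicted scores in 1..7
```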
Incorporating a head-mounted display (HMD) into a high-performance sport goggle involves many technical challenges. Optically, the microdisplay must be magnified to a comfortable size for viewing, and the image must be shown in an area of the goggles that does not detract from the main field of view. This means that the magnifying assembly must be as small, lightweight, and unobtrusive as possible, but still maintain excellent visual fidelity. The focus of this research will be to produce the smallest, most cost-effective optical device that fulfills these requirements.
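As a rough sizing guide only (not this project's actual optical design), the first-order angular magnification of a simple magnifier relates the apparent size of the microdisplay to the focal length of the magnifying assembly:

```latex
% First-order angular magnification of a simple magnifier with the virtual
% image at infinity: D = 250 mm is the standard near-point distance and
% f is the focal length of the magnifying assembly. The actual HMD optics
% will be more complex; this is only a rule of thumb.
\[
  M \;=\; \frac{D}{f} \;=\; \frac{250\ \mathrm{mm}}{f}
\]
```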
Ontario is implementing smart grid technologies in its electrical grid. Smart meters have been installed, and the utilities will be collecting data about electricity usage and providing time-of-use choices to enable peak load shaving. The implementation of distributed generation (DG) and electric vehicles will create new challenges, such as islanded operation of micro-grids and the use of storage offered by PHEVs. Data communication between the utility and the user will play a key role in the implementation of smart grid technologies.
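To illustrate the time-of-use idea, the sketch below picks the cheapest hours for charging a PHEV under an hourly price schedule, shifting load off the peak; the prices and charging requirement are hypothetical examples, not Ontario's actual rates.

```python
# Sketch of time-of-use (TOU) aware charging: given hourly prices, choose the
# cheapest hours to charge a PHEV, shifting load away from the peak period.
def cheapest_charging_hours(hourly_price, hours_needed):
    """Return the hours (0-23) with the lowest prices."""
    ranked = sorted(range(len(hourly_price)), key=lambda h: hourly_price[h])
    return sorted(ranked[:hours_needed])

# Illustrative profile: off-peak overnight, mid-peak shoulders, on-peak daytime.
prices = [0.08] * 7 + [0.12] * 4 + [0.17] * 6 + [0.12] * 4 + [0.08] * 3
print(cheapest_charging_hours(prices, hours_needed=4))  # -> [0, 1, 2, 3]
```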
Intelligent visual surveillance refers to the use of context-rich visual sensors, i.e. video cameras, for the purpose of surveillance. Surveillance systems can be deployed in diverse environments, such as airports, department stores, office buildings, homes, conference rooms, parking lots, and hotels, for diverse purposes, such as ambient and personal security, information recording, and personal identification.
This project will develop a method of calibrating multiple cameras together. For example, in a two-camera scenario, each camera views the shape of a single laser line. Once the cameras and laser have been passed over a target object, each camera has collected its own image. It is desired to analyze a single image rather than one from each camera, so the data from the two cameras must be merged and the cameras calibrated to a common reference frame.
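One common way to merge the two cameras' measurements is to estimate the rigid transform between them from corresponding points on the laser line (the Kabsch/Procrustes method) and map everything into one camera's frame; the sketch below uses synthetic correspondences and may differ from the project's actual calibration procedure.

```python
# Estimate the rigid transform between two cameras from corresponding 3-D
# points and express camera-2 data in camera-1's frame (Kabsch/Procrustes).
import numpy as np

def rigid_transform(src, dst):
    """Find R, t minimizing ||R @ src_i + t - dst_i|| over corresponding points."""
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)
    U, _, Vt = np.linalg.svd(src_c.T @ dst_c)
    d = np.sign(np.linalg.det(Vt.T @ U.T))       # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    return R, t

# Synthetic check: rotate/translate some "laser line" points and recover the pose.
rng = np.random.default_rng(1)
pts_cam2 = rng.normal(size=(30, 3))
angle = np.deg2rad(20.0)
R_true = np.array([[np.cos(angle), -np.sin(angle), 0],
                   [np.sin(angle),  np.cos(angle), 0],
                   [0, 0, 1]])
t_true = np.array([0.1, -0.2, 0.5])
pts_cam1 = pts_cam2 @ R_true.T + t_true

R_est, t_est = rigid_transform(pts_cam2, pts_cam1)
merged = pts_cam2 @ R_est.T + t_est              # camera-2 points in camera-1 frame
print(np.allclose(merged, pts_cam1, atol=1e-8))  # True
```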
The WaferBoard is a 'waffle iron' for prototyping electronic systems. In the envisioned concept, the user simply places components ('dough') in the WaferBoard and closes the cover. The WaferBoard then senses the component contacts, recognizes the components, and interconnects them ('cooks them'). The prototype ('waffle') is now ready to be brought up and run. The WaferBoard will have saved the PCB development process weeks or months in time to market and tens to hundreds of thousands of dollars (or more).