This research focuses on the development and implementation of advanced software algorithms for the automated analysis of skin lesion images. The algorithms will be designed to run on mobile computing devices such as smartphones and tablets, and could be used by general users as well as doctors for computer-assisted screening and diagnosis of skin cancer. For users, the program will automatically compute a risk score for a skin lesion based on previously diagnosed cases; for doctors, it will use machine learning to assist in making the best diagnostic decision.
Mining companies need to sort rocks by mineral type, and MineSense is researching and developing sorting equipment for this task. The intern will model the problem using probabilistic modelling techniques from artificial intelligence, applied to mineral processing with electromagnetic sensors. They will create computer algorithms that automatically interpret sensor data and intelligently control diverters, which will identify good rocks and divert them into a keep pile while leaving the waste in a discard pile.
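As a rough illustration of the probabilistic approach, the sketch below classifies a single electromagnetic-sensor reading as ore or waste via Bayes' rule. All distributions, parameters, and thresholds here are hypothetical stand-ins, not MineSense's actual models.

```python
import math

# Hypothetical Gaussian models of one electromagnetic-sensor reading,
# as if learned from labelled rocks (all numbers are illustrative).
ORE_MEAN, ORE_STD = 8.0, 1.5      # ore-bearing ("good") rocks
WASTE_MEAN, WASTE_STD = 3.0, 1.0  # waste rocks
PRIOR_ORE = 0.3                   # assumed fraction of ore in the feed

def gaussian_pdf(x, mean, std):
    return math.exp(-0.5 * ((x - mean) / std) ** 2) / (std * math.sqrt(2 * math.pi))

def p_ore_given_reading(x):
    """Posterior probability that the rock is ore, via Bayes' rule."""
    ore = gaussian_pdf(x, ORE_MEAN, ORE_STD) * PRIOR_ORE
    waste = gaussian_pdf(x, WASTE_MEAN, WASTE_STD) * (1 - PRIOR_ORE)
    return ore / (ore + waste)

def divert(x, threshold=0.5):
    """Fire the diverter (keep pile) when the posterior exceeds the threshold."""
    return p_ore_given_reading(x) >= threshold

print(divert(7.5), divert(2.8))  # strong reading kept, weak reading discarded
```

In practice the sensor produces multi-dimensional data and the models would be richer, but the same decision structure applies: a probabilistic score per rock, thresholded into a keep/discard control signal.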
Embedded systems are small, low-power computing units designed for specific applications, often with real-time constraints. They can be found in cell phones, portable game consoles, automobiles, home appliances, and digital cameras, where they perform dedicated tasks. The objective of this research is to participate in the development of the embedded software algorithms and tools that will facilitate the embedding of new software solutions into the Synopsys embedded vision system.
In this project, we will design a set of communication protocols dedicated to linear wireless sensor networks, which have promising real-world applications in environmental and infrastructure monitoring. The protocols will be implemented and evaluated using the multimode, enhanced wireless sensor nodes developed in the host academic supervisor's lab.
Continuing the work of the first internship, this project aims to unify the GSN and TCL argumentation-structure description languages and to adapt Dempster-Shafer theory to the unified language. This theory is the best suited for integrating the evaluations of an argumentation structure's nodes into the structure itself. Our objective is to determine sound rules for accurately propagating the evaluations of child nodes up to the root node.
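As an illustration of the kind of propagation rule under study, the sketch below applies Dempster's rule of combination to two child evaluations over the simple frame {G, not-G} for a goal node G. The encoding (focal sets 'g', 'ng', and 'theta' for uncertainty) and the numbers are assumptions for illustration, not the project's final propagation rules.

```python
def combine(m1, m2):
    """Dempster's rule: conjunctive combination with conflict renormalisation."""
    # Mass assigned to contradictory pairs (one source supports G, the other refutes it).
    k = m1['g'] * m2['ng'] + m1['ng'] * m2['g']
    if k == 1.0:
        raise ValueError("total conflict; combination undefined")
    m = {
        'g':  m1['g'] * m2['g'] + m1['g'] * m2['theta'] + m1['theta'] * m2['g'],
        'ng': m1['ng'] * m2['ng'] + m1['ng'] * m2['theta'] + m1['theta'] * m2['ng'],
        'theta': m1['theta'] * m2['theta'],
    }
    # Renormalise so the combined masses again sum to 1.
    return {focal: v / (1 - k) for focal, v in m.items()}

# Two child-node evaluations as mass functions (illustrative values).
child1 = {'g': 0.6, 'ng': 0.1, 'theta': 0.3}
child2 = {'g': 0.7, 'ng': 0.0, 'theta': 0.3}
root = combine(child1, child2)
```

Two concordant children reinforce each other: the combined mass on 'g' exceeds either child's alone, which is the qualitative behaviour one wants from a child-to-root propagation rule.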
Social media websites (e.g. Facebook, Twitter) are widely used by the general public, and also by businesses to promote their products and services. Due to the nature of these websites, content marketed by businesses does not always reach its intended audience. This can be improved by providing businesses with useful information, such as the times their audience is using social media or who their most active users are. This information is not readily available, but it can be derived by analyzing the data available on social media websites.
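As a toy illustration of deriving such information, the sketch below counts hypothetical post timestamps per hour to find when an audience is most active; the data and formats are invented, not drawn from any real platform's API.

```python
from collections import Counter
from datetime import datetime

# Hypothetical post timestamps from a business's audience;
# counting posts per hour reveals when that audience is most active.
posts = ["2015-03-02 09:15", "2015-03-02 20:40", "2015-03-03 20:05",
         "2015-03-04 21:10", "2015-03-04 20:55"]
hours = Counter(datetime.strptime(p, "%Y-%m-%d %H:%M").hour for p in posts)
peak_hour, count = hours.most_common(1)[0]
print(peak_hour)  # → 20 (the 8 p.m. hour dominates in this sample)
```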
Traditional static authentication systems have a fundamental deficiency: they assume the validated user remains present for the length of the session. Continuous authentication algorithms instead periodically validate the identity of the user during the entire session, relying on information that can be automatically extracted from the user, such as biometrics and behaviour patterns. A probabilistic approach can naturally model the noise and latent variables present in such data, and the output of such models is a confidence value.
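A minimal sketch of the idea, assuming per-event likelihoods (e.g. from a keystroke-timing model) are already available: the session-level confidence is updated with Bayes' rule after each observed behavioural event. All numbers, thresholds, and the likelihood stream are illustrative assumptions.

```python
def update_confidence(prior, likelihood_genuine, likelihood_impostor):
    """One Bayesian update of the confidence that the genuine user is present."""
    num = prior * likelihood_genuine
    return num / (num + (1 - prior) * likelihood_impostor)

# Hypothetical per-event likelihoods under the genuine-user and impostor models.
events = [(0.8, 0.3), (0.2, 0.9), (0.1, 0.9), (0.1, 0.9)]
confidence = 0.95  # start of session: user just passed static authentication
for lg, li in events:
    confidence = update_confidence(confidence, lg, li)
    if confidence < 0.5:  # re-challenge or lock when confidence drops too low
        break
```

The first event matches the genuine-user model and raises confidence; the later events look impostor-like and drive it below the lock threshold, which is exactly the behaviour a continuous scheme needs.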
At WorkSafeBC, an overwhelming amount of data is being produced, particularly in logs of conversations (usually over the phone) between people who have an active claim with WorkSafeBC and the various practitioners with whom they have regular contact. These records are transcribed into text, and the volume of information is far beyond what any analyst could read and understand without the help of analysis and visualization tools.
The rise of Big Data, social networking, and mobile interactions, coupled with an accelerating increase in the amount of structured and unstructured information enabled by cloud-based technologies, is forcing organizations to focus on the information that is most relevant, value-generating, and risk-related. Problems arise when we make decisions based on "good enough" metrics (e.g. means) instead of proper statistical methods (e.g. t-tests, ANOVA). The issue is that everyday business workers need to understand statistics in a cloud-centered world.
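To illustrate the difference, the sketch below compares a raw difference in means with Welch's t statistic on two small hypothetical samples: the gap in means looks meaningful, but the t statistic shows it is well within sampling noise. The data are invented for illustration.

```python
from math import sqrt
from statistics import mean, stdev

def welch_t(a, b):
    """Welch's two-sample t statistic: how many standard errors apart the means are."""
    se = sqrt(stdev(a) ** 2 / len(a) + stdev(b) ** 2 / len(b))
    return (mean(a) - mean(b)) / se

# Hypothetical conversion rates (%) from two marketing campaigns.
campaign_a = [5.1, 4.8, 6.0, 5.5, 4.2]
campaign_b = [4.9, 5.2, 4.1, 5.8, 4.5]
print(round(mean(campaign_a) - mean(campaign_b), 2))  # the mean gap looks real
print(round(welch_t(campaign_a, campaign_b), 2))      # but |t| is far below ~2
```

A decision based on the means alone would favour campaign A; the t statistic (about 0.5, far short of the rough significance threshold of 2) says the evidence does not support that conclusion.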