During surgery, it is important to keep track of what is happening with the patient, the steps the operating staff take, and any unforeseen events that occur. Together, these constitute the surgical workflow. Tracking the workflow is essential to achieving a better and safer surgery. In the past, computational tools have been developed to track each step the surgeon takes and to segment the surgery into its separate phases. However, adverse events have not been tracked.
Using web-crawling technology in coordination with state-of-the-art machine learning techniques, the project aims to mine useful, structured information about the world's suppliers from the web. Recent advances in artificial intelligence have increased the viability of such autonomous systems for extracting coherent information from arbitrary human-produced content. By leveraging these technologies, our goal is to build improved supplier discovery and recommendation systems.
The intern will work on applying new advances from the field of Machine Learning to models which make predictions about time-series data. The models have the desirable property of modeling the distribution of outcomes in a way that we can sample from, allowing us to account for uncertainty in the model's predictions. By making more accurate predictions with more accurate gauges of uncertainty, Electronica will be able to construct portfolios which give more desirable risk-adjusted returns to investors.
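The sampling idea described above can be illustrated with a minimal sketch: a toy forecaster fits a Gaussian to historical one-step changes (a stand-in for a learned predictive distribution), draws Monte Carlo samples of the next value, and reads off empirical quantiles as an uncertainty band. All names and numbers here are illustrative, not Electronica's actual models.

```python
import random
import statistics

def fit_gaussian(history):
    """Fit a simple Gaussian to historical one-step changes (a toy
    stand-in for a learned predictive distribution)."""
    diffs = [b - a for a, b in zip(history, history[1:])]
    return statistics.mean(diffs), statistics.stdev(diffs)

def sample_forecasts(last_value, mu, sigma, n_samples=10_000, seed=0):
    """Draw Monte Carlo samples of the next value, so downstream code
    can reason about the full distribution, not just a point estimate."""
    rng = random.Random(seed)
    return [last_value + rng.gauss(mu, sigma) for _ in range(n_samples)]

def quantile(samples, q):
    """Empirical quantile of the sampled outcomes."""
    s = sorted(samples)
    return s[int(q * (len(s) - 1))]

history = [100.0, 101.0, 99.5, 102.0, 103.0, 102.5]
mu, sigma = fit_gaussian(history)
samples = sample_forecasts(history[-1], mu, sigma)
low, high = quantile(samples, 0.05), quantile(samples, 0.95)
```

The interval `[low, high]` is exactly the kind of uncertainty gauge that feeds into risk-adjusted portfolio construction.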
The goal of the research is to implement different data mining algorithms in order to improve the prediction of a user's electricity consumption. The research will be dedicated to improving existing algorithms or implementing new ones to raise prediction accuracy. In addition to applying prediction algorithms, different data pre-processing methods will be used. The research will include supervised and unsupervised modelling of the dataset using the R programming language.
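The workflow above (pre-process, predict, evaluate) can be sketched in a few lines. The project itself uses R; this sketch is in Python for brevity, with z-score normalisation as the pre-processing step, a moving-average baseline as the predictor, and mean absolute error as the accuracy measure. The consumption figures are made up.

```python
import statistics

def zscore(values):
    """A standard pre-processing step: z-score normalisation."""
    mu, sigma = statistics.mean(values), statistics.stdev(values)
    return [(v - mu) / sigma for v in values]

def moving_average_forecast(series, window=3):
    """Baseline predictor: forecast each point as the mean of the
    previous `window` observations."""
    return [statistics.mean(series[i - window:i])
            for i in range(window, len(series))]

def mae(actual, predicted):
    """Mean absolute error, a simple prediction-accuracy measure."""
    return statistics.mean(abs(a - p) for a, p in zip(actual, predicted))

consumption = [12.0, 13.5, 12.8, 14.1, 13.9, 15.2, 14.8, 15.5]  # kWh, fabricated
preds = moving_average_forecast(consumption)
error = mae(consumption[3:], preds)
```

Any improved algorithm from the research would be judged the same way: does its `mae` beat the baseline's?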
Users must decide which websites to trust and which to avoid. How can users know if a website is truly what it claims to be? This is a pivotal issue. When attackers can convince users to trust their sites, through phishing or other strategies, user security and privacy are easily compromised, malware can be downloaded, and infrastructure can be undermined.
Our plan is to conduct user studies exploring how users understand browser-presented certificate information.
The primary goal of this project is to explore a variety of new and existing Natural Language Processing (NLP) techniques to improve the performance, and further the automation, of Knote's text analysis software, specifically for entity recognition. Entity recognition is the process of identifying all spans of words in a collection of documents that belong to a given entity type, such as proper names or chemical compounds.
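The shape of the entity-recognition task can be sketched with a toy rule-based recogniser. Production systems (including, presumably, Knote's) use trained sequence models rather than regular expressions, but the input/output contract is the same: find spans in text and label them with an entity type. The patterns and labels below are illustrative only.

```python
import re

# Toy rule-based recogniser. Real NER uses trained sequence models,
# but the task shape is identical: find spans, assign a type label.
PATTERNS = {
    "PERSON_OR_ORG": re.compile(r"\b(?:[A-Z][a-z]+ )+[A-Z][a-z]+\b"),
    "CHEMICAL": re.compile(r"\b(?:[A-Z][a-z]?\d*){2,}\b"),
}

def recognize_entities(text):
    """Return (span, label) pairs for every pattern match in `text`."""
    entities = []
    for label, pattern in PATTERNS.items():
        for m in pattern.finditer(text):
            entities.append((m.group(), label))
    return entities

ents = recognize_entities("Marie Curie studied the properties of H2O and NaCl.")
```

The example text yields a person span and two chemical-formula spans; a learned model would generalise far beyond what these two regexes can capture.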
In this project, we will apply machine learning to perform image style classification. We will build a system that uses image style classification to increase user engagement in an eCommerce platform setting. We will study the effects of user preferences for particular image styles on their engagement with the platform.
Image style classification is the task of categorizing an image based on attributes such as composition style (e.g., minimal or geometric), atmosphere (e.g., hazy or sunny), or colour (e.g., pastel or bright).
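The colour attribute mentioned above can be made concrete with a toy classifier over raw RGB pixels: pastel images tend to be light but desaturated, while bright images are light and vivid. The thresholds and sample "images" below are fabricated for illustration; a real system would learn such boundaries from labelled data (e.g., with a convolutional network).

```python
def classify_colour_style(pixels):
    """Toy colour-style classifier over RGB pixels in [0, 255].
    Thresholds are illustrative, not learned."""
    lightness = sum(sum(p) / 3 for p in pixels) / len(pixels)
    saturation = sum(max(p) - min(p) for p in pixels) / len(pixels)
    if lightness > 150 and saturation < 60:
        return "pastel"   # light but washed-out colours
    if lightness > 150:
        return "bright"   # light and vivid colours
    return "dark"

# Tiny fabricated "images": uniform 2x2 pixel grids.
pastel_img = [(230, 210, 220)] * 4   # light, low-saturation pixels
bright_img = [(255, 200, 40)] * 4    # light, vivid pixels
```

Predicted style labels like these are the features whose effect on user engagement the project would then measure.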
This project proposes to explore and implement a method of storing and retrieving data relating to genetic variation across a population of individuals. Due to the large amount of genetic information each person possesses, such a database requires special attention to minimize the amount of data stored and to create efficient methods of accessing the data. This work will research and test different strategies to build a compact data store that will return results quickly. This data store will be incorporated into the PhenoTips software provided by Gene42 Inc.
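One way to make the compactness requirement concrete is bit-packing: a diploid genotype call takes one of only four values, so it fits in 2 bits instead of a byte or a string. The sketch below packs genotypes per variant site and indexes them by genomic position for fast retrieval. This is a generic illustration of the storage problem, not the PhenoTips/Gene42 design.

```python
class VariantStore:
    """Sketch of a compact genotype store: each sample's call at a site
    is packed into 2 bits (0/0, 0/1, 1/1, or missing), a 4x saving over
    one byte per call. Sites are indexed by (chromosome, position)."""

    CODES = {"0/0": 0, "0/1": 1, "1/1": 2, "./.": 3}
    DECODE = {v: k for k, v in CODES.items()}

    def __init__(self):
        self._sites = {}  # (chrom, pos) -> (packed bytearray, n_samples)

    def put(self, chrom, pos, genotypes):
        packed = bytearray((len(genotypes) + 3) // 4)  # 4 calls per byte
        for i, gt in enumerate(genotypes):
            packed[i // 4] |= self.CODES[gt] << (2 * (i % 4))
        self._sites[(chrom, pos)] = (packed, len(genotypes))

    def get(self, chrom, pos):
        packed, n = self._sites[(chrom, pos)]
        return [self.DECODE[(packed[i // 4] >> (2 * (i % 4))) & 3]
                for i in range(n)]

store = VariantStore()
store.put("chr1", 12345, ["0/0", "0/1", "1/1", "./.", "0/1"])
```

A production store would add run-length or reference-based compression on top, since most samples match the reference at most sites.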
Kobo is an online e-book retailer that recommends future purchases to its user base. One difficulty recommendation systems face is known as the "cold-user" problem: when we know very little about a user's preferences (for example, because they are new to the platform), we have no basis for recommendations. The goal of this project is to develop an interactive application that can elicit such preferences from users about whom we have little information, and that can also help improve recommendations for power users.
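A simple intuition behind preference elicitation is that not all questions are equally informative: asking a new user about items that existing users agree on reveals little, while asking about polarising items reveals a lot. The sketch below ranks items by rating variance as a stand-in for more principled active-learning criteria; the catalogue and ratings are fabricated.

```python
import statistics

def most_informative_items(ratings, k=2):
    """Pick the items whose ratings vary most across existing users.
    Variance is a toy proxy for the expected information gain used by
    real active-learning elicitation strategies."""
    variance_by_item = {item: statistics.pvariance(scores)
                        for item, scores in ratings.items()}
    ranked = sorted(variance_by_item, key=variance_by_item.get, reverse=True)
    return ranked[:k]

# item -> ratings from existing users (fabricated)
ratings = {
    "literary_fiction": [5, 1, 5, 1],   # polarising: very informative
    "thriller":         [4, 4, 5, 4],   # broadly liked: uninformative
    "romance":          [5, 2, 4, 1],
}
ask_about = most_informative_items(ratings)
```

An interactive elicitation application would present `ask_about` to the cold user first, then refine its recommendations from the answers.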
This project will develop and apply machine learning techniques to predict the valuation of properties in Nova Scotia. The techniques will help Property Valuation Services Corporation (PVSC) assessors value properties more efficiently and accurately. The ultimate goal is to help PVSC reduce the number of annual appeals, which are costly to process. It will also reduce the need to send assessors directly to property locations; instead, they will use machine learning techniques to predict property values more accurately.
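At its simplest, automated valuation is a regression problem: learn a mapping from property attributes to assessed value. The sketch below fits a one-feature ordinary-least-squares model (value against floor area) by the closed-form solution; the figures are fabricated, and a real assessment model would use many attributes and richer learners.

```python
def fit_least_squares(areas, prices):
    """Fit price = a * area + b by ordinary least squares,
    using the closed-form covariance/variance solution."""
    n = len(areas)
    mean_x = sum(areas) / n
    mean_y = sum(prices) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(areas, prices))
    var = sum((x - mean_x) ** 2 for x in areas)
    a = cov / var
    b = mean_y - a * mean_x
    return a, b

# floor area (sq. ft.) vs. assessed value ($) -- fabricated numbers
areas = [1000, 1500, 2000, 2500]
prices = [150_000, 210_000, 270_000, 330_000]
a, b = fit_least_squares(areas, prices)
predicted = a * 1800 + b  # value estimate for an unseen 1800 sq. ft. property
```

The project's contribution would lie in the accuracy of such predictions at scale, so assessors need fewer site visits and appeals become rarer.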