The main aim of the project is to develop the characteristics of a Prosumer, i.e. a consumer who interacts online with a company by providing useful and insightful comments on its products. Data from consumer interaction databases will be analyzed using data mining tools in order to accomplish the goals of the project.
The project consists of developing a novel, heuristic methodology for constructing annual schedules for the pilots of the Corporation des Pilotes du Saint-Laurent central. The scheduling algorithm must account for several constraints, both those internal to the Corporation des Pilotes du Saint-Laurent central and those imposed by the Administration de Pilotage des Laurentides (APL).
The Institut de Recherche en Électricité du Québec (IREQ), Hydro-Québec's research centre, embraced semantic web technologies several years ago. IREQ must manage an enormous quantity of information coming from its equipment distributed across Québec. IREQ's researchers chose semantic web technologies in particular to facilitate the collection and management of this information. At the core of semantic web technologies lies a stack of languages that makes it possible to textually encode information along with its associated semantics.
The demand for computing power has grown rapidly over the last decade. The ability to efficiently utilize computing resources and improve the productivity of applications is necessary for the competitiveness of any industry, and it will become more critical as the demand for computational resources increases. The computer hardware sector has seen rapid advances with the introduction of multicore and many-core processors, which have posed many challenges for the software development community in efficiently utilizing the new architectures.
The project will research software for providing writing assistance to adult non-native writers in post-secondary education and corporate settings. Of particular interest is encouraging writers to employ a wider range of linguistic constructions, while avoiding repetitive, dull choices or language that is inappropriate to the local context or the genre of the text in question.
This research project will systematically compare different design alternatives for a data store module that is tuned for concurrent accesses by multiple independent threads in a program. The particular access pattern is modelled after the specifics of automated securities trading programs implemented in Java. Based on a systematic evaluation, a prototype implementation will be developed that will allow X3 Trading to dramatically improve the efficiency of business application development.
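One design alternative of the kind the comparison above would cover can be sketched as a price store backed by `java.util.concurrent.ConcurrentHashMap`, so that independent trading threads read and update quotes without a global lock. This is an illustrative sketch only; the class, method, and symbol names are assumptions, not X3 Trading's actual design.

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

// Hypothetical sketch of one design alternative: a quote store tuned for
// concurrent access by multiple independent threads, using a lock-striped map.
public class QuoteStore {
    private final ConcurrentHashMap<String, Double> latestPrice = new ConcurrentHashMap<>();

    public void update(String symbol, double price) {
        latestPrice.put(symbol, price);  // thread-safe write, no global lock
    }

    public Double read(String symbol) {
        return latestPrice.get(symbol);  // non-blocking read
    }

    public static void main(String[] args) throws InterruptedException {
        QuoteStore store = new QuoteStore();
        ExecutorService pool = Executors.newFixedThreadPool(4);
        // Four independent writer threads, as in a multi-threaded trading program;
        // each writes its own symbol, so the final state is deterministic.
        for (int t = 0; t < 4; t++) {
            final int id = t;
            pool.submit(() -> store.update("SYM" + id, 100.0 + id));
        }
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
        System.out.println(store.read("SYM2")); // prints 102.0
    }
}
```

A systematic evaluation would compare variants like this against coarse-grained locking or copy-on-write designs under the measured access pattern.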
With advances in technology, high volumes of valuable data are generated in many domains (e.g., the energy sector) at a rapid rate. Consequently, a scalable and flexible system for the efficient storage and fast management of these distributed data is needed. In this proposed research project, we plan to design and implement a cloud-based data storage and management system for the partner organization that is flexible, scalable, and fast, handling distributed data in parallel.
In recent years, machining with robots has become a trend in the manufacturing industry. The concept offers an economical solution for medium- to low-accuracy machining applications. However, due to the complexity of robot kinematics, planning these machining paths is challenging. Jabez Technologies has developed a semi-graphical approach that can program large robot paths. This approach has been very well received by the industry and has proven extremely robust in practice. However, it is semi-automatic and cannot work without user input.
Poor data quality is a barrier to effective, high-quality decision-making based on data. Declarative data cleaning has emerged as an effective tool for both assessing and improving the quality of data. In this work, we will address some important challenges in applying declarative data cleaning to big data, challenges that arise due to the scale, complexity, and massive heterogeneity of such data. First, we will investigate the use of domain ontologies to enhance declarative data cleaning. Second, given the dynamic nature of big data, we will develop new continuous data cleaning methods.
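To make the "declarative" idea concrete, one classic rule is a functional dependency such as zip → city ("records with the same zip code must agree on the city"): the rule states what clean data looks like, and a checker finds violations. The sketch below is illustrative only, with hypothetical names and toy data; it is not the project's actual cleaning system.

```java
import java.util.*;

// Hypothetical sketch of checking one declarative cleaning rule: the
// functional dependency lhs -> rhs. Records violating the rule are reported,
// rather than fixed by hand-coded procedural logic.
public class FdChecker {
    // Returns indices of records that conflict with an earlier record on lhs -> rhs.
    public static List<Integer> violations(List<Map<String, String>> records,
                                           String lhs, String rhs) {
        Map<String, String> seen = new HashMap<>();
        List<Integer> bad = new ArrayList<>();
        for (int i = 0; i < records.size(); i++) {
            String key = records.get(i).get(lhs);
            String val = records.get(i).get(rhs);
            String prev = seen.putIfAbsent(key, val);
            if (prev != null && !prev.equals(val)) {
                bad.add(i); // same lhs value, different rhs value: dirty record
            }
        }
        return bad;
    }

    public static void main(String[] args) {
        List<Map<String, String>> recs = List.of(
            Map.of("zip", "H3A", "city", "Montreal"),
            Map.of("zip", "H3A", "city", "Montréal"),  // spelling variant flagged
            Map.of("zip", "M5V", "city", "Toronto"));
        System.out.println(violations(recs, "zip", "city")); // prints [1]
    }
}
```

At big-data scale the challenges named above appear immediately: a domain ontology could recognize "Montreal" and "Montréal" as the same entity rather than a violation, and continuous cleaning would re-evaluate such rules incrementally as records arrive.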