This project involves assisting with Digital Identity Transformation Programs by conducting business requirements gathering and cybersecurity assessments. The main focus is implementing top-tier identity solutions to enhance identity and access management (IAM). The research objectives include developing IAM assessments that evaluate the current state of IAM within client organizations. These assessments will inform IAM strategies that align with clients' business objectives and address their unique identity management challenges.
In traditional claims processing systems, adjusters and claims-handling specialists must read massive numbers of medical reports daily, such as diagnosis reports, medical assessments, and prescriptions, to support different types of decision making. This manual review process is time-consuming and tiring, which slows service to claimants and consequently degrades the customer experience. In the digitalization era, we aim to digitize this manual document-review process with NLP techniques.
Hate speech and toxicity pose a serious threat in online spaces, particularly for marginalized communities. Detecting and preventing harmful speech in online games is challenging, and current methods lack transparency and reliability. To address this issue, this project aims to develop a robust and trustworthy toxicity detection model for in-game chat. On the robustness side, we will iterate on an existing context-aware toxicity detection model to address four main areas: rare categories, continuous learning, adversarial learning, and human-in-the-loop feedback.
New/Mode is a Canada-based social enterprise that facilitates communication between elected representatives and their constituents on behalf of progressive NGOs. This project will bring novel machine learning technologies to bear to improve the accuracy and efficiency of maintaining and evolving a repository of contact information for these elected representatives.
Bitcoin! Ethereum! Cryptocurrency! Blockchain! In recent years, these words have made headline news globally. The promise of blockchain, or decentralized ledger-based technologies, has electrified the world and created an excitement for technology last seen in the 1990s, when the internet was entering the mainstream. The core premise of the technology is that blockchains are secured by cryptography and economic incentives, and that they are governed by decentralized consensus.
Today's TV shows and films are supported by a large industry of visual effects studios for post-processing. To enhance actor performances, virtual humans are created and used to provide hyper-realistic 3D renderings of actors' performances. The artists working on these high-quality virtual humans currently follow a heavy workflow: scanning actors in 4D to obtain high-fidelity textures, rigging body models, and capturing motion-capture performances in order to apply adjustments to virtual humans and creatures.
With recent advances in microscopy techniques for calcium imaging, scientists are now able to monitor the neurons in large brain areas of living animals. However, this also creates significant challenges for scientists, who must make sense of the large and complex datasets generated. As a result, there is a need for new computational methods and systems to augment the current workflows used by neuroscientists. This study seeks to address this problem by leveraging advanced visualization techniques, building on the results of our previous successful collaboration with a partner.
Artinus hopes to work with computing and data analytics interns to enhance its current Natural Language Processing (NLP) practices. The company provides machine learning services to a number of government departments and expects to provide similar services to corporate clients in the near future. These services consist of digitizing and analyzing a large collection of documents.
Today’s machine learning (ML) world is no longer mostly about "releasing code": development teams now need to integrate the software release pipeline with data engineering and ML model training pipelines, collaborating in cross-disciplinary fashion with ML experts, data scientists, and operators. This project aims to empirically study practices, tools, and techniques for determining ML release-readiness, based on a partnership with the Artificial Intelligence (AI) Factory team of the National Bank.