Deep learning approaches for semantic textual similarity on low-resource languages and specialized domains
The aim of this research is to investigate how to measure the semantic relationship between two sentences, moving from traditional methods to deep learning methods, by combining local context at the word level with global context at the sentence level, and to assess the ability of these methods to model the informativeness and diversity of meanings expressed in natural language, e.g. in English or in French.
Moreover, as we are interested in Information Extraction of entities, concepts, triplets, and semantic relations from unstructured text, we will adapt the BERT model to low-resource domains and languages. The proposed model will be evaluated on several open-source datasets, both in-domain and out-of-domain, and compared with state-of-the-art approaches.
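As a minimal illustration of the word-level versus sentence-level distinction above, the sketch below scores sentence similarity as the cosine of mean-pooled token vectors. The toy embedding table and token lists are hypothetical placeholders; in the approach described here, the word vectors would instead be contextual embeddings produced by an encoder such as BERT.

```python
import math

# Hypothetical word-level vectors (local context). In practice these
# would be contextual embeddings from a pretrained encoder, not a
# static lookup table.
EMB = {
    "cat": [0.9, 0.1, 0.0],
    "dog": [0.8, 0.2, 0.1],
    "sleeps": [0.1, 0.9, 0.2],
    "runs": [0.0, 0.8, 0.5],
}
DIMS = 3

def sentence_vector(tokens):
    """Mean-pool word vectors into one sentence-level (global) vector."""
    vec = [0.0] * DIMS
    for t in tokens:
        for i, x in enumerate(EMB.get(t, [0.0] * DIMS)):
            vec[i] += x
    n = max(len(tokens), 1)
    return [x / n for x in vec]

def cosine(u, v):
    """Cosine similarity between two vectors; 0.0 if either is zero."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

# Compare two toy sentences at the sentence level.
sim = cosine(sentence_vector(["cat", "sleeps"]),
             sentence_vector(["dog", "runs"]))
print(round(sim, 3))
```

A real similarity model would replace both the static table and the mean pooling, but the pipeline shape (token vectors, pooling to a sentence vector, a similarity function) is the one the deep approaches discussed here refine.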