Recurrent Deep Architectures for Modeling Time Series Data

Deep learning is currently the dominant machine learning technique as a result of state-of-the-art performance in vision (Russakovsky et al., 2015), speech (Amodei et al., 2015), and natural language processing (Vinyals et al., 2015). The improvement in performance of these models is attributed to the availability of large datasets for training as well as software and hardware advances that accelerate the training process. Recurrent Neural Networks (RNNs) are among the most powerful and popular frameworks for modeling sequential data such as speech and text. We propose to create an open source implementation of scalable, "industrial-strength" RNN models. These models can then be fine-tuned and trained to perform specific prediction tasks on time series datasets from the finance and insurance sectors.
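To make the recurrence concrete, the following is a minimal sketch of a vanilla RNN forward pass over a time series, written in NumPy. All names, dimensions, and initialization choices here are illustrative assumptions, not part of the proposed implementation, which would use a scalable deep learning framework.

```python
import numpy as np

# Illustrative vanilla RNN cell (a sketch, not the proposed
# industrial-strength implementation). Dimensions are arbitrary.
rng = np.random.default_rng(0)

input_size, hidden_size, seq_len = 3, 5, 4

# Parameters: input-to-hidden weights, hidden-to-hidden weights, bias.
W_xh = rng.standard_normal((hidden_size, input_size)) * 0.1
W_hh = rng.standard_normal((hidden_size, hidden_size)) * 0.1
b_h = np.zeros(hidden_size)

def rnn_forward(xs):
    """Apply the recurrence h_t = tanh(W_xh x_t + W_hh h_{t-1} + b_h)."""
    h = np.zeros(hidden_size)
    states = []
    for x in xs:
        h = np.tanh(W_xh @ x + W_hh @ h + b_h)
        states.append(h)
    return np.stack(states)

# A toy time series: seq_len steps, each with input_size features.
series = rng.standard_normal((seq_len, input_size))
hidden_states = rnn_forward(series)
print(hidden_states.shape)  # one hidden state per time step: (4, 5)
```

The key property shown here is that the same weights are reused at every time step, with the hidden state carrying information forward, which is what lets RNNs model sequences of arbitrary length.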

Nikhil Sapru
Faculty Supervisor: Graham Taylor
Partner University: