Learning representations through stochastic gradient descent by minimizing the cross-validation error

Representations are fundamental to Artificial Intelligence, and the performance of a learning system typically depends on how its data are represented. These representations are usually hand-engineered from prior domain knowledge about the task. More recently, the trend has been to learn representations through deep neural networks, as these can yield significant performance improvements over hand-engineered ones. Learning representations reduces the human labour involved in system design and allows a learning system to scale to difficult problems. In this project, we propose to design a new incremental learning algorithm, called crossprop, for learning representations based on prior learning experiences. Specifically, the algorithm considers the influence of all the past weights while minimizing the current squared error, and uses the resulting gradient to incrementally learn the weights of a neural network. The algorithm is called crossprop because it learns to shape the weights of a neural network through a leave-one-out cross-validation procedure.
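The abstract only sketches the idea, so the following is a minimal illustrative sketch, not the published crossprop update. It assumes (our assumptions, not stated in the abstract) a single-hidden-layer network with tanh units, a squared-error objective, plain LMS updates for the output weights w, and a trace H[i, j] approximating how each input weight u_ij has influenced its outgoing weight w_j through past updates. All names (crossprop_sketch_step, U, w, H, alpha) are hypothetical.

# A minimal sketch of the crossprop idea (not the exact published update).
# Assumptions: single hidden layer, tanh units, squared error, LMS output
# weights, and a trace H[i, j] ~= d w[j] / d u[i, j] tracking the influence
# of each input weight on its outgoing weight through past updates.
import numpy as np

def crossprop_sketch_step(x, y, U, w, H, alpha=0.01):
    """One incremental update on example (x, y). All names are illustrative."""
    phi = np.tanh(U.T @ x)           # hidden features, shape (n_hidden,)
    y_hat = w @ phi                  # scalar prediction
    delta = y - y_hat                # prediction error

    dphi = 1.0 - phi ** 2            # tanh derivative w.r.t. its input
    direct = np.outer(x, w * dphi)   # backprop term: d y_hat / d U
    indirect = H * phi[None, :]      # crossprop-style term: influence through w

    # Input weights descend the squared error using both terms.
    U += alpha * delta * (direct + indirect)

    # Trace update: differentiate the LMS rule w <- w + alpha*delta*phi
    # with respect to U (cross terms dropped for simplicity, an assumption).
    H = H * (1.0 - alpha * phi[None, :] ** 2) + alpha * delta * np.outer(x, dphi)

    # Output weights follow plain LMS.
    w += alpha * delta * phi
    return U, w, H, delta

# Toy usage on a synthetic regression stream.
rng = np.random.default_rng(0)
n_in, n_hidden = 4, 8
U = rng.normal(scale=0.1, size=(n_in, n_hidden))
w = np.zeros(n_hidden)
H = np.zeros((n_in, n_hidden))
for _ in range(1000):
    x = rng.normal(size=n_in)
    y = np.sin(x.sum())              # toy target
    U, w, H, delta = crossprop_sketch_step(x, y, U, w, H)

The design point this sketch tries to capture is the one named in the abstract: unlike plain backpropagation, the input-weight gradient includes a term (indirect) that accounts for how past output-weight updates depended on the input weights.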

Faculty Supervisor: Richard Sutton

Student: Vivek Veeriah

Partner: RBC Financial Group

Discipline: Computer science

Sector: Information and communications technologies

University:

Program: Accelerate
