With advances in big data analysis, model estimation has become an increasingly important issue. Two main estimation techniques are prevalent: least-squares minimization and likelihood maximization. These problems are typically solved using iterative optimization techniques, but given a model and an initial point, the time required to obtain satisfactory estimates can be prohibitive due to the large number of observations, especially when non-linearities and/or heterogeneity have to be taken into account.

Various strategies have been proposed to speed up the estimation, especially when the models are twice continuously differentiable. The first possibility is to increase the number of observations considered in the optimization only when close to the solution, since the solutions from the first iterations will be discarded anyway. The question remains how to increase the sample during the estimation process, and how to select the observations to add. The second possibility is to exploit the mathematical structure of the problem, especially when using optimization techniques relying on quadratic models. In both cases, if the model is properly identified, the Hessian of the function to optimize can be related to the outer product of the scores, that is, the individual gradient contributions to the average gradient of the objective function.

However, model misspecification can lead to non-convergent sequences of candidate solutions, as the constructed quadratic models do not properly approximate the objective function close to the true parameter values. To circumvent this limitation, Hessian correction techniques based on the secant equation have been proposed. This has led to the Gauss-Newton algorithm in the case of non-linear least-squares estimation, and to similar techniques for maximum likelihood. In the latter case, we have however observed that for complex models, corrections based on positive-definite Hessian approximations, such as BFGS, can perform quite poorly, while other techniques, e.g. SR1, can perform much better. Our current intuition is that since the outer product of the scores is already positive definite, constraining the eigenvalues of the correction matrix to be positive can prevent an adequate correction. We expect a similar effect to be prevalent in least-squares estimation.

The goal of the project is to validate this explanation and to provide better guidance on Hessian correction when estimating models by least-squares minimization or maximum likelihood estimation using standard non-linear optimization algorithms, that is, line-search methods or trust-region algorithms. Applications to variants, such as retrospective techniques, will also be considered, as will the impact on adaptive sampling strategies, which allow the number of observations used in the objective function to vary at each iteration.
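To make the above concrete, the following is a minimal, illustrative sketch, not code from the project: it forms the outer-product-of-scores (BHHH-style) Hessian approximation for a hypothetical logistic maximum-likelihood problem and applies the two secant-equation corrections mentioned above, SR1 and BFGS, so their effect on the eigenvalues of the approximation can be compared. The logistic model, synthetic data, step size, and helper names are assumptions made only for this example.

```python
"""Illustrative sketch (hypothetical example, not the project's code):
outer product of the scores as a Hessian proxy, plus SR1 and BFGS
secant-equation corrections."""
import numpy as np

rng = np.random.default_rng(0)

# Synthetic logistic-regression data (assumed model, for illustration only).
n, p = 500, 3
X = rng.normal(size=(n, p))
beta_true = np.array([0.5, -1.0, 2.0])
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-X @ beta_true)))

def scores(beta):
    """Per-observation gradients of the average negative log-likelihood."""
    mu = 1.0 / (1.0 + np.exp(-X @ beta))
    return (mu - y)[:, None] * X               # shape (n, p)

def gradient(beta):
    """Average gradient of the objective (the mean of the scores)."""
    return scores(beta).mean(axis=0)

def bhhh(beta):
    """Outer product of the scores: a positive semi-definite Hessian proxy."""
    g = scores(beta)
    return (g.T @ g) / n

def sr1_update(B, s, t):
    """Symmetric rank-one correction enforcing the secant equation B_new s = t."""
    r = t - B @ s
    denom = r @ s
    if abs(denom) < 1e-8 * np.linalg.norm(r) * np.linalg.norm(s):
        return B                                # skip a numerically unsafe update
    return B + np.outer(r, r) / denom           # may introduce negative eigenvalues

def bfgs_update(B, s, t):
    """BFGS correction: preserves positive definiteness whenever t's > 0."""
    Bs = B @ s
    return B - np.outer(Bs, Bs) / (s @ Bs) + np.outer(t, t) / (t @ s)

# Compare the corrections along one (hypothetical) step away from beta_true.
beta0 = beta_true + 0.3
beta1 = beta0 - 0.1 * gradient(beta0)           # plain gradient step, for the example
s = beta1 - beta0
t = gradient(beta1) - gradient(beta0)

B = bhhh(beta0)
print("eigenvalues, BHHH       :", np.linalg.eigvalsh(B))
print("eigenvalues, BHHH + SR1 :", np.linalg.eigvalsh(sr1_update(B, s, t)))
print("eigenvalues, BHHH + BFGS:", np.linalg.eigvalsh(bfgs_update(B, s, t)))
```

The design choice the sketch highlights is the one discussed above: the BFGS formula keeps the corrected matrix positive definite, whereas the SR1 formula can produce indefinite corrections, which may better capture the curvature of a misspecified or nearly unidentified model.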
Fabian Bastin
MANYUAN TAO
Computer science
Globalink