Computationally hard search and optimization problems are ubiquitous in science, engineering, and business. Examples include drug design, protein folding, phylogeny reconstruction, hardware and software design, test generation and verification, planning, timetabling, and scheduling, among many others. In rare cases practical application-specific software exists, but more often the development of successful methods requires hiring specialists, often at significant time and expense, to apply one or more
I've started a new research project whose goal is to implement a new algorithm that interpolates a polynomial F of degree D in N variables with T non-zero terms.
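To fix notation, here is a minimal sketch of the sparse representation such an algorithm recovers, and of the black-box evaluation model it typically works in. The names and the dict-based encoding are illustrative, not taken from the project:

```python
# A sparse polynomial in N variables is stored as a dict mapping
# exponent tuples (e1, ..., eN) to non-zero coefficients: T entries total.
F = {(3, 0): 5, (1, 2): -2, (0, 0): 7}   # 5*x^3 - 2*x*y^2 + 7

def evaluate(poly, point):
    """Evaluate the sparse polynomial at a point (one black-box probe)."""
    total = 0
    for exponents, coeff in poly.items():
        term = coeff
        for x, e in zip(point, exponents):
            term *= x ** e
        total += term
    return total

# A sparse interpolation algorithm sees F only through such evaluations
# and must recover the T terms; its cost is measured in probes of
# `evaluate` as a function of N, D and T rather than of the dense size.
print(evaluate(F, (2, 3)))  # 5*8 - 2*2*9 + 7 = 11
```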
In a typical scenario, a large heterogeneous software system, installed at many different sites and composed of several interacting components that exchange data over several different protocols, must be updated to correct defects, add new functionality, or replace obsolete components, all without breaking the system and while preserving its dependability.
Word segmentation in handwritten documents is a difficult task because the intra-word spacing (i.e. the space between parts of the same word) is sometimes wider than the inter-word spacing (i.e. the space between two consecutive words). Many different approaches to segmenting words have been proposed so far. However, these approaches usually rely on parameters that are manually tuned, meaning that they do not take the properties of the document into account in order to calibrate the parameters automatically.
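One common parameter-free alternative is to derive the gap threshold from the document itself, e.g. by applying Otsu's method to the observed gap widths. The sketch below assumes connected components along a text line are given as horizontal intervals (x_start, x_end); function names and the interval encoding are hypothetical:

```python
def otsu_threshold(gaps):
    """Pick the gap width that best separates intra- from inter-word gaps
    by maximizing the between-class variance (Otsu's method on 1-D data)."""
    gaps = sorted(gaps)
    best_t, best_var = gaps[0], -1.0
    for i in range(1, len(gaps)):
        left, right = gaps[:i], gaps[i:]
        w1, w2 = len(left) / len(gaps), len(right) / len(gaps)
        m1, m2 = sum(left) / len(left), sum(right) / len(right)
        var = w1 * w2 * (m1 - m2) ** 2
        if var > best_var:
            best_var, best_t = var, gaps[i]
    return best_t

def segment(components):
    """Group horizontally sorted components (x_start, x_end) into words,
    starting a new word whenever a gap reaches the learned threshold."""
    comps = sorted(components)
    gaps = [b[0] - a[1] for a, b in zip(comps, comps[1:])]
    t = otsu_threshold(gaps)
    words, current = [], [comps[0]]
    for comp, gap in zip(comps[1:], gaps):
        if gap >= t:
            words.append(current)
            current = [comp]
        else:
            current.append(comp)
    words.append(current)
    return words
```

Because the threshold is recomputed per document (or per line), documents with unusually wide handwriting calibrate themselves without manual tuning.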
Geometric networks are typically sparse subgraphs of a complete graph defined over a set of points embedded in the plane (or in space). Several algorithms take a complete graph and compute a sparse subgraph satisfying various constraints, e.g. low diameter, constant degree, or fault tolerance. In our recent work, we have designed algorithms that compute sparse subgraphs of non-complete graphs.
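A classical example of such a construction on complete graphs, not necessarily the project's own algorithm, is the greedy t-spanner: scan edges by increasing length and keep an edge only if the spanner built so far does not already approximate it within a factor t. A minimal sketch:

```python
import heapq
from math import dist

def spanner_dist(adj, src, dst):
    """Dijkstra shortest-path distance inside the partial spanner."""
    pq, seen = [(0.0, src)], {}
    while pq:
        d, u = heapq.heappop(pq)
        if u in seen:
            continue
        seen[u] = d
        if u == dst:
            return d
        for v, w in adj[u]:
            if v not in seen:
                heapq.heappush(pq, (d + w, v))
    return float("inf")

def greedy_spanner(points, t=2.0):
    """Greedy t-spanner of the complete Euclidean graph on `points`."""
    n = len(points)
    adj = {i: [] for i in range(n)}
    edges = sorted(
        (dist(points[i], points[j]), i, j)
        for i in range(n) for j in range(i + 1, n))
    kept = []
    for w, i, j in edges:
        if spanner_dist(adj, i, j) > t * w:   # not yet t-approximated
            adj[i].append((j, w))
            adj[j].append((i, w))
            kept.append((i, j))
    return kept
```

On a unit square with t = 2, for instance, the four sides are kept and both diagonals are rejected, since the two-side detour already lies within the stretch factor.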
The ubiquity of parallel architectures and memory hierarchies has led to a new quest for parallel mathematical algorithms and software capable of exploiting every level of parallelism, from hardware acceleration technologies (multi-core and multi-processor systems on chip, GPGPU, FPGA) to cluster and global computing platforms. In this project, we propose to revisit fundamental algorithms in symbolic computation so as to optimize them for data locality and parallelism and to adapt them to these modern computer architectures.
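As a toy illustration of the kind of restructuring involved (using integer products as a stand-in for the polynomial arithmetic of symbolic computation), a balanced product tree replaces a sequential left-to-right fold with O(log n) levels of independent multiplications that can be dispatched to workers; the executor-based variant is a sketch, not the project's implementation:

```python
from concurrent.futures import ProcessPoolExecutor
from operator import mul

def tree_product(items, executor=None):
    """Multiply a list of values with a balanced product tree.
    All multiplications within one level are independent, so they can
    run in parallel via `executor.map`; a sequential fold cannot."""
    while len(items) > 1:
        left, right = items[0::2], items[1::2]
        if len(left) > len(right):            # odd count: carry last item
            carry, left = [left[-1]], left[:-1]
        else:
            carry = []
        step = map if executor is None else executor.map
        items = list(step(mul, left, right)) + carry
    return items[0]

print(tree_product(list(range(1, 6))))  # 120
```

Usage with workers would be `tree_product(vals, ProcessPoolExecutor())` under a `__main__` guard; the balanced shape also improves locality, since each level touches operands of comparable size.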