
Parameterized Complexity for Practical Computing

Alternative title: Parametrisert Kompleksitet for Praktiske Beregninger

Awarded: NOK 12.6 mill.

Computing is changing everything: how we form and maintain relationships, shop for groceries, manufacture and design things, listen to music, and analyse the flood of data in every area of basic science. This project empowers the heart of the computer revolution, the science of algorithms. We all know a few basic algorithms from school: how to compute the product of two numbers (multiplication) or how to do long division. Both loop through basic operations and store intermediate results in an organized way. But how does one systematically compute the similarity of DNA sequences, or extract, from large data sets, the information that distinguishes genes that cause cancer from genes that do not? A multiplication algorithm much faster than the one learned in school relies on special properties of primes that are one more or one less than a power of two, such as 7 and 17, and is closely related to the efficient signal-processing algorithms without which there would be no modern civilization. Faster algorithms often depend on deep mathematics.

For thousands of natural and important computational problems, algorithms that would finish in less than a million years on the fastest supercomputers appear to be impossible. This is bad news. It is also a little bit of good news, because computational hardness of this sort underpins the cryptosystems that modern commerce and security depend on. This project overcomes the bad news through a fine-grained appraisal of computational complexity. Crucial to the approach is to pay attention to the structure of typical inputs and to design algorithms that adapt to that structure to achieve greater efficiency.

Parameterized complexity analysis, the mathematics at the heart of the PCPC project, offers a flexible and natural way to handle computational situations in which exponential costs can be traded against small secondary measurements of the input. Think of DNA sequences: the total number of bits is huge, but other structural aspects, such as the number of genes involved or the number of species the sequences come from, may be much smaller. These secondary structural aspects seem often to carry unavoidable exponential computing costs, although nothing is yet known for certain about this. Parameterized complexity, in essence, deploys the appropriate mathematical ideas to confine any intrinsic exponential costs to these smaller secondary measures, and to keep them away from the bulk bit-size of the data. That bulk measure is nothing exotic; we confront it almost every day when we consider how many bytes we can download and might have to pay for. In this project we connected parameterized complexity with two further fields, namely programming theory and machine learning.
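To make the idea concrete, here is a minimal sketch, in Python, of the classic bounded-search-tree algorithm for Vertex Cover. It is a standard textbook illustration of fixed-parameter tractability, not a result of this project: the graph may be arbitrarily large, but the exponential part of the running time depends only on the small parameter k, the number of vertices allowed in the cover.

```python
def vertex_cover_at_most_k(edges, k):
    """Return True if at most k vertices suffice to touch every edge."""
    uncovered = next(iter(edges), None)   # pick any remaining edge
    if uncovered is None:
        return True                       # no edges left: nothing to cover
    if k == 0:
        return False                      # edges remain but the budget is spent
    u, v = uncovered
    # Every vertex cover must contain u or v, so branch on both choices.
    # Each branch removes the edges touched by the chosen vertex and spends
    # one unit of the budget, giving a search tree with at most 2**k leaves.
    edges_without_u = {e for e in edges if u not in e}
    edges_without_v = {e for e in edges if v not in e}
    return (vertex_cover_at_most_k(edges_without_u, k - 1)
            or vertex_cover_at_most_k(edges_without_v, k - 1))

# Example: a path on four vertices needs two vertices to cover its edges.
path = {(1, 2), (2, 3), (3, 4)}
print(vertex_cover_at_most_k(path, 2))   # True
print(vertex_cover_at_most_k(path, 1))   # False
```

The contrast with brute force is the point: trying all subsets of vertices costs roughly 2^n, while the branching above costs roughly 2^k times a polynomial in the size of the graph, so the exponential blow-up is confined to the small parameter, exactly as described above.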

One of the central outcomes is the education of three PhD students (one has successfully completed his PhD and two are expected to defend their theses in 2024). In addition, the postdoctoral fellows have strengthened their skills and publication records to pursue scientific careers. The project has substantially increased research on the theoretical and practical aspects of parameterized complexity, with potential impact on theoretical computer science in general as well as on combinatorial optimization, machine learning and programming theory.

Computational tasks vital in all areas of science, industry and society are often NP-hard, meaning they are unlikely to be solvable optimally on even the fastest computers in any reasonable time. Practitioners use heuristics, intuitive or ad hoc rules such as the Greedy Heuristic, which makes choices based on what looks best at the moment. Heuristics often do a good job, but come with no guarantees of efficiency or solution quality. A central contemporary challenge of theoretical computer science is to explain the surprising effectiveness of heuristics on real-world data for hard problems. This project takes up that challenge in the framework of Parameterized Complexity (PC), with novel lines of attack that go well beyond existing knowledge, and develops methods new to the field. The Chief Investigator is a founder of PC and well positioned to carry out the proposed frontier research. PC theory gives deep insight into the intrinsic nature and structure of a problem through parameters; parameters for a scheduling problem may include airlines, pilots and weather. The ambitious, ground-breaking research will develop a new, mathematically sophisticated inductive gradient theory that maps out refined notions of solution quality. A high-impact outcome will be novel fixed-parameter tractable (FPT) turbocharged subroutines for heuristics in a variety of application areas, outperforming best-practice approaches on standard benchmarks. A novel theory of smooth FPT will allow systematic transfer of results to the design of heuristics, kernelization, approximation and exponential algorithms. Approximation and worst-case exponential algorithms based on extremal gradients and smooth FPT will be developed that asymptotically beat the best known theoretical results. A bold new exploration will use PC to model the generation of typical inputs. The breakthrough theory and experiments will be of interest in other areas of algorithm theory, related areas of mathematics, data visualization and heuristic practice.
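As a concrete illustration of one technique named above, the following sketch shows kernelization for Vertex Cover using Buss's classical reduction rules. This is a standard textbook example, not part of the project's smooth FPT theory, and the function name is chosen here only for illustration. The rules shrink an instance to a kernel whose size is bounded by a function of the parameter k alone, after which any exact solver or heuristic can be applied to the small kernel.

```python
def vertex_cover_kernel(edges, k):
    """Shrink (edges, k) to a kernel, or return None if the reduction
    rules already prove that no vertex cover of size <= k exists."""
    edges = set(edges)
    forced = set()                 # vertices every small cover must contain
    while True:
        budget = k - len(forced)
        if budget < 0:
            return None            # more than k vertices were forced
        # Rule 1: a vertex covering more than `budget` edges must be chosen,
        # otherwise all of its (too many) neighbours would have to be.
        degree = {}
        for u, v in edges:
            degree[u] = degree.get(u, 0) + 1
            degree[v] = degree.get(v, 0) + 1
        high = next((x for x, d in degree.items() if d > budget), None)
        if high is None:
            break
        forced.add(high)
        edges = {e for e in edges if high not in e}
    # Rule 2: every remaining vertex covers at most `budget` edges, so a
    # yes-instance can have at most budget * budget edges left.
    if len(edges) > budget * budget:
        return None
    return edges, forced, budget

# Example: a star with five leaves plus a separate triangle, with k = 3.
instance = {("c", 1), ("c", 2), ("c", 3), ("c", 4), ("c", 5),
            ("a", "b"), ("b", "d"), ("a", "d")}
print(vertex_cover_kernel(instance, 3))
# The high-degree centre "c" is forced into the cover; only the small
# triangle survives as the kernel, with a remaining budget of 2.
```

The guarantee is what matters: the kernel has at most k^2 edges no matter how large the original input is, which is the kind of provable preprocessing that distinguishes kernelization from ad hoc simplification rules.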

Funding scheme:

FRINATEK-Fri prosj.st. mat.,naturv.,tek