Saturday, July 21, 2007

Computational complexity theory

As a branch of the theory of computation in computer science, computational complexity theory describes the scalability of algorithms and the inherent difficulty of providing scalable algorithms for specific computational problems. That is, the theory answers the question, "As the size of the input to an algorithm increases, how do the running time and memory requirements of the algorithm change, and what are the implications of that change?" The theory places practical limits on what computers can accomplish.
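As a concrete illustration of that question, the following Python sketch (a toy example, not part of the theory itself) times two ways of detecting duplicates in a list. Both solve the same problem, but their running times grow very differently as the input grows: doubling the input roughly quadruples the work of the pairwise approach while only doubling the work of the set-based one.

    import time

    def has_duplicates_quadratic(items):
        # Compare every pair: about n*(n-1)/2 comparisons for n items.
        for i in range(len(items)):
            for j in range(i + 1, len(items)):
                if items[i] == items[j]:
                    return True
        return False

    def has_duplicates_linear(items):
        # Remember values already seen in a set: about n lookups for n items.
        seen = set()
        for item in items:
            if item in seen:
                return True
            seen.add(item)
        return False

    for n in (500, 1000, 2000):
        data = list(range(n))  # worst case: no duplicates present
        start = time.perf_counter()
        has_duplicates_quadratic(data)
        quadratic_time = time.perf_counter() - start
        start = time.perf_counter()
        has_duplicates_linear(data)
        linear_time = time.perf_counter() - start
        print(f"n={n}: pairwise {quadratic_time:.4f}s, set-based {linear_time:.4f}s")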

The implications of the theory are important to government and industry. The speed and memory capacity of computers are always increasing, but so are the sizes of the data sets that need to be analyzed. If algorithms fail to scale well, then even vast improvements in computing technology yield only marginal increases in the size of problems that can be handled in practice.
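To make that concrete, here is a rough back-of-the-envelope sketch in Python (the step budgets are invented purely for illustration): if a machine becomes 1000 times faster, how much larger an input fits in the same amount of time when an algorithm's cost grows linearly, quadratically, or exponentially with the input size n?

    def largest_n(budget, steps):
        # Largest n with steps(n) <= budget, found by doubling then binary search.
        if steps(1) > budget:
            return 0
        lo = 1
        while steps(lo * 2) <= budget:
            lo *= 2
        hi = lo * 2
        while lo + 1 < hi:
            mid = (lo + hi) // 2
            if steps(mid) <= budget:
                lo = mid
            else:
                hi = mid
        return lo

    budget = 10**6           # steps affordable on the old machine (assumed figure)
    faster = budget * 1000   # steps affordable after a 1000x speedup

    for name, steps in [("n (linear)", lambda n: n),
                        ("n^2 (quadratic)", lambda n: n * n),
                        ("2^n (exponential)", lambda n: 2 ** n)]:
        print(f"{name}: n={largest_n(budget, steps)} -> n={largest_n(faster, steps)}")

For the linear algorithm the manageable input grows a thousandfold, for the quadratic one roughly thirtyfold, and for the exponential one by only about ten items, since 2^10 is roughly 1000.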

Algorithms and problems are categorized into complexity classes. The most important open question of complexity theory is whether the complexity class P is equal to the complexity class NP, or whether P is merely a proper subset of NP, as is generally believed. The question is far from an esoteric exercise: shortly after it was first posed, it was realised that many important industrial problems in the field of operations research belong to a subclass of NP called NP-complete. NP-complete problems have the property that proposed solutions are quick to check, yet no known method for finding exact solutions scales well. If NP is strictly larger than P, then no easily scalable exact method exists for these problems; whether this is the case remains the central open question of computational complexity theory.
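One way to picture that asymmetry is with subset sum, a classic NP-complete problem (the sketch below uses made-up numbers and is purely illustrative): checking a proposed solution takes a single pass over it, while the obvious way of finding one examines up to 2^n candidate subsets of an n-element input.

    from itertools import combinations

    def verify(numbers, subset, target):
        # Checking a proposed solution is fast: membership tests plus one sum.
        # (A careful version would also respect multiplicities; kept simple here.)
        return all(x in numbers for x in subset) and sum(subset) == target

    def find_subset(numbers, target):
        # Finding a solution by brute force tries up to 2^n subsets.
        for r in range(len(numbers) + 1):
            for subset in combinations(numbers, r):
                if sum(subset) == target:
                    return subset
        return None

    numbers = [3, 34, 4, 12, 5, 2]
    target = 9
    solution = find_subset(numbers, target)              # exponential search
    print(solution, verify(numbers, solution, target))   # cheap verification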
