Theory of computation


In theoretical computer science and mathematics, the theory of computation is the branch that deals with which problems can be solved on a model of computation, using an algorithm, how efficiently they can be solved, and to what degree (e.g., approximate solutions versus precise ones). The field is divided into three major branches: automata theory and formal languages, computability theory, and computational complexity theory, which are linked by the question: "What are the fundamental capabilities and limitations of computers?".

In order to perform a rigorous study of computation, computer scientists work with a mathematical abstraction of computers called a model of computation. There are several models in use, but the most commonly examined is the Turing machine. Computer scientists study the Turing machine because it is simple to formulate, can be analyzed and used to prove results, and because it represents what many consider the most powerful possible "reasonable" model of computation (see Church–Turing thesis). It might seem that the potentially infinite memory capacity is an unrealizable attribute, but any decidable problem solved by a Turing machine will always require only a finite amount of memory. So in principle, any problem that can be solved (decided) by a Turing machine can be solved by a computer that has a finite amount of memory.
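A Turing machine is simple enough to simulate in a few lines of ordinary code. The following is a minimal illustrative sketch (the simulator and the example machine are invented for this sketch, not taken from the text): a single-tape machine whose transition table maps (state, symbol) to (new state, symbol to write, head movement), here programmed to flip every bit of a binary string and halt at the first blank.

```python
# Minimal sketch of a single-tape Turing machine simulator (illustrative).
# The transition table maps (state, symbol) -> (new state, write, move).

def run_turing_machine(transitions, tape, state="start", blank="_", max_steps=1000):
    """Run the machine until it reaches the 'halt' state; return the tape."""
    tape = dict(enumerate(tape))   # sparse tape: position -> symbol
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = tape.get(head, blank)
        state, write, move = transitions[(state, symbol)]
        tape[head] = write
        head += 1 if move == "R" else -1
    return "".join(tape[i] for i in sorted(tape)).strip(blank)

# Example machine: flip every bit, then halt at the first blank cell.
flip = {
    ("start", "0"): ("start", "1", "R"),
    ("start", "1"): ("start", "0", "R"),
    ("start", "_"): ("halt",  "_", "R"),
}
print(run_turing_machine(flip, "0110"))  # -> 1001
```

Note that the simulator's tape is a dictionary that grows on demand, mirroring the machine's "potentially infinite" tape while only ever using a finite amount of memory on any halting run.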

Branches


Automata theory is the study of abstract machines (or more appropriately, abstract "mathematical" machines or systems) and the computational problems that can be solved using these machines. These abstract machines are called automata. Automata comes from the Greek word αὐτόματα, which means "self-acting". Automata theory is also closely related to formal language theory, as automata are often classified by the class of formal languages they are able to recognize. An automaton can be a finite representation of a formal language that may be an infinite set. Automata are used as theoretical models for computing machines, and are used for proofs about computability.
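As an illustration of an automaton recognizing a language, here is a hedged sketch (the machine is invented for this example, not from the text): a two-state deterministic finite automaton that accepts exactly the binary strings containing an even number of 1s, an infinite language represented by a finite machine.

```python
# Illustrative sketch: a deterministic finite automaton (DFA).
# The transition table maps (state, symbol) -> next state.

def dfa_accepts(transitions, start, accepting, string):
    """Run the DFA over the string; accept iff it ends in an accepting state."""
    state = start
    for symbol in string:
        state = transitions[(state, symbol)]
    return state in accepting

# Two states tracking the parity of the number of 1s seen so far.
even_ones = {
    ("even", "0"): "even", ("even", "1"): "odd",
    ("odd",  "0"): "odd",  ("odd",  "1"): "even",
}
print(dfa_accepts(even_ones, "even", {"even"}, "1001"))  # -> True
print(dfa_accepts(even_ones, "even", {"even"}, "1101"))  # -> False
```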

Language theory is a branch of mathematics concerned with describing languages as a set of operations over an alphabet. It is closely linked with automata theory, as automata are used to generate and recognize formal languages. There are several classes of formal languages, each allowing more complex language specification than the one before it, i.e. the Chomsky hierarchy, and each corresponding to a class of automata which recognizes it. Because automata are used as models for computation, formal languages are the preferred mode of specification for any problem that must be computed.
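To see why the hierarchy matters, consider the classic language { aⁿbⁿ : n ≥ 0 }, which is context-free but not regular: no finite automaton can recognize it, though a pushdown automaton can. The sketch below is illustrative (the membership test is invented for this example) and simulates the pushdown automaton's stack with a simple count.

```python
# Illustrative sketch: membership in { a^n b^n : n >= 0 }, a context-free
# language that is not regular. A finite automaton cannot count the a's,
# but matching the counts directly (standing in for a stack) can.

def in_anbn(s):
    """Accept strings of n a's followed by exactly n b's."""
    n = len(s) // 2
    return len(s) % 2 == 0 and s == "a" * n + "b" * n

print(in_anbn("aabb"))  # -> True
print(in_anbn("aab"))   # -> False
```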

Computability theory deals primarily with the question of the extent to which a problem is solvable on a computer. The result that the halting problem cannot be solved by a Turing machine is one of the most important results in computability theory, as it is an example of a concrete problem that is both easy to formulate and impossible to solve using a Turing machine. Much of computability theory builds on the halting problem result.
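The standard proof is a diagonal argument, which can be sketched in code. The decider `halts` below is hypothetical, which is exactly the point: if a correct halting decider existed, the program constructed from it would contradict it when run on itself.

```python
# Illustrative sketch of the halting-problem diagonal argument.
# `halts` is a *hypothetical* decider claimed to predict whether a
# program halts; `make_trouble` builds the program that defeats it.

def make_trouble(halts):
    """Build the diagonal program that contradicts the claimed decider."""
    def trouble():
        if halts(trouble):
            while True:      # decider said "halts", so loop forever
                pass
        return "halted"      # decider said "loops", so halt immediately
    return trouble

# A decider claiming every program loops is refuted by its own diagonal:
trouble = make_trouble(lambda prog: False)
print(trouble())  # -> halted  (it halts, contradicting the claim)
```

Whichever answer a candidate decider gives on its diagonal program, that answer is wrong, so no correct `halts` can exist.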

Another important step in computability theory was Rice's theorem, which states that for all non-trivial properties of partial functions, it is undecidable if a Turing machine computes a partial function with that property.

Computability theory is closely related to the branch of mathematical logic called recursion theory, which removes the restriction of studying only models of computation which are reducible to the Turing model. Many mathematicians and computational theorists who study recursion theory refer to it as computability theory.

Complexity theory considers not only whether a problem can be solved at all on a computer, but also how efficiently it can be solved. Two major aspects are considered: time complexity and space complexity, which are respectively how many steps it takes to perform a computation, and how much memory is required to perform that computation.

In order to analyze how much time and space a given algorithm requires, computer scientists express the time or space required to solve the problem as a function of the size of the input problem. For example, finding a particular number in a long list of numbers becomes harder as the list of numbers grows larger. If we say there are n numbers in the list, then if the list is not sorted or indexed in any way we may have to look at every number in order to find the number we're seeking. We thus say that in order to solve this problem, the computer needs to perform a number of steps that grows linearly in the size of the problem.
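The example above can be sketched directly; `linear_search` and its step counter are illustrative names, not from the text. In the worst case the target sits at the end of the list (or is absent), so all n elements are examined.

```python
# Illustrative sketch: linear search over an unsorted list, counting
# comparisons as "steps". Worst case: every element is examined.

def linear_search(numbers, target):
    """Return (index of target or -1, number of comparisons made)."""
    steps = 0
    for i, value in enumerate(numbers):
        steps += 1
        if value == target:
            return i, steps
    return -1, steps

index, steps = linear_search([7, 3, 9, 4, 1], 1)
print(index, steps)  # -> 4 5  (target is last: all 5 elements checked)
```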

To simplify this problem, computer scientists have adopted Big O notation, which allows functions to be compared in a way that ensures particular aspects of a machine's construction do not need to be considered, only the asymptotic behavior as problems become large. So in our previous example, we might say that the problem requires O(n) steps to solve.
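A brief sketch of what Big O abstracts away (the two step-count functions below are invented for this illustration): constant factors can make the asymptotically worse algorithm look cheaper on small inputs, but the asymptotic classes O(n) and O(n²) decide which dominates as n grows.

```python
# Illustrative sketch: Big O ignores constant factors and lower-order
# terms, so 3n + 10 is O(n) while n*n/100 is O(n^2).

def steps_linear(n):
    return 3 * n + 10      # O(n), with a large-ish constant factor

def steps_quadratic(n):
    return n * n // 100    # O(n^2), with a small constant factor

for n in (10, 100, 10_000):
    print(n, steps_linear(n), steps_quadratic(n))
# For small n the quadratic count is lower, but by n = 10_000 it is far
# larger: asymptotic behavior, not constants, determines the class.
```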

Perhaps the most important open problem in all of computer science, the P versus NP problem, was posed by Turing Award winner Stephen Cook.