Minicomplexity theory is a mathematical theory that studies the complexity of *two-way finite
automata* (2FAs).

It relates to automata theory in much the same way that complexity
theory relates to computability theory:
moving beyond *computability questions*, of the form “*can this problem be solved by this
machine?*”, it focuses on problems that can indeed be solved and on *complexity questions*, of the form
“*how much of this resource does this problem require on this machine?*”.

In complexity theory, the machines are Turing machines of various modes (deterministic, nondeterministic, alternating, probabilistic, quantum, etc.), the main resource of interest is time, and the problems are represented by languages, all of which are decidable. Analogously, in minicomplexity theory, the machines are two-way finite automata of the same modes, the main resource of interest is size (i.e., the number of states), and the problems are represented by families of languages, all of which are regular.
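To make “size as a resource” and “families of languages” concrete, the following sketch uses a classic one-way example (not from this article): for the family B_n of binary strings whose n-th symbol from the end is 1, a nondeterministic automaton with n + 1 states suffices, while the subset construction (and, in fact, any deterministic automaton) requires 2^n states. The function names are illustrative, not standard.

```python
def nfa_step(states, symbol, n):
    """One step of the (n+1)-state NFA for B_n.
    State 0 loops on both symbols and nondeterministically guesses that the
    current '1' is the n-th symbol from the end; states 1..n then count."""
    nxt = set()
    for q in states:
        if q == 0:
            nxt.add(0)
            if symbol == '1':
                nxt.add(1)
        elif q < n:
            nxt.add(q + 1)
    return frozenset(nxt)

def reachable_dfa_states(n):
    """Subset construction: count the DFA states reachable from {0}."""
    start = frozenset({0})
    seen, frontier = {start}, [start]
    while frontier:
        s = frontier.pop()
        for a in '01':
            t = nfa_step(s, a, n)
            if t not in seen:
                seen.add(t)
                frontier.append(t)
    return len(seen)

for n in range(1, 6):
    print(n, reachable_dfa_states(n))  # prints n and 2**n for n = 1..5
```

The exponential gap in this one-way setting is what motivates asking the analogous, and much harder, questions for two-way machines.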

By analogy to complexity theory, where the central open question ‘P versus NP’ asks whether every
nondeterministic *Turing machine* is equivalent to a deterministic one which is at most polynomially *slower*
(Cook 1971), in minicomplexity theory the central open question
‘2D versus 2N’ asks whether every nondeterministic *two-way
finite automaton* is equivalent to a deterministic one which is at most polynomially *larger*
(Sakoda and Sipser 1978).
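To see why two-way head motion matters for size in the first place, a classic example (again not from this article) is the unary language of strings a^m with m divisible by two coprime numbers p1 and p2: a one-way DFA needs p1·p2 states, while a two-way DFA can make two sweeps and get by with roughly p1 + p2 states. The sketch below simulates such a two-sweep machine directly; its states are implicit in the (pass, counter) pairs, and the function name is illustrative.

```python
def two_pass_2dfa(w, p1=2, p2=3):
    """Simulate a two-way DFA with about p1 + p2 states that accepts
    a^m iff p1 | m and p2 | m (here p1=2, p2=3, so iff 6 | m)."""
    tape = '>' + w + '<'   # left and right endmarkers
    pos, r = 1, 0
    # pass 1: sweep right, counting the length mod p1
    while tape[pos] == 'a':
        r = (r + 1) % p1
        pos += 1
    if r != 0:
        return False       # reject at the right endmarker
    # return sweep: move left back to the left endmarker (one extra state)
    while tape[pos] != '>':
        pos -= 1
    pos, r = 1, 0
    # pass 2: sweep right, counting the length mod p2
    while tape[pos] == 'a':
        r = (r + 1) % p2
        pos += 1
    return r == 0          # accept at the right endmarker
```

The 2D versus 2N question asks whether a comparable trick, or any polynomial-size trick, always lets a deterministic two-way automaton match a nondeterministic one.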

For an introductory essay on the subject, presenting the main motivation, the early history, and the central formal concepts and open questions, see Kapoutsis (2009).