We will see one of the first results relating precisely the expressive power of a logic with a complexity class.

## Properties of structures and their complexities.

Fix a signature $\Sigma$. A *property* $P$ of finite structures over $\Sigma$ is just a class of structures over this signature. A property induces a language of words:

$$L_P=\set{w: w\text{ is an encoding of a structure }\str A\in P}.$$

The *computational complexity* of the property $P$ is the computational complexity of the language $L_P$.

**Example.** Consider the graph signature $\set{E}$. The property “$G$ is an undirected graph which is two-colorable” is in PTime, while the property “$G$ is an undirected graph which is three-colorable” is complete for NPTime.
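The gap between the two properties is worth seeing concretely. Two-colorability coincides with bipartiteness, which can be decided in linear time by breadth-first search. A minimal Python sketch (the function name and graph encoding are ours, chosen for illustration):

```python
from collections import deque

def two_colorable(n, edges):
    """Decide 2-colorability of an undirected graph on vertices 0..n-1
    by BFS, assigning alternating colors; runs in time O(n + |edges|)."""
    adj = [[] for _ in range(n)]
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    color = [None] * n
    for s in range(n):          # handle every connected component
        if color[s] is not None:
            continue
        color[s] = 0
        queue = deque([s])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if color[v] is None:
                    color[v] = 1 - color[u]
                    queue.append(v)
                elif color[v] == color[u]:
                    return False    # odd cycle found: not 2-colorable
    return True
```

No comparably efficient algorithm is known for three colors, in line with the NP-completeness mentioned above.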

## ESO logic.

A formula of *existential second-order logic* (ESO) is a formula of the form

$$\exists R_1\exists R_2\ldots\exists R_n \phi,$$

where $R_1,\ldots,R_n$ are relation symbols and $\phi$ is a first order formula which may use the symbols $R_1,\ldots,R_n$.

**Example.** The following formula holds in a graph $G$ if and only if it is 3-colorable. It is *monadic* – the symbols $R,G,B$ are unary.

\begin{multline*}
\exists R\,\exists G\,\exists B.\ \textit{part}(R,G,B)\land\forall x\,\forall y.\\
\big((x\in R\land y\in R)\lor (x\in G\land y\in G)\lor (x\in B\land y\in B)\big)\implies \neg E(x,y),
\end{multline*}

where $\textit{part}(R,G,B)$ expresses that the predicates $R,G,B$ form a partition of the universe.
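The abbreviation can be spelled out; one possible formalization (the text leaves it implicit) is:

$$\textit{part}(R,G,B)\;\equiv\;\forall x.\big((x\in R\lor x\in G\lor x\in B)\land\neg(x\in R\land x\in G)\land\neg(x\in R\land x\in B)\land\neg(x\in G\land x\in B)\big).$$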

The main result of this part is:

**Fagin’s Theorem.** Fix a signature $\Sigma$. A property of finite structures over $\Sigma$ is ESO-definable if and only if it is in NP.

One direction is easy: any ESO-definable property can be decided by a nondeterministic Turing machine, in polynomial time: first guess the relations — each of them has polynomial size — and then evaluate the underlying first-order formula in polynomial time on the expanded structure.
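The guess-and-evaluate procedure can be simulated deterministically by exhaustive search over the candidate relations, at exponential cost. A sketch for the monadic 3-colorability formula above (our illustration, not part of the proof; enumerating all triples of unary predicates $R,G,B$ amounts to enumerating all $3^n$ colorings):

```python
from itertools import product

def eso_three_colorable(n, edges):
    """Deterministic simulation of the ESO guess for 3-colorability:
    enumerate every assignment of one of three colors to each vertex
    (i.e. every triple of unary predicates partitioning the universe)
    and evaluate the first-order matrix on each candidate."""
    for coloring in product(range(3), repeat=n):
        # the partition condition holds by construction; it remains to
        # check the first-order part: no edge is monochromatic
        if all(coloring[u] != coloring[v] for u, v in edges):
            return True
    return False
```

A nondeterministic machine guesses one coloring instead of trying all of them, which is where the polynomial bound comes from.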

It therefore remains to prove that any property in NP can be defined by an ESO formula. Before showing this, we give some definitions.

## Relations with values in finite sets

We make the following convention. Let $Q$ be a finite set. If $\str A$ is a structure and $k$ a natural number, then a relation $R\subseteq \str A^k\times Q$ can be represented by a tuple $(R_q:q\in Q)$ of relations of arity $k$ over $\str A$: $$R(\bar x,q)\equiv R_q(\bar x).$$ (There is a more efficient encoding, using $O(\log |Q|)$ relations, but we will not need it.)
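The convention is easy to make concrete. A sketch in Python (names and the sample data are ours), representing a $Q$-valued relation as the tuple of its slices $R_q$:

```python
from itertools import product

def split(R, Q):
    """Represent a Q-valued relation R ⊆ A^k × Q as the family
    (R_q : q ∈ Q), where R_q = { x̄ : (x̄, q) ∈ R }."""
    return {q: {xs for (xs, q2) in R if q2 == q} for q in Q}

# a small example: k = 2, universe A = {0, 1, 2}, Q = {'p', 'q'}
A = range(3)
Q = {'p', 'q'}
R = {((0, 1), 'p'), ((1, 2), 'p'), ((2, 2), 'q')}
parts = split(R, Q)

# R(x̄, q) holds exactly when x̄ ∈ R_q
assert all(((xs, q) in R) == (xs in parts[q])
           for xs in product(A, repeat=2) for q in Q)
```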

We will therefore allow relational symbols for such $Q$-*valued relations* of arity $k$ (where $|Q|$ and $k$ are finite).

In the same way, we may consider $Q$-*valued formulas*, which are just $Q$-tuples of formulas. We will use the following syntax for a $Q$-valued formula with values in $Q=\set{p,q,r,\ldots}$: $$\left[\begin{array}{cc}p:&\phi\\q:&\psi\\r:&\rho\\\ldots&\ldots\end{array}\right].$$ The free variables of a $Q$-valued formula are all the free variables of its constituents. The semantics of a $Q$-valued formula with $k$ free variables is a $Q$-valued relation of arity $k$. Therefore, if $\phi$ is a $Q$-valued formula with free variables $\bar x$, then $\phi(\bar x,q)$ is equivalent to $\phi_q(\bar x)$.

Observe that the properties that a $Q$-valued relation $R$ is a function, or a partial function $R:\str A^k\to Q$, can be expressed by first-order formulas. If $f$ is a $Q$-valued partial function, then we write $f(\bar x)=q$ for $f(\bar x,q)$ (which in turn is encoded as $f_q(\bar x)$).
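For instance, being a partial function can be written as the first-order formula (one possible formalization):

$$\forall\bar x.\bigwedge_{q\neq q'\in Q}\neg\big(R_q(\bar x)\land R_{q'}(\bar x)\big),$$

and being a (total) function additionally requires $\forall\bar x.\bigvee_{q\in Q}R_q(\bar x)$. Both conjunctions are finite, since $Q$ is.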

## Adjacency matrix representation

It is often useful to encode a structure via its *adjacency matrix* rather than its *adjacency list* (which was considered previously). The adjacency matrix is defined for a linearly ordered structure $(\str A,<)$, as follows. Let $n$ be the cardinality of $\str A$, and let $r$ be the maximal arity of the relations of $\str A$. Observe that the linear ordering on $\str A$ induces a bijection between the elements of $\str A$ and the numbers $0,\ldots,n-1$, which will be viewed as digits in base $n$. Therefore, an $r$-tuple $\bar x$ of elements of $\str A$ defines a sequence of $r$ digits, which in turn defines a number between $0$ and $n^r-1$. We denote by $[\bar x]$ the value of this number.

If $R$ is a relation over $\str A$ of arity $l$, then by $[R]$ we denote the list of $n^r$ binary digits, where the digit at position $[x_r\ldots x_1]$ indicates whether the relation $R(x_l,\ldots,x_1)$ holds in $\str A$. The *adjacency matrix* representation is the concatenation of the strings $[R_1],[R_2],\ldots,[R_s]$, where $R_1,R_2,\ldots,R_s$ are all the symbols in the signature (in a predefined order).
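The two definitions above can be sketched directly in Python (function names are ours; elements are identified with the digits $0,\ldots,n-1$ via the linear order):

```python
from itertools import product

def tuple_value(xs, n):
    """[x_r ... x_1]: read a tuple of elements as a number in base n;
    xs lists the digits from most to least significant."""
    v = 0
    for x in xs:
        v = v * n + x
    return v

def encode_relation(n, r, arity, rel):
    """[R]: the string of n^r binary digits whose digit at position
    [x_r ... x_1] records whether R(x_arity, ..., x_1) holds; rel is
    the set of arity-tuples on which R holds."""
    bits = []
    # product enumerates tuples in lexicographic order, i.e. tuple xs
    # appears exactly at position tuple_value(xs, n)
    for xs in product(range(n), repeat=r):
        bits.append('1' if xs[r - arity:] in rel else '0')
    return ''.join(bits)
```

For example, for a two-element graph with the single edge $E(0,1)$ one gets `encode_relation(2, 2, 2, {(0, 1)}) == "0100"`: only position $[01]=1$ carries a one.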

Observe that the adjacency matrix representation can be computed from the adjacency list representation in polynomial time, and vice-versa, assuming that the signature is nontrivial, i.e., contains relational symbols of arity greater than $0$.

Consider again a property $P$ of finite structures. Earlier we defined $L_P$ as the language of encodings of structures with property $P$. Consider the similar language of encodings of structures with property $P$, via the adjacency matrix representation:

$$L_P^\textit{adj}=\set{w:\ w\text{ is the adjacency matrix representation of a finite structure }\str A\in P}.$$

The following lemma shows that the complexity of $L_P$ is roughly the same as the complexity of $L_P^\textit{adj}$.

**Lemma.** The languages $L_P$ and $L_P^\textit{adj}$ can be reduced to each other, via a PTime reduction, and even via a LogSpace reduction.

**Proof.**

We show how to convert an adjacency list representation into an adjacency matrix representation, in polynomial time (in fact, this will be a LogSpace reduction).

The adjacency list representation of a structure $\str A$ contains a list of identifiers of its elements. The order of appearance in this list induces a linear order $<$ on the universe of $\str A$. We may use this linear order to produce the adjacency matrix representation. Determining the $i$-th bit of this representation boils down to computing:

- The base-$n$ representation $i_1\ldots i_k$ of the number $i$, where $n$ is the cardinality of $\str A$,
- Finding the elements $x_1,\ldots,x_k$ of the structure $\str A$ corresponding – via the linear order $<$ – to the digits $i_1,\ldots,i_k$,
- Testing whether the appropriate relation $S(x_1,\ldots,x_k)$ holds in the structure $\str A$.

All these steps can be performed in logarithmic space. This finishes the reduction in one direction. The other direction is similar.
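The three steps can be mirrored in a short sketch (our code; it keeps only counters and digits of $i$, which is what makes the LogSpace bound plausible, although Python sets are used here for the final lookup):

```python
def matrix_bit(i, n, relations):
    """Compute the i-th bit of the adjacency matrix representation:
    base-n digits of i, digits-to-elements via the linear order, and a
    relation lookup. `relations` lists (arity, set-of-tuples) pairs in
    the signature's order; each block [R_j] has length n**r, for r the
    maximal arity."""
    r = max(arity for arity, _ in relations)
    block, offset = divmod(i, n ** r)   # which relation, which cell
    arity, rel = relations[block]
    # base-n representation of the offset, r digits, most significant first
    digits = []
    for _ in range(r):
        offset, d = divmod(offset, n)
        digits.append(d)
    digits.reverse()
    # only the last `arity` digits index the relation
    return 1 if tuple(digits[r - arity:]) in rel else 0
```

On the two-element graph with single edge $E(0,1)$, only bit $1$ of the four-bit block is set.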

**Corollary.** The computational complexity of a property $P$ of structures is the same – up to LogSpace reductions – as the computational complexity of the language $L_P^\textit{adj}$.

**Proof of Fagin's Theorem.**

It remains to prove the “only if” implication. Consider any property $P$ of finite structures which is decidable by a nondeterministic Turing machine $M_0$ in polynomial time, i.e., such that the language $L_P$ is decidable in NPTime. Then there also exists a nondeterministic polynomial-time Turing machine $M$ which recognizes the language $L_P^\textit{adj}$. This is because $L_P^\textit{adj}$ reduces to $L_P$ in PTime: the machine $M$ can input the adjacency matrix representation of $\str A$, produce from it the adjacency list representation, and feed it to $M_0$. If $M_0$ works in NPTime, then so does $M$.

Let $B$ be the work alphabet and $Q$ the state space of the machine $M$. Choose a natural number $k$ such that every accepted input structure of cardinality $n>1$ is accepted by some run of $M$ of length at most $n^k$. Moreover, we suppose that $k$ is greater than the maximal arity of the relations in the signature.

We construct an ESO formula which holds in $\str A$ if and only if for some linear order $<$ on $\str A$, the machine $M$ accepts the adjacency matrix representation of $(\str A,<)$. The formula will guess:

- A binary relation $<$,
- A relation $f$ of arity $2k$ with values in $B\cup B\times Q$.

Firstly, we use an FO formula to assert that $<$ is a linear ordering of the universe. For a tuple $\bar x$ of elements of $\str A$, when writing $[\bar x]$, we implicitly refer to this linear order (and use the interpretation via base-$n$ encodings described earlier). Observe that the relation “$[\bar x]$ is the successor of $[\bar y]$” is definable by a first-order formula.
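One way to define this successor relation (a sketch; here $x_1$ is the least significant digit, and $\textit{min}$, $\textit{max}$ and $\textit{succ}_1$ abbreviate the evident first-order formulas over $<$ for being the least element, the greatest element, and the element-wise successor):

$$\textit{succ}(\bar x,\bar y)\;\equiv\;\bigvee_{i=1}^{k}\Big(\textit{succ}_1(y_i,x_i)\land\bigwedge_{j<i}\big(\textit{max}(y_j)\land\textit{min}(x_j)\big)\land\bigwedge_{j>i}x_j=y_j\Big).$$

Incrementing a number in base $n$ turns a maximal suffix of greatest digits into least ones and increments the next digit, which is exactly what the disjunction over $i$ expresses.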

Secondly, we use an FO formula to assert that $f$ is a partial function (with values in $B\cup B\times Q$).

Thirdly, we verify that $f$ encodes a run of the Turing machine, where $f(\bar x,\bar y)=b$ is interpreted as

the tape symbol at the $[\bar y]$-th position of the $[\bar x]$-th configuration is $b$,

and $f(\bar x,\bar y)=(b,q)$ additionally asserts that

the head of the machine is located at this position, in state $q$.

Testing that $f$ indeed encodes a run of the machine $M$ can be easily expressed in first-order logic.
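For instance, if the transitions of $M$ are presented as a finite set $\Delta$ of legal local windows $(a,b,c,d)$ (an assumption on the presentation of $M$; $d$ is the symbol that may appear below the triple $a,b,c$ in the next configuration), the consistency of consecutive configurations can be sketched as:

$$\forall\bar x\,\bar x'\,\bar y^-\,\bar y\,\bar y^+.\ \big(\textit{succ}(\bar x',\bar x)\land\textit{succ}(\bar y,\bar y^-)\land\textit{succ}(\bar y^+,\bar y)\big)\implies\bigvee_{(a,b,c,d)\in\Delta}\big(f(\bar x,\bar y^-)=a\land f(\bar x,\bar y)=b\land f(\bar x,\bar y^+)=c\land f(\bar x',\bar y)=d\big),$$

where $\textit{succ}$ is the tuple-successor relation discussed above. Boundary positions require the usual minor adjustments.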

The last thing that remains is to guarantee that the initial configuration of the run encoded by $f$ corresponds to the encoding of the structure $(\str A,<)$. This can be done by a first-order formula, by the following lemma.

**Lemma.** There is a first-order formula which holds in a linearly ordered structure $(\str A,<)$ equipped with a $\set{0,1}$-valued function $g$ of arity $k$ if and only if $g(\bar x)$ is the $[\bar x]$-th bit of the encoding of the structure $(\str A,<)$.

**Proof.**

The formula expresses that $g(x_k\cdots x_1)=1$ holds if and only if:

- the number $j=[x_k\cdots x_{l+1}]$ belongs to $\set{0,\ldots,s-1}$, where $s$ is the number of symbols in the signature and $l$ is the maximal arity of the symbols,
- the relation $R_j(x_{l'},\ldots,x_1)$ holds in $\str A$, where $l'$ is the arity of $R_j$.

A dual condition is needed for $g(x_k\cdots x_1)=0$.

To conclude, the ESO formula expressing that $M$ accepts an encoding of the structure $\str A$ is of the form:

$$\exists\!<\!\exists f.\begin{cases}<&\text{is a linear order}\\f&\text{is a partial function of arity $2k$ with values in $B\cup B\times Q$}\\f&\text{encodes a run of $M$}\\&\text{whose first configuration is the encoding of $(\str A,<)$}\\&\text{and in which an accepting state is reached}\end{cases}$$

This finishes the proof of Fagin’s Theorem.

**Exercise.** Deduce the Cook–Levin Theorem from Fagin’s Theorem (see Theorem 2.6 in Grädel’s book):

**Theorem (Cook-Levin).** SAT is NP-complete.