How can we know for sure which "assertion" is meta-level?

by user 170039   Last Updated September 11, 2019 19:20 PM

Probably this is a very naive question, but I couldn't really find an answer to this question while browsing through the internet and so I am asking this here. In this book, the authors write the following,

The notion of consequence, however, belongs to the meta-language of a logical discourse. If the pair $\langle F,\vdash\rangle$ is taken as the definition of a logic, the items in $F$, the sentences (or formal strings of symbols belonging to the alphabet, the well-formed formulae) fall within the object-level entities while the relation $\vdash$, which is a relation between a subset $X$ of $F$ and an element $α$ of $F$, belongs to the meta-language. $X\vdash α$, if properly written, becomes $``X"\vdash ``α"$, which is an assertion, viz. the wff named $``\alpha"$ is a consequence of the set of wffs named $``X"$. This assertion speaks something about specific object linguistic items $X$ and $α$. So, this is a meta-linguistic assertion.

I agree with the analysis here, but it is specific to the way we are viewing $\vdash$. I am wondering: must it be the case that the notion of logical consequence can only be expressed in the meta-language? Why so? More specifically, how can we be sure that the notion of logical consequence cannot be expressed in the object language?



1 Answer


The question you've asked, at least how I'm interpreting it, is very deep and gets at the core of what much of logic in the 20th century was about.

The tl;dr is that if your system is "powerful enough", you can (kind of) encode $\vdash$ inside the system, but this comes with some caveats. I won't formalize what I mean by this, but rest assured that the natural numbers $\mathbb{N}$ with their usual signature ($0,1,+,\times$) are "powerful enough", and many other logics are too. Let's run through how one might encode $\vdash$ inside the language of $\mathbb{N}$, and at the end we can discuss the caveats.


It turns out you can encode formal strings of natural numbers as natural numbers themselves, in such a way that all of the formal manipulations needed to do logic are computable functions from $\mathbb{N}^k \to \mathbb{N}$. As a (naive) example, we might give each symbol a number, and then simply write the numbers next to each other, separated by $0$ (to ensure we know where one symbol stops and the next starts). Consider the following coding:

  • $0 \to 1$
  • $1 \to 2$
  • $+ \to 3$
  • $\times \to 4$
  • $\land \to 5$
  • $\lor \to 6$
  • $\lnot \to 7$
  • $\forall \to 8$
  • $\exists \to 9$
  • $= \to 11$
  • $x_i \to n$ (where $n$ is the $i$th number bigger than 11 without any $0$s. So $x_1 = 12$, $x_9=21$, and so on.)

Then we could "encode" the logical formula $\forall x_1 . \exists x_2 . x_1 + x_2 = 0$ as $8012090130120301301101$.
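As a quick sanity check, the coding above can be written out in a few lines of code. This is a hedged sketch: the `CODES` table, `var_code`, and `encode` are my own illustrative names, not part of the answer's formal development.

```python
# Naive Gödel-style coding from the table above.
CODES = {
    "0": "1", "1": "2", "+": "3", "*": "4",
    "and": "5", "or": "6", "not": "7",
    "forall": "8", "exists": "9", "=": "11",
}

def var_code(i):
    """Code for x_i: the i-th number bigger than 11 with no digit 0."""
    n = 11
    count = 0
    while count < i:
        n += 1
        if "0" not in str(n):
            count += 1
    return str(n)

def encode(symbol_codes):
    """Join the symbol codes with a single 0 as separator."""
    return int("0".join(symbol_codes))

# forall x1 . exists x2 . x1 + x2 = 0
formula = [CODES.get(s, s) for s in
           ["forall", var_code(1), "exists", var_code(2),
            var_code(1), "+", var_code(2), "=", "0"]]
print(encode(formula))  # 8012090130120301301101
```

Note that the codes themselves contain no digit $0$, which is exactly what makes the single-$0$ separator unambiguous when decoding.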

Now, recall a formal proof is just a sequence of formulas such that every formula in the sequence is either an axiom, or follows from an inference rule and formulas coming before it. Since we already know how to encode formulas as natural numbers, it isn't too surprising that we can also encode these formal proofs as natural numbers (say, by using two zeros to separate one logical formula from the next in our sequence, or something similar). The surprising fact is that there is a (computable) function $\text{isProof} : \mathbb{N} \times \mathbb{N} \to \mathbb{N}$ such that

$\text{isProof}(p,x) = \begin{cases} 1 & \text{if $p$ and $x$ encode a proof and a formula, and if, after decoding, $p$ is a proof of $x$} \\ 0 & \text{otherwise} \end{cases}$

This lets us encode the notion of $\vdash$ inside of our language! In fact, you can even write a more complicated (but still computable!) function $\text{isProofAssuming} : \mathbb{N} \times \mathbb{N} \times \mathbb{N} \to \mathbb{N}$ where, loosely, $\text{isProofAssuming}(T,p,x) \iff T \vdash x$ (and $T \vdash x$ is witnessed by a proof $p$). Of course, we need a way to encode a set of assumptions, but I think by now you get the idea of how these sorts of encodings can be done.
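To make the double-zero convention concrete, here is a sketch (with a made-up helper name, assuming each formula is already coded as above so that its code never contains two adjacent zeros):

```python
# Hypothetical sketch: a proof is a sequence of formulas. Each formula's
# code contains only isolated 0s as separators, so "00" can unambiguously
# mark the boundary between one formula and the next.
def encode_proof(formula_codes):
    """Join the codes of the formulas in the sequence with a double zero."""
    return int("00".join(str(c) for c in formula_codes))

print(encode_proof([812, 911]))  # 81200911
```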


So why, then, at the top of this answer, did I say that $\vdash$ can only kind of be encoded inside a logic?

Notice that $\text{isProof}$ requires we actually provide a proof $p$. If we want to ask the question "is $x$ provable?", one can prove that our only hope is to, for every possible $p$, run $\text{isProof}(p,x)$. Then, if $x$ is provable, we will eventually find a proof $p$ and our code will halt and tell us yes. If instead $x$ is not provable, our code will loop forever, and we will never know if $x$ is or is not provable. For this reason, $\vdash$ is not fully encoded inside the system. When we ask "$\vdash x$?", we will only ever receive the answer "yes" or "maybe", and we have no idea how long we will have to wait for a "yes" (if one is coming at all).
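The search procedure just described might be sketched like this. Here `is_proof` is a stand-in for the computable $\text{isProof}$ above; the real function is not implemented here, and the toy version at the bottom exists only so the sketch runs.

```python
def provable(x, is_proof):
    """Semi-decision procedure for provability: try every candidate proof
    code p = 0, 1, 2, ...  Halts and returns a proof if one exists; loops
    forever if x is unprovable."""
    p = 0
    while True:
        if is_proof(p, x):
            return p   # a proof was found: the answer is "yes"
        p += 1         # no proof yet: keep searching, possibly forever

# Toy is_proof for demonstration only: pretend p proves x exactly when p == 2*x.
toy_is_proof = lambda p, x: p == 2 * x
print(provable(5, toy_is_proof))  # 10
```

The asymmetry in the answer's "yes or maybe" remark is visible directly in the control flow: the `return` is reachable only through a successful check, while the unprovable case never escapes the `while` loop.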


This was not officially part of your question, but we're so far in at this point that I feel obligated to mention it:

The final, crushing, blow comes if one wants a notion of truth inside the system, not just a notion of provability. Gödel showed that (for any encoding scheme you want!) there is a natural number $n$ such that, when you decode $n$, it says "There is no proof of the formula encoded by $n$". Then we can ask if some $p$ satisfies $\text{isProof}(p,n)$. If there is such a $p$, then there is a proof of a falsehood, since $n$ now encodes a false statement. If instead there isn't such a $p$, then $n$ encodes a true statement... but it has no proof.

A similar argument can be made to show that the only way to write $\text{Provable}(x)$ is to simply try $\text{isProof}(p,x)$ on every $p$, which runs into the problem I discussed above.


Sorry this answer got so long. I wanted to give real justification for why $\vdash$ is only kind of representable inside some systems. Unfortunately, providing some amount of justification required going into a bit of detail so that we could see (at least a bit of) all the moving parts.

I hope this helped ^_^

HallaSurvivor
September 11, 2019 19:04 PM
