
## Poincaré, Heisenberg, Gödel. Some limits of scientific knowledge

Seminar of the Science, Reason and Faith group.

Fernando Sols. Pamplona, May 25, 2010

**Summary:**

The 20th century has left us with the formulation of two important limitations of scientific knowledge. On the one hand, the combination of Poincaré's non-linear dynamics and Heisenberg's uncertainty principle leads us to an image of the world where reality is, in many ways, indeterminate. On the other hand, Gödel's theorems reveal the existence of mathematical theorems that, while true, cannot be proved. More recently, Chaitin has shown, inspired by the work of Gödel and Turing, that the randomness of a mathematical sequence cannot be proved (it is "undecidable"). I reflect here on the consequences of the indeterminacy of the future and the undecidability of randomness. It is concluded that neither external (intelligent) design nor randomness is provable. Science can suggest the existence of design, but it cannot prove it. Nor can it prove its absence. And questions of finality must be left out of the scientific discussion.

Full text:

**Can science offer an ultimate explanation of reality?**

**Author**: Fernando Sols. Departamento de Física de Materiales, Universidad Complutense de Madrid

**Published in**: Francisco Molina (ed.), Science and Faith. En el camino de la búsqueda. Madrid: CEU Ediciones.

**Date of publication**: 2014

**Brief CV**

Fernando Sols has been Full Professor of Condensed Matter Physics since 2004 and is director of the Department of Materials Physics of the Universidad Complutense de Madrid. He graduated in Physics (University of Barcelona, 1981) and holds a PhD in Physics (Universidad Autónoma de Madrid, 1985). He has been a Fulbright scholar at the University of Illinois at Urbana-Champaign, Senior Associate Professor at the UAM, director of the Instituto Nicolás Cabrera (UAM) and member of the editorial board of the New Journal of Physics (IOP-DPG). He is a Fellow of the Institute of Physics (UK). His research interests lie in theoretical physics problems related to the dynamics and transport of electrons and cold atoms, and to macroscopic quantum phenomena.

**Summary**

Science cannot offer a complete explanation of reality because of the existence of fundamental limits to the knowledge it can provide. Some of these limits are internal, in the sense that they refer to concepts that belong to the domain of science but are beyond its reach. The 20th century has left us with the formulation of two important limitations of scientific knowledge. On the one hand, the combination of Poincaré's non-linear dynamics and Heisenberg's uncertainty principle leads us to a picture of the world where reality is, in many ways, indeterminate. On the other hand, Gödel's theorems reveal the existence of mathematical theorems that, while true, cannot be proved. More recently, Chaitin has shown, inspired by the work of Gödel and Turing, that the randomness of a mathematical sequence cannot be proved (it is "undecidable"). I reflect here on the consequences of the indeterminacy of the future and the undecidability of randomness. I conclude that the question of the presence or absence of finality in nature is fundamentally outside the scope of the scientific method.^{1}

**Keywords**: Incompleteness, chaos, quantum indeterminacy, falsification criterion, randomness, design, scientific method.

**Introduction: On the limits of scientific knowledge**

One may ask whether science can potentially offer an ultimate explanation of reality. An equivalent question is whether there are limits to scientific knowledge. Some limits are obvious. We can call them provisional. They are the limits that any well-conceived scientific project aims to push back. Beyond those limits, there are concepts or realities that belong to the domain of science and that are eventually attainable by science. It is a matter of time before we discover those scientific truths that are currently unknown to us but that we or our descendants will eventually discover. Or maybe not, but they are goals in principle achievable by science. We can say that there is a broad consensus on the existence of these provisional limits of science. A not uncommon mistake is to invoke (explicitly or implicitly) these limits to prove or suggest the existence of God. Such a "God of the gaps" is the lord of a realm that shrinks inexorably as science progresses. Although not always explicitly invoked by this name, it is a concept particularly dear to materialist philosophers (because it is easy to disprove) and to naive Christian apologists alike. The "God of the gaps" is not the God of the Christian faith, which is much deeper and more subtle.

There are other limits of scientific knowledge whose explicit acceptance requires the adoption of a specific philosophical position. We may call them the outer limits of science. Beyond these limits there are realities which do not belong to the domain of science and which (therefore) cannot be reached by science. Among these realities we can include the concepts of God, soul, creation from metaphysical nothingness, consciousness as subjective experience, human rights (ethics), or beauty (aesthetics). Of course, scientistic materialist worldviews will tend to dismiss at least some of these concepts as unreal or purely mental constructs. But for other people these concepts describe ideas whose possible reality intrigues them.

There is yet a third type of limitation of scientific knowledge that has only recently been discovered, which refers to what we might call the internal limits of science. Beyond these limits lie realities that belong to the domain of science but are beyond its reach. These inner limits have been discovered by science itself. The two main examples already belong to the legacy of the 20th century, namely physical indeterminacy and mathematical incompleteness. In the remainder of this article, we describe both ideas and argue that they lead us to the conclusion that discussion about the existence or absence of finality in nature is fundamentally beyond the reach of the scientific method.

**Uncertainty, incompleteness and the question of finality**

Since the publication of The Origin of Species in 1859 by Charles Darwin, there has been an important discussion on the presence or absence of design in nature. During the 20th century, progress in cosmology made it possible to take this discussion beyond its initial limits, restricted to the evolution of life, to include the history of the universe. Intellectual discussion has intensified especially in recent decades following the proposal of so-called "intelligent design" as a possible scientific programme that would aspire to demonstrate the existence of purpose in biological evolution [Dembski, 2006]. In this often unnecessarily acrimonious controversy, chance combined with natural selection on the one hand, and intelligent design on the other, are contrasted as possible driving mechanisms for the progress of species. Chance is certainly an essential concept in various scientific disciplines, not only in evolutionary biology but also in quantum physics and statistical physics. However, it is surprising that, within the aforementioned controversy, little attention has been paid to the fact that, within the field of mathematics, chance is not provable. More precisely, Gregory Chaitin has shown that the randomness of a mathematical sequence is in general undecidable, in the sense given to this adjective by the mathematicians Kurt Gödel and Alan Turing. The epistemological consequences of this observation are far-reaching.

In this chapter we will argue that Chaitin's work, combined with the current knowledge of quantum physics, leads inevitably to the conclusion that discussion on the presence or absence of finality in nature is outside the realm of the scientific method, although it may be of philosophical interest. To do so, we will review some key moments in the history of physics, mathematics and the philosophy of science. Along the way we will talk about Newton's physics, Poincaré's non-linear dynamics, Heisenberg's uncertainty principle, the collapse of the wave function, Gödel's theorems, Turing's halting problem, Chaitin's algorithmic information theory, Monod's philosophy of biology, the intelligent design proposal, and Popper's falsification criterion. The main thread of our argument will be the attempt to answer a fundamental question, as simple to formulate as it is difficult to answer: "What or who determines the future?" We hope that these reflections will be clarifying and will help to put each question in its place, distinguishing between what is established scientific knowledge and what is philosophical reflection around it.

**Practical Indeterminacy in Classical Physics: Newton and Poincaré**

In his monumental work Philosophiae naturalis principia mathematica (1687), Isaac Newton (1642-1727) formulated the law of universal gravitation and the laws of classical mechanics that bear his name. The study of these laws by means of the infinitesimal calculus which he himself created^{2} leads to a deterministic picture of the world, according to which the future of a dynamical system is completely determined by its initial conditions, namely the position and linear momentum^{3} of each of the particles that make up the system, provided its law of forces is known.^{4} This mechanistic view took hold, backed by the impressive success with which Newton's mechanics simultaneously explained the motion of the planets and gravity in ordinary life, in what can be considered the first unification of forces. The deterministic picture of nature took root with great force and, although, as we shall see, it is not borne out by modern physics, it still has some supporters today.

At the end of the 19th century, Henri Poincaré (1854-1912) tackled the three-body problem and concluded that the evolution of such a dynamical system is in general chaotic, in the sense that small variations in the initial conditions give rise over time to very different trajectories. The longer the time interval over which we wish to predict the evolution of the system with a given accuracy, the more accurately we need to know the initial conditions, i.e. the smaller the initial error in our knowledge of the position and linear momentum of the system has to be.^{5} The conclusion is that, in the context of classical mechanics, the regularity of the two-body problem, whose paradigm would be the case of a planet revolving around the sun, is more the exception than the rule. In the non-linear dynamics developed by Poincaré, most dynamical systems are chaotic, which means that predicting their long-term behaviour is, in practice, impossible. This brings us to the concept of practical indeterminacy in classical physics.
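This sensitivity to initial conditions is easy to observe numerically. The sketch below uses the logistic map, a standard one-dimensional toy model of chaos (my own illustrative choice, not Poincaré's three-body problem): two trajectories that start a billionth apart soon become completely uncorrelated.

```python
# Chaotic divergence under the logistic map x -> r*x*(1-x), with r = 4
# (a standard textbook example of deterministic chaos).
def logistic_trajectory(x0, r=4.0, steps=50):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = logistic_trajectory(0.300000000)
b = logistic_trajectory(0.300000001)  # initial conditions differ by 1e-9

# The gap between the two trajectories grows roughly exponentially
# until it is of order 1, i.e. total loss of predictability:
for step in (0, 10, 25, 40):
    print(step, abs(a[step] - b[step]))
```

The same deterministic law, applied to two practically indistinguishable starting points, yields long-term futures that share nothing: this is Poincaré's practical indeterminacy in miniature.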

One might still think that, while determinism is rejected for practical reasons, it can survive as a fundamental concept. That is, one could argue that the future of nature and the universe, including ourselves, is determined, but in such a way that in practice we can only make reliable predictions in the simplest cases. Such determinism would, for all practical purposes, be indistinguishable from the apparent indeterminism in which we think we operate. In the next section we will see that quantum mechanics discards this deterministic picture not in a practical way, but in a fundamental way.

**Intrinsic Indeterminism in Quantum Physics: Heisenberg**

During the first third of the 20th century, quantum mechanics was discovered and formulated. It offers a picture of the microscopic world that in many ways departs radically from our intuitions based on the ordinary knowledge of the macroscopic world. It can be said that, in relation to classical (pre-quantum and pre-relativistic) physics, quantum mechanics represents a greater conceptual breakthrough than that introduced by the other great revolution in 20th century physics, the theory of relativity. The latter teaches us that space, time, mass, energy and gravity are not independent concepts juxtaposed but are interrelated by subtle mathematical equations that we now understand well. Relativistic mechanics in principle allows position and momentum to be simultaneously well defined and in general has no practical consequences in our ordinary life beyond the use of nuclear power and GPS devices.

Quantum mechanics makes stronger claims. Among others: position and momentum cannot be simultaneously well defined; in the microscopic world there is no qualitative difference between particle and wave; the fundamental equation cannot be extrapolated to the macroscopic scale because it predicts superpositions that we do not observe in practice; only the statistical behaviour of experiments is successfully predicted; microscopic systems are radically altered when observed. On the other hand, the consequences of quantum physics for ordinary life are numerous. These include: the stability of matter, and the rigidity of solids in particular, are unthinkable without quantum mechanics; chemistry, magnetism, electronics, as well as all derived technologies, are only possible thanks to the quantum properties of matter. Finally, the indeterministic picture offered by quantum physics allows us to think that our experience of free will can be real and not merely subjective.

For our discussion, we concentrate on one particular aspect of quantum mechanics: Heisenberg's uncertainty principle.^{6} Formulated in today's parlance, the uncertainty principle is an immediate consequence of the wave mechanics of Schrödinger,^{7} but it is associated with Heisenberg's name because it was he who first deduced it from his matrix mechanics (equivalent to Schrödinger's) and, surprised by the result, tried to find an intuitive explanation. The uncertainty principle tells us that, due to its wave nature, a particle cannot have its position and momentum well defined simultaneously. Concretely, if Δx and Δp are the uncertainties in the position and linear momentum, respectively, they satisfy the inequality

ΔxΔp≥h/4π (1)

where h is Planck's constant. An immediate consequence is that if the state of a particle is such that, for example, the position is very well defined (Δx→0), then necessarily the uncertainty in the linear momentum has to be large (Δp→∞) for inequality (1) to be satisfied.
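To get a feel for the magnitudes involved, one can plug illustrative numbers into inequality (1). The electron example below is my own choice, not from the text: confining an electron to atomic dimensions already forces a very large velocity uncertainty.

```python
import math

h = 6.626e-34     # Planck's constant (J*s)
m_e = 9.109e-31   # electron mass (kg)

def min_dp(dx):
    """Minimum momentum uncertainty allowed by inequality (1):
    dp >= h / (4*pi*dx)."""
    return h / (4 * math.pi * dx)

dx = 1e-10            # confine an electron to ~1 angstrom (atomic size)
dp = min_dp(dx)       # minimum momentum uncertainty, ~5.3e-25 kg*m/s
dv = dp / m_e         # corresponding velocity uncertainty, ~5.8e5 m/s
print(f"dp >= {dp:.2e} kg*m/s  ->  dv >= {dv:.2e} m/s")
```

A position pinned down to one angstrom thus implies a velocity spread of hundreds of kilometres per second, which is why the classical notion of a simultaneous sharp position and momentum fails at the atomic scale.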

If we combine Poincaré's nonlinear dynamics with Heisenberg's uncertainty principle, we come to the conclusion that, in order to successfully predict the increasingly distant future, there comes a point at which it is necessary to know the initial conditions with an accuracy that violates the uncertainty principle. The reason is that the condition Δx→0 and Δp→0 (necessary for the prediction of the far future) is incompatible with inequality (1). We thus conclude that, within the picture of the world offered by modern quantum physics, the prediction of the far future is impossible, not in a practical sense but in a fundamental sense: the physical information about what a chaotic system will do in the far future is nowhere to be found.^{8} Bearing in mind that non-chaotic dynamical systems are in general an exception and always an approximation to reality, we can state that the future is open.^{9}

As a significant example, we can point out that, for a system as macroscopic as Hyperion, Saturn's elongated moon of about 300 km mean diameter and about 6×10^18 kg mass, whose rotation is chaotic, Zurek has estimated that quantum mechanics prevents predictions about its rotation for times longer than 20 years [Zurek, 1988].^{10}

Although not supported by modern physics, the image of a deterministic world still has some defenders. In the context of quantum physics, hidden-variable theories propose the existence of unmeasurable variables whose precise values would determine the future.^{11} Among their proponents have been Albert Einstein, David Bohm and, more recently, Gerard 't Hooft (b. 1946). In 1964, John S. Bell (1928-1990) showed that an important class of hidden-variable theories, the so-called local theories, could be subjected to experimental test. He proposed an experiment for which a local hidden-variable theory predicts the fulfilment of certain inequalities, now known as Bell inequalities. In contrast, conventional quantum mechanics allows the violation of these inequalities. The main experiments were carried out by Alain Aspect (b. 1947) in the early 1980s and yielded results contrary to the predictions of local hidden-variable theories and consistent with the conventional interpretation of quantum mechanics. Much of the emerging technology of quantum communication (which allows the use of essentially undecipherable codes) is based on the violation of such inequalities.

Despite the great scientific prestige of some of their advocates, hidden-variable theories occupy a relatively marginal place in current physics.^{12} The low relevance of hidden-variable theories can be understood as an application of Ockham's razor: between two competing theories with similar explanatory power, the simpler is chosen.^{13}

**Uncertainty and indeterminacy**

This section is somewhat technical; its purpose is to make a few points that are generally missing from quantum mechanics texts. The aim is to discuss a small but important difference between two concepts that are often presented as practically synonymous, and to see how, within quantum physics, one implies the other. Although the discussion is accessible to a wide audience, the reader who is not particularly interested in this subtlety and is comfortable thinking of uncertainty and indeterminacy as synonyms can skip this section.

Consider a physical system that is in a state which, following a common convention in quantum mechanics, we describe by the symbol |ψ⟩. Suppose that this quantum system has an associated physical quantity S that is measurable. This "observable" (or measurable physical quantity) S can take two values, s1 and s2, and we denote by the symbols |s1⟩ and |s2⟩ two states in which the observable S is perfectly defined (without uncertainty). It is then said that |s1⟩ and |s2⟩ are eigenstates of the observable S with eigenvalues s1 and s2. The wave nature of quantum particles is reflected in the linearity principle of quantum mechanics. According to this principle, if |s1⟩ and |s2⟩ are two possible states of a quantum system, then a linear combination of them is also a possible state.^{14} In particular, a realisable state is (ignoring normalisation)

|ψ⟩=|s1⟩+|s2⟩ (2)

This state |ψ⟩ no longer has the observable S well defined, since it is a linear combination of two eigenstates of S with different eigenvalues, s1 ≠ s2. We can say that there is uncertainty in the value of the observable S in the state |ψ⟩.^{15}

Now suppose we want to make a measurement on this state and, in particular, to measure the value of the observable S. In quantum mechanics, the ideal measurement of an observable can only yield eigenvalues of that observable. Since the physical quantity S is not well defined in state (2), the result is a priori uncertain. Specifically, there is a probability ½ that the result is s1 and a probability ½ that it is s2.

More precisely, the measurement of an observable is described as follows. Suppose we couple the quantum system to a macroscopic apparatus whose initial state we denote by |0⟩. The interaction between the quantum system and the macroscopic apparatus is such that, after a certain time, the apparatus evolves to the state |A1⟩, if the quantum system is in the state |s1⟩, or to the state |A2⟩, if the system is in the state |s2⟩, where |A1⟩ and |A2⟩ are eigenstates of an observable A of the macroscopic apparatus that can be measured with the naked eye, e.g. the position of the needle of an ammeter. Since evolution in quantum mechanics is linear, if the system is initially in the combination (2), then the final state of the "universe" (the joint system consisting of the quantum system that is measured and the macroscopic apparatus that measures it) is also a linear combination of the respective final states. Concretely, we can write the evolution of the universe during the measurement in the following way

(|s1⟩+|s2⟩)|0⟩→|s1⟩|A1⟩+|s2⟩|A2⟩ (3)

In (3), the state on the left is one in which the system is in state (2) and the apparatus is in state |0⟩. This is the initial state, prior to the start of the interaction that implements the measurement. The system and the apparatus are still uncoupled. Therefore the joint state can be written as a "product" of states of each of them separately. The state on the right is the state of the universe after the interaction, when the measurement has already been performed. It is said to be an "entangled" state because, due to the mutual interaction they have experienced, the state of the apparatus is correlated with that of the system. Unlike the initial state, the state of the universe can no longer be "factored" into one state of the system and one of the apparatus.

Here we come to a central aspect of quantum mechanics: projection in the measurement process. If we interpret scheme (3) as an evolution of the quantum state of the "universe", we see that it is a deterministic evolution in the sense that, considering the universe as a whole, the initial state determines the final state. It is true that both states contain an uncertainty in the value of the observable S of the system, but even though they carry this uncertainty, we can say that one state evolves towards the other in a deterministic way.

The key point is that, although quantum mechanics allows for a linear combination as indicated on the right-hand side of (3), in practice this combination is not observed in nature, because |A1⟩ and |A2⟩ are macroscopically distinct states.^{16} That is, the final state after the measurement does not behave like the one on the right-hand side of (3), but randomly takes the form |s1⟩|A1⟩ or |s2⟩|A2⟩, with probability ½ for each possibility. Schematically we can write

|s1⟩|A1⟩+|s2⟩|A2⟩ → |s1⟩|A1⟩ or |s2⟩|A2⟩, with probability ½. (4)

We then speak of the "collapse of the wave function" or the von Neumann projection.^{17} Quantum mechanics correctly predicts the statistics of the possible outcomes. That is, it predicts that, if we perform the same experiment many times, always preparing the "universe" in the same state [the one on the left of (3)], the result will be similar to a coin tossed many times to see if it comes up heads or tails: as we increase the number of trials, the statistical prediction of half heads and half tails will be fulfilled with increasing relative precision, by virtue of the law of large numbers.
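The law of large numbers invoked here is easy to illustrate numerically. The sketch below uses an ordinary pseudorandom coin as a stand-in for the quantum statistics (the binomial statistics are the same); the seed and trial counts are arbitrary illustrative choices:

```python
import random

random.seed(1)  # fixed seed so the run is reproducible

def relative_error(n_trials, p=0.5):
    """Deviation of the observed heads fraction from p after n_trials
    simulated measurements, each with outcome probability p."""
    heads = sum(random.random() < p for _ in range(n_trials))
    return abs(heads / n_trials - p)

# The statistical prediction is fulfilled with increasing relative
# precision, the typical deviation shrinking roughly as 1/sqrt(N):
for n in (100, 10_000, 1_000_000):
    print(n, relative_error(n))
```

Each individual "toss" remains unpredictable; only the frequencies over many repetitions converge, which is exactly the sense in which quantum mechanics predicts statistics but not individual outcomes.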

The efficiency with which quantum mechanics predicts the statistics of a measurement made many times under identical initial conditions contrasts with its notorious inability to predict what happens in a particular experiment.^{18} We say then that the result of a particular trial is indeterminate. Therefore, we conclude that the uncertainty in the initial knowledge of the observable S has as a consequence the indeterminacy of the result obtained when trying to measure it. That is, if we see the measurement process as a dynamic evolution governed by the interaction between system and apparatus, the initial uncertainty translates into indeterminacy of the future result of the measurement. Noting that the performance of a particular experiment generates information that did not exist before, Wheeler^{19} referred to the individual quantum measurement as an "elementary act of creation" [Wheeler, 1983].

Uncertainty is a natural consequence of wave mechanics, governed by the Schrödinger equation. However, indeterminacy requires the projection postulate, which is not contained in Schrödinger mechanics but is introduced as an additional postulate of quantum mechanics.

We end this section with a word of caution. The analogy with the tossing of a coin, being very visual, has the danger of inducing the mistaken image that indeterminacy is not fundamental but practical, since, in principle, the coin evolves in a classical and deterministic way. To admit such a picture for a strongly quantum system would be tantamount to invoking hidden-variable theories which, as we have indicated above, have been ruled out by experiment in a wide range of cases.

**What or who determines the future?**

We have seen that, within the picture offered by modern quantum physics, the future is not determined. In particular, this picture rules out the deterministic view according to which the future would be completely determined by the initial conditions and the operative laws of force. The indeterminacy advocated by the conventional view of modern physics is fundamental and not merely practical, as it might be in a context of deterministic chaos compatible with classical mechanics.

This indeterminacy is compatible with our personal experience of free will. That is, it allows us to think that our experience of freedom can be real and not merely subjective.^{20} If the result of a quantum process can be indeterminate, why can't some neural events, which may ultimately be amplifications of microscopic processes in which quantum indeterminacy plays an essential role, be indeterminate too?^{21}

We move on in our attempt to answer the fundamental question that gives this section its title and which is the common thread of this chapter: "What or who determines the future?" We have rejected the determinism of initial conditions. We are left with the possibility of invoking some design, or finality, which would condition the evolution of a system towards the achievement of a certain goal.

There is one type of design, which we will call here internal, that is not particularly controversial. Except for recalcitrant determinists, almost all of us agree that the set of physical actions that lead to the construction of a car or a building, or to the achievement of any simple goal of ordinary life such as riding a bicycle or washing a dish, take place in a certain way because one or several rational beings, with the capacity of free will, decide to achieve that previously desired goal. Using Aristotelian language, an efficient cause (the rational and free being) pursues a certain final cause (for example, the contemplation of the washed dish). Although the existence of internal design is generally accepted, it does not entirely resolve the above question, because in nature there are many phenomena that are not directly induced by humans.

There is another type of design, which we can call external, that can be controversial, because it suggests the idea of transcendence. If internal design reflects the action of the freedom of human beings, external design would reflect, to use theological language, the action of divine providence, meaning God's influence on the world without the need to manifestly alter the habitual behaviour of nature. The image of an indeterminate world leaves room for - but of course does not prove - the existence of freedom and providence. Free will can act through quantum processes with a priori indeterminate results that probably take place in our brain. We mentioned in the previous section a concrete proposal of the neurobiologist Eccles.

The possible channel of providence is more difficult to delimit, probably because it is more general. However, the ubiquity of chaotic macroscopic systems, starting with meteorology and continuing with the formation of the solar system and primitive cosmological fluctuations, strongly suggests the possibility that the long-term indeterminacy of these systems is ultimately quantum in nature and therefore intrinsically open to a diversity of evolutions. Returning to the aforementioned example of a system as massive as Hyperion, if we try to anticipate what the rotational motion of this satellite of Saturn will be like in a century, we realise that a multitude of drastically different evolutions are possible, all of them compatible with the laws of physics and with the most accurate knowledge we can have of its current state. The reason is, as we indicated above, that a detailed prediction of Hyperion's rotation a century from now would require knowing its current state with such high precision that, even for a system of some 10^18 kg, the Heisenberg uncertainty principle would be violated. Whatever the detailed rotational behaviour of Hyperion a century from now, no future observer will be particularly surprised, and yet the information about what Hyperion will do then is nowhere to be found right now, as it has no possible physical support.

Within the context of evolutionary biology, the idea of an intelligent design has been proposed in recent years, the existence of which would be necessary to explain the appearance of complex biological structures that are supposedly very improbable a priori. This intelligent design is not very different from the external design just mentioned. The problem with the intelligent design programme is that it presents itself as a scientific programme when, as we will argue later, questions of purpose in nature are outside the scope of the scientific method. Intelligent design may be an interesting philosophical or theological proposal, but it is not a scientific one. We will also see that the same can be said of the absence of design.

To explain the apparently unpredictable behaviour of complex systems, and especially when talking about biological evolution, the concept of chance, which can be understood as indeterminacy without finality, is frequently invoked. Chance is a ubiquitous concept in interpretations not only of evolutionary theory, but also of statistical physics and quantum physics. We have seen before how the result of a particular quantum measurement can be highly indeterminate. We then understand that the results of the measurement are random and that quantum mechanics only predicts - and very successfully - the statistics of the results provided that - importantly - the experiment can be repeated many times under identical initial conditions. It escapes no one's notice that this last condition is difficult to fulfil in biological evolution and in our personal lives. Quantum mechanics has little predictive power in experiments of uncertain result that can only be performed once.

Thus, chance is a useful concept when analysing the statistical behaviour of many processes, each of which is associated with a certain indeterminacy. The problem with chance is that, as we shall see, it can never be assigned with certainty to a particular sequence of events that have been properly quantified. The reason for this impossibility is of a fundamental nature, as it is a consequence of Gödel's theorems, which constitute perhaps the most important result in the history of knowledge.

In the following sections we try to understand the meaning and epistemological consequences of the impossibility of the demonstration of chance.
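Chaitin's point can be previewed with a crude computational analogue: data compression. A short description certifies that a sequence is not random, but failure to find one certifies nothing, since the compressor may simply have missed the pattern. The sketch below uses Python's zlib as a computable stand-in for the uncomputable Kolmogorov complexity; the sequences and sizes are my own illustrative choices:

```python
import random
import zlib

def compressed_size(bits: str) -> int:
    """Length in bytes of the zlib-compressed bit string: a crude,
    computable upper bound on its descriptive complexity."""
    return len(zlib.compress(bits.encode(), 9))

periodic = "01" * 5000  # an obviously patterned 10000-bit sequence
random.seed(0)
noisy = "".join(random.choice("01") for _ in range(10000))

# A short description *disproves* randomness for the periodic string...
print("periodic:", compressed_size(periodic), "bytes")
# ...but a large compressed size for `noisy` proves nothing: zlib may
# simply have failed to find a pattern. No algorithm can certify that
# a given sequence is random; that is the content of undecidability.
print("noisy:   ", compressed_size(noisy), "bytes")
```

The asymmetry is the heart of the matter: non-randomness is demonstrable (exhibit the short description), while randomness never is.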

**Gödel's theorems**

In the 1920s, David Hilbert (1862-1943), perhaps the most influential mathematician of the time, proposed a programme whose goal was to prove that, for axiomatic arithmetic and axiomatic set theory [Fernández-Prida 2009, Leach 2011]:

(i) It should be possible to demonstrate the consistency of its axioms, i.e. to demonstrate that no contradiction (i.e. a formula and its negation) logically follows from them (i.e. according to the rules of logic).

(ii) It should be possible to prove completeness, i.e. that every formula expressible in the language of the theory is logically derivable from the axioms of the theory, or its negation is.

(iii) It should be possible to demonstrate its decidability, i.e. the existence of an algorithm that, executed on any formula expressible in the language of the theory, always terminates: with the result "yes" when the formula is logically derivable from the axioms of the theory, and with the result "no" when it is not.

The young Austrian mathematician Kurt Gödel threw himself into the project and, first of all, proved the completeness of first-order logic, i.e. that a formula is logically derivable from a set of formulae if and only if it is true in every semantic interpretation of the theory in which those formulae are true (this is equivalent to saying, as a consequence, that a logical theory is consistent if and only if it has a model). Encouraged by this result, he decided to tackle the problems proposed by Hilbert. To everyone's surprise and disappointment, he gave a negative answer to these expectations, both in the case of arithmetic and in the case of set theory. More precisely, Gödel proved in 1931 that, for Peano's arithmetic (or any extension of it by addition of axioms) and also for the set theory axiomatised by Zermelo and Fraenkel (or any extension of it by addition of axioms), assuming that the theory is consistent, the following hold:

1) It is incomplete, i.e. there exists at least one formula such that neither it nor its negation belongs to the theory. That is, neither it nor its negation is derived from the axioms.

Now arithmetic, i.e. the set of true formulae about the natural numbers, is obviously a complete theory (for a formula about the natural numbers is either true itself, or its negation is true). Therefore neither Peano's arithmetic, nor any extension of it which is also axiomatic, is the whole of arithmetic: there will always be formulae which belong to arithmetic (i.e. are true) but are not deducible from these axioms.

2) The formula expressing that the theory is consistent is not a theorem of the theory, i.e. it is not deducible from its axioms.

This is especially serious for set theory, because the formulas that follow from the axioms of set theory are the theorems of mathematics.^{22} We are all internally convinced that mathematics is consistent. If it turns out that it is not, this could be proved one day, when a contradiction appears. If it is consistent, as we all believe, we will never be able to prove it within mathematics itself, but only by turning to another logical theory, whose consistency we would in turn have to prove by turning to yet another logical theory, and so on, without ever proving consistency.

3) It is undecidable, i.e. there is no algorithm for deciding whether or not an arbitrary formula written in the language of the theory is derivable from the axioms of the theory.

These surprising results, of a negative nature, demolished Hilbert's expectations, according to which mathematics would be reduced to a mechanical game in which, on the basis of axioms and rules of logical inference, it would be possible to demonstrate, with sufficient patience and skill (or with the help of a modern computer), all the mathematical formulae that are true; and according to which there would also be an algorithm - today we would say a computer program - that would decide, for any (well-defined) mathematical formula, whether or not it is deducible from a given set of axioms.

Gödel's unexpected results forced in a way a paradigm shift in mathematical thinking: the dream of mathematical certainty and the systematic exploration of mathematical truths vanished. Our idea of what mathematics is becomes analogous to our idea of what a physical theory is. If in a physical theory universal laws are postulated and it is expected that there will be no experiments to refute them, in mathematics, axioms are postulated and it is expected that they will not lead to contradictions.

Of the surprising theorems stated above, the one of interest for our present discussion is the undecidability theorem, and not the one proved by Gödel himself but an analogous one proved by Turing. The proof proposed by Gödel for his theorems was very abstract and did not seem to have important practical consequences for mathematical activity.^{23} The English mathematician Alan Turing (1912-1954), considered the father of computer science, had the intuition to take Gödel's undecidability theorem to a more concrete field: the behaviour of a computer executing a given program.^{24} In the 1930s computers did not exist, but Turing anticipated their possible existence. This ideal concept of a universal computer that handles sequences of zeros and ones according to prescribed rules is known as the Turing machine.

Turing proved that there is no machine (today we would say a program) that decides whether a self-contained program^{25} halts or not. That is to say, there is no machine that, fed as input with the number encoding any self-contained program, always emits a response: 1 if the self-contained program halts, and 0 if it does not.

In this context, a computer would be a deterministic physical object designed to evolve discontinuously through a discrete set of states following physical rules. We can consider the evolution of the computer under the action of a program that includes not only the logical rules but also the input data, and ask the simple question of whether the computer will ever stop. In some simple cases the answer may be clear, but in general it is not obvious. Turing showed that the question of whether a given program stops is undecidable. It is then said that the halting problem is undecidable.
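The core of Turing's argument can be sketched in a few lines of Python. This is a hypothetical illustration, not part of the original text: given any candidate halting decider, one can mechanically construct a program on which that decider must be wrong, which is why no correct decider can exist.

```python
def make_contrary(halts):
    """Given any candidate halting decider `halts` (a function claiming
    to predict whether a program halts), build a program that does the
    opposite of whatever `halts` predicts about it (diagonalization)."""
    def contrary():
        if halts(contrary):
            while True:      # decider said "halts": loop forever instead
                pass
        # decider said "does not halt": halt immediately
    return contrary

# Any concrete decider is defeated by its own counterexample:
def always_no(program):
    return False             # naive decider: claims nothing ever halts

def always_yes(program):
    return True              # naive decider: claims everything halts

c = make_contrary(always_no)
c()                          # halts immediately, refuting always_no
# make_contrary(always_yes) would loop forever if run, refuting always_yes.
```

Since `contrary` behaves contrary to the decider's verdict about itself, no decider can be correct on every program; that is the content of the halting theorem.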

Since Turing's work, classes of undecidable problems have been identified, along with problems whose undecidability is suspected but not proven. A very important example of an undecidable problem is that of proving the randomness of a mathematical sequence.

**Chance**

Chance is a concept that, in a vague form, has been invoked since ancient times. It appears in the writings of the Greek classics, especially Aristotle, and even in the Bible.^{26} In general, chance is understood in a dynamic way, as indeterminism without design.^{27} At the end of the 20th century, Gregory Chaitin gave a more static but probably more fundamental definition of chance (or randomness) that can be applied to mathematical sequences [Chaitin, 1975, 1988, 1997, 2005, 2006]. This need not be a major limitation if we think that, ultimately, science is reducible to physics, physics is expressed in mathematics, and mathematics is reducible to arithmetic.^{28} Within the algorithmic information theory proposed and founded by Chaitin, it is easier to start by defining the absence of randomness. We say that a mathematical sequence (for example, of zeros and ones) is not random if it can be compressed, that is, if there is a shorter sequence that, fed to a Turing machine, yields the long sequence as its result. The short sequence is then said to contain the information of the long sequence in compressed form. A long sequence is said to be random if it is not compressible, that is, if there is no short sequence that determines it.
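Chaitin's definition can be loosely illustrated with an off-the-shelf compressor (a sketch of mine, not from the original text), using Python's `zlib` as a stand-in for "a shorter program". Note the asymmetry: successful compression proves a sequence is not random, while failure to compress proves nothing.

```python
import random
import zlib

def compressed_size(data: bytes) -> int:
    """Length of a zlib description of `data`: an *upper bound* on the
    shortest program generating it, never a proof of randomness."""
    return len(zlib.compress(data, 9))

regular = b"01" * 50_000                  # an obviously repetitive sequence
rng = random.Random(0)                    # seeded pseudo-random generator
noisy = bytes(rng.getrandbits(8) for _ in range(100_000))

print(compressed_size(regular))           # a few hundred bytes: compressible
print(compressed_size(noisy))             # about 100,000 bytes: no pattern found
```

Notice that `noisy` defeats zlib and yet is generated by three short lines of code (seed plus generator), so in Chaitin's sense it is not random at all: failing one particular compressor settles nothing, which is the practical face of the undecidability discussed below.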

A canonical example of a non-random sequence is the sequence of digits of the number π. Up to ten trillion decimals of this famous transcendental number have been calculated. If we print out, say, the first million decimals, wasting a lot of paper, the appearance is one of total randomness, even under the detailed scrutiny of various checks carried out with the help of a computer. And yet the sequence is not random, because we can create a program, much shorter than a million digits, that when run by a computer will give us the first million digits. For example, Leibniz proved that four times the alternating sum of the reciprocals of the odd numbers tends to π. The more odd numbers we include, the more digits of π we obtain. But the length of the program depends very weakly on the number of consecutive odd numbers that, starting with 1 and following an ascending order, we are willing to include in the truncated series.

Chaitin has shown that the question of whether or not a long sequence of numbers is random is in general undecidable, in the sense of Gödel and Turing. There is no general algorithm that, when applied to an arbitrary sequence, yields a yes or no answer to the question of whether the sequence is random.

The consequence is that, while chance is a useful, and even necessary, hypothesis in many contexts, it can never be assigned with complete certainty to any mathematical sequence, and thus to any physical or biological process. This consideration may not have important practical implications, but it certainly has important epistemological consequences: insofar as chance is understood as indeterminacy in the absence of design, it can never be legitimate to present the absence of design as a scientific conclusion. The existence of chance may be a reasonable working hypothesis, a defensible philosophical interpretation, but it cannot be presented as an established scientific fact when questions of principle, such as the presence or absence of finality in nature, are being debated.

The concept of chance is not provable in the strict sense, since it cannot be assigned with absolute certainty to any process. Chance can only be invoked as a phenomenological concept (in the sense that this word is used in physics).

**Popper's falsification criterion**

In his work Logik der Forschung (1934), the philosopher and theorist of science Karl Popper (1902-1994) proposes that the demarcation line distinguishing a genuinely scientific theory from others that are not is the possibility of being falsified, that is, the possibility of carrying out an experiment among whose possible results there would exist a priori at least one that contradicts a prediction of the theory [Popper, 1985]. In the logic of scientific discovery proposed by Popper, the theories of natural science are formulated by means of universal statements (of the type "for all... it holds that..."), so they can be refuted, in principle, if a single counterexample is found that fails to fulfil a logical consequence derived from the theory.^{29}

Universal statements can be falsified but not verified, for the simple reason that their verification would require checking an infinity of particular cases, something clearly unfeasible. Conversely, existential statements (of the type "there exists a... that fulfils...") are verifiable but not falsifiable, since their negation is a universal statement that, as we have pointed out, cannot be verified. Particular statements (of the type "my car has the property of being between 3 and 5 metres long") are both verifiable and falsifiable, because both the statement and its negation can be checked by means of a finite number of experiments.

According to Popper's criterion, we can never be absolutely certain about the veracity of a scientific theory, since we can only falsify it. However, when the predictive ability of a theory reaps numerous successes over decades, without a single experiment forcing it to be substantially revised, we can achieve almost complete certainty about the theory. This is the case, for example, with atomic and quantum theories. Having started as bold conjectures in the early 19th and 20th centuries, respectively, they are now part of the established knowledge on which a huge amount of science and technology is based. We have as little doubt about the existence of atoms and molecules as we do about the sphericity of the Earth.

The universal statements that make up a scientific theory are proposed on the basis of the empirical verification of numerous particular (or singular) statements following an inductive process. This requires that the particular statements describe legitimate certainties, both in terms of the mathematical language used and in terms of their claim to correspond to reality and our ability to confirm it. For example, when we summarise a series of empirical observations in the statement "the Earth describes an elliptical path around the Sun", we are implicitly assuming the veracity of several assumptions. Among them, that we are able to measure, with some precision, the relative position of the Earth with respect to the Sun at various instants, and that we can identify the trajectory with an ellipse, within a margin of tolerance, since various factors prevent this ideal behaviour. Implicitly, we are also assuming something that seems obvious but is very important for our discussion: we are admitting that the mathematical concept of ellipse is well defined and that we are entitled to state that, within a margin of error, a set of experimentally observed points form an ellipse.

This last assumption, the one that allows us to associate a series of empirically obtained numbers with a mathematical object, is the one that cannot be adopted when a supposed universal law invokes chance. The reason is that, as we have pointed out, chance cannot be assigned with certainty to any mathematical sequence. And here the nuance of "within a margin of error" cannot be invoked. Suppose we take the second million decimal places of the number π. Its appearance is totally random, and yet we know that it is a radically non-random sequence.

Of course, this observation is compatible with the fact that, for many practical purposes, the second million digits of π can be taken as random. However, the above point is important when we refer to laws that invoke randomness with a claim to universality, especially if the association with randomness is used to reach metaphysical conclusions (such as the absence of design in nature), and even more especially if such philosophical propositions are presented as part of established scientific knowledge. Again, this observation is compatible with the fact that chance is a useful, even essential, hypothesis in many contexts of science. However, it is not a scientific datum that can be used to draw philosophical conclusions.

**Chance in the interpretation of evolutionary biology**

The scientific evidence in favour of the historical continuity and genetic kinship of the various biological species is overwhelming, comparable to the confidence we have in the validity of atomic theory [Ayala 1997, 2006]. However, for reasons we have already advanced, the same cannot be said of chance, an element always included in descriptions of evolutionary biology. The problem is not methodological, since, as we have pointed out, chance is a useful and essential working hypothesis in many fields of science, in particular in quantum physics and in the evolutionary biology that now concerns us. The problem arises when the role of chance is considered sufficiently established to be taken to the realm of principles, in the domain where philosophical ideas are debated.

In 1970, the French biologist Jacques Monod (1910-1976) published his work Le hasard et la nécessité (Essai sur la philosophie naturelle de la biologie moderne), which has had a great influence on biological thought over the last half century. Monod contrasts chance and natural selection as the two driving mechanisms of evolution, the former indeterministic and the latter deterministic. He identifies chance with indeterminism without project, but he never manages to define it quantitatively, apart from some reference to its possible quantum origin (which does not solve the mathematical problem either). The lack of a precise definition does not prevent Monod from frequently invoking chance as an essential concept. He presents it as the only possible source of genetic mutations and of any novelty in the biosphere, and then states that chance is the only hypothesis compatible with experience. As it may seem that I am exaggerating, I reproduce below a sufficiently self-contained paragraph. After describing some types of genetic mutations, he states^{30}:

"We say that these alterations are accidental, that they occur at random. And since they are the only possible source of modifications of the genetic text, the sole repository, in turn, of the organism's hereditary structures, it necessarily follows that chance alone is at the origin of all novelty, of all creation in the biosphere. Pure chance, chance alone, absolute but blind freedom, at the very root of the prodigious edifice of evolution: this central notion of modern biology is no longer one hypothesis among others that are possible or at least conceivable. It is the only conceivable one, as the only one compatible with the facts of observation and experience. And nothing warrants the supposition (or the hope) that our conceptions on this point should or even could be revised."

It is understandable that Professor Monod was unaware of the work of Chaitin, whose results on the fundamental indemonstrability of chance, although begun in the 1960s, do not yet seem to have reached biologists. It is more difficult to justify his ignoring, firstly, the old intuition (prior to Chaitin's work) that chance is, if not impossible, at least difficult to prove and, secondly, Popper's falsification criterion, knowing that it is difficult to devise an experiment or observation that yields as a conclusive result the absence of chance (at least Monod did not propose one), i.e. that the chance hypothesis is not refutable. We will expand on this question below.

**Design and chance are outside the scope of the scientific method**

Relating Chaitin's work on randomness to the discussion on finality, the Austrian mathematician Hans-Christian Reichel (1945-2002) made the following observation [Reichel, 1997]^{31}:

"Is the evolution of life random or is it based on some law?...The only answer that mathematics can give has just been indicated: the hypothesis of randomness is unprovable in principle, and, conversely, the teleological thesis is irrefutable in principle".

This logical conclusion can be seen as a strength of the theories of design, since they cannot be refuted, but it can also be understood as a weakness, since, according to Popper's criterion, a theory that is irrefutable in principle cannot be scientific.

We conclude that, because (i) chance cannot be verified for a particular succession of (conveniently mathematized) natural events, then, equivalently, (ii) finality cannot be disproved as a general law that purports to describe many such sequences.^{32} We have reached this conclusion for very fundamental reasons rooted in Gödel's theorems, even though it may have seemed intuitive to many before.

Similarly, we may ask whether it is legitimate to claim (iii) that design cannot be verified for a particular sequence of natural events, or equivalently, (iv) that the chance hypothesis is not refutable as a universal law applicable to a wide class of sequences. To prove these last two equivalent statements, it may seem that we lack a fundamental theorem of the type invoked in the previous paragraphs. However, the weakness of the chance hypothesis is not so much that it is virtually irrefutable when invoked as an ingredient of a general law, but that it is fundamentally unverifiable when assigned to any singular event properly characterised by a mathematical sequence. Popper's falsifiability criterion emphasises that, to be considered scientific, a universal assertion must be open to refutation by the hypothetical observation of a singular event contradicting the general proposal. However, that criterion assumes that the general law can at least be verified in a finite number of individual cases that provide the basis for induction. This last requirement cannot be satisfied by the chance hypothesis, for fundamental reasons anchored in Gödel's theorems.

In short, Chaitin's work on the undecidability of chance leads to the conclusion that the hypothesis of design is irrefutable as a general law while the hypothesis of chance is unverifiable in any particular case. Both types of assumptions are beyond the reach of the scientific method.

We can reach a similar conclusion by following a more intuitive reasoning not linked to fundamental theorems. For this we imagine a conversation between two scientists who are also philosophers of nature.

Suppose we (amicably) confront Alberto and Beatriz. We show them two long sequences of numbers that have been generated by a mechanism whose interpretation is controversial. Some say the mechanism is random; others think that someone with a bit of patience designed the sequences by hand. Nor are they told whether these sequences have been chosen at random among many others generated by the same mechanism, or whether they have been chosen on purpose to complicate the discussion. Alberto is convinced that the mechanism is random; Beatriz, that there is a designer. Neither is willing to budge. Of the two sequences, the first looks random, with no clear pattern, while the second shows obvious repetitive patterns. The two sequences describe completely different natural phenomena; in other words, the debates about the randomness of each of the sequences are independent. However, they have in common that Alberto argues for the randomness of both and Beatriz for the design of both. Both defend their positions passionately, although each honestly strives to be rational and objective.

The discussion begins. Neither is aware of Chaitin's work on Gödel and Turing. They discuss the first sequence. Alberto says that it is obviously random, as it has no clear pattern. Beatriz argues, on the contrary, that the first sequence is designed, although not in an obvious way. According to her, the designer has tried to give it an appearance of randomness, trying to avoid any repetitive pattern or correlation in general. They do not reach an agreement.

If they knew Chaitin's work, the discussion would not change decisively. Alberto would continue to say that the first sequence is random, although he would admit that he cannot prove it. Beatriz would insist on the designer's skill at disguising the design and would be pleased to recall the essential impossibility of proving the randomness of the sequence. Again they would fail to reach an agreement on the origin of the sequence.

They now move on to discuss the second sequence. Beatriz says that it has obviously been designed, as it shows clear patterns that are easy to program. Alberto defends the randomness of the sequence and argues that among many random sequences there is always a non-zero probability that one of them will show some repetitive patterns. Moreover, by virtue of the physical and biological significance they attribute to that second sequence of numbers, Alberto claims that those non-random-looking patterns are necessary for the existence of both debaters: if the sequence had not shown those regular patterns, the two of them would not have come into existence and would not be there to debate it. Therefore, one should not be surprised that the sequence shows certain regularities, since they are a necessary condition for the existence of the debaters. Moreover, he adds, it is possible that there are many other worlds, past or future, in which the equivalent of that second sequence is truly random, but those worlds cannot give birth to rational beings who meet to discuss their design. Again they do not reach an agreement.

In neither case do they reach an agreement, and there seems to be no experiment or observation that can resolve their discrepancies. The discussion presented is obviously a caricature of a real debate. However, it is easy to find in it patterns of reasoning that are often heard in current debates about the presence or absence of design in natural processes, be they biological or cosmological. When there is a strong philosophical motivation for maintaining an interpretation, there is always an argument to defend it against any experimental appearance. But this is natural because, in the discussion on finality, there can be no decisive experiment or observation.

We are driven to the conclusion that debates about the presence or absence of finality are outside the scope of the scientific method.^{33} In any empirical scenario, even assuming that both contenders agree on what the experimental evidence seems to suggest at first sight, the disadvantaged opponent will always have an argument to deny the interpretation that seems to be winning. We have argued that the apparent irreducibility of the chance-design discussion is of a fundamental character, since it can be seen as a consequence of Gödel's theorems.

It seems more constructive that, in their scientific work, the two debating researchers concentrate on choosing, in each context, the working hypothesis that most stimulates the progress of knowledge, leaving to the realm of philosophical interpretation the considerations of finality, which can be debated with the tools of reason but not with those of the scientific method.

**Conclusion**

We have seen that physical indeterminacy and mathematical incompleteness represent two internal limits of scientific knowledge, one of their consequences being the fundamental inability of the scientific method to settle the question of the existence of finality in nature. If science explicitly acknowledges that it cannot reach all realities that nominally fall within its domain, then it is science itself that reveals that science cannot explain all of reality. We could now also invoke those metaphysical concepts that do not even belong to the domain of science. But that exercise is no longer necessary if one just wants to answer the question "Can science offer an ultimate explanation of reality?" Science itself is telling us that the answer is: no.

Whether one wants to invoke this accepted limitation of science itself to infer the existence of realities outside the domain of science, with an element of immateriality, is already a matter of philosophical choice. Many will find it natural to conclude that the proven existence of internal limits of science reinforces the notion that there are also external limits, that is, that there are realities not fundamentally associated with the domain of matter.

I would like to thank Gregory Chaitin, Javier Leach, Anthony Leggett, Miguel Angel Martín-Delgado, Javier Sánchez Cañizares, Ignacio Sols, Ivar Zapata and Wojciech Zurek for the interesting conversations I have had with them on the various issues discussed here. Posthumously, I would also like to acknowledge conversations with John Eccles and Rolf Landauer. This acknowledgement does not imply agreement or disagreement, on the part of the aforementioned persons, with the theses presented in this chapter. All possible errors and inaccuracies are my responsibility.

**References**

Ayala, F. J. (1997). Teoría de la evolución. Barcelona: Planeta.

Ayala, F. J. (2006). Darwin y el diseño inteligente. Bilbao: Ediciones Mensajero.

Chaitin, G. (1975). Randomness and Mathematical Proof. Sci. Am. 232, pp. 47-52.

Chaitin, G. (1988). Randomness in Arithmetic. Sci. Am. 259, pp. 80-85.

Chaitin, G. (1997). Number and randomness: algorithmic information theory - new results on the foundations of mathematics. In: Driessen, A., and Suarez, A. eds. Mathematical Undecidability, Quantum Nonlocality and the Question of the Existence of God. Dordrecht (the Netherlands): Kluwer Academic Publishers, p. 15.

Chaitin, G. (2005). Meta Math! The Quest for Omega. New York: Vintage Books.

Chaitin, G. (2006). The limits of reason. Sci. Am., March 2006, 74-81.

Dembski, W. A. (2006). Diseño Inteligente. Madrid: Homo Legens.

Eagle, A. (2005). Randomness is unpredictability. British Journal for the Philosophy of Science, 56 (4), 749-790.

Eccles, J. C. (1992). Evolution of consciousness. Proc. Natl. Acad. Sci. USA 89, pp. 7320-7324; Beck F., Eccles J. C. (1992). Quantum aspects of brain activity and the role of consciousness. Proc. Natl. Acad. Sci. USA 89, pp. 11357-11361.

Fernández-Prida, José (2009). Lógica matemática. Madrid: Marova.

Koch, K., Hepp, K. (2006). Quantum mechanics in the brain. Nature 440, 611-612.

Landauer, R. (1991). Information is physical. Physics Today, May 1991, p. 23.

Leach, J. (2011). Mathematics and Religion. Santander: Sal Terrae.

Monod, J. (2007). El azar y la necesidad. Barcelona: Tusquets Editores.

Popper, K. (1985). La lógica de la investigación científica. Madrid: Tecnos.

Popper, K. (1986). El universo abierto. Madrid: Tecnos.

Reichel, H. C. (1997). How can or should the recent developments in mathematics influence the philosophy of mathematics? In: Driessen, A., and Suarez, A. eds. Mathematical Undecidability, Quantum Nonlocality and the Question of the Existence of God. Dordrecht (the Netherlands): Kluwer Academic Publishers, p. 3.

Smith, K. (2011). Taking aim at free will. Nature 477, 23-25.

Wheeler, J. A. (1983). Law without Law. In: Wheeler, J. A., and Zurek, W. H. eds. Quantum Theory and Measurement, eds. J. A. Wheeler and W. H. Zurek, Princeton University Press, Princeton (1983) 182.

Zurek, W. H. (1998). Decoherence, Chaos, Quantum-Classical Correspondence, and the Algorithmic Arrow of Time. Physica Scripta T76, pp. 186-198.

**Notes**

(1) Many of the ideas presented in this article are already discussed in [F. Sols, Can Science offer an ultimate explanation of reality? Revista Pensamiento (ICAI, Universidad Pontificia de Comillas, Madrid)], [F. Sols, Uncertainty, incompleteness, chance, and design, in Intelligible Design, M. M. Carreira and Julio Gonzalo, eds., World Scientific (Singapore, 2013), in press; http://arxiv.org/abs/1301.7036], [F. Sols, Heisenberg, Gödel y la cuestión de la finalidad en la ciencia, in Ciencia y Religión en el siglo XXI: recuperar el diálogo, Emilio Chuvieco and Denis Alexander, eds., Editorial Centro de Estudios Ramón Areces (Madrid, 2012)].

(2) Infinitesimal calculus was developed in parallel by his contemporary Gottfried Leibniz (1646-1716).

(3) Product of mass times velocity, also called quantity of motion.

(4) More precisely, we can say that the position x(t) and linear momentum p(t) at time t are determined by the position x(0) and momentum p(0) at the initial instant t=0. For a many-particle system in more than one dimension, the variables x,p can be interpreted as multidimensional vectors whose components are the positions and momenta of each of the particles in the three directions of space.

(5) The error in position and momentum at the initial instant, Δx(0) and Δp(0), must tend to zero so that, at a much later time t (tending to infinity), the prediction has an error in these variables, Δx(t) and Δp(t), equal to a previously fixed value.

(6) Werner Heisenberg (1900-1976).

(7) The mechanics that is described by the wave equation named after the physicist Erwin Schrödinger (1887-1961), who proposed it in 1925.

(8) Rolf Landauer (1927-1999) used to say that "information is physical" [Landauer, 1991]. The corollary is that if there is no physical medium, there is no information. Information emerges as the various possible future evolutions take shape.

(9) Indeterminacy as a basis for an open future is defended by the philosopher Karl Popper (1902-1994) in his books The Open Universe: An Argument for Indeterminism [Popper, 1986] and Die Zukunft ist offen (1985), the latter written together with the Austrian zoologist Konrad Lorenz (1903-1989).

(10) The translational movement is much more stable.

(11) Although hidden variable theories have generally been motivated by the desire to restore determinism to the world view, strictly speaking the characteristic of such theories is realism, i.e. the perfect simultaneous definition of all physical variables. In fact, there are stochastic hidden variable models where realism is not accompanied by determinism.

(12) Prestige and marginality are compatible in this case: a scientist can enjoy well-deserved prestige earned through successes in conventional science while, in another field and motivated by his philosophical preferences, he defends theoretical proposals that are difficult or impossible to verify experimentally and that are not supported by the majority of the scientific community.

(13) In the literal words of William of Ockham (1288-1348): entia non sunt multiplicanda praeter necessitatem (entities should not be multiplied beyond necessity).

(14) This statement is not true for some conserved physical quantities (such as electric charge), owing to so-called superselection rules. We assume that S does not belong to this class of observables.

(15) We can think of a molecular orbital that combines two atomic orbitals located on different atoms. In this case the atomic orbitals would not strictly be eigenstates of position, but each would be more localised (have less uncertainty in position) than the molecular orbital that results from combining the two.

(16) The paradigmatic case is the thought experiment formulated by Schrödinger whose result would be the superposition of a live cat and a dead cat, something we do not expect to observe in reality. In recent years, much research has been done on the problem of a quantum system coupled to a dissipative bath acting as a macroscopic measuring apparatus. Under the leadership of Anthony J. Leggett (b. 1938), it has been concluded and experimentally verified that, under certain conditions, linear superpositions of macroscopically distinct states can exist. However, this occurs only in very special cases that are beyond the scope of a basic discussion such as the present one.

(17) John von Neumann (1903-1957).

(18) This general statement is compatible with the existence of a wide range of situations in which quantum mechanics predicts a given result with probability close to one. This is the case for classical mechanics (understood as a limit of quantum mechanics) away from potentially problematic bifurcations. The process shown in (3)-(4) is radically far from such a classical deterministic limit.

(19) John Archibald Wheeler (1911-2008).

(20) We are referring here to an elementary act of freedom such as raising one arm or the other, or looking one way or the other. We do not enter here into the question of sociological determinism.

(21) In the last years of his life, the Australian neurophysiologist John C. Eccles (1903-1997) identified a neural process that could meet all the criteria required to lie at the basis of an objectively free decision [Eccles, 1992]. Other neuroscientists question the reality of free will [Koch, 2006; Smith, 2011]. In connection with [Koch, 2006], we note that quantum mechanics is considerably broader than quantum computation.

(22) Strictly speaking, this is the logicist view in mathematics, and is therefore only an opinion, albeit the most widespread opinion. What is not an opinion, but a fact, is that all the mathematics we keep in our libraries is logically deducible from the axioms of set theory.

(23) Much later, the completeness theorem has found conventional mathematical applications.

(24) Similar ideas were proposed in parallel by the American Alonzo Church (1903-1995).

(25) Shorthand for a program that contains its own input, i.e. a pair (machine, input) or (program, input).

(26) In the book of Wisdom, probably written in the first century B.C., the reference to chance is put in the mouths of the ungodly (Wis, 2, 2), to whom are attributed statements suggesting that the discussion on finality is much older than it may seem.

(27) A terminological precision is in order. English has two words that are practically synonymous: chance and randomness. In Spanish the first translates as azar and the second as aleatoriedad, with random corresponding to aleatorio. In biological contexts the term chance is more commonly used, while randomness is the favoured term in physics and mathematics. The word chance has a dynamic connotation, typical of biology, and randomness a static connotation, typical of mathematics. In many practical cases they can be taken as equivalent, since the inability to anticipate the future is directly related to the inability to find a clear pattern in past events, once recorded quantitatively and therefore mathematically. However, they are not equivalent in the strict sense. For example, as a result of chance one can, with non-zero probability, generate a non-random sequence. For a detailed discussion, see e.g. [Eagle, 2005].
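The last point admits a simple numerical sketch (the coin-flip setting is our illustrative choice, not an example from the text): a purely chance-driven process, such as flipping a fair coin, produces the perfectly regular, hence non-random, sequence of ten heads in a row with small but strictly positive probability, (1/2)^10 = 1/1024.

```python
import random

# Sketch: estimate empirically how often pure chance produces the
# perfectly regular sequence 1111111111 (ten "heads" in a row).
rng = random.Random(0)  # fixed seed for reproducibility
trials = 200_000

regular = sum(
    all(rng.randint(0, 1) == 1 for _ in range(10))  # one trial of ten flips
    for _ in range(trials)
)

print(f"empirical frequency: {regular / trials:.5f}  (exact: {1 / 1024:.5f})")
```

The empirical frequency clusters around 1/1024, confirming that chance (a property of the generating process) is compatible with a non-random outcome (a property of the recorded sequence).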

(28) We refer here to the mathematics used in experimental physics and computational physics, which is always finite. The German mathematician Leopold Kronecker (1823-1891), who was a convinced constructivist, is credited with the sentence: "God made the integers; all else is the work of man".

(29) Of course, in practice a theory is only revised with due care (including the independent reproduction of the experiments that contradict it), and all the more carefully the greater the predictive success the theory has shown up to that point.

(30) Text taken from the Spanish edition [Monod, 2007]. Emphasis in italics is Monod's.

(31) The translation into Spanish is mine. The emphases are Reichel's.

(32) In this context, a sequence would be an ordered set of numbers characterising the properties and timing of genetic mutations leading from one biological species to another, assuming we could one day have that information with sufficient precision.

(33) It is curious to note that in other contexts the existence of design is not controversial. For example, nobody doubts the existence of design in an aeroplane, and yet, strictly speaking, it is no more provable, nor more refutable, than design in biological evolution. In particular, it is not possible to devise an experiment whose outcome would be that the aeroplane has been designed. The difference is that in ordinary life we have direct experience of design: we know that there are engineers who design, and, without the need for much training, anyone can decide on the layout of the objects in their room. But there is no comparable evidence for an external designer who could have facilitated the progress of some species. That is why the question of design in biological evolution will always be more controversial.