Several times I have taken part in arguments about artificial intelligence, and I have always been extremely skeptical. Referring to Heisenberg, I argued that the human brain cannot be reproduced, recreated or duplicated. As you might have guessed, the reaction to such statements was always the same: denial.
We live in a mechanistic world. The concept of science and technology as a universal way of solving problems is one of the key paradigms of public consciousness in the 21st century (at least among the technically educated part of society). People of mechanistic intelligence think that the world is a huge machine/computer/mechanical system in which every event happens according to some law or algorithm. As a rule, the average techie is far from fundamental science, but he is most likely to believe that these mechanistic ideas rest on the success of modern science.
In this post I will try to break this stereotype and show that:
- such a technical and mechanical conception does not correspond to the current scientific paradigm;
- modern science does not know everything about the structure of the world.
To tell the truth, it is in an even more severe crisis than it was at the beginning of the 20th century. So that you do not take me for a common crank, here is the reference list I will rely on:
- Lee Smolin. The Trouble With Physics: The Rise of String Theory, the Fall of a Science, and What Comes Next.
- Lee Smolin. Time Reborn: From the Crisis in Physics to the Future of the Universe.
- Werner Heisenberg. Physics and Philosophy.
- The Internet, of course.
To my mind, Smolin's latest book, Time Reborn, is the best work on the philosophy of science in recent years. It is therefore quite surprising that it was not as popular as The Trouble with Physics. I guess the book's complexity is the main reason for that.
Smolin set himself a very simple goal: to explain that physics has been ignoring reality since Newton's time. The children of the epoch of the scientific and technological revolution do the same, actually. What does that mean?
Time as a Phenomenon
The most obvious example is physics' attitude to time. (By the way, I have recently written a review on this subject.) For Newton, time was simply a given. Einstein and modern physics stated that time is just another spatial dimension.
But the problem is that neither of these descriptions reflects reality. It is perfectly obvious that time is not the same as space. I can move in space in any direction, but I do not control time: it flows forward, and only forward, no matter what. The sense and the march of time play a fundamental role in our life.
Physics, however, denies this fact. It only seems so, physics says. Time is just a variable in an equation; it has no distinguished features. Look, this equation works!
This method works, and it certainly has a right to exist. Equations that deny time any special, distinguished role can be successfully applied to the description of many processes, and they even yield correct predictions. But one question remains: why does physics not describe the objective reality given to us in sensations?
The General and the Particular
According to the modern paradigm, physics does not describe objective reality, because that is not its task. Physics builds models that allow one to make predictions, within their domain of applicability, about the course of phenomena.
In other words, physicists find some mathematical formalism and substitute into it the conditions and restrictions of a real process whose course they need to predict. This formalism is considered "the law of physics," as it is expected to be the determining link in the chain. Notice the trap here?
Suppose there is some abstract model. We measure the initial conditions, substitute them into the equations and obtain a result. The result will differ for different sets of conditions. Moreover, the absolute majority of initial conditions lead to trivial, degenerate or even invalid solutions.
Nevertheless, each time we happen upon a nontrivial, utterly atypical set of conditions in which the model behaves extremely unusually. In other words, reality is an extremely specific set of initial conditions.
So which part is "the law of physics" describing some nontrivial phenomenon: the mathematical model, or the choice of initial conditions? One would think both are extremely important, since our final goal is to predict the course of the process.
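To make this concrete, here is a toy sketch (the logistic map, chosen purely for illustration): the "law" is a single line, yet whether its behaviour is trivial, regular or chaotic is decided entirely by the parameter and initial condition we feed in.

```python
# Toy illustration: one fixed "law" (the logistic map x -> r*x*(1-x)),
# whose behaviour is decided entirely by the entry conditions.

def iterate_logistic(r, x0=0.5, steps=1000):
    """Iterate the logistic map and return the final value."""
    x = x0
    for _ in range(steps):
        x = r * x * (1 - x)
    return x

# Most parameter choices give trivial, degenerate behaviour:
assert iterate_logistic(0.8) < 1e-9            # r < 1: decays to zero

# Some give a tame non-trivial fixed point x* = 1 - 1/r:
assert abs(iterate_logistic(2.5) - 0.6) < 1e-9

# And a narrow band of "interesting" conditions gives chaos:
chaotic = iterate_logistic(3.99)
assert 0.0 <= chaotic < 1.0                    # bounded, but never settling
```

The equation is identical in all three cases; everything that distinguishes a boring solution from an interesting one sits in the chosen conditions.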
For example, physical models of spacetime are symmetric with respect to time. Put differently, knowing the current state of a system, we can compute it backward and forward in time (this is, in fact, the direct consequence of denying time a special role in physics). Even in quantum mechanics we can evolve a wave function backward in time.
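What this time symmetry means in practice can be sketched with a toy harmonic oscillator: integrate it forward with a leapfrog scheme, then run the very same law with the time step negated, and the initial state comes back (an idealised, friction-free example, of course).

```python
# Toy demonstration that idealised mechanical laws are reversible in time:
# integrate a harmonic oscillator forward, then run the same law with
# dt -> -dt and recover the initial state to machine precision.

def leapfrog(x, v, dt, steps, k=1.0):
    """Kick-drift-kick leapfrog integration of x'' = -k*x."""
    for _ in range(steps):
        v += -k * x * (dt / 2)   # half kick
        x += v * dt              # drift
        v += -k * x * (dt / 2)   # half kick
    return x, v

x0, v0 = 1.0, 0.0
x1, v1 = leapfrog(x0, v0, dt=0.01, steps=10_000)    # forward in time
x2, v2 = leapfrog(x1, v1, dt=-0.01, steps=10_000)   # backward in time

assert abs(x2 - x0) < 1e-9 and abs(v2 - v0) < 1e-9  # history undone
```

The kick-drift-kick scheme is exactly self-inverse under dt negation, so "running the film backwards" is a perfectly legal operation for this law.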
Meanwhile, we keep ignoring the fact that in reality there is no symmetry with respect to time. For instance, brain cells die after five minutes without oxygen and become a biomass. No matter how accurately we measure the state of this biomass, the measurement will not allow us to bring it back to life. Read anything about irreversible processes and you will find that all complex natural processes are considered irreversible, despite the fact that…
Does this mean that the physical model is false? No, it is certainly true: irreversible processes obey the laws of physics like any others. But why are they irreversible then? Shouldn't physicists be interested in this question?
Unfortunately, this is far from true in modern fundamental physics, which moves toward a complete denial of the meaning behind the choice of model parameters.
The Standard Model of particle physics operates on 19 basic constants, and we need 7 or 8 more to describe neutrino masses. The Standard Model itself provides no interpretation of these constants.
String theorists have gone a step further. They have introduced some number (from 6 to 22) of unobserved dimensions of spacetime. Depending on the geometry of each of them, new string theories appear; there is even a notion of the String Theory Landscape. A rough estimate of the set of admissible string theories falls in the range of 10^10 to 10^100.
In fact, this calls into question the very possibility of performing enough experiments to determine which of the string theories our Universe corresponds to.
Should we expect to answer the Main Question of Life, the Universe and Everything this way? I really doubt it. We are more likely to build a mathematical approximation so general that it can be fitted to any past and future phenomena simply by choosing the appropriate constants.
But let us get back to irreversible processes and answer the question that must have arisen while reading the previous section: if the laws of physics are reversible, why are physical processes irreversible?
What makes a process irreversible? Immeasurable fluctuations. Every process is affected by disturbances that can influence the final result. It might seem that such disturbances, when they are essential, should lead to random and unpredictable effects. But that is not what happens in reality. On the contrary, random disturbances give rise to quite comprehensible and describable laws of physics. The second law of thermodynamics is the classic example: the entropy of a closed system never decreases.
Why does it happen this way? Because of the chaotic motion of atoms. Note that this law contains two problems that physicists find uninteresting. Suppose we are watching a system governed by the chaotic, unpredictable motion of atoms. SUDDENLY, it turns out that the system's evolution in time is described by a simple and comprehensible metalaw. And since it is irreversible in time, this law differs radically from the underlying model.
It turns out that the time-irreversible law of non-decreasing entropy results from the time-symmetric laws of motion of particles. There are plenty of such processes; in particular, the absolute majority of processes connected with the activity of living organisms belong to this category.
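This emergence of a one-way metalaw from symmetric microscopic randomness can be sketched with the classic Ehrenfest urn model (the numbers below are illustrative): molecules hop between two halves of a box entirely at random, yet the macrostate reliably drifts toward the half-and-half equilibrium and stays there.

```python
import random

# Ehrenfest urn sketch of the second law: the microscopic rule is
# symmetric and random (pick any molecule, move it to the other half),
# yet the macrostate drifts one way, toward equilibrium.

def ehrenfest(n_molecules=1000, steps=20_000, seed=42):
    rng = random.Random(seed)
    left = n_molecules            # start far from equilibrium: all left
    history = [left]
    for _ in range(steps):
        # a uniformly chosen molecule is on the left with prob left/n
        if rng.random() < left / n_molecules:
            left -= 1             # it hops to the right
        else:
            left += 1             # it hops to the left
        history.append(left)
    return history

hist = ehrenfest()
avg_tail = sum(hist[-1000:]) / 1000
assert 400 < avg_tail < 600       # hovering near the 50/50 macrostate
```

No individual hop prefers a direction; the arrow appears only at the level of the macrostate, which is exactly the kind of metalaw discussed above.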
Let us take one more step down. As mentioned, since the motion of particles is probabilistic, it is impossible to get rid of random fluctuations. What does that mean?
Suppose there is some phenomenon. We create laboratory conditions in which the phenomenon is repeatedly reproduced from various initial conditions. Suddenly we notice that there is a class of processes (let us call them "quantum") whose results are described not by exact laws but by probabilities. If we develop a quantum mechanics that allows us to calculate probability densities, it will provide predictions about the course of such processes.
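A minimal sketch of this situation, assuming one hypothetical Born-rule probability: no law predicts an individual run, but the frequency over a long series converges to the predicted value.

```python
import random

# Sketch of the epistemic situation in quantum mechanics: the theory
# predicts only a probability (here a single, hypothetical Born-rule
# value p = |amplitude|^2), testable only over a *series* of identical
# experiments, never for one individual run.

AMPLITUDE = 0.6            # illustrative amplitude of outcome "1"
P_ONE = AMPLITUDE ** 2     # Born rule: probability = |amplitude|^2

def measure(rng):
    """One run of the experiment: an irreducibly random 0 or 1."""
    return 1 if rng.random() < P_ONE else 0

rng = random.Random(0)
runs = [measure(rng) for _ in range(100_000)]

assert runs[0] in (0, 1)               # a single run tells us almost nothing
freq = sum(runs) / len(runs)
assert abs(freq - P_ONE) < 0.01        # the ensemble frequency is lawful
```

The "law" here lives entirely at the level of the series; the individual event remains outside its reach.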
As you may have noticed, there are two logical gaps in this chain. First, we can say nothing about an individual experiment, only about a series of identical ones. But what do we do when there is just one object, such as the Universe? It is unclear.
Second, we carry out all these measurements and experiments with tools that are considered non-quantum: we treat the laboratory as an ordinary, non-probabilistic object. Even if we set out to eliminate possible quantum effects in the equipment and the laboratory, nothing will come of it. We would have to build a series of identical laboratories and unite them into one big meta-laboratory. But that meta-laboratory can also have quantum effects, can't it? Building ever bigger labs, we would eventually have to build several Universes. But there is only one Universe.
Now let us imagine that some quantum object exists; let us call it an "observer." Then the same physical processes can run differently in the presence and in the absence of this observer. WAIT, OH SHIT!
"It's nonsense," some readers may say. Well, not really, my dear reader…
In 2013, a group of Japanese researchers discovered quantum vibrations in microtubules. A microtubule is a structural element of the cell and a most important part of the neuron. This indirectly supported the theory of Sir Roger Penrose, the originator and main ideologist of the quantum mind hypothesis.
These theories appeared in the late 1980s as an attempt to explain the phenomenon of consciousness from the standpoint of quantum theory. According to Penrose, consciousness is the result of deeply hidden and finely tuned quantum processes in the brain. His theory is called "orchestrated objective reduction."
Quantum mind theories were dismissed for quite a long time, despite the fact that their author is considered one of the leading living specialists in general relativity, cosmology and mathematical physics. Indeed, the theory itself is quite weak and contradictory, and time after time its predictions proved wrong. But at the end of last year there was a qualitative shift, related to the Japanese scientists' discovery.
Quantum Dead End
Let us draw an intermediate summary. What have we found out in the previous parts?
- The laws of physics describe reality only when we know the initial conditions.
- The initial conditions are quite specific, to say the least; they obviously embody some critically important "law" of their own.
- Modern physics moves towards laws that are as general as possible. To my mind, this is quite hopeless.
- Physics cannot answer some questions and does not even try to. This concerns, in particular, the description of single quantum phenomena, which turn out to be far more widespread than we used to think.
Relying on the above, I can easily explain why it is impossible to duplicate or reproduce the human brain. First, knowing the "laws of physics" according to which our brain works is insufficient; we would also have to reproduce the initial conditions without error. Second, within the limits of current physical notions, it is impossible to describe the functioning of the brain, as of any single quantum object: since the experiment cannot be repeated many times, probability distributions are of no use.
After all, artificial intelligence as a scientific discipline denies reality just as physics does. The history of artificial intelligence dates back to the 1940s, and in 70 years nothing resembling even a step towards Strong AI has been created. We all understand that computers that play chess or quiz games are just clever algorithms plus computational power; they are not real artificial intelligence. A chat bot that has passed the Turing test is just a program for deceiving the judges.
Finally, let us take a look at the theory that intrigues me most. Since it is unfalsifiable by definition, it is not scientific. Still, there is something about it…
Let us imagine that there are no laws of physics, just the law of precedent: if a phenomenon has already happened in the past, it will always happen the same way in the future; but if a phenomenon occurs for the first time, for example during a physical experiment, a random result is chosen, and all subsequent occurrences follow that sample. For instance, the Eötvös experiments could have shown an inequality between gravitational and inertial mass, and we would now be living in a completely different Universe.
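The precedent law is easy to formalise as a toy (every name and outcome below is invented for illustration): the first occurrence of a phenomenon draws a random result, and that result is then fixed forever.

```python
import random

# Toy formalisation of the "precedent law": the first time a phenomenon
# occurs, the universe picks a random outcome; every later occurrence
# of the same phenomenon repeats that first result.

class PrecedentUniverse:
    def __init__(self, seed=None):
        self._rng = random.Random(seed)
        self._precedents = {}   # phenomenon -> outcome, fixed forever

    def observe(self, phenomenon, possible_outcomes):
        if phenomenon not in self._precedents:
            # first occurrence: the die is cast once...
            self._precedents[phenomenon] = self._rng.choice(possible_outcomes)
        # ...and every repetition follows the precedent
        return self._precedents[phenomenon]

u = PrecedentUniverse(seed=1)
first = u.observe("gravitational vs inertial mass", ["equal", "different"])
# Repeating the experiment can no longer give a different answer:
assert all(u.observe("gravitational vs inertial mass",
                     ["equal", "different"]) == first
           for _ in range(100))
```

In such a universe, "laws" would simply be the accumulated dictionary of precedents, which is why repeated experiments would still look perfectly lawful.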
Nonsense? Maybe. Still, it explains the monstrous complication of modern physics. At the beginning of the 19th century, Beethoven wrote that he tried to keep up with modern scientific progress and that there was no scientific treatise too difficult for him. Nowadays, frankly, I can hardly imagine a person able to read the scientific articles of even a single branch of science.
Now let us try to determine the reasons for the current state of things. Obviously, there are two ways out: either accept an agnostic point of view and postulate that these questions cannot be resolved, or try to find the point at which scientific thought took a wrong turn.
We are all used to the fact that a law of physics is some mathematical equation. But mathematics describes ideal objects. There are no perfect circles or straight lines in the world, yet mathematics works with them anyway. So why do we describe the imperfect world, which contains nothing exact and invariable, with the help of exact and invariable mathematical abstractions?
One could object that we can always introduce some ambiguity or imperfection into any equation. But that is not so. Going back to the foundations of mathematics, we see that mathematics is built on set theory, which operates on perfectly exact entities, sets and their elements, and nothing like probabilistic approximations. It is probability theory that is derived from set theory, not the other way round.
The fact that we conceive the world in terms that cannot be found in this world should evidently reflect some fact or principle that we do not actually understand.
For example, Pythagoras (and some modern scientists as well) thought that ideal objects, such as geometric figures, exist in a parallel reality that we can neither see nor touch, but which we can "see" with the mind. In other words, our idea of ideal entities is an inner eyesight into the world of ideal things.
"What metaphysical rubbish," you might say. "Maybe you'll tell us about a better theory, then?"
Those readers who have not yet commented that "the author is talking absolute nonsense" might be interested in what we can do about all this.
Smolin dedicates the bigger part of his book to this question. But it is just a list of requirements for the future perfect Theory of Everything; it does not help write the theory itself. The theory should meet the following requirements:
- assign time a distinguished role;
- unite physical laws and initial conditions;
- be applicable not only to the laboratory, but to the Universe as a whole;
- have laws that are not absolute but time-dependent.
Of course, this description is pretty far from a real theory. What we need is a breakthrough like Einstein's theory of relativity.
After all, what was Einstein's genius? Not the equations, as many of his critics point out: he adopted the bigger part of the theory from earlier works by Lorentz, Poincaré, Minkowski, Mach and other scientists.
But Einstein looked at the theory from a different perspective, as no one had before. Perhaps the history of the aether is the most telling story of a physical mistake in the entire history of physics. Why was the "aether" invented at all? It did not follow from the physical theory of the time; on the contrary, it is Einstein's assumptions that should have been considered the natural consequence of the physics of the day.
The aether theory derived from the mentality of physicists at the beginning of the 20th century: they needed a simple mechanical explanation of the conflict between mechanics and electromagnetism, so they invented the aether as something given. What could have been more natural in an epoch when authoritative scholars believed there was nothing left in physics to discover? As I see it, there is a direct analogy with string theories and their hidden dimensions.
The new "Theory of Everything" should look at plenty of things, including human reasoning, from an absolutely different perspective. There should be no difference between the ideal and the real, nor between a law of physics and the system it describes. To tell the truth, I am not sure it will happen during my lifetime.
Let’s Sugar the Pill
In the end, I am going to please the mechanists by dispelling one popular-science misconception. I have recently read an article about Bell's theorem (which is misunderstood both by those who have never heard of it and by those who read popular-science articles). The theorem bars hidden variables, but only local ones, i.e. an internal "hidden state" bound to the object itself. Global hidden variables, i.e. some central computer that possesses complete information about the hidden variables of every object and can transfer it over any distance, are not forbidden by the theorem.
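For the curious, the local restriction can be sketched in code. In the toy local-hidden-variable model below (a "sign of cosine" strategy, chosen purely for illustration), each particle carries its hidden state with it, and the CHSH correlation S cannot exceed the Bell bound of 2, whereas quantum mechanics predicts up to 2√2 ≈ 2.83.

```python
import math
import random

# Toy local-hidden-variable model for a CHSH-type experiment: both
# particles share a hidden angle lam, and each detector's +-1 outcome
# depends only on its own setting and lam (locality). Any such model
# satisfies the Bell bound |S| <= 2.

def local_outcome(setting, lam):
    """Deterministic +-1 outcome from the detector setting and the
    particle's local hidden variable lam (an angle on the circle)."""
    return 1 if math.cos(setting - lam) >= 0 else -1

def chsh_local(trials=100_000, seed=0):
    rng = random.Random(seed)
    a, a2 = 0.0, math.pi / 4               # Alice's two settings
    b, b2 = math.pi / 8, 3 * math.pi / 8   # Bob's two settings
    E = {"ab": 0.0, "ab2": 0.0, "a2b": 0.0, "a2b2": 0.0}
    for _ in range(trials):
        lam = rng.uniform(0, 2 * math.pi)  # shared local hidden variable
        E["ab"]   += local_outcome(a, lam)  * local_outcome(b, lam)
        E["ab2"]  += local_outcome(a, lam)  * local_outcome(b2, lam)
        E["a2b"]  += local_outcome(a2, lam) * local_outcome(b, lam)
        E["a2b2"] += local_outcome(a2, lam) * local_outcome(b2, lam)
    return (E["ab"] - E["ab2"] + E["a2b"] + E["a2b2"]) / trials

S = chsh_local()
assert abs(S) <= 2.02   # the Bell bound, up to Monte Carlo noise
```

A "global" hidden-variable scheme, where outcomes may depend on the distant setting as well, is not constrained this way, which is exactly the loophole described above.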
Despite all of the above, we should not say that it is impossible to build the Theory of Everything and real artificial intelligence. We can only say that it is impossible within the limits of the current notions of physics, and that we need a radically new approach to describing reality.