
Is Man Machine? A 2011 essay on philosophy of computers.

For no specific reason I decided to put up here a recent essay of mine regarding philosophy of computers and mathematics. I don't expect many or any readers, but those curious enough are welcome to it. The text and paragraph format is a bit messed up, but nothing serious.

In the early years of computers, around the middle of the 20th century, there was much excitement within academic circles over questions of the nature and possibilities of these new machines; their potential for good (and, to a greater extent with science fiction authors, bad) and whether further developments and investigations might shed a new light on man and his mind.

While imagination was often a ruling factor in projections for this technology (some prophesied far smaller and slower advances than we are familiar with today, while others saw no objection to the possibility of outlandish science fiction coming true in the near future), there were those who took a different, more logical and systematic approach.1

What they did was ask a simple, 'binary' question: Are minds machines?2 If so, the possibilities for advances in computer technology would seem to be without boundaries, aided by progress in knowledge of the functions of the human mind, each advancing field feeding off the other. But if not, there would appear to be at least some recognizable boundaries; we can not say what a computer might be able to do in the future, but we can say what it can not do no matter the technological breakthroughs: be human.

The arguments for either answer have found new support with each step up in computers' complexity, so what evidence one finds depends largely on the date of each argument. The same is true for most sciences, but computers are in a unique position due to the speed and near-instant applicability of their advances. Still, there is a general consensus that primary logical and mathematical truths will not be affected by your everyday, regular technological revolution.

This is what allowed logically minded, computer-curious academics to make the smart move of viewing the computers of their day purely as systems and, by clever application of logical advances made in the decades before them, to define the nature of the thinking computer and its limitations.3

In this paper I will set out one of the more famous and elementary arguments for the difference between computer and man, the one made using Gödel's theorems. It was made quite long ago, before the biggest leaps in computers' evolution had even been foreseen, so many may, and do, use the argument's age against it. But a consideration of a few definitions, with which I will begin in order to regulate our concepts, should help the reader recognize the missteps, the fallacies and the category mistakes in those counter-arguments, as the argument shows quite convincingly that ignoring it leads to contradictions and inconsistencies right out of the gate.4

After explaining some basic terms and concepts, I will explain briefly how Gödel's theorems apply to the problem at hand, along with propositions to keep in mind when it comes to further objections. Then I will approach each of the main problems posed by the mechanistic view of the mind in turn.

These mainly include matters of a mind's deductive abilities, its consistency, and whether it is to be considered a formal system at all. Finally, I hope to shed some light on the significance of this question and topic, and on the ways its conclusion may affect philosophical thinking.

To begin with we should be clear on the terms with which the arguments are made. First and foremost, when we talk about computers in the case of our question, we mean any functional Turing-machine (TM). In Turing's own words, a TM consisted of:

“...an infinite memory capacity obtained in the form of an infinite tape marked out into squares, on each of which a symbol could be printed. At any moment there is one symbol in the machine; it is called the scanned symbol. The machine can alter the scanned symbol and its behavior is in part determined by that symbol, but the symbols on the tape elsewhere do not affect the behaviour of the machine. However, the tape can be moved back and forth through the machine, this being one of the elementary operations of the machine. Any symbol on the tape may therefore eventually have an innings.”

His use of the word 'infinite' may give a clue that this is only an idealized picture of what an algorithm in fact does, since infinite components are of little help in an actual construction. Each TM goes through a computable sequence of commands according to its state at each moment.
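To fix ideas, here is a minimal, illustrative simulator (my own sketch, not drawn from Turing's text); the sparse dictionary stands in for the 'infinite' tape, and the transition table plays the role of the machine's fixed program:

    # A minimal Turing-machine simulator (illustrative sketch only).
    def run_tm(transitions, tape, state="start", blank="_", max_steps=1000):
        """transitions maps (state, symbol) -> (new_state, new_symbol, move), move in {-1, 0, +1}."""
        tape = dict(enumerate(tape))   # sparse 'tape': unbounded in both directions
        head = 0
        for _ in range(max_steps):
            if state == "halt":
                break
            symbol = tape.get(head, blank)
            state, tape[head], move = transitions[(state, symbol)]
            head += move
        return "".join(tape.get(i, blank) for i in range(min(tape), max(tape) + 1))

    # Example program: flip every bit, then halt on reaching the blank.
    flip = {
        ("start", "0"): ("start", "1", +1),
        ("start", "1"): ("start", "0", +1),
        ("start", "_"): ("halt", "_", 0),
    }
    print(run_tm(flip, "10110"))   # -> "01001_"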

In addition to this is the Universal Turing-machine (UTM) which is capable of simulating the functions of any other TM or collection of them and will be the concept by which we define the machines we discuss and compare to mind in this paper. The common computer we know and love today is (most likely) the closest approximation to this UTM yet. Now it may seem easy to point to the brain as an extremely complex UTM, but familiarity with computing's background in logic may make the comparison more difficult.

On that note we turn to Gödel's theorems, or more specifically, his two incompleteness theorems. A quick definition of the first theorem, borrowed from the mathematician Stephen Cole Kleene, might look something like this:

“Any effectively generated theory capable of expressing elementary arithmetic cannot be both consistent and complete. In particular, for any consistent, effectively generated formal theory that proves certain basic arithmetic truths, there is an arithmetical statement that is true, but not provable in the theory.”

In short, in a formal system which is without contradictions and able to express basic arithmetic, one can build a statement which is recognizably true, but beyond the system's power to prove as such. From this follows the second incompleteness theorem:

For any effectively generated formal theory F which includes basic arithmetical truths and certain truths about formal provability, if F proves its own consistency then F is inconsistent.

That is, should such a system manage to prove its own consistency, something must be wrong, and the system is therefore inconsistent.

So, each consistent system of this kind can express a truth which it can not prove, and neither can it prove its own consistency. A convenient truth-statement for the issue at hand would be along the lines of “This formula is unprovable-in-the-system”. This is the type of statement which, from Gödel's work, we suppose can be constructed for any such consistent system. But obviously it is quite loaded, since if the system managed to prove it, it would flatly contradict itself and thereby reveal itself as inconsistent. We, however, standing outside the system, can recognize its truth. We accept it and move on.
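For readers who want the symbols, a standard modern shorthand (not taken from the essay's own sources, and no substitute for the full proofs) runs roughly as follows, where F is the theory, G_F its Gödel sentence and Prov_F its formal provability predicate:

    % The Goedel sentence 'says' of itself that it is unprovable in F:
    F \vdash G_F \leftrightarrow \neg\,\mathrm{Prov}_F(\ulcorner G_F \urcorner)
    % First incompleteness theorem (the unprovability half):
    \text{if } F \text{ is consistent, then } F \nvdash G_F
    % Second incompleteness theorem, where \mathrm{Con}(F) := \neg\,\mathrm{Prov}_F(\ulcorner 0 = 1 \urcorner):
    \text{if } F \text{ is consistent, then } F \nvdash \mathrm{Con}(F)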

Although the matter of fully proving and solidifying the preceding definitions would take a lot more effort and space than this paper allows, along with fitting and impressive formulas, the basic understanding presented here will do for us to move towards the main arguments for and against the proposed identity of computers and minds.

I will refer to those who deny any difference between computers and minds as 'mechanists', since to say that a mind or brain works like a computer, albeit a remarkably complex one, is to say that it obeys the same laws as computers and machines and is therefore mechanically deterministic, or mechanistic. For the mechanist to prove his theory, beyond simply pointing out similarities, he would need to be able to build a computer not only comparable but equal to a mind. We would even accept his argument if it were demonstrable only in principle, and not in practice within our imagination or lifetime.

The strong mechanistic view holds, in short, that minds are nothing more than complex machines, UTMs, whose only claim to mystery is the lack of human understanding of such complicated neural systems. With further advances in science and knowledge of the brain, it would be within human power to explain how sensory experience acts on the brain just as computational input does on a computer, how the brain stores it as a computer would, and how a person's actions and reactions may be explained as output. As with a computer, the mechanist would explain human behavior (internal as well as external) as obedience to programming which declares how the system should react in each current state. Turing's imaginary, infinite tapes would thus belong to the mind as well.

What would follow is that, should we know as much as this analysis of the mind requires, every action of a person would be entirely predictable. Although that is an interesting and important point to keep in mind, we are still focusing on what a, let us call it a computer-mind (CM), would do if it existed. But more to our point, we must consider what a CM could do. In other words, we will take care of 'practice' by tackling 'principle', and since near-infinitely complex computers may be capable of fantastic things, for brevity's and convenience's sake our question might thus be: What demonstrable human qualities could a CM not have? Note that I choose not to ask what it could 'simulate', since surely we are not looking for the possibility of a computer seeming human, but of it actually becoming human (as far as 'human' can be used as an adjective for anything that is in every respect like a human being).

Since predictability has already been mentioned, we may do away with that option first, so as not to complicate the string of arguments too much. Since I can not hope to prove the idea of free will in this paper, I will make a concession and mark human decisions down as acts of randomization. We can easily imagine granting a computer advanced randomizing elements; so advanced, in fact, that they are humanly and practically unpredictable. A computer might be free to add or subtract any number on each side of an equation. Imagine that its decision is determined by how many radioactive atoms of a certain sort in a certain lead box have decayed in the last millisecond, or by whether the number of leaves of a certain weight the computer can hear rustling in the wind outside its office is prime or not.

The results would seem entirely unpredictable, but the fact remains that the computer would make the calculations with the purpose of choosing between determined options. It can not choose just anything to add or subtract in the equation we put before it, since it must choose from a selection of numbers that make sense; anything else would cause a contradiction with ensuing inconsistencies. A machine is programmed, and would thus always be picking out of its designated bag of tricks when faced with mathematical problems, no matter how random its eventual choices might seem.
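As a minimal sketch of this point (my own illustration, with an ordinary cryptographic random source standing in for the decaying atoms and rustling leaves): however unpredictable the choice, it is still drawn from a pre-determined, program-defined set of admissible moves.

    import secrets

    # The machine's 'designated bag of tricks': a fixed, program-defined option set.
    ADMISSIBLE_TERMS = range(-1000, 1001)

    def random_step(lhs, rhs):
        """Add the same 'randomly' chosen term to both sides of the equation lhs = rhs."""
        k = secrets.choice(ADMISSIBLE_TERMS)   # unpredictable to any observer...
        return lhs + k, rhs + k                # ...but never outside the admissible set

    print(random_step(3, 3))   # e.g. (731, 731): the value varies, the form never does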

From this it seems granted that, however complex and intricate a computer and its possible actions might be, a human mind could observe it for long enough to collect and write down its various states, decisions and resulting states, and so finally have a complete, formal explanation of that computer's system. Even if we assume that the computer might go on to surprise the human mind infinitely, we will also assume an infinite presence of human minds with an infinite supply of pens and paper to achieve this formal account.

Furthermore, we would only be interested in the system's sorts of operations, which must be finite in number, each one adding to our understanding of its various underlying axioms. Imagining a machine with infinite sorts of operations would be imagining something else and more than a machine, no longer relevant to the issue at hand. When all of the system's states and actions were finally accounted for, we humans could proceed to 'Gödelize' it by producing the infamous truth-statement which could not be proved within that system. We would expose the Achilles' heel which the system itself was incapable of recognizing as such.

In summary, a computer in so far as it is a machine, however complex, will always follow a formal system which is liable to being Gödelized by a competent human mind. The fact is, as J.R. Lucas put it, that

“[w]e are trying to produce a model of the mind which is mechanical – which is essentially “dead” – but the mind, being in fact “alive”, can always go one better than any formal, ossified, dead, system can. Thanks to Gödel's theorem, the mind always has the last word.”

A common mechanistic objection applies, in some way, to most of the points I have made so far. I mean the suggestion that a more complex computer might always be built after a previous one has been trumped by a Gödelian formula. Obviously there are immediately two faults with that suggestion. First, constantly building another system is only to admit the deficiencies of all earlier systems. And second, the same would still hold: each renewed system would have its own chink in the armor to be exploited by Gödel's theorem, rendering all the mechanists' hard work futile. Could we then not instead integrate some fail-safe in the system which would allow it to resolve each Gödelian truth-statement as soon as it was made, and thereby prove itself consistent in an infinite sequence of 'leveling up'? Were we to allow such an infinite process, the computer would be above and beyond any definition of a 'machine' relevant to us, so the possibility is dismissed without much consideration.

The point here is not entirely to argue for the mind's superiority over computers. After all, it is easy to find relatively simple computers which can answer mathematical questions which, in practice, no one human's lifetime would last to solve. Turing himself commented that a human being's 'victory' over a computer can only be a petty one, since there can always be so many other computers far more capable.

But again, the aim for the mechanist must be to prove that a single computer could exist which could do everything that a mind could do, not just any one thing. It may be easy to find a question which a computer gets wrong, but even easier to find one for a human. But isn't there a fundamental difference in what actually happens when computers make mistakes, as opposed to humans? I will argue my answer to that question further down.

Before I get further off track, the point must be made again, in answer to the objections discussed above, that we are not looking for superiority of mind or machine, but for identity. So the slightest difference, even if discovered through petty victories, is enough to lay the mechanistic argument in ruins.

Now we can face and move on to a series of more radical and fundamental objections raised by the mechanist against any difference between machine and mind. Gödel's theorems apply to systems which are (i) deductive, (ii) consistent and (iii) formal. We may ask ourselves whether human beings can truly be said to have any of these three qualities, and whether they can then be treated as systems comparable to computers. It can probably be safely assumed that a human mind is at least not completely deductive, consistent and formal. So, will the mechanists then propose that their computer be built to equal humans in their lack of these qualities? I will now approach the possibility and relevance of each in an orderly fashion.

Deduction

In order to free a computer from totally relying on deduction, one might imagine giving it a feature similar to creativity or original thought. In addition to its built-in axioms, which dictate all of the system's inferences and actions, the computer would ponder various propositions not generated by its own axioms. By some standard, it would then decide whether to add this new proposition to its collection of axioms or discard it.

The computer would consider keeping the proposition at least as long as it did not cause a contradiction with the preexisting axioms; if it did, the negation would be added instead. So this computer would then hold axioms and propositions not reachable through its system's original axioms and native rules of inference. This, the mechanists hope, would make this particular computer more similar to a human mind.
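To make the proposal concrete, here is a toy sketch of such an axiom-extending step (my own, not taken from any source cited here); the consistency test entails_contradiction is a purely hypothetical oracle supplied from outside, and nothing in the procedure checks soundness, only bare non-contradiction:

    # Toy sketch of the axiom-extending procedure described above.
    # `entails_contradiction` is a hypothetical oracle; ("not", p) stands for the negation of p.
    def consider(axioms, proposition, entails_contradiction):
        """Return an enlarged axiom set: adopt the proposition, or else its negation."""
        candidate = axioms | {proposition}
        if not entails_contradiction(candidate):
            return candidate                         # keep the new proposition
        return axioms | {("not", proposition)}       # otherwise adopt its negation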

However, problems follow. A system like this might end up in odd circumstances, like accepting both a Gödelian formula and its negation, since both are unprovable in the system. It might also choose to accept an as yet undecidable formula, and ignore its negation from then on, when in fact the opposite choice would have been right. In any case, even if the system did not appear inconsistent immediately, it would be unsound. This would disqualify it from being considered a model for a mind.

The argument would actually not even have to go that far. Consider what we have already argued earlier. We will say that since the system is not random, and thus still deterministic, and we grant the imagined possibility of observing it infinitely, there would come a time when a formal schematic of the system had been drawn up and a Gödelian formula then constructed.

Consistency

Possibly the weakest point in the defense of a mind's superiority over machine is the fact that we can never fully, absolutely, categorically state that a system is consistent. The truth of the matter is that we can only examine a system and say that it seems consistent as far as we can tell. This is because, as Gödel showed, a system can not prove its own consistency.

We use another system to prove the former's consistency, but that latter system then requires yet another to prove its own, ad nauseam. Furthermore, wherever we are placed in this infinite regress, it is the human system which judges every other system's consistency, and we have only ourselves to vouch for our own. The problem is obvious and awkward, and academics like Putnam have indeed stated that man is a machine, and an inconsistent one. Gödel's second theorem would forbid us to assert our own consistency.

The only way to save the mind would seem to be to discover some distinctive attribute which allowed the mind to bypass this last hindrance and state its own consistency.

However, this pigeonhole into which the mind is squeezed, under the banner of not having the right to judge consistency, assumes that a machine and a mind would be inconsistent in the same way. After all, only one inconsistency need be exposed for a system to be deemed inconsistent. But common sense will show us that there is a distinct difference. A machine's inconsistency comes about when it believes both a statement and its negation, and tries to work through its processes and computations unaware of this illness. A person will not believe a contradiction in the same way, but may be mistaken for a while until encountering circumstances where the misunderstanding is cleared up, hopefully without much effort.

A mind does not go on functioning with a blaring logical inconsistency thriving within its system in the way a machine could be said to. In an inconsistent formal system anything can be proved (by the principle of explosion, a statement and its negation together entail any formula whatsoever). An inconsistent (yet sane) person will, however, not believe anything and everything. She chooses between beliefs at various points, and may change her mind when confronted with evidence. She is fallible. After all, errare humanum est.

The mechanist must be allowed a proposal here, and most likely he would suggest a moderately inconsistent machine, to simulate humanity. It would have to be self-correcting, as humans are selective; fundamentally inconsistent, so as not to be entirely deterministic; impervious to Gödel's theorem and yet entirely credible as a thinking mind.

There are various ways of going about this, but most seem to involve severing the system's strings of deductions and inferences so that it does not arrive at any troubling inconsistencies. What we would need to introduce is some kind of stop-rule. It would, it seems, allow or force the machine to choose between deductions on the basis of convenience. That is hardly comparable to a human mind, although on the odd occasion it may serve a human's purposes to suspend an inconvenient truth, at least temporarily.

But to have each case of modus ponens (MP), to take an example of a simple enough deduction, subject to this treatment does not fill one with trust in the system. In an argument with another mind, the computer could use MP for its own evidence, but dismiss the mind's use of the same method simply because of the prospect of a contradiction. A similar argument between equally capable minds works differently: each mind can follow through with each case of MP, weigh and measure the relevance and soundness of the evidence, and finally come to some agreement or conclusion.
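As a sketch of what such a stop-rule might amount to (my own illustration, with hypothetical names), imagine modus ponens guarded so that the inference is simply suspended whenever its conclusion would collide with an existing belief:

    # Hypothetical sketch of a 'stop-rule' wrapped around modus ponens.
    # beliefs is a set of sentences; the implication "p -> q" is the pair (p, q);
    # ("not", q) stands for the negation of q.
    def guarded_modus_ponens(beliefs, implication):
        antecedent, consequent = implication
        if antecedent in beliefs:
            if ("not", consequent) in beliefs:
                return beliefs                       # stop-rule: suspend the inference
            return beliefs | {consequent}            # ordinary modus ponens
        return beliefs

The same inference is thus performed or withheld depending only on what happens to be convenient for the system's current stock of beliefs, which is precisely the asymmetry complained of above.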

In the larger scope of things, human beings indeed live under the threat of discovering their belief systems and even logical truths to be inconsistent and contradictory. While a computer would in that case, presumably, come to a standstill or break down, the human race has the potential to rearrange and adapt to the new situation. We would recognize our opposition, the contradictions, and mend them. Again, J.R. Lucas phrases the conclusion well:

“We may be consistent; indeed we have every reason to hope that we are: but a necessary modesty forbids us from saying so.”

Formality

It may now seem as if the human mind is stuck under the mechanist's heel, because of the insurmountable doubt over whether minds are to be considered above machines, since we are unable to assert our own consistency. Gödel's second theorem, which we meant to use against the mechanists, has turned in our hands. That is, so long as we keep consenting to being considered formal systems for the sake of the issue at hand.

The theorem, in fact, would only state that a mind, if it were indeed a machine, could not conclude that it is consistent. If the mind is not a machine, however, the theorem does not apply. Even though a mind can not construct a proof of a system's consistency within that same system, the possibility of what the mind is in fact doing, stepping outside the system, is not excluded. We are able to stand back from each system and construct arguments, both formal and not, about systems, both formal and not.

The result is that we can not completely formalize our argument about the system, but that is the point of Gödel's theorems after all. In the end, we simply decide that we are consistent, since we are capable of thought and function. A mind built out of an inconsistent system would not function, since it could hold and believe anything and everything.

Now to mention the proverbial elephant in the room: self-reference. The gist of a Gödelian formula is, almost ironically, its implicit self-reference. It is constructed and tailored to a particular system in each case, so the system is being asked to consider its own processes. We ask the system to tell us what it is and is not capable of doing, and should it actually try to answer, it would inescapably end in a paradox.

A similar process is involved when a human mind asks itself how, and whether, it actually knows what it thinks it knows. It leads inevitably to an annoying, puzzling regress before the mind simply drops the question as pointless. Were a computer asked to consider the same question, it would check its internal states and processes as things it holds to be true, not as something it finds interesting to analyze and mull over. It would do this step by step, going through deduction after deduction. A person, rather than going through a systematized sequence of thought-processes, looks at herself and her thoughts as a whole before realizing the pointlessness and, perhaps, choosing to give up.

In summary, for a machine to be able to consider itself as a person does, it would have to have a way to step back, with the help of some added part, like the proposed infinite Gödelizing additions mentioned earlier. That would, as explained, make it a different machine from the one asked to consider itself, and the regress goes on. A mind needs no extra part or help to metaphorically step back and consider itself. It considers itself, other things, and itself as other things (figuratively speaking), yet always remains a complete entity. This paradox of human consciousness has more to do with the question of the divisibility of thought, body, personality and so on, and is beyond the scope of this paper.

A very interesting point raised by Turing has found a growing number of supporters as computer technology advances. He realized how early in the computer's infancy many of the arguments I have touched on here were made, and implored his critics to consider the future. The computers available for testing back then were fairly simple, so a machine's consciousness might lie in its eventual super-complexity. After all, were a computer to be compared to even the simplest of human brains, it would have to be astronomically complex. He imagined some critical level of complexity which would have to be reached before original thought sparked into existence. He described the “qualitative difference” between simple and more complex entities.5 Today such features are described as “emergent properties”, such as water's wetness even though no single molecule of it is anything we would call 'wet'.6

There are even currents within phenomenology today which deal with the same questions and terms. These debate what collection and degree of psychological and phenomenological characteristics an entity needs before a mind/personality emerges, and whether this self is a part of natural evolution or something external, secular and independent.7 But that is again beyond our scope.

Still, the communal quest is for a definition of what is needed for a consciousness to thrive. Is it complexity, or some unfamiliar element without which we can only hope to make ever more complex but never quite human machines? From what we have covered here I feel comfortable in concluding that, whatever turns out to be needed when this artificial consciousness sparks to life, what we will be dealing with is no longer a machine in the relevant sense and thus outside the mechanist's view. It will be unpredictable, and not only due to randomness or inconsistency. It will act outside the rules of any analyzable programs and axioms. It will no longer be determined by its internal states and the rules of inference therein. And, of course, it will escape the clutches of the Gödelian formula, since it is not a machine. This creation will be the equivalent of the conception of a human being, even though this person is made up of mechanical parts.

What may we now have gained from dismissing the mechanistic view? A debate like this may seem to belong among philosophy-minded computer enthusiasts, and in some ways to epistemology (What are thought and knowledge?) and metaphysics (When is an entity human?), but to have little relevance to more critical fields of philosophy such as ethics.

Closer consideration will reveal more severe implications, though. If people are mechanistic, determined vessels, they are anything but free, autonomous beings able to act morally according to their own convictions. Without plausible explanations, such as those Gödel's theorems have allowed us to make, morality often ends up tucked under semi-mystical or religious agendas, looking away from advances in fields of science, such as neurophysiology, that give insight into how we function as thinking beings. We need no longer find ourselves torn between cold science and leaps of faith. There are fantastical advances in store for computer science still, but so long as the human mind is hovering over the circuit boards there will be an added element in the mixture.

Notes

  1. I am reminded of an anecdote from 1969, year of the Moon-landing, about NASA's fantasy of one day fitting their computers into a single room, while I type this essay using a comfortably sized laptop computer capable of far more complex calculations than NASA's in those days.

  2. The reader should keep in mind that throughout the paper I will use the words 'computer', 'machine' and to a lesser extent 'system' almost interchangeably. A machine is any mechanistic contraption, and with an added formal system we have a computer. For all intents and purposes of this paper this should not cause too much confusion.

  3. We are, of course, discussing only binary computation here, where computations and processes follow rules of 1 and 0, Yes and No. The mentioned, curious academics might perhaps specifically be called 'bi-curious'.

  4. The term 'category mistake' was coined by the philosopher Gilbert Ryle in his 1949 book The Concept of Mind. It describes an error where “things of one kind are presented as if they belong to another”.

  5. One of the more famous thought experiments to do with complexity is the China brain experiment, originally introduced by Lawrence Davis in 1974. It imagines that every person in China is given a role similar to a neuron in a brain and made to interact with the others accordingly, and then asks whether something like a consciousness might emerge.

  6. This particular example comes from Kristinn R. Þórisson's paper Machine Consciousness, Consciousness and Self-Consciousness in Meat Computers and Robots (author's transl. of Vélvitund, meðvitund og sjálfsvitund í kjötvélum og vélmennum) from the collection Is Matter familiar with the Mental? (author's transl. of Veit efnið af andanum?), in which he wonders where the 'wetness' is stored in a waterfall, discussing the significance of complexity in artificial intelligence.

  7. A very prominent thinker on this subject is Thomas Metzinger, whose book Being No One: The Self-Model Theory of Subjectivity (MIT Press, 2003) has stirred many in the phenomenological field, as he claims there is in fact no such thing as the popular understanding of a self.

Sources

Penrose, Roger. Shadows of the Mind: a Search for the Missing Science of Consciousness. London: Vintage, 2005.

Lucas, John Randolph. "Minds, Machines and Gödel." Oxford Philosophical Society. 30 Oct. 1959. Lecture.

Kleene, Stephen Cole. Mathematical Logic. New York: John Wiley & Sons, 1967.

Turing, Alan. "Intelligent Machinery." Cybernetics: Key Papers. (1968).

Steinar Örn Atlason and Þórdís Helgadóttir (edit.), Is Matter familiar with the Mental? (Icel. Veit efnið af andanum?). Reykjavík: Heimspekistofnun, 2009.

Copyright; Gestur H. Hilmarsson, 2011
