Artificial Intelligence vs. The Human Mind and other musings.

Related to my post of about a week back, here is the final portion of my dissertation, sans proofs. A little knowledge of Gödel's theorems is necessary to follow where I'm coming from, but otherwise it should be fairly self-evident.
 
-  -  -  -  - 
 

 Part III

The Impact

Gödel's theorems play an unexpectedly major role in a breadth of other fields. Because mathematics, physics and computing are inherently and intrinsically related, the repercussions of the incompleteness theorems still resonate in the world of science today. And so, we will take a brief, winding journey through these effects, with an eye to exploring the difference between mind and machine. In order to contextualize what I aim to do in this section, I turn once again to 'Gödel, Escher, Bach', the thoughts expressed in which play a particularly important role in this final portion.

Looked at this way, Gödel's proof suggests – though by no means does it prove! – that there could be some high-level way of viewing the mind/brain, involving concepts which do not appear on lower levels, and that this level might have explanatory power that does not exist – not even in principle – on lower levels. It would mean that some facts could be explained on the high level quite easily, but not on lower levels at all. No matter how long and cumbersome a low-level statement were made, it would not explain the phenomena in question. It is analogous to the fact that, if you make derivation after derivation in Peano arithmetic, no matter how long and cumbersome you make them, you will never come up with one for G – despite the fact that on a higher level, you can see that the Gödel sentence is true. What might such high-level concepts be? It has been proposed for aeons, by various holistically or "soulistically" inclined scientists and humanists that consciousness is a phenomenon that escapes explanation in terms of brain components; so here is a candidate at least. There is also the ever-puzzling notion of free will. So perhaps these qualities could be "emergent" in the sense of requiring explanations which cannot be furnished by the physiology alone. [1]

With characteristic clarity, Hofstadter succinctly captures what shall be the train of thought throughout Part III. I aim to take a somewhat anti-mechanism stance, though not necessarily a 'soulistic' one, and hope to convey it in a satisfactorily expressive manner. Though the section is split by various sub-headings, they will each, hopefully, tie into one another and show a common theme by the paper's end. We will come back to Gödel's theorems regularly as a point of reference, though we are now primarily focused on what they ontologically, rather than epistemologically, entail with regard to our other areas of interest.

• Artificial Intelligence vs. The Human Mind

To quote the King in Alice in Wonderland, I shall "begin at the beginning" (Gödel as starting point), but probably not, as it happens, even make my way to the end; such is the enormity of the subject. As I am sure you are aware, there remains a schism between man and machine: Gödel's theorems highlight that, in their inability to 'see' an intrinsic truth, computational processes exhibit an explicit lack of... of what? Understanding. Rationality. Creativity. Consciousness. All of these play a part in the human mind's ability to function at a higher level than any machine is capable of. Even that, though, is selling it short. What exactly is it that sets us apart from our mechanical counterparts?

Turing, creator of the well-known Turing machine (a theoretical logic device designed to simulate simple computational commands and thus to provide, to an extent, some understanding of their procedures), disregarded these questions as being without merit:

The original question, "Can machines think?" I believe to be too meaningless to deserve discussion. Nevertheless, I believe that at the end of the century the use of words and general educated opinion will have altered so much that one will be able to speak of machines thinking without expecting to be contradicted. [2]

Considering that we are now well beyond the end of the century, it would seem Turing's hypothesizing was incorrect, though that is somewhat of a moot point. Far more interesting is his claim that the question of whether machines have a thought process at all is so meaningless as not to require discussion. Well, this is exactly the kind of thing that is apt to provoke a philosophical dialogue (the irony of which does not go unnoticed). The discussion, of course, is entirely meaningful; more so today than ever, given that when Turing designed his machine, computers as we know them today did not exist.
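It is worth pausing on just how little machinery the Turing machine mentioned above actually involves. Here is a minimal sketch in Python rather than anything Turing himself wrote, with the particular machine and its symbols invented purely for illustration:

# A minimal Turing machine simulator. The transition table maps
# (state, symbol) -> (symbol to write, head move, next state).
def run_turing_machine(transitions, tape, state="start", halt="halt", blank="_"):
    cells = dict(enumerate(tape))  # sparse tape; unvisited cells read as blank
    head = 0
    while state != halt:
        symbol = cells.get(head, blank)
        cells[head], move, state = transitions[(state, symbol)]
        head += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells))

# Illustrative machine: flip every bit until the first blank, then halt.
flipper = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}
print(run_turing_machine(flipper, "1011"))  # prints 0100_

Everything a modern computer does reduces, in principle, to tables of this kind, which is precisely why the question of what such tables can never 'see' matters.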

To put the comment in the context of Gödel's incompleteness theorems: we know that a computer programmed with our axioms is unable to prove that our T-sentence, G, is true. In fact, there are infinitely many of these unprovable T-sentences. Indeed, this suggests that, on a larger scale, computers in general would be incapable of what amounts to rationality. However, while the theorems say that a machine programmed with this set of directives lacks the cognitive ability to 'see' that G is true, computing has come an incredibly long way since the theorems were originally published.
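To fix ideas, the sentence in question is standardly obtained via the diagonal lemma; the notation below is the textbook formulation rather than anything particular to this paper:

\begin{align*}
&\text{Diagonal lemma: for any formula } \varphi(x) \text{ there is a sentence } G \text{ with } T \vdash G \leftrightarrow \varphi(\ulcorner G \urcorner).\\
&\text{Taking } \varphi(x) = \neg\,\mathrm{Prov}_T(x) \text{ gives } T \vdash G \leftrightarrow \neg\,\mathrm{Prov}_T(\ulcorner G \urcorner),\\
&\text{so if } T \text{ is consistent, } T \nvdash G; \text{ and if } T \text{ is } \omega\text{-consistent, } T \nvdash \neg G.
\end{align*}

And since the construction can be iterated (add G as a new axiom and diagonalize again), the supply of such sentences never runs out.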

What is it, then, which separates human ingenuity in thinking from a machine's simple and restricted processes? It would seem to come down to reasoning. Given a specific problem, a computer must have its directives laid out for it in order to find a solution. The human mind is not so limited in scope: where first we fail, there are other routes to be taken; if those do not succeed, we are able to think outside of convention.

In the same paper, Turing quotes Professor Jefferson's Lister Oration for 1949:

Not until a machine can write a sonnet or compose a concerto because of thoughts and emotions felt, and not by the chance fall of symbols, could we agree that machine equals brain - that is, not only write it but know that it had written it. No mechanism could feel (and not merely artificially signal, an easy contrivance) pleasure at its successes, grief when its valves fuse, be warmed by flattery, be made miserable by its mistakes, be charmed by sex, be angry or depressed when it cannot get what it wants. [3]

Jefferson suggests that there is an intangible element that defines human thinking. In resorting to emotion, though, he somewhat misses the point. We do not solve problems by consulting our emotions; if anything, they are likely to cloud our judgement: consider, for example, a student growing increasingly angry at his inability to grasp a theorem.

His point on music, however, bears consideration. Given a set of notes and directives (a scale and a specific time-signature, say), a computer could likely produce something, albeit mechanically and without any 'feeling'; this intangible quality would impact the result, so Jefferson is not wrong: it may produce something, but whether it provides an enjoyable sonic experience is another matter (more on this shortly). Such a task is unaffected by Gödel's theorems. That is, computers are not simple proof-checking machines; they are much more complex than first-order-theory-abiding, arithmetic-churning systems. Further, "theorem proving is among the least-subtle of ways of trying to get computers to think. Consider the program 'AM', written in the mid-1970s by Douglas Lenat. Instead of mathematical statements, AM dealt with concepts; its goal was to seek 'interesting' ones, using a rudimentary model of esthetics and simplicity. Starting from scratch, AM discovered many concepts of number theory. Rather than logically proving theorems, AM wandered around the world of numbers, following its primitive esthetic nose, sniffing out patterns, and making guesses about them. As with a bright human, most of AM's guesses were right, some were wrong, and for a few, the jury is still out." [4]

Hofstadter again, though this time in his foreword to Nagel and Newman's seminal 'Gödel's Proof'. This thought-provoking example serves to illustrate that not all machines are so limited by the theorems; some are, to an extent, free of them. While AM would have been unable to prove the very theorems it found, it could still perform its actual function with some aplomb.

Again, though, the idea of rationality rears its head. The computer never 'considered' the concepts it chose. Where the term 'interesting' is used, the computer had no means to appreciate the concepts on that level. It simply made a correspondence between the directives it was fitted with and those concepts which sufficiently matched them (or, more likely, didn't, given its objective). AM, though, seems a different breed of processing: granted, the picks it made were still based on its programming, but they bear a closer resemblance to human thinking than other examples of machine 'intellect' do. It is prototypical in the sense that it displays rationality, without there actually being any, through a flicker of ingenuity; as Hofstadter says, we aren't even sure about some of its picks yet.
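AM itself was long and intricate, but the flavour of its wandering can be gestured at in a few lines. The heuristic, the range and the code below are entirely my own invention, offered as a caricature of the approach rather than as Lenat's method:

from collections import defaultdict

def divisor_count(n):
    # Brute-force count of divisors; crude, but fine for small n.
    return sum(1 for d in range(1, n + 1) if n % d == 0)

# Wander over small numbers, grouping them by a property the program
# itself has no interpretation of.
concepts = defaultdict(list)
for n in range(2, 60):
    concepts[divisor_count(n)].append(n)

# A primitive "esthetic": extremes are interesting. The class with the
# fewest possible divisors gets surfaced as a conjectured concept.
interesting = concepts[min(concepts)]
print("Interesting class:", interesting)
# The heuristic has stumbled onto the primes without any notion of
# what a prime is; only a pattern its 'nose' happened to like.

The point of the caricature is exactly the worry raised above: the program surfaces the primes without ever appreciating them as primes.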

Thus, while Gödel's theorems do not prohibit AM from performing its task, there is still an epistemological worry which closely resembles the one that arises in the mathematician's work: the notion of understanding. Just as the computer fails, on some higher level, to appreciate that G is in fact true in T, these examples show the same lack of understanding across the board.

At this point, we shall revisit the musical example above. There are, it turns out, programs not entirely unlike the one envisaged there: one without musical freedom, its limitations defined and restrictive; the other with a certain amount of musical creativity.

Max Mathews of Bell Laboratories ... fed in the scores of the two marches 'When Johnny Comes Marching Home' and 'The British Grenadiers', and instructed the computer to make a new score – one which starts out as 'Johnny', but slowly merges into 'Grenadiers'. Halfway through the piece, 'Johnny' is totally gone, and one hears 'Grenadiers' by itself … Then the process is reversed, and the piece finishes with 'Johnny', as it began. [5]

The result is described as droll, if turgid. There is absolutely no consciousness involved: the program takes the necessary steps as required, omitting that essential human element, creativity. Further, the programmer could be said to be just as much the author as the machine. Allow me to expand: Max Mathews wrote a program, and in doing so told it exactly what he wanted it to perform; the machine, abiding, proceeded to bridge the gap between the scores mechanically. When we feed data into a geographical analysis program, it performs a similar function, unthinkingly converting that data into a visual representation. One would surely not consider the machine, in either example, the author of the results.
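For concreteness, here is roughly what such mechanical bridging amounts to. The tunes are stand-in phrases and the weighting scheme is my own guess at a simple implementation, not Mathews' actual program:

def merge_scores(tune_a, tune_b):
    # Blend two melodies: tune_a at the edges, tune_b in the middle,
    # per the 'Johnny'/'Grenadiers' description above.
    n = min(len(tune_a), len(tune_b))
    merged = []
    for i in range(n):
        w = i / max(n - 1, 1)                    # position in the piece
        w = 2 * w if w <= 0.5 else 2 * (1 - w)   # rises to 1, falls back
        merged.append(tune_b[i] if w > 0.5 else tune_a[i])
    return merged

johnny = [62, 62, 64, 65, 67, 65, 64, 62, 60, 62]      # stand-in phrase
grenadiers = [60, 64, 64, 65, 67, 67, 65, 64, 62, 60]  # stand-in phrase
print(merge_scores(johnny, grenadiers))

Note that nothing here 'listens': the program is a deterministic weighting rule, which is rather the point.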

Before we consider the second example, let us take a moment to bring in Gödel's theorems more directly. Given this notion of authorship, what does it mean to have a computer program author a piece of work? How would we refer to it? Further, does it refer to itself? A brief detour, then, into the relation between self-reference and authorship.

The Gödel sentence of the first incompleteness theorem says 'I cannot be proven in T'. In this case, 'I' is the sentence G, the sentence actually being presented. We, as rational (there is that implication-heavy expression again) beings, understand that the sentence isn't making a declaration; at least, not in the typical sense.

...in all first-person statements, including 'psychological', or 'experience' statements, the word 'I' serves the function of identifying for the audience the subject to which the predicate of the statement must apply if the statement is to be true (what it indicates, of course, is that the subject is the speaker, the maker of the statement). And this is precisely the function of a referring expression.

Yet some philosophers have often found the referring role of 'I' perplexing, and some have been led to hold that in at least some of its uses it is not a referring expression at all. [6]

Even more problematic than Shoemaker lets on is that in our example we aren't even dealing with a conscious subject. In the same paper, he suggests that we substitute our use of 'I' with 'it', in the sense of 'it is raining'. Though he applies this to our use of 'I think' ('it's thinking that...'), it would seem more appropriate, in this context, to apply it to our theorem's claim. We then formulate the sentence 'It cannot be proven in T', which seems somehow less intuitive. What, then, allows us to read what should be considered a very strange proposition indeed ('I am unprovable in T') without batting an eye, knowing intuitively what it means? Hofstadter asks:

On what other occasions, if any, have you encountered a sentence containing the pronoun 'I' where you automatically understood that the reference was not to the speaker of the sentence, but rather to the sentence itself? Very few I would guess. The word 'I', when it appears in a Shakespeare sonnet, is referring not to a fourteen-line form of poetry printed on a page, but to a flesh and blood creature behind-the-scenes, somewhere off-stage. [7]

'I', it seems, is a perfectly acceptable, intuitive 'statement' for a machine to make (or any semantically similar proposition from any inanimate object). Of course, the machine does not actually 'make' any statements; we interpret them. In that case, it appears we intuitively, on some undefinable level at least, accept the authorship of the machine, going so far as to personify machines and apply communicative traits to them.
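The programming analogue of a sentence whose 'I' is the sentence itself is the quine: a program whose entire output is its own source. A standard Python construction, offered only as an intuition pump rather than anything from our sources:

# A quine: running this program prints exactly this program.
# Like the Goedel sentence, it achieves self-reference indirectly,
# by containing a description (s) from which the whole is rebuilt.
s = 's = %r\nprint(s %% s)'
print(s % s)

The trick, holding a quoted description of oneself and 'diagonalizing' it back into the whole, is structurally the same move Gödel makes with G.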

There remains, however, the second example, which provides us with a glimpse of the machine rationality we are searching for. David Cope, who has written a trilogy of books on the subject of creativity in musical programs, describes the musical algorithm 'Alice' (ALgorithmically Integrated Composing Environment), which attempts to reconcile the human grasp of a concept with the machine's automated response:

The Alice program extends the user-composed passages, develops potentials of composer-created germ ideas, and offers new ideas when inspiration temporarily wanes. The program can also compose as much relevant music as desired – from a single note to an entire section in the user's general style as evident in the music in its database and in the style of an in-progress composition. Alice works with composers by extending user-composed music rather than for composers by creating entire works in a given style. This collaborative approach should interest composers who wish to augment their current composing processes as well as experiment in new directions. [8]
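Cope's system is vastly more sophisticated than anything reproducible here, but the basic move he describes, extending user-composed material in the user's own style, can be gestured at with a simple transition table. Everything below (the phrase, the function, the fallback rule) is an invented toy, not Alice itself:

import random
from collections import defaultdict

def continue_phrase(phrase, extra_notes, seed=0):
    # Learn which pitch tends to follow which in the user's material,
    # then extend the phrase by sampling from those habits.
    rng = random.Random(seed)
    follows = defaultdict(list)
    for a, b in zip(phrase, phrase[1:]):
        follows[a].append(b)
    out, current = list(phrase), phrase[-1]
    for _ in range(extra_notes):
        current = rng.choice(follows.get(current) or phrase)
        out.append(current)
    return out

germ = [60, 62, 64, 62, 60, 64, 65, 64]  # invented composer 'germ idea'
print(continue_phrase(germ, 8))

Crude as it is, the continuation is recognizably 'in the style of' the germ, which is the collaborative posture Cope describes.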

Cope's description is far more the sort of thing we have been searching for. While the program is not intended to be as wholly creative as a human, and it has a starting point, there is no pre-determined end. Yes, its programming is simply lines of code, and we still have not found that human sense of understanding, but it displays a nuance which suggests we may. The computer is able to take a musical style and run with it. Like AM, this is a display not of rationality or consciousness, but of ingenuity. Not quite the missing link, but it serves a purpose: given an objective not reliant on first-order formal systems, or at least one that doesn't look inwardly on such systems, machines are capable of creativity (I considered the phrase 'becoming able', though this suggests a level of autonomy, or rather learning, with which I was uncomfortable). It doesn't, however, come close to answering what the specific difference is between mind and machine. We could not axiomatize human consciousness and expect anything other than a mechanical interpretation of the processes of human thought. While such a system could very well display creativity, the key on which consciousness rests would remain out of reach. Consciousness may very well rely on a tangible, tactile component which we have no chance of reproducing in a program. For a final thought on the relative safety of the human mind from Gödel's theorems (in that we are working at such a high order that questions of consistency and completeness become redundant), consider J. R. Lucas' reflections on his own paper, years after its publication, having received no small amount of criticism.

Many philosophers question the idealisation implicit in the Gödelian argument. A contest is envisaged between "the mind" and "the machine", but it is an idealised mind and an idealised machine. Actual minds are embodied in mortal clay; actual machines often malfunction or wear out. In the short span of our actual lives we cannot achieve all that much, and might well have neither the time nor the cleverness to work out our Gödelian formula. Any machine that represented a mind would be enormously complicated, and the calculation of its Gödel sentence might well be beyond the power of any human mathematician. But he could be helped. Other mathematicians might come to his aid, reckoning that they also had an interest in the discomfiture of the mechanical Goliath. The truth of the Gödelian sentence under its intended interpretation in ordinary informal arithmetic is a mathematical truth, which even if pointed out by other mathematicians would not depend on their testimony in the way contingent statements do. So even if aided by the hints of other mathematicians, the mind's asserting the truth of the Gödelian sentence would be a genuine ground for differentiating it from the machine. [9]

So the problem which we keep coming back to, that Gödel's theorems highlight a lack of rational 'thought' (though 'thought' by itself seems just as reasonable), is echoed everywhere. Should we (human beings) even have a Gödelian sentence (in the sense that theory T has its sentence G), we have the cognitive ability to problem-solve, and further, we can appeal for, and provide, assistance to one another. Increasing the processing power of a machine by running two simultaneously (in RAID, or what have you) does not seem a relevant comparison; while the storage capacity, speed and processing power are increased, the machine lacks the alternate viewpoint which another consciousness can provide. In which case, while artificial intelligence is impacted by Gödel's theorems (to varying extents, as we have seen), and while there remains the possibility of addressing this in the future, the problems of consistency and completeness can never arise with regard to actual intelligence; and there, it would seem, lies the difference.

• Physics, the natural laws and God

We have borne witness to the far-reaching and substantial effects of Gödel's theorems; the concepts of artificial intelligence and the human mind are better delineated thanks to them. The ripple effect is more surprising still. We know that formalising a theory results in syntactical problems, and that applying the same measures to the operations of the human mind is unsuccessful (though not entirely fruitless, as we've seen). Given that the universe is built of similarly troublesome matter (or rather, that we don't entirely know what it is built of), what are the results when we try to formalise the rules by which our world abides?

For one, it could be argued that, should we try to axiomatize a physics-based theory and succeed, the Gödelian sentence it produced would be so unwieldy and cumbersome as to lack almost any meaning at all. On the other hand, there is the distinct possibility that physics cannot properly be axiomatized at all. Hawking asks:

What is the relation between Gödel’s theorem and whether we can formulate the theory of the universe in terms of a finite number of principles? According to the positivist philosophy of science, a physical theory is a mathematical model. So if there are mathematical results that can not be proved, there are physical problems that can not be predicted... In the standard positivist approach to the philosophy of science, a model can be arbitrarily detailed and can contain an arbitrary amount of information without affecting the universes they describe. [But we do not view the universe from outside]. Instead, we and our models are both part of the universe we are describing. Thus a physical theory is self referencing, like in Gödel’s theorem. One might therefore expect it to be either inconsistent or incomplete. The theories we have so far are both inconsistent and incomplete. [10]

The premise is this: while certain theories may work in practice (in this case M-theory), ultimate theories (axiomatized or otherwise) tend to come with the baggage of self-reference. We could form a theory of theoretical physics in which we amalgamate the necessary explanatory branches of the subject, but it would not suffice to explain itself. So far, attempts to apply such methods to physics have proved unsuccessful and unsubstantiated, if not fruitless.

Though analogous (with regard to the incompleteness theorems, as opposed to being a specific example of them in action), this serves to highlight their cascading effect: theoretical studies have little hope of proving, one way or another, the completeness of their systems. The question arises: what happens when we attempt to axiomatize something so large that it is all-encompassing?

We can map arithmetic to a finite set of axioms, and fail to produce a satisfying result: the truth of an infinite number of statements is unprovable within the system's syntax. Surely, at the very least, this result is peculiar. Gödel himself, a known theist, at one point in his life went as far as attempting to 'prove the existence of God by accepted rules of inference. He chose the framework of modal logic, a useful formal language for proof theory which also has important applications in computer science. This logic is the study of the deductive behaviour of the expressions "it is necessary that" and "it is possible that," which arise frequently in ordinary language.' However, according to his biographer John Dawson, he never published his ontological argument for fear of ridicule by his peers. [11]
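For orientation only, the opening moves of the standard reconstruction of that argument (the version circulated by Dana Scott; the compressed rendering below is mine, not Gödel's own notation) run roughly as follows:

\begin{align*}
&\textbf{Def.} && G(x) \equiv \forall \varphi\,[\,P(\varphi) \rightarrow \varphi(x)\,] && \text{($x$ is God-like: it has every positive property)}\\
&\textbf{Ax.} && P(\varphi) \rightarrow \neg P(\neg\varphi) && \text{(a property and its negation are not both positive)}\\
&\textbf{Ax.} && P(G) && \text{(being God-like is itself positive)}\\
&\textbf{Thm.} && \Diamond\, \exists x\, G(x) && \text{(a God-like being is possible)}\\
&\textbf{Ax.} && P(\varphi) \rightarrow \Box\, P(\varphi) && \text{(positive properties are necessarily positive)}\\
&\textbf{Thm.} && \Box\, \exists x\, G(x) && \text{(via further axioms on essence and necessary existence, in S5)}
\end{align*}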

He needn't have worried. While his proof is insubstantial (we won't go into it in any depth here), his incompleteness theorems intuitively cause one to ask some specific ontological questions. In fact, Gödel's theorems imply that while we can, by induction, accept that the infinite may stem from the finite, the finite cannot prove the infinite. By this intuition, then, we rely on an outside influence for comprehension (as examined in the last section: if humans could successfully axiomatize consciousness, and the result was the discovery of 'our' Gödelian sentence, we could appeal for help in solving it). So, in the instance of axiomatizing theories of mathematics, physics or the entire universe, what is the external force? If the universe were a finite thing (expressible in a formal theory, like a computer, say), then it would not be able to sufficiently understand itself, and would only be expressible in terms outside of itself. It could be argued that, as God is all-knowing, only He is capable of proving everything; thus the flaw in axiomatic systems.

This logic, though, raises a further question: given our all-knowing deity, if we were to apply the same system to it, surely the results would be syntactically similar. That is, if one were to fully axiomatize the deity's consciousness (or whatever would need to be distilled into a formal system to capture the relevant godly attribute), it could well result in an incomplete system. In this, we have a paradox: an omniscient deity incapable of fully comprehending itself.

There is the option of God existing outside the grasp of Gödel's theorems, though that leaves us with the undesirable appearance of not fully investing our belief in the theorems themselves. We may have to rely on faith in order to accept anything at all, be we scientist or theist. "Michael Guillen has spelled out this implication: the only possible way of avowing an unprovable truth, mathematical or otherwise, is to accept it as an article of faith." [12]

Years after the reductionist programme fell apart, Bertrand Russell, who had been one of its key proponents, offered a similar line of thought on the mathematician's troubles, neatly tying our divergent strands together in finality:

I wanted certainty in the kind of way in which people want religious faith. I thought that certainty is more likely to be found in mathematics than elsewhere... After some twenty years of very arduous toil, I came to the conclusion that there was nothing more that I could do in the way of making mathematical knowledge indubitable. [13]
