
Are 'reviews' irrelevant?

Given Jeff's recent assertions, and this community's piggy-backing on them, I thought I'd give the thought more than a cursory musing. Forewarning: I am tired and this will likely meander absolutely everywhere. Apologies in advance.

Now, I don't know how the rest of you use Twitter, but I follow a lot of those involved in the games industry. With the embargo on Assassin's Creed 3 being lifted (itself a totally revolting concept that requires a write-up of its own) this morning, my feed lit up with maybe a dozen different writers all linking to their own reviews.

Amused, I read the first two or three and leafed (well...) through the others. All followed a pretty standard format: put the game into context within the series, describe the setting/plot in brief, mission structure, aesthetics, technical issues, multiplayer component and conclude. An utterly arbitrary quantification of their estimation of the game's merits is then slapped at the bottom, with a single sentence summarily completing the experience. Besides, is it ever necessary to have a four-page review that I'm required to click through?

No, it is fucking not.

The thing is, there isn't just a standard format; given a game is a somewhat mechanical thing, a measure of common ground is more easily found than in other entertainment mediums. Virtually every review picked up on the same technical issues, though some were more perturbed by them than others. Of course, it isn't certain that two reviewers will necessarily both enjoy the game regardless of its flaws, but I'd wager that most sites wouldn't assign a writer who dislikes a certain franchise/genre to a new entry in that arena. They all end up reading exactly the same way.

They go no way to actually providing the tactile sensation of playing a game. I'm sure we've all had an experience where a game has received rave reviews, we've been incredibly excited for it and when the time comes to get your hands around the controller... it just doesn't click.

Dismal.

Given that, we end up with fairly homogeneous 'critiques', which the industry loves, and lots of them. Publishers can quite easily lift phrases and buzzwords from reviews verbatim and slap that shiny number on the box - and consumers fucking love numbers, man. Indeed, these numbers are so important that Metacritic averages have long been known to heavily weigh on developers' minds when it comes to possible bonuses and determining who will develop possible sequels (assuming the title performs well commercially). A poor reception can also drastically affect the franchise potential of a title. ~Thanks to Brodehouse for clarification on this point. See here, here and here.

Now, there is a strong argument that we are to blame for the state of the industry at the moment. The truth is, the vast majority of us knew whether we were getting this game long before reading through one of these quite rote pieces, and arguing over scores given is merely a qualification of our own tastes. There are other factors, no doubt, but this must be the most prevalent one, as we are the factor that determines a product's eventual success or failure. That is, the relationship between publishers and the outlets through which we consume information about their products has come about as a result of our behaviour. Indeed, the vast majority of games sites are marketing resources. This is not to condemn these sites; it is intrinsically what they are. A new game is on the horizon and they publish each and every tidbit the publisher throws their way. I don't want to delve too deeply into this now, considering there's been a lot of talk on this subject in the last week, but it's a salient point, regardless.

What is the alternative, then? I'm unconvinced quick-looks are the future. The issue is that there are two ends of the video content spectrum: on one hand, we have a 30-minute video where the narrator chatters away, totally failing to describe anything that could not be put into the written word, while idly browsing through menus and intermittently playing the fucking thing (and often poorly at that). On the other, we have riotous, hour-long videos that successfully show the wealth of multiplayer options, fluidity of combat, etc. that we want to see and provide a satisfactory alternative to actually playing. This analogue more closely resembles actually playing, and is far more likely to sway opinion than an interchangeable 'opinion' piece. I'd argue, though, that often these are the (somewhat) structured EX videos, but these of course come with their own issues. The gang is far less inclined to savage something, or indeed be anything other than totally demure, given that they are sitting on the same couch as the developers. So yes, we're given a better understanding of the game, but any fundamental issues we couldn't possibly appreciate without playing are unlikely to be relayed.

In an attempt to tie this up quickly (typing this at an awkward angle on a very small laptop has caused me to become fidgety), I'll sum up. As far as I'm concerned, the future lies somewhere in the middle. No one wants to watch someone monotonously read their review script for five minutes in front of a camera, but there needs to be some sort of structure in place in order that videos accurately reflect what the game is like to play. Though reviews themselves are quickly being outpaced by the technology the internet has provided us, they most definitely serve a purpose. A meeting in the middle is the best option - one which Giantbomb has pioneered, but not mastered.

The written word is important, and if we see movement away from the classical review style, I would hope it would cause a surge in actual criticism, real opinion pieces and editorials. In this way, Giantbomb (along with Eurogamer, and likely a small handful of other sites) has really begun to push what games journalism can be.


Prometheus: there's a great film in there, somewhere.

Note: some spoilers within.

Prometheus is effectively two films, and neither one has the punch to succeed completely. In fact, one falters entirely.

The first two-thirds of the movie ask some big questions and tease us with themes of faith, creation and destruction, and parenthood, but none of them are fleshed out fully enough. Much like our introduction to David, the first act, and to a lesser extent the second, are deliberately paced and quite thoughtful. There are plenty of allusions and mythic-cum-religious symbolism (the flashing cross in Shaw's dream-sequence, though, was an ugly, obvious step over the line), and the gentle pace serves to aid these ideas. The main cast are even given some time to develop during this period. The problem is what comes next.

By the time the third act comes around, everything they've spent the last hour and a half contemplating is totally discarded and we're left with something of a hollow monster-movie. Prometheus is so concerned with, and obviously self-conscious about, its ties to Alien that these ideas aren't given the room to breathe that they require.

From the moment Shaw loses consciousness following Holloway's death, the film barely gives a single scene a moment to register. It bombards us with plot points and monstrous abominations. From this point on, the film is only concerned with getting Shaw into that 'other' ship and flying off into the great unknown. Whether this is a problem with the cut we won't know until the inevitable extended edition. What is for sure, though, is that the end falls flat. The much-talked-about caesarean scene is fantastic, and full of menace, but the blink-and-you'll-miss-it lead-up, and the events that follow shortly after, serve only to dampen the moment. Within the context of the movie, the single purpose this scene had was to provide a means of despatching the last Engineer, effectively providing a tentacled deus ex machina. The less said about the film's coda, the better: ticking off impregnation by an extra-terrestrial is one thing, ending the film on a screaming infant xenomorph is another (not to mention the awful creature design exhibited).

The Lindelof argument is well-worn at this point, but had the script dealt with these elements more resolutely then the film would have been tighter overall.

This is my biggest gripe: the last act is far too muddled and rushed to really have much effect at all. It feels like a cut of a longer, better film, with particular scenes jarring painfully. It could be argued that the film mirrors Shaw's mental state having gone through what she just has, but it doesn't hold much water. Audiences will be left scratching their heads, no doubt. Should we see a 'director's cut', I'd wager that it would be a definite improvement over the theatrical version (obviously the problems at script-level can't be helped much regardless of length), but that is yet to be seen.

To further the script's problems, much of the character motivation is purely in aid of plot contrivances, which is a huge shame and tarnishes the great work that some of the supporting cast put in. After venturing into an alien temple, two scientists (one of whom has been mapping the area) become lost, and unable to return to the ship. Not content with being cut off from their crew-mates, one indulges in a spot of reefer, while the other makes pillow-talk with a particularly aggressive-looking snake. The parts are played with perfunctory stupidity. On the other side of the fence, though, are Fassbender, Elba, Theron and Green. Rapace is something of a mixed bag, playing certain moments with an intense believability and poorly delivering others in a questionable accent.

Of course, Fassbender steals the show, but I actually thought Holloway's short arc was a triumph considering his relative lack of screen-time - Charlie's ultimate fate being the singular moment where a character's death meant much at all. Elba and Theron did well with the small parts they had to work with, the latter elevating some truly awful material with as much professionalism as she could muster. Small looks here and there, and inflections in her speech, sell what could easily have been cookie-cutter (if not worse) as portrayed by a lesser actor. Elba, as the ship's captain, displays the expected world-weariness, but with enough gravitas and wit that it works. If what we've heard with regard to scenes left in the editing room is true, their characters will be better served in the longer cut.

Scripting issues and thesps aside, Prometheus is basically state-of-the-art in marrying digital wizardry with gorgeous practical effects and sets. It is a complete knock-out in that regard, and I doubt a single person will come away without specific images burned into their memories. On approach to the moon LV-223, for example, the Prometheus is a speck against the enormous backdrop of a ringed planet. On an IMAX screen, sights like this are genuinely breathtaking.

Further, the film absolutely oozes atmosphere. The brooding score, while ineffective at times, does its best to convey the marvels that the crew comes across. The first scene of the film, a wistful fly-over of some incredible terrain before our first encounter with the alien race that apparently birthed us, is lifted with a beautiful musical flourish and genuinely had me in awe. The repeated journeys to the temple (granted, a little meandering on the script's part) gradually uncover more mysteries and wonders. There wasn't a moment where I didn't feel, at the very least, intrigued by what I was seeing. In its finest moments, Prometheus really does have you feeling like one of the crew.

There are issues, for sure - huge, fundamental issues - but the thing is, weighing all those things up, I can't help but love it. I've already seen it twice, and I plan on going once again before the end of its run. Whether that shows a lack of character on my part, for so easily being swept up in the grandeur, I don't particularly care: it's a fucking science-fiction movie by Ridley Scott and I'm a nerd.


Artificial Intelligence vs. The Human Mind and other musings.

Related to my post of about a week back, here is the final portion of my dissertation sans proof. A little knowledge of Gödel's theorems is necessary to understand where I'm approaching from, but otherwise it should be fairly self-evident. 
 
-  -  -  -  - 
 

 Part III

The Impact

Gödel’s theorems play what one would assume to be an unexpectedly major role in a breadth of other fields. Because mathematics, physics and computing are inherently and intrinsically related, the repercussions of the incompleteness theorems still resonate in the world of science today. And so, we will take a brief, winding journey through these effects, with an eye to exploring the difference between mind and machine. In order to contextualize what I aim to do in this section, I turn once again to 'Gödel, Escher, Bach', the thoughts expressed in which play a particularly important role in this final portion.

Looked at this way, Gödel's proof suggests – though by no means does it prove! – that there could be some high-level way of viewing the mind/brain, involving concepts which do not appear on lower levels, and that this level might have explanatory power that does not exist – not even in principle – on lower levels. It would mean that some facts could be explained on the high level quite easily, but not on lower levels at all. No matter how long and cumbersome a low-level statement were made, it would not explain the phenomena in question. It is analogous to the fact that, if you make derivation after derivation in Peano arithmetic, no matter how long and cumbersome you make them, you will never come up with one for G – despite the fact that on a higher level, you can see that the Gödel sentence is true. What might such high-level concepts be? It has been proposed for aeons, by various holistically or "soulistically" inclined scientists and humanists that consciousness is a phenomenon that escapes explanation in terms of brain components; so here is a candidate at least. There is also the ever-puzzling notion of free will. So perhaps these qualities could be "emergent" in the sense of requiring explanations which cannot be furnished by the physiology alone. [1]

A paradigm of clarity, Hofstadter succinctly captures what shall be the train of thought throughout Part III. I aim to take a somewhat anti-mechanism stance, though not necessarily a 'soulistic' one, and hope to convey such in a satisfactorily expressive manner. Though the section is split by various sub-headings, they will each, hopefully, tie into one another and show a common theme by the paper's end. We will come back to Gödel’s theorems regularly as a point of reference, though we are now primarily focused on what they ontologically, rather than epistemologically, entail - with regard to our other focuses of interest.

• Artificial Intelligence vs. The Human Mind

To quote the King, in Alice in Wonderland, I shall “begin at the beginning” - Gödel as starting point - but probably not, as it happens, even make my way to the end. Such is the enormity of the subject. As I am sure you are aware, there remains a schism between man and machine: Gödel’s theorems highlight that, in their inability to 'see' an intrinsic truth, computational processes exhibit an explicit lack of... of what? Understanding. Rationality. Creativity. Consciousness. All of these things play a part in the human mind's ability to perform at a higher rate of functionality than any machine is capable of. Even that, though, is selling it short. What exactly is it that sets us apart from our mechanical counter-parts?

Turing, the creator of the well-known Turing Machine (a theoretical logic device intended to simulate simple computational procedures and, thus, provide some understanding of how they operate), disregarded these questions as being without merit:

The original question “can machines think” I believe to be too meaningless to deserve discussion. Nevertheless, I believe that at the end of the century the use of words and general educated opinion will have altered so much that one will be able to speak of machines thinking without expecting to be contradicted. [2]

Considering that we are now well beyond the end of the century, it would seem Turing's hypothesizing was incorrect, though that is somewhat of a moot point. Far more interesting is his comment that the thought process of machines - specifically, the question of whether they have one or not - is so meaningless as to not require discussion. Well, this is exactly the kind of thing that is apt to provoke a philosophical dialogue (the irony of which does not go unnoticed). The discussion, of course, is entirely meaningful. More so today than ever, given that when Turing designed his machine, computers as we know them today did not exist.
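As an aside, the 'theoretical logic device' described parenthetically above can be captured in remarkably few lines. What follows is a toy sketch of my own (a hypothetical rule table invented purely for illustration, not anything drawn from Turing's paper): a finite table of rules drives a read/write head back and forth over a tape, and that is all a Turing machine amounts to.

def run_turing_machine(tape, rules, state="start", blank="_", steps=100):
    # Sparse tape: position -> symbol; any unvisited cell is treated as blank.
    cells = dict(enumerate(tape))
    head = 0
    for _ in range(steps):
        if state == "halt":
            break
        symbol = cells.get(head, blank)
        # Each rule maps (state, symbol) to (symbol to write, direction, next state).
        write, move, state = rules[(state, symbol)]
        cells[head] = write
        head += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells))

# A toy 'unary successor' machine: scan right over 1s, write a 1 on the first blank, halt.
successor_rules = {
    ("start", "1"): ("1", "R", "start"),
    ("start", "_"): ("1", "R", "halt"),
}

print(run_turing_machine("111", successor_rules))  # prints "1111"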

To put the comment in the context of Gödel’s incompleteness theorems: we know that, assuming the computer is programmed with our axioms, it is unable to prove that our T-sentence, G, is true. In fact, there are an infinite number of these unprovable T-sentences. Indeed, it suggests that, on a larger scale, computers in general would be incapable of what amounts to rationality. However, while the theorems say that, programmed with this set of directives, machines lack the cognitive ability to 'see' that this is so, computing has come an incredibly long way since they were originally published.
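For the notation-inclined (the symbols here are my own shorthand, ahead of the proper treatment in Part II), the standard shape of the situation is:

\[
T \vdash G \;\leftrightarrow\; \neg\mathrm{Prov}_T(\ulcorner G \urcorner)
\qquad\text{and, if } T \text{ is consistent, } T \nvdash G.
\]

And the unprovable truths multiply: since G is true, adding it as a new axiom gives a theory T_1 := T + G that is again a true (and so consistent) theory of arithmetic, which therefore has an unprovable Gödel sentence of its own, and so on without end:

\[
T \subset T_1 \subset T_2 \subset \cdots, \qquad T_n \nvdash G_{T_n}.
\]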

What is it, then, which separates human ingenuity in thinking from a machine's simple and restricted processes? It would seem to come down to reasoning. Given a specific problem, a computer must have its directives laid out for it in order to find a solution. The human mind is not so limited in scope. Where first we fail, there are other routes to be taken; if those do not succeed, then we are able to think outside of convention.

Turing quotes a “Professor Jefferson's Lister Oration for 1949” in the same paper:

Not until a machine can write a sonnet or compose a concerto because of thoughts and emotions felt, and not by the chance fall of symbols, could we agree that machine equals brain - that is, not only write it but know that it had written it. No mechanism could feel (and not merely artificially signal, an easy contrivance) pleasure at its successes, grief when its valves fuse, be warmed by flattery, be made miserable by its mistakes, be charmed by sex, be angry or depressed when it cannot get what it wants. [3]

Jefferson suggests that there is an intangible element that defines human thinking. In resorting to emotion, though, he somewhat misses the point. We do not solve problems as a result of regarding our emotions. In fact, they are likely to cloud our judgement, if anything: a student becoming increasingly angry at his inability to grasp said theorem, for example.

His point on music, however, bears consideration. Given a set of notes and directives - a scale and a specific time-signature, say - a computer could likely produce something, albeit mechanical and without any 'feeling' (this intangible quality would impact the result, so Jefferson is not wrong; it may produce something, but whether it provides an enjoyable sonic experience is another matter – more on this shortly), unaffected by Gödel’s theorems. That is, computers are not simple proof-checking machines; they are much more complex than first-order theory-abiding, arithmetic-churning systems. Further, “theorem proving is among the least-subtle of ways of trying to get computers to think. Consider the program 'AM', written in the mid-1970's by Douglas Lenat. Instead of mathematical statements, AM dealt with concepts; its goal was to seek 'interesting' ones, using a rudimentary model of esthetics and simplicity. Starting from scratch, AM discovered many concepts of number theory. Rather than logically proving theorems, AM wandered around the world of numbers, following its primitive esthetic nose, sniffing out pattern, and making guesses about them. As with a bright human, most of AM's guesses were right, some were wrong, and for a few, the jury is still out.” [4]

Hofstadter again, though this time in his foreword to Nagel and Newman's seminal 'Gödel’s Proof'. This thought-provoking example serves to illustrate the fact that not all machines/computers are so limited by the theorems, and are free (to some extent) from them. While the system would have been unable to prove the same theorems that it had found, it could still perform its actual function with some aplomb.

Again, though, the idea of rationality rears its head. The computer never 'considered' the concepts it chose. Where the term 'interesting' is used, the computer had no means to appreciate the concepts on that level. It simply made a correspondence between the directives it was fitted with and those concepts which sufficiently matched them (or, more likely didn't, given its objective). AM, though, seems a different breed of processing: granted, the picks it made were still based on its programming, but it bears a closer resemblance to human thinking than other examples of machine 'intellect'. It is prototypical in the sense that it displays, without there actually being any, rationality through a flicker of ingenuity; as Hofstadter says, we aren't even sure of some of its picks yet.
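To make the flavour of that kind of heuristic-guided search concrete - and to be clear, what follows is a deliberately crude toy of my own invention, bearing no resemblance to the scale or sophistication of Lenat's actual AM - consider a few lines that wander over small integers and flag whichever ones an invented 'interestingness' score happens to like, with no notion of proof anywhere in sight:

def divisors(n):
    return [d for d in range(1, n + 1) if n % d == 0]

def interestingness(n):
    # Invented heuristic: numbers with unusually few or unusually many divisors score highly.
    d = len(divisors(n))
    return max(0, 3 - d) + max(0, d - 6)

candidates = range(2, 50)
interesting = sorted(candidates, key=interestingness, reverse=True)[:10]
print(interesting)  # primes and highly composite numbers such as 36 and 48 bubble to the top

The point is not the output (the heuristic was written so that exactly those numbers surface) but the shape of the activity: guessing guided by a crude aesthetic, rather than deriving.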

Thus, while it isn't prohibited by Gödel’s theorems from performing its action, there is still a relevant epistemological worry which closely resembles the one that arises in the mathematician's work: the notion of understanding is of primary consideration in such examples. Where the computer fails, on some higher level, to appreciate that G is, in fact, true in T, these examples serve to show the same lack of understanding across the board.

At this point, we shall revisit the musical example above. There are, it turns out, programs not entirely unlike the one envisaged above. One is without musical freedom (its limitations are defined and restrictive); the other is with (it has a certain amount of musical creativity).

Max Mathews of Bell Laboratories ... fed in the scores of the two marches 'When Johnny Comes Marching Home' and 'The British Grenadiers', and instructed the computer to make a new score – one which starts out as 'Johnny', but slowly merges into 'Grenadiers'. Halfway through the piece, 'Johnny' is totally gone, and one hears 'Grenadiers' by itself … Then the process is reversed, and the piece finishes with 'Johnny', as it began. [5]

The result is described as droll, if turgid. There is absolutely no consciousness involved, and the program takes the necessary steps as required, omitting that essential human element: creativity. Further, the programmer could be said to be just as much the author as the machine. Allow me to expand: Max Mathews wrote a program, and in doing so, told it exactly what he wanted it to perform. The machine, abiding, proceeded to bridge the gap between the scores, mechanically. Feed certain data into a geographical analysis program and it performs a similar function: unthinkingly converting said data into a visual representation. One would surely not consider the machine, in either example, the author of the results.
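Purely to illustrate how mechanical that bridging is - and this is a toy sketch of my own, with placeholder pitch lists standing in for the two marches, not a reconstruction of Mathews' actual program - the whole 'merge' can be reduced to a weighted coin-flip per note, with the weight sliding from one tune to the other and back:

import random

def blend_scores(melody_a, melody_b, seed=0):
    # At each position, pick from melody_b with a weight that rises from 0 to 1 over the
    # first half of the piece, then falls back to 0 - so the result starts and ends as
    # melody_a, and is entirely melody_b at the midpoint.
    random.seed(seed)
    length = min(len(melody_a), len(melody_b))
    blended = []
    for i in range(length):
        progress = i / (length - 1)
        weight_b = 1 - abs(2 * progress - 1)
        blended.append(melody_b[i] if random.random() < weight_b else melody_a[i])
    return blended

# Placeholder pitch lists; the actual scores of 'Johnny' and 'Grenadiers' would go here.
johnny = ["G", "E", "E", "F", "G", "A", "G", "E", "C", "D", "E", "F", "E", "D", "C", "G"]
grenadiers = ["C", "E", "G", "C", "B", "A", "G", "F", "E", "D", "C", "G", "E", "C", "C", "E"]
print(blend_scores(johnny, grenadiers))

Nothing here listens, prefers or intends; the 'composition' is exhausted by the rule.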

Before we consider the second example, let us take a moment to bring in Gödel’s theorems in a more direct manner. Given this notion of authorship, what does it mean to have a computer program author a piece of work? How would we refer to it? Further, does it refer to itself? A brief detour, then, into the relation of self-reference and authorship.

The first incompleteness theorem says 'I cannot be proven in T'. In this case 'I' is the sentence G; the sentence which is actually being presented. We, as rational (there is that implication-heavy expression again) beings, understand that the sentence isn't making a declaration; at least, not in the typical sense.

...in all first-person statements, including 'psychological', or 'experience' statements, the word 'I' serves the function of identifying for the audience the subject to which the predicate of the statement must apply if the statement is to be true (what it indicates, of course, is that the subject is the speaker, the maker of the statement). And this is precisely the function of a referring expression.

            Yet some philosophers have often found the referring role of 'I' perplexing, and some have been led to hold that in at least some of its uses it is not a referring expression at all. [6]

Even more problematic than Shoemaker lets on is that, in our example, we aren't even dealing with a conscious subject. In the same paper, he suggests that we substitute our use of 'I' with 'it', in the sense that 'it is raining'. So, though he applies it to our use of 'I think' ('its thinking that...'), it would seem more appropriate, in this context, to apply it to our theorem’s claim. We then formulate the sentence 'It cannot be proven in T', which seems somehow less intuitive. What, then, is it that allows us to read what should be considered a very strange proposition indeed ('I am unprovable in T') without batting an eye and knowing intuitively what it means? Hofstadter asks

On what other occasions, if any, have you encountered a sentence containing the pronoun 'I' where you automatically understood that the reference was not to the speaker of the sentence, but rather to the sentence itself? Very few I would guess. The word 'I', when it appears in a Shakespeare sonnet, is referring not to a fourteen-line form of poetry printed on a page, but to a flesh and blood creature behind-the-scenes, somewhere off-stage. [7]

'I', it seems, is a perfectly acceptable, intuitive 'statement' for a machine (or any semantically similar proposition from any inanimate object) to make. Of course, the machine does not actually 'make' any statements; we interpret them. In that case, it appears we intuitively, on some undefinable level at least, accept the authorship of the machine, going as far as to personify them and apply communicative traits to them.

There remains, however, the second example, to provide us with a glimpse of the machine rationality we are searching for. David Cope, who has written a trilogy of books on the subject of creativity in musical programs, describes the musical algorithm 'Alice' (Algorithmically Integrated Composing Environment), which attempts to reconcile the human grasp of a concept with the machine's automated response:

The Alice program extends the user-composed passages, develops potentials of composer-created germ ideas, and offers new ideas when inspiration temporarily wanes. The program can also compose as much relevant music as desired – from a single note to an entire section in the user's general style as evident in the music in its database and in the style of an in-progress composition. Alice works with composers by extending user-composed music rather than for composers by creating entire works in a given style. This collaborative approach should interest composers who wish to augment their current composing processes as well as experiment in new directions. [8]

This is far more the sort of thing we have been searching for. While the program is not intended to be as wholly creative as a human, and it has a starting point, there is no pre-determined end. Yes, its programming is simply lines of code, and we still have not found that human sense of understanding, but it displays a nuance which suggests we may. The computer is able to take a musical style and run with it. Like AM, this is a display not of rationality or consciousness, but of ingenuity. Not quite the missing link, but it serves a purpose: given an objective not reliant on first-order formal systems, or at least one that doesn't look inwardly on such systems, machines are capable (I considered the phrase 'becoming able', though this suggests a level of autonomy, or rather, learning, which I was uncomfortable with) of creativity. It doesn't, however, come close to answering what the specific difference is between mind and machine. We could not axiomatize human consciousness and expect anything other than a mechanical interpretation of the processes of human thought. While such a system could very well display creativity, the key on which consciousness rests would remain out of reach. Consciousness may very well rely on a tangible, tactile component, which we have no chance of reproducing in a program. For a final thought on the relative safety of the human mind from Gödel’s theorems (in that we are working on such a high order that questions of consistency and completeness become redundant), consider J. R. Lucas' reflections on his own paper, years after its publication and having received some not-minuscule amount of criticism.

Many philosophers question the idealisation implicit in the Gödelian argument. A contest is envisaged between “the mind” and “the machine”, but it is an idealised mind and an idealised machine. Actual minds are embodied in mortal clay; actual machines often malfunction or wear out. In the short span of our actual lives we cannot achieve all that much, and might well have neither the time nor the cleverness to work out our Gödelian formula. Any machine that represented a mind would be enormously complicated, and the calculation of its Gödel sentence might well be beyond the power of any human mathematician. But he could be helped. Other mathematicians might come to his aid, reckoning that they also had an interest in the discomfiture of the mechanical Goliath. The truth of the Gödelian sentence under its intended interpretation in ordinary informal arithmetic is a mathematical truth, which even if pointed out by other mathematicians would not depend on their testimony in the way contingent statements do. So even if aided by the hints of other mathematicians, the mind's asserting the truth of the Gödelian sentence would be a genuine ground for differentiating it from the machine. [9]

So, the problem which we keep coming back to - that Gödel’s theorems highlight a lack of rational 'thought' (though 'thought' by itself seems just as reasonable) - is echoed everywhere. Should we (human beings) even have a Gödelian sentence (in the sense that theory T has Gt), we have the cognitive ability to problem-solve, and further, we can appeal for, and provide, assistance to one another. Increasing the processing power of a machine by running two simultaneously (in RAID, or what have you) does not seem a relevant comparison; while the storage capacity, speed and processing power are increased, it lacks the alternate viewpoint which another consciousness can provide. In which case, while artificial intelligence is impacted by Gödel’s theorems (to varying extents, as we have seen), and while there remains the possibility of addressing it in the future, the problems of consistency and completeness can never arise with regard to actual intelligence, and there, it would seem, lies the difference.

• Physics, the natural laws and God

We have borne witness to the far-reaching and substantial effects of Gödel’s theorems; the concepts of artificial intelligence and the human mind are better delineated in light of them. The ripple effect is more surprising, though. We know that formalising a theory results in syntactical problems, and that applying the same measures to the operations of the human mind is unsuccessful (though not entirely fruitless, as we've seen); given the universe is built of similarly troublesome matter (or rather, we don't know entirely what it is built of), what are the results when we try to formalise the rules by which our world abides?

For one, it could be argued that, should we try to axiomatize a physics-based theory and succeed, the Gödelian sentence it may produce would be so unwieldy and cumbersome as to lack almost any meaning at all. On the other hand, there is the distinct possibility that physics cannot properly be axiomatized at all. Hawking asks:

What is the relation between Gödel’s theorem and whether we can formulate the theory of the universe in terms of a finite number of principles? According to the positivist philosophy of science, a physical theory is a mathematical model. So if there are mathematical results that can not be proved, there are physical problems that can not be predicted... In the standard positivist approach to the philosophy of science, a model can be arbitrarily detailed and can contain an arbitrary amount of information without affecting the universes they describe. [But we do not view the universe from outside]. Instead, we and our models are both part of the universe we are describing. Thus a physical theory is self referencing, like in Gödel’s theorem. One might therefore expect it to be either inconsistent or incomplete. The theories we have so far are both inconsistent and incomplete. [10]

The premise is this: while certain theories may work in practice, in this case M-Theory, ultimate theories (axiomatized or otherwise) tend to come with the baggage of self-reference. We could form this theory of theoretical physics, in which we amalgamate the necessary explanatory branches of the subject, but it would not suffice in explaining itself. So far, attempts to apply such methods to physics have proved unsuccessful and unsubstantiated, if not fruitless.

Though analogous (with regard to the incompleteness theorems, as opposed to being a specific example of them in action) it serves to highlight their cascading effect: theoretical studies have little hope of proving, one way or another, the completeness of their systems. The question arises: what happens when we attempt to axiomatize something so large that it is all-encompassing?

We can map arithmetic to a finite set of axioms, and fail to produce a satisfying result. The truth of an infinite number of statements is unprovable by the system's syntax; surely, at the very least, this result is peculiar. Gödel himself, a known theist, at one point in his life went as far as attempting to 'prove the existence of God by accepted rules of inference. He chose the framework of modal logic, a useful formal language for proof theory which also has important applications in computer science. This logic is the study of the deductive behaviour of the expressions ‘it is necessary that’ and ‘it is possible that,’ which arise frequently in ordinary language.' However, according to his biographer John Dawson, he never published his ontological argument for fear of ridicule by his peers. [11]

He needn't have worried. While his proof is insubstantial (we won't go into it here), his incompleteness theorems intuitively cause one to ask some specific ontological questions. In fact, Gödel’s theorems suggest that while we can, by induction, accept that the infinite may stem from the finite, the finite cannot prove the infinite. By this intuition, then, we rely on an outside influence for comprehension (as examined in the last section, if humans could successfully axiomatize consciousness and the result was the discovery of 'our' Gödelian number, we could appeal for help in solving it); so, in the instance of axiomatizing theories of mathematics, physics or the entire universe, what is the external force? If the universe were a finite thing (expressible in a formal theory, like a computer, say), then it would not be able to sufficiently understand itself, and would only be expressible in terms outside of itself. It could be argued that, as God is all-knowing, only he is capable of proving everything – thus the flaw in axiomatic systems.
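For the curious, the unpublished argument (as later reconstructed from Gödel's notes; what follows is a heavily compressed paraphrase in simplified notation, not his exact axioms) runs roughly as follows: call a being God-like if it has every 'positive' property, assume that being God-like is itself positive and that positive properties are necessarily positive, and the modal machinery does the rest.

\[
\begin{aligned}
&\text{Def.} && G(x) \;:\Leftrightarrow\; \forall \varphi\,\big(P(\varphi) \rightarrow \varphi(x)\big)\\
&\text{Ax.} && P(G), \qquad P(\varphi) \rightarrow \Box P(\varphi)\\
&\text{Step 1.} && P(\varphi) \rightarrow \Diamond\exists x\,\varphi(x), \;\text{ hence }\; \Diamond\exists x\,G(x)\\
&\text{Step 2.} && \Diamond\exists x\,G(x) \rightarrow \Box\exists x\,G(x) \quad\text{(in S5, via the essence and necessary-existence axioms)}\\
&\text{Concl.} && \Box\exists x\,G(x)
\end{aligned}
\]

Whatever one makes of the premises, the step worth noticing is the last one: in S5, 'possibly necessary' collapses to 'necessary', which is where all the work is hidden.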

This logic, though, raises the question: given our all-knowing deity, if we were to apply the same system to it, surely the results would be syntactically similar. That is, if one were to fully axiomatize the deity's consciousness (or whatever would be necessary for one to be considered to have distilled the relevant godly attribute into a formal system), it could potentially result in an incomplete system. In this, we have a paradox: an omniscient deity which is incapable of fully comprehending itself.

There is the option of God existing outside of the grasp of Gödel’s theorems, though that leaves us with the undesirable possibility of appearing not to fully invest our belief in the theorems themselves. We may have to rely on faith in order to accept anything at all, be we scientist or theist. “Michael Guillen has spelled out this implication: the only possible way of avowing an unprovable truth, mathematical or otherwise, is to accept it as an article of faith.” [12]

Years after the reductionist programme fell apart, Bertrand Russell, who had been one of its key proponents, offered this similar line of thought on the mathematician's troubles, neatly tying our divergent strands together, in finality:

I wanted certainty in the kind of way in which people want religious faith. I thought that certainty is more likely to be found in mathematics than elsewhere... After some twenty years of very arduous toil, I came to the conclusion that there was nothing more that I could do in the way of making mathematical knowledge indubitable. [13]


Deus Ex Machina: There is no God in this Machine.

So I handed in my dissertation today, after the most intense writing period of my life. Probably should have started it earlier than a week before deadline. Alas...  
 
The title is as above, but specifically, it's about Gödel's Incompleteness Theorems and how they impacted maths and philosophy. 
 
As it is, there are definitely things that a few advisory meetings would have cleared up (I'm very vague on the proofs and there are aspects of my ontological argument which don't hold), but I'm genuinely pleased it's out of the way. Anyway, I'm going to post it in 3 parts over the next days. Feel free to have a read through. I'm only really doing this because I haven't read a finished copy (family and friends proof-read it) and I'd like to hear some opinions/instigate discussion.  
 
Part III is where most of the meat is, so consider Part I introductory (it's actually neither very focused nor argumentative, so pretty useless... still.) and Part II the technical portion (which, as I said, is pretty rough and not entirely spot-on). 
 
- - - - - - 
 

 Part I


The Pledge


In 1931, a relatively unknown Austrian mathematician by the name of Kurt Gödel published his paper 'On formally undecidable propositions of Principia Mathematica and related systems I' to little reaction at all; its contents were virtually impenetrable, and even those who were in possession of the acutely mathematical requirements necessary for understanding failed to comprehend the resounding impact this would have on their field. This paper serves to enlighten on the subject, challenge some existing beliefs and examine some of the philosophical repercussions that the theorems entail.


• Background


To give a sense of his character, Gödel was described as cold and aloof: he was always thinking analytically. In a particularly memorable anecdote, as retold by Stephen Hawking, Gödel was involved in the necessary preparation for citizenship through naturalization:


He studied diligently for his hearing, much more diligently than necessary. The economist Oskar Morgenstern, one of his closest friends, noticed Gödel becoming more and more upset as the hearing date approached. Morgenstern simply thought that Gödel was nervous in anticipation of the hearing. A few days before the hearing, Gödel confided to Morgenstern that he had found a serious flaw in the American Constitution. The President could fill vacancies without Senate approval, while the Senate was in recess. This, Gödel reasoned, could lead to a dictatorship.


When the examination day eventually came, Einstein (with whom he shared a very close friendship, and for whom Gödel became something of a confidant) and Morgenstern accompanied him to the meeting. Einstein, very amusingly (though at the time worried his friend might have caused problems for himself), divulged the events:


And then [the examiner] turned to Gödel and said, Now, Mr. Gödel, where do you come from?

Gödel: Where I come from? Austria.

The examiner: What kind of government did you have in Austria?

Gödel: It was a republic, but the constitution was such that it finally was changed into a dictatorship.

The examiner: Oh! This is very bad. This could not happen in this country.

Gödel: Oh, yes, I can prove it.


Entertaining, but it also goes some way towards helping us understand Gödel. So, while he was intently focused on producing one of the most important mathematical papers of his time, his peers were busying themselves in an attempt to reduce mathematics to a system based on formal logic: the desire to produce a set of axioms which would define any theory of mathematics. For example, a computer could be programmed, with said axioms, to understand all of a given theory of arithmetic.
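For the case of arithmetic, the canonical example of such an axiom set is the familiar Peano axioms, stated here for reference in one standard first-order formulation (the notation is mine, chosen to match the discussion that follows):

\[
\begin{aligned}
&\forall x\; \neg(S(x) = 0) \qquad\qquad \forall x\,\forall y\;\big(S(x) = S(y) \rightarrow x = y\big)\\
&\forall x\;(x + 0 = x) \qquad\qquad \forall x\,\forall y\;\big(x + S(y) = S(x + y)\big)\\
&\forall x\;(x \cdot 0 = 0) \qquad\qquad \forall x\,\forall y\;\big(x \cdot S(y) = x \cdot y + x\big)\\
&\big(\varphi(0) \wedge \forall x\,(\varphi(x) \rightarrow \varphi(S(x)))\big) \rightarrow \forall x\,\varphi(x) \quad \text{(induction, one instance for each formula } \varphi\text{)}
\end{aligned}
\]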


Key among these efforts was that of a German mathematician, David Hilbert. He developed a programme by which he intended “to vindicate classical mathematics, including Cantor's transfinite set theory, by (I) expressing that mathematics in a formal language which could then itself be regarded as an object of mathematical study (in proof theory, or meta-mathematics), and (II) using only finitary methods to prove that this formal system of infinitary mathematics is consistent by proving that no formula of the form '0 = 1' is provable in it.”


There was one, however, who before all else realised the importance of Gödel’s work and findings. John Von Neumann, “who was lecturing on David Hilbert’s work at the time, read Gödel’s 1931 paper, [and] cancelled what was left of his course and began lecturing on Gödel’s findings.” When Gödel delivered a lecture at which Von Neumann was in attendance, Von Neumann approached the young mathematician to clarify whether he had completely understood what Gödel had just said (indeed, the organisers were so oblivious to his findings that the “session transcript featured no mention of Gödel’s findings”).


Kurt Gödel had effectively ruined Hilbert's attempts (specifically (II)), and further, the entire reductionist programme. His paper dealt a pair of blows to the community: Incompleteness Theorems One and Two.


• The First Incompleteness Theorem


Gödel posited that, for any consistent axiomatic system rich enough to express elementary arithmetic, there are true statements which it is unable to prove (completeness, of course, being that every true statement of the theory can be proved). That is, we can formulate a sentence G (the Gödel sentence) from the arithmetic theory T's axioms, which encodes the statement “G is unprovable in T”. This bears what would seem to be a somewhat problematic resemblance to the Epimenides, or Liar, Paradox, “All Cretans are liars”, commonly reworded as “This sentence is false”, though its implications and method are of a much more subtle, beautifully realised sort. If G is true, it must be unprovable, and by negation, if it is provable, it must be false. The statement, however, is most definitely true, yet unprovable - and hence undecidable in T: where the Liar paradox goes round in circles (if it is true, it is false, and vice versa), Gödel successfully avoids a paradox and actually provides us with a substantial proof. We will examine the proof in acute detail in Part II, the technical portion of this paper; for now, we will focus on what it says and, exactly, what this means.
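In modern notation (my shorthand, ahead of the fuller treatment promised for Part II), the content of the theorem can be compressed to two lines: T proves that G is equivalent to the formalized claim of its own unprovability, and, provided T is consistent, neither G nor (on Gödel's original assumption of ω-consistency) its negation can be derived:

\[
T \vdash G \;\leftrightarrow\; \neg\mathrm{Prov}_T(\ulcorner G \urcorner)
\]
\[
\text{If } T \text{ is consistent, then } T \nvdash G; \qquad \text{if } T \text{ is } \omega\text{-consistent, then } T \nvdash \neg G.
\]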


Peter Smith, in his fantastic 'An Introduction to Gödel’s Theorems', offers that “Gödelian incompleteness immediately defeats what is otherwise a surely attractive suggestion about the status of arithmetic – namely the logicist idea that it all flows deductively from a simple bunch of definitional truths that articulate the very ideas of the natural numbers, addition and multiplication.” Such carefully worded descriptions should not be taken out of context. We should, in fact, hang upon them. One could imagine that, by showing the unprovability of certain mathematical truths, Gödel's theorem could be construed as defeating mathematics wholly and completely. Thankfully, this isn't the case. While it does put some important mathematical foundations on very shaky ground indeed, the first incompleteness theorem is specifically targeted at an elementary theory of arithmetic, one which is concerned entirely with the integers. Only an axiomatic theory of the natural numbers, capable of supporting the Peano axioms, is at risk directly (though, in Part III we will examine the theorems' effect on other principles). While the theory is proven to be incomplete, this is far preferable to being ruled inconsistent: neither Gt nor ¬Gt is provable within the theory, whereas inconsistency would require both to be proven. And so it remains consistent.


Real numbers, for example, are safe from Gödel’s results. In fact, the first-order theory of the real numbers was later proved (by Tarski, rather than Gödel himself) to be both consistent and complete, by virtue of the integers being undefinable within its parameters. Thus the axiom of succession, and the Peano axioms generally, are not represented, and so the problem which corrupts the formal theory of the natural numbers doesn't arise for the reals (we will not examine the actual proof any more closely than this).


The problem lies (again, directly; there are indirect, far-reaching implications and problems to tackle later) with computational arithmetic. Where Hilbert's Programme tried to emulate all of mathematics in a formal system, the first incompleteness theorem shows us why it was unsuccessful. Effectively, our computer from before, which has been pre-programmed with the necessary axioms to form the theory T, would not perform as desired. Here we have a machine that would not be able to understand (in this case, prove) certain true statements which it is programmed to not only utilize, but comprehend (though this is where a large portion of the trouble lies, and we will investigate the further nuance of a machine's ability to 'fathom' at another point).


• The Second Incompleteness Theorem


The second theorem differs slightly, though it is intrinsically tied to the first. In fact, its existence is logically entailed by the former. It states that, simply put, 'theory T cannot prove its own consistency' (consistency being that no proof contradicts another within the theory). More unassuming than the first, but just as damning. Gödel, though, never laid out an exact proof; rather, he sketched the idea which takes form below.


Consider: from the first incompleteness theorem, we have a T-sentence which encodes something to the effect of 'G is unprovable in T'. Therefore, we know that facts about provability are encodable within T-sentences. Given this, we can posit that there will be a sentence which encodes the claim that '0 = 1 cannot be derived in T'. We will call such a sentence Con, as it expresses T's consistency. Then, because the reasoning behind the first theorem - 'if T is consistent, then G is unprovable, which is exactly what G asserts' - can itself be carried out within T, T proves the sentence 'Con → G'. We know that T, if consistent, cannot prove G. Thus, if T is consistent, it cannot prove Con.
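Compressed into symbols (the same shorthand as above), the sketch amounts to:

\[
\mathrm{Con}_T \;:=\; \neg\mathrm{Prov}_T(\ulcorner 0 = 1 \urcorner)
\]
\[
T \vdash \mathrm{Con}_T \rightarrow G \quad \text{(the first theorem's argument, formalized within } T\text{)}
\]
\[
\text{So if } T \vdash \mathrm{Con}_T \text{ then } T \vdash G, \text{ contradicting the first theorem; hence, if } T \text{ is consistent, } T \nvdash \mathrm{Con}_T.
\]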


The second theorem is easier to follow, simply for the reason that it is fairly self-evident. A theory cannot appeal self-referentially in an attempt to prove its own consistency. Any example would do here, but a simple analogy will suffice. Given the sentence 'I am right', it will not do to say 'I am right because I am right' when explaining why. Though, obviously, there is more to the consistency reasoning than this. Further, if it were inconsistent, then we could derive anything at all that we wished from T. We could formulate “T is consistent”, “T is inconsistent”, “T is a strange loop”, or whatever takes your fancy.
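The 'derive anything at all' remark is the old principle of explosion, which is worth a single line of symbols: from a contradiction, disjunction-introduction followed by disjunctive syllogism yields any sentence whatsoever.

\[
T \vdash \varphi \;\text{ and }\; T \vdash \neg\varphi \;\;\Longrightarrow\;\; T \vdash \varphi \vee \psi \;\text{ and }\; T \vdash \neg\varphi, \;\text{ hence }\; T \vdash \psi \quad \text{for arbitrary } \psi.
\]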


• Implications


The impact that Gödel’s theorems had on mathematics, logic and formalism was slow, but significant. They proved that no attempt to completely axiomatize mathematics could be entirely successful. For one, there will, as the first incompleteness theorem shows, always be true but unprovable sentences, so the reductionist desire to equate truth and provability is undermined. With regard to the second theorem, Gödel showed that similar theories which satisfy the necessary conditions (those which incorporate Peano Arithmetic) are unable to prove their own consistency.


In fact, considering Hilbert's Programme relied upon 'safe' reasoning as a foundation for proving 'riskier' propositions, we can inductively come to the conclusion that, since we cannot prove T to be consistent, any richer theory is left in danger. If we cannot prove the trunk (a metaphorical trunk, not that of analytic tableaux) of a theory - the 'safe' proposition - how are we to prove that a 'riskier' one is consistent?


Moreover, and this is the truly devastating part, since it is applicable to any sufficiently rich theory T, it renders the entirety of classical mathematics unable to be completely formalised. Further, the necessary spark, or flair, for understanding that the human mind has cannot be replicated in a machine (we will look at this more closely in Part III).


We can see the abject disappointment this brings to Hilbert's programme; and though it is almost universally agreed to have been dashed, there are those who still defend formalism to an extent. Michael Detlefsen, of the University of Notre Dame, posits that, while formalism is not entirely without fault, there are at least reaches of it which Gödel’s theorems would appear to leave unscathed:


Is the distinction between a theory and its efficiency reduct significant? Certain facts suggest that it may be. These include the facts that:

A. For the average familiar formal system (e.g. PA, ZF, etc.) we are far from knowing what any of its efficiency reducts would look like. In particular, we don't know whether or not it is a strict subtheory of the system in question.

B. Some of these theories have significant capacities to prove the consistency of important families of their subsystems. PA and ZF, for example, are reflexive – they prove the consistency of each of their finitely axiomatizable subtheories.


This seems acceptable, if a little weak. The strength of Gödel’s theorems is that, as we have just discovered, they dash the stronger theories and allow us to disregard riskier ones. Given that Detlefsen appeals to subtheories, and to theories which are not specifically first-order, I take some issue with it. His point is not wrong, but relying on one formal theory to prove the consistency of one of its own subtheories seems futile. Regardless, the strength of Gödel’s theorems has been proved: sufficiently rich first-order formal systems remain incomplete.

 
Part II: The Proof - coming tomorrow.

GetEveryone's Shameless Top 10 Plug!

  
Here, in its full, copypasta glory, is my favourite games of 2010 list. Oh, what a year it's been!
 
Pretty conventional, and nothing particularly boat-rocking, but I enjoyed them all immensely none-the-less:

1. Mass Effect 2 
  
I dislike the term "epic" when thrown around loosely, but this game meets the criterion to the very letter. For the week that I played ME2 I dreamed I was a Space Captain every night: it infected my subconscious, unconscious, consciousness. The universe it created is unparalleled, the characters utterly believable and it handled like a joy. The only not-entirely-convincing downside was the lack of imagination bestowed on the actual planets; a small gripe, and one that I can't fully endorse, myself.
2. Super Street Fighter IV

I have played a combined 900 hours of SF4 since its initial release in '09. 300 of those were SSF4 and it remains my most played game of all time. There is little I can say to express the feeling I have playing this game, and while it may not have been my game of the year (simply for the reason that ME2 provided me with a very different, albeit incomparable experience), I stand by everything I said.

3. Darksiders

Combining an intriguing story with a fantastic world inhabited by some of the best character designs I've seen in years, devilish puzzles and a fully-formed combat system, Darksiders is not only a truly awesome debut, but one of the best games of the year. While the progression system and game structure are certainly reminiscent of the Zelda franchise, it's entirely unfair to disregard it only as such: it is true that the game lifts from that series, and it also finds inspiration in Legacy of Kain, God of War and, atmospherically, Shadowman (the towering Dark Throne stands, looming, a next-generation counterpart to Legion's Asylum) but, most importantly, Darksiders finds its own voice within a heavily stylised mythos. Bring on the sequel!

4. Heavy Rain

My housemates and I played through Heavy Rain in a single evening: by 5am we were still refusing to turn it off and fighting fatigue. Come the home stretch, we failed to vindicate Ethan, while poor Agent Jayden fell to his gruesome demise and the real killer escaped scot-free. Heavy Rain provided such a complete story experience that when I attempted to replay the last section, it felt hopeless. My Ethan had died saving his son, and that's where his story ended. "But what about the true ending?", you ask? I'll leave the semantics to you; make mine a tragedy.

5. God of War III

God of War III's scope was beyond fathoming. Whether this was to its benefit may still be in doubt, but what is unquestionable is that the genre-reinventing GoW series was finished in a spectacularly bloody manner: Kratos disemboweled Titans, dismembered Gods and put to bed his own demons. More importantly, SCEA gave us the opportunity to hammer O until Zeus' head was little more than a pulpy mess covering our screens. What more could you ask than that?

6. Super Meat Boy

While not instantly-gratifying, SMB merged a beautifully responsive and intuitive control scheme with fantastic level-design and the best soundtrack of the year. A game so entirely and unashamedly caught up in its own gameyness, that it feels fresher than almost anything else this year.

7. Assassin's Creed: Brotherhood

Had I not discovered it so late, ACII would probably have been my 2009 GOTY. I disliked the original, and so was hesitant to play the sequel: big mistake. I was enraptured with the story, characters and, most importantly, game progression. Pacing is something so few games get right, but ACII was pitch-perfect. The reason I'm discussing the previous game so heavily is that I'm only around 6 hours into Brotherhood (plus a handful of hours in the multiplayer) and I'm having exactly the same feelings; I'd hate not to give this installment its due on account of not having finished it in time.

8. Alpha Protocol

Don't worry: I know Alpha Protocol is broken. It is broken as shit. That doesn't detract from my absolute adoration of the game, though. While AP fails in key areas (namely, shady enemy AI and poor gun-play that cause frustration I can't begin to put into words), it has such great intentions that I genuinely can't do anything but love it. The branching dialogue options offer depth close to what ME2 presented, the rewarding RPG elements allow you to play the game the way you want (not to mention the great "perks" system) and the story is a mix of typically riveting, though somewhat cliched, espionage tropes. If more people had given it a chance, we may have hoped to see a better polished sequel somewhere on the horizon. Alas...

9. Enslaved: Odyssey to the West

Enslaved is somewhat indicative of this year's output. Where one area excels to a degree not previously seen in the medium, it is hampered by irritating flaws. What works? A beautifully realised world, brought to life with thoroughly engaging characters. Unfortunately, it is marred by generic gameplay and a fairly dull combat system. If only the other 5 people who bought it had told their friends...

10. StarCraft II: Wings of Liberty

Though I've never been an RTS nut, my housemate is an unadulterated Starcraft fanboy. It is solely for this reason that SCII stole more hours from me than I ever thought it would. Unwilling to put in the time and energy required to become adept at 1v1 skirmishes by myself, I was coerced into learning the ropes and before I knew what was happening I had begun to give up evenings in order to play. I haven't touched the campaign...

   

GE_Fight! Series One

Sup, fellas? 
 
I'm going to be bringing you a video series of (an as yet undecided number of) episodes. The general idea is that I want to dramatically improve my game and each aspect therein, and since it's the holiday period, I thought now was the time to get started. 
 
The basic conceit is this: I will have 3 player profiles (GE_Fight!_Ryu/Cody/Bison) and within each of those I'll have specific aims I want to achieve. The reason for 3 separate profiles is simply that, at the end of the series, I want to be able to see the statistics for each character.
 
With Ryu, I want to begin to implement option selects (OS) and gain a greater understanding of when and how to use them; with Cody, I want to learn more about frame-traps and use them accordingly; and with Bison, I want to improve my pressure game: knowing when to hit, when to run, and how to push the opponent into the corner and keep them locked there.
 
Throughout all this I have two over-arching goals: 
 
1. To refrain from jumping unless it's (a) necessary or (b) for mix-up purposes (on the opponent's wake-up, etc.), and 2. to generally improve my footsies. 
 
Don't expect high production values, but do expect some sort of critique with the fights: written or otherwise. That's it!
 
Peace. x


My game of the year, for the second year running.

Super Street Fighter 4 is a game I love and hate. Love to hate. Hate to love. Certainly, at least one of those.
 
Before 2009 I could never have been branded a fighting-game fan. Despite being exposed to Soul Calibur, SFA3 and Super Smash Bros. Melee, I never properly understood them, and understanding is an essential factor in brawlers.  
 
Understanding not only in the sense of what the hell is going on, but also of the fundamental requirements necessary for a better appreciation of the genre. Mind-games, single-frame timing, perfect spacing: this is the life-blood of the series.  
 
Street Fighter 4 refined and revived not only the SF franchise, but the genre as a whole. By making itself accessible to those without an already firm grasp of the technicalities involved (SF3, despite its splendour, was highly inaccessible and an abject failure on that front), it attracted a whole new generation of players; the actual game mechanics were simple, yet hid a depth apparent only to those willing to put in the time and patience. 
 
Giantbomb wasn't free from its seductive charms, either. Never before had I been part of a gaming community, and now I can gladly say I am. The SSF4 forums on GB are some of the friendliest around, and it is a reflection of the game's qualities that it can bring people together in such a positive way. The players are multi-national, of varying skill and, without fail, lovely. Even Pessh.
 
Then, in late-April of this year, its follow-up, SSF4, was released. This update added a further 10 characters to the roster and tweaked the game to near perfection. Needless to say, the switch-over was painless.
 
There comes a time, though, when enough is enough. Anything can become tiresome in large doses, and there is a point at which 99-second rounds can begin to feel a little familiar. Despite taking breaks of months at a time, I always return. Eventually. A fresh perspective provides surprisingly positive results: any agitation I had, of which there was plenty, dissipates rapidly, and I find myself looking at each match with a new sense of clarity.
 
Almost 2 years later, and with 900 hours played (I have literally played 900 hours; this is unfathomable), I'd say I have a reasonable grasp of the game; if not, I at least have a respect for it. No other game has ever grabbed me the way (Super) Street Fighter 4 has, and I doubt another ever will. 
 
Game of the year. Game of the decade. Game of my life. 
 
Love.


The Dark Knight Rises

So, it's fairly old news now that the new Nolan Batman film is scheduled for release, and while it's still a while until Summer 2012, the news that the script was due on Monday has basically given me an excuse to rave about how much I love the first two films and talk a bit about the series as a whole.
 
There are a few rumours kicking around: namely, which part they are looking to fill with the prospective female leads (Talia Al' Ghul, Catwoman or generic love-interest), and who Tom Hardy would be portraying (Max Cort and Dr. Hugo Strange are the characters being bandied about at the moment). How do you guys feel about the bones of the story allegedly coming from the "Prey" storyline? Bear in mind it's unlikely to be a direct adaptation, given Nolan's previous influence from "Year One" and "The Killing Joke".
 
On to the films we've seen!
 
I originally saw Batman Begins before I was hit with pre-release saturation. These days I can barely go into a film without knowing the majority of the plot beforehand. Upon its release, though, I'd only seen a single trailer (attached to another film) and was blown away with hype of my own making. Needless to say, I adored it. In fact, I'm fairly sure I saw it a further 2 times before it left screens.
 
When I eventually got my hands on the DVD, things weren't so pretty. The film suffered from a few glaring flaws. Specifically: the Narrows looked like a sound stage bathed in green light; the film lacked a "true" villain and a real sense of urgency; and there were definite pacing issues. The middle of the film is flabby and could do with a little more focus.
 
Conversely, by the time TDK came round I'd hunted down every tidbit I could find: teasers, virals, leaks. You name it, I'd seen it. This saturation almost destroyed the film for me. Almost. Going into the IMAX screening, I wondered if it could possibly be as great as everything pointed to. For that evening it definitely was. The sound was incredible, and the crowd was in raptures: clapping and cheering were pretty common. Mainly, I felt totally enveloped in the film. If you have IMAX in your area try your best to catch something there; the difference is like night and day. It didn't hurt that the film had an absolutely incredible nemesis in Ledger's 'Joker'.
 
Similarly, though, when the Blu-ray came out I was less than impressed. If it was possible, Katie Holmes had out-acted Maggie Gyllenhaal's 'Rachel'; Batman moved precariously from one scene to the next on the loosest of pretexts, and the ending was a mess.
 
I think the problem, however, lay as much on my side. I re-watched both films back to back the other day and was surprised by how much more I enjoyed them. None of the issues I mentioned were as blatantly obvious; I gave myself over to the films much more willingly when I wasn't influenced by the fantastic memories I had of the first screenings. If you haven't watched either in a while, I definitely recommend you sit down with a bowl of popcorn. Nolan has crafted two of the finest comic-book films in history. Hopefully by the time TDKR comes along, we'll finally have the great Batman trilogy we deserve.
 
*SPOILERS*
 
The film is 2 years old, so I can't imagine this will affect many people, but...
 
Given Dent's death, and Batman's fugitive status, do you feel the 'Prey' storyline is the right way to go? For those who haven't read it, the short series deals with a special unit set up to apprehend "the vigilante" under consultancy from Dr. Hugo Strange - a psychologist with more than a few problems of his own. To top things off, the unit is led by the equally disturbed Max Cort, who is determined to capture Batman.
 
Seems pretty spot-on to me: Batman on the run, two pretty awesome villains and the possibility of ending the story on a real high. At least, that's what the new title suggests. I'll leave it there, anyway. I'd love to hear what you guys think.
 
Cheers. x


The year games exploded.

I always like it when a tent-pole game is released and there are those who are incredibly quick to bash it.
 
God of War 3 came out this week to near-perfect reviews; GB's own Ryan Davis gave it a great score of 4/5, citing only familiarity as its main flaw. But there are those who would have you believe that, regardless of the fine-tuned mechanics or incredible scope, GoW and other great releases like it are simply unimportant, and who disregard them out of hand. "Oh right then, so it's God of War", I read on the forums.
 
Well, yes, it is... but it's God of War, 2010 style!
 
Opinions are exactly that, and everyone is entitled to one, but consider: over the last six months or so, roughly since Arkham Asylum's release, there have been a number of games which have kickstarted an evolution in game design. Mass Effect 2, Uncharted 2, Heavy Rain (and AA, as mentioned) and many more have all pushed boundaries. It's likely that storytelling and fundamental gameplay mechanics will continue to push what we expect from the medium over the course of the next few years. Likewise, GoW3 creates an unparalleled experience; what it does well, it does incredibly well.

Appreciate the medium: you're witnessing a boom era. Games are exploding thanks to adventurous developers and risk-taking publishers. Revel in it, and turn your vitriol on something that warrants it...
 
Like Leisure Suit Larry.
