Shivoa's forum posts

#1 Posted by Shivoa (623 posts) -

Presenting, PANOPTITRON.

(I don't actually know if this works anywhere other than my computer with my browser.)

*applause*

Bravo sir! That is marvellous.

Working fine in Unity for me here.

#2 Edited by Shivoa (623 posts) -

Forgot to submit the post for this so for consideration for next week: Every possible 1080p image

Thread going into one of the questions on the Bombcast. Scroll down to find out whose mouth a tiny Mariachi band is playing inside.

#3 Posted by Shivoa (623 posts) -

U.G.L.Y. images

I am curious how the math changes if you take all those sub-pixels and make them binary, so instead of each sub-pixel having 256 levels it only has 2. It would still technically be able to show every viewable image, but should take exponentially less time.

Well, it would be able to show every image made up of only pixels that were fully saturated: RGB (white), RG (yellow), GB (cyan), RB (magenta), R, G, B, & black. 2^3 = 8 different colours per pixel. Almost all of the images viewable on your TV wouldn't be visible on it, or even reasonably approximable. It's better than an actual black/white binary pixel but way worse than a greyscale image (what we refer to as black & white TV) for giving good images to look at, especially with our 8 by 6 pixel grid killing any concept of detail (or of pixel adjacency blurring into an effectively lower-res image with more colours).

That does make the maths a lot smaller. 2^144 (our good old 8 by 6 screen; also 8^48 if you count pixels rather than sub-pixels, the maths is the same) is about 2 followed by 43 zeros of distinct images; at 60Hz that's 1 followed by 34 zeros of years. If we start right now then apparently the period of "stellar remnants escape galaxies or fall into black holes" will have long ended and we'd be deep in the period of "nucleons start to decay", but at least the time when "effectively, all baryonic matter will have been changed into photons and leptons" will not have arrived.
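The binary version is small enough to check directly. Here's a quick Python sanity check of those figures (assuming, as above, an 8 by 6 screen, 1-bit sub-pixels, and a new image every 60th of a second):

```python
# Binary sub-pixels: each of the 144 sub-pixels is fully on or off.
SUBPIXELS = 8 * 6 * 3                      # 144 sub-pixels on the tiny screen
SECONDS_PER_YEAR = 60 * 60 * 24 * 365

images = 2 ** SUBPIXELS                    # every distinct 1-bit image
years = images // (60 * SECONDS_PER_YEAR)  # showing 60 images per second

print(len(str(images)) - 1)  # 43 -> about 2 followed by 43 zeros of images
print(len(str(years)) - 1)   # 34 -> about 1 followed by 34 zeros of years
```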

I can never find the shot I want

I wonder how long the editing process would be after all that. If you want to put together something coherent you have a lot of raw material to go through. But on the plus side, you could put together a n y t h i n g !

"tiny Mariachi band playing inside his mouth"

I bet Sony Vegas would give you an import error. But seriously, this actually becomes an interesting* insight into a search problem. Not a practical one, but it indicates why search can be an issue even when creation becomes trivial. Once we've got all our images captured, we have every possible image, and we can say we have the raw material to create every possible scene and, in those scenes, every possible shot. I use shot here simply to mean a scene with no cuts: each frame follows the previous in what a normal person would consider a fluid transition, though possibly using the most magical of CG, so we could move the camera to space without a cut if we made the sequence look like a satellite's-eye transition.

So you grab your video editor and you see infinite possible images that are the first frame of each shot you captured; that's a search problem right there. But say you had a frame already in mind. Because you have every possible image on file, you then have every possible viable transition from that frame available under that one thumbnail of the starting frame. Every shot that you can conceive of (and many others you couldn't even imagine until you saw them happen in front of you) is there for you. From that point, you have every shot where anything can happen. It's all in the database. How do you possibly find the right one? You start with a shot of Jeff, roughly facing camera, on the studio chair, but while searching for the 5 second clip where he mouths to camera (we have no audio yet - we'll add it in post), "You're typing a lot over there.", the search keeps coming back with the one where that happens but there's a tiny Mariachi band playing inside his mouth that you can see whenever he opens it, or the one where the camera zooms to Cool Baby and it mouths the line, or the one where Jeff realistically melts for no apparent reason while mouthing the line. All of those sequences of images are there too, and anything else you can think of as a single shot starting with that one frame. Literally everything that could possibly happen and everything that couldn't (!) would be shown in shots starting at that frame.

I wouldn't want to be the UX guy working out how to design that search tool; whatever precision the user specified, short of exact pixel-by-pixel details of every frame in the sequence, would return many more shots they didn't want than the narrow parameters of what they did want and nothing else. But here's the lesson in all this: the user would need to be taught to settle for 'good enough'. When searches become too hard because there's too much data, finding the exact thing desired in the data set isn't viable, but something that is close enough will be the best case. Insert commentary on expansive government snooping and the expectation of generating faulty intelligence framing an innocent person here.

* YMMV.

Thanks

Thanks for all the positive messages guys, glad you enjoyed my bit of calculator spam.

#4 Edited by Shivoa (623 posts) -
@sgtsphynx said:

Crazy thought, what would happen if you upped the refresh rate?

To what? Moving from 60Hz to 600Hz (where it would be a blur you couldn't really make anything out on) would only improve things by a factor of 10: 3 followed by 336 zeros of years. What about 6 BILLION Hz (a long way beyond even speculative display technology)? Now it will only take 3 followed by 329 zeros of years. A massive improvement, but still basically infinite for all practical purposes (and in lifetime-of-the-universe terms, which are far from practical).
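To see how weakly the refresh rate bites, here's a rough Python sketch looping over the three rates mentioned (same assumed screen as before: 8 by 6 pixels, 256 levels per sub-pixel):

```python
# Faster refresh only divides an absurdly large number by a small one.
images = 256 ** (8 * 6 * 3)              # all images on the 8x6 screen
SECONDS_PER_YEAR = 60 * 60 * 24 * 365

for hz in (60, 600, 6_000_000_000):
    years = images // (hz * SECONDS_PER_YEAR)
    # report the order of magnitude: "3 followed by N zeros of years"
    print(f"{hz} Hz: about 10^{len(str(years)) - 1} years")
```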

Think of a number. No bigger than that. The problem is we can't really grasp how long this time span is. We can't reasonably think about how long the universe will continue to tick over for, and that's nothing compared to how long this will take. Again, for an 8 by 6 pixel display only using standard colour depth.

#5 Edited by Shivoa (623 posts) -

A cc of a response to an email on the latest bombcast:

An aside

Your comments about copyright, while amusing, miss one of the great things about almost all copyright legislation: independent creation. If we both, independently, create exactly the same work then we both have copyright claims to it and the other cannot be infringing. The burden of proof is obviously high if you claim this when it looks like you copied someone else's work and the chances of it happening are low but this is a safety valve in copyright. The lack of a similar provision for patents is one of the reasons why everything is so messed up for patent law.

Think of a number. No bigger than that

Source: Wikipedia.

But on to that TV that will show you every possible image: even moving to SD or 320x240 is not going to help you. The problem is how quickly the numbers blow up.

As the original emailer mentions, each pixel is just an R G B triple and with a standard TV the brightness of each sub-pixel ranges from 0 to 255. So there are 256 different brightnesses for each sub-pixel. 256 to the power 3 (which is 256*256*256, 256^3) is about 16.7 million; the number of different colours possible for a single pixel to display.

Let's scale that up to an 8 by 6 screen, not 800 by 600 but just 8 pixels wide by 6 pixels high. There are now 8*6*3 = 144 sub-pixels, each taking a range of 256 values. 256^144 is approximately equal to a 6 with 346 zeros after it and before the decimal point.

If this 8 by 6 screen was to show a brand new image 60 times a second, standard 60Hz, then it would take 3 followed by 337 zeros of years to display all possible images. With the default zoom/font on GB (on Chrome on my PC) this 8 by 6 screen has about as many pixels as there are inside a single 0 in this text. Not big enough to render the entire 0, just the void in the middle. It's not a very good screen.

30​0​0​0​0​0​0​0​0​0​0​0​0​0​0​0​0​0​0​00​0​0​0​0​0​0​0​0​00​0​0​0​0​0​0​0​0​00​0​0​0​0​0​0​0​0​00​0​0​0​0​0​0​0​0​00​0​0​0​0​0​0​0​0​00​0​0​0​0​0​0​0​0​00​0​0​0​0​0​0​0​0​00​0​0​0​0​0​0​0​0​00​0​0​0​0​0​0​0​0​00​0​0​0​0​0​0​0​0​00​0​0​0​0​0​0​0​0​00​0​0​0​0​0​0​0​0​00​0​0​0​0​0​0​0​0​00​0​0​0​0​0​0​0​0​00​0​0​0​0​0​0​0​0​00​0​0​0​0​0​0​0​0​00​0​0​0​0​0​0​0​0​00​0​0​0​0​0​0​0​0​00​0​0​0​0​0​0​0​0​00​0​0​0​0​0​0​0​0​00​0​0​0​0​0​0​0​0​00​0​0​0​0​0​0​0​0​00​0​0​0​0​0​0​0​0​00​0​0​0​0​0​0​0​0​00​0​0​0​0​0​0​0​0​00​0​0​0​0​0​0​0​0​00​0​0​0​0​0​0​0​0​00​0​0​0​0​0​0​0​0​00​0​0​0​0​0​0​0​0​00​0​0​0​0​0​0​0​0​00​0​0​0​0​0​0​0​0​00​0​0​0​0​0​0 years. (I am preparing to hit the edit and put some breaks in that if it doesn't automatically get broken up by the forum engine)
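For anyone who wants to poke at the arithmetic themselves, here's a quick Python sketch reproducing the figures above (Python ints are arbitrary precision, so the full numbers are exact):

```python
# The numbers behind the 8-by-6 screen, as described above.
LEVELS = 256                     # brightness levels per sub-pixel
SUBPIXELS = 8 * 6 * 3            # 8 wide, 6 high, 3 sub-pixels each

colours_per_pixel = LEVELS ** 3          # 256^3
all_images = LEVELS ** SUBPIXELS         # 256^144

SECONDS_PER_YEAR = 60 * 60 * 24 * 365
years_at_60hz = all_images // (60 * SECONDS_PER_YEAR)

print(colours_per_pixel)             # 16777216, about 16.7 million
print(len(str(all_images)) - 1)      # 346 -> a 6 with 346 zeros after it
print(len(str(years_at_60hz)) - 1)   # 337 -> 3 followed by 337 zeros of years
```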

We should put that in context: after 1 followed by 14 zeros of years (100000000000000 years) the universe will no longer be forming new stars, the beginning of the end. Much, much later, about 1 followed by 100 zeros of years from now, even the supermassive black holes (potential mass: 100 billion solar masses) will have completely evaporated.

Maybe 1 followed by 150 zeros of years after now, there will be "a state of no thermodynamic free energy to sustain motion or life", the heat death of the Universe. In any case, at this theoretical point in time there would still be 3 followed by 337 zeros of years left before the 8 by 6 screen finished showing all possible images, so little progress would have been made that the percentage change can only sensibly be approximated to 0.<lots of zeros>% progress. None, not a sausage. The expected lifespan of the universe as a place where things occur is but an instant compared to how long that 8 by 6 screen would take to show all the possible images made up of RGB sub-elements when running at 60Hz. A billion expected lifespans of the universe in a row would be nothing compared to how long this 8 by 6 screen would take to display all images.
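And a one-liner for the "not a sausage" claim. With the assumed figures above (heat death at roughly 10^150 years, full run of roughly 3 followed by 337 zeros of years), the percentage of the job done by then is:

```python
# Progress made by the heat death of the Universe, as a percentage.
heat_death_years = 10 ** 150
total_run_years = 3 * 10 ** 337          # 3 followed by 337 zeros

progress_pct = 100 * heat_death_years / total_run_years
print(f"{progress_pct:.1e}%")            # ~3.3e-186 percent: effectively none
```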

So moving from 1080p to SDTV or EDTV would not help. The sheer size of the numbers involved is basically impossible to comprehend; our brains are just not able to deal with it. Every pixel you add to the display makes it take 16.7 million times longer to finish displaying all possible images. And that's just with normal 8 bits per sub-pixel TVs; at some point before the heat death of the Universe we might have to start again and use a wide gamut (maybe xvYCC) colour space and deep colour (maybe 12 bits per sub-pixel).

Edit: I wanted to add some zero width space chars to that long number so it should break reasonably well for all views but we can't drop HTML into our text so experimented with pasting them in (Edit 2: seems to work).

#6 Edited by Shivoa (623 posts) -

Coming back to this, I find it interesting that a recent study (subscription required for the fulltext report; possibly the authors will publish their own draft freely online or respond to email requests for a private copy for those without institutional access), which hit the press as a slightly reshaped headline, gives some clear data on different audio and visual cues and reaction times.

Caveat: limited study. Someone with dyslexia is looking at being 70-100ms slower to react than someone in the control group, and the spread isn't so large as to make the populations significantly mingled (massive caveat: small scale study). Depending on the type of sensory input the control group was reacting to, they pressed a button in 250-350ms, so adding an extra 100ms on top of that is going to make responding in a timely fashion really hard (or impossible) for someone with dyslexia.

This data indicates someone with dyslexia can be 6 frames (at 60Hz) behind someone without the condition in being capable of responding to a cue. If you don't have impaired reactions then playing a game with a difficulty that requires strict reaction times means you are playing (life) on easy mode. L2P, etc. 5-10% of the population are playing all games which have reaction checks with this difficulty mode enabled. Maybe they should get a flag that allows them to be 6 frames late responding to input to save themselves from harm, to level the playing field. Difficulty isn't uniform; this is a textbook example of why treating it as such is a bad thing.
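The frames-behind figure is just a unit conversion. A minimal sketch, using the 70-100ms range from the study summary above:

```python
# Convert a reaction-time penalty in milliseconds to 60Hz frames.
FRAME_MS = 1000 / 60                     # one frame is ~16.7 ms at 60Hz

for delay_ms in (70, 100):
    frames = delay_ms / FRAME_MS
    print(f"{delay_ms} ms slower = {frames:.1f} frames behind")
```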

#7 Edited by Shivoa (623 posts) -

@sgtsphynx: Thanks. My formal education is as a software engineer so I only have a limited knowledge of European and American law as it relates to my field (which is mainly in the copyright/trademark/patent and liability areas). I try to read around the topic when I can find the time but I'm certainly not offering definitive solutions from a position of authority on copyright law.

As to where code splits between patent and copyright, the point of copyright is to provide exclusivity to duplication of an item's expression that is literary, artistic, or musical. Expression is generally defined (as always, laws are not always normalised by international treaty, so where you are may change what the law says and how it is interpreted) as some sort of creative outpouring of novelty, the spark of humanity. So a machine doesn't get copyright for what it generates. In most places, copyright is automatically granted at the point of creation: if you can prove you made it then you've got copyright from then on, no registration required. The functionality of something is not expression and doesn't get copyright. The branding of something is not (necessarily) expression and may not get copyright.

So we have trademark law, which attempts to provide anyone with the ability to protect themselves from being ripped off by someone pretending to sell their product. That's the normal test: would a reasonable person buying in the marketplace be duped into buying product Y when they actually intended to buy product X? So this applies to product names, style, even package shape (and colour). If you sell something called ABC then you get a trademark for it (as long as you can be shown to protect it by telling anyone who comes along later with something similar also called ABC that they need to stop), but if you want to go further then you register it (this is the difference between TM and R).

And then we have patent law. Copyright protects the text of an author's original work from being ripped off. The exact way in which my creativity is expressed in this post, or in a recipe for a cake, is protected. But the technical information I'm providing in this post, or the functional steps in how to bake a cake to my special recipe, is not considered expression. If you take my expression and make a work based on it then it is a derivative work, and possibly copyright infringement or fair use (the test is often transformativeness, but this is a much larger topic than this post can cover). For reverse engineering to work and be free from concern of infringement, if I want to extract the non-copyrighted elements of something: I take the original material and try to strip all expression from it, breaking it down into completely functional specifications; then I give that to someone else and ask them to create something based on that functionality/spec (and with no knowledge of the original creative work); the new work is copyrighted by that person and (assuming the process was done thoroughly) is not going to be copyright infringement. But if someone registered a patent on the functionality of the thing, then we've got a patent war.

Patents are meant to protect ideas, descriptions of functionality and processes, but they generally get granted for the most ridiculous things and aren't even correctly vetted against prior art (someone else clearly demonstrating the idea before the patent was applied for). Software engineers generally avoid using them, because they're broken in such a core way that the industry would be totally destroyed if everyone went trigger-happy trying to lock down functionality with them. Generally engineers have to be very aware of how patent law works in different regions (sell something online and you can be sued in any location where a sale is completed, including patent-friendly locations like the Eastern District of Texas and Japan) to try and avoid getting sued by someone who made a vague claim about something without any understanding of the technical issues or ability to execute on their vague posturing. Years later, the thing actually gets invented independently by someone else and the patent holder smells money. It's effectively SciFi authors getting pissed when engineers invent something they once referenced in a book, when they had no idea if it would work or how to make it when they wrote about the general idea. And that's when huge companies aren't fighting over who has the right to make tablets with slightly curved edges or ownership of the circle as part of a user interface. Patents also protect the billions of dollars invested in drug discovery, so we probably can't just scrap the entire thing, but it'd be nice to try and reform it from the ground up rather than slightly tweaking it to be slightly less terrible. For a start, patents do not accept independent creation: two people working in isolation who create the same good idea are racing to patent it first. Copyright has a provision that allows for just such an event.
It's crazy to think of these research teams, who have millions or billions invested in their invention, being on a 'publish first' timer to get protection for the work they did while second place gets nothing. It incentivises broad patents that are applied for before the group has even invented anything: speculation-as-patent, because you might invent something like that someday.

So code (with the modern interpretation of the law in most places) is almost always entirely protected by copyright alone. The source code was a work of expression by its authors, and when the computer compiles it to an executable program enough of that expression is retained in the running program to retain the original copyright claim. The functionality of that program (what it does) can be reverse engineered without infringement, unless some of that functionality was patented (which is rare). Someone working independently can create a program that does the same thing and even contains the same expression, if they created it independently (but if you've got a line for line duplicate of someone else's program then you're probably going to have to prove it, because the scale of the similarity will indicate it was probably copied and so infringement, same as with books etc). The look and feel of the product, and the name or similar names designed to confuse, may also be protected by trademark (assuming the creator has been protecting it).

#8 Edited by Shivoa (623 posts) -

@mirado: But I think that fundamentally misrepresents what a skill check is and how it applies to individuals. Rather than asking for a paved road up Everest, (not to be too glib about this analogy) this is the person with one leg being shouted at for demanding to be able to bring specialist equipment (like an artificial leg and specialist climbing gear) up the mountain on the climb. Someone with slow reaction times, a disability that impairs their ability to make swift & precise reactions, or equipment that adds lag is already playing at a disadvantage when you tell them to pass a skill check with a fixed timer. The request is to provide an actually fair skill check they can pass, and in doing so also open up the ability to let even the most skilful find a suitable challenge (up to the accuracy possible from a tick-based game world/the limits of data rates from the input devices).

That's not to say this proposal is trivial to implement in games, but it's the path I see towards making better games and it explicitly rejects the static difficulty wall (that someone has to bang their head against) as discussed in the original article. When people talk about the strength of the medium, interactivity is the core strength of games; it would be a shame not to exploit that strength to make games better for everyone that currently plays them and let more play different types of games currently locked off to them because they play life in hard mode.

#9 Edited by Shivoa (623 posts) -

@mirado: So people were afraid that DS2 could have an optional alternative difficulty, that not everyone in this solo game would be held to the exact same standard, to prove themselves with the exact same skill checks (oh, except that based on which model of TV you have some people get a 100ms+ advantage on reaction checks due to input lag, so that entire notion makes no sense)? Can you not see this as a sign of the disease? Pointlessly requiring that others be excluded in a way that does not change your experience at all. It's like campaigning for the lag tuner in a rhythm game to be removed, damn people who don't want to play on the right difficulty with precise timing trying to cheat by getting longer to react! That's insane, that's bad design, we can do better. We can even make a game that better reacts to how you want to be challenged at the same time, that's the beauty of interactivity.

But this is going off my first topic. My original piece was merely saying the article about all games being defined by this specific form of difficulty was both wrong and a dangerous idea to take to heart. This weird outrage that other players may have different experiences, and the worry that someone is going to steal your hard games from you if they think about being inclusive, seems worryingly similar to the responses to several other calls for inclusivity in the last few years. I hope it is merely messages getting slightly crossed and people talking past each other.

#10 Edited by Shivoa (623 posts) -

@fredchuckdave: Thanks, the ability to curve my spine like that is how I landed the paying gigs for my words in the past. I'm glad that comes through in my text, despite it being some years since I last had such a gig (and coming up for 15 years since I first landed one, back when all this was fields).

@xalienxgreyx: I'd say the exact same thing about Halo, below Heroic then the AI routines don't sing and you're missing the experience that makes the combat pop in Halo (but my reactions can't take the balance of Legendary for every encounter, sometimes I hit a wall). The way I played tLoU, Joel basically didn't use guns until the final combat section. I think this points to an undeveloped language with which we need to express our preferences (and allow us to change our preferences as we go into an experience and find out what we actually want from the game as we get a better idea of what it is we're playing and which systems we're drawn to refine). So far there have only been really coarse systems (like setting puzzle and combat difficulty separately) and few have adapted much (often it's a silent easy mode that gets activated until the next checkpoint when you fail in a spot too many times) as you play.

@yinstarrunner: This is not a piece mandating standards (although your idea that avoiding alienating some people is bad seems slightly concerning). That said, I do not lose out if someone adds a colourblind mode to their game but many people do win, this is not a fight to take anything from anyone. It is merely pointing to others who can be included if they are considered during development, who are already playing on a harder mode that you cannot perceive due to things outside of the game code (like vision or reactions). I was merely responding to an article that did define all games as being primarily about the difficulty. My (possibly) somewhat heavy-handed messaging you're responding to is merely attempting to balance against statements like "difficulty is the point, not the problem" and should be read in such a context. My desire is that, if a bar is to be set, we use the benefits of an interactive medium to allow reactivity to the position of the bar (being that trying to find a fixed point that is 'fair' to all who wish to play a game is an impossible task when we all have different abilities).