It still puzzles me today.
Does anyone know why bits made the graphics better?
no idea. is your profile icon from Izuna the Unemployed Ninja?
lol a little off topic but i'm curious...
It just really means more memory. A single integer filled to its maximum capacity on a 32-bit system holds a considerably smaller value than one on a 64-bit system.
Quote from Wikipedia:
"The number 2,147,483,647 is also the maximum value for a 32-bit signed integer in computing. It is therefore the maximum value for variables declared as integers in many programming languages running on popular CPUs, and the maximum possible score for many video games. The appearance of the number often reflects an error, overflow condition, or missing value."
And:
"The term word is used for a small group of bits which are handled simultaneously by processors of a particular architecture. The size of a word is thus CPU-specific. Many different word sizes have been used, including 6-, 8-, 12-, 16-, 18-, 24-, 32-, 36-, 39-, 48-, 60-, and 64-bit. Since it is architectural, the size of a word is usually set by the first CPU in a family, rather than the characteristics of a later compatible CPU. The meanings of terms derived from word, such as longword, doubleword, quadword, and halfword, also vary with the CPU and OS. As of 2008 practically all new desktop processors are of the x86-64 family and capable of using 64-bit words; however, they are often used in 32-bit mode. Embedded processors with 8- and 16-bit word size are still common. The 36-bit word length was common in the early days of computers.
One important cause of non-portability of software is the incorrect assumption that all computers have the same word size as the computer used by the programmer. For example, if a programmer using the C language incorrectly declares as int a variable that will be used to store values greater than 2^15 − 1, the program will fail on computers with 16-bit integers. That variable should have been declared as long, which has at least 32 bits on any computer. Programmers may also incorrectly assume that a pointer can be converted to an integer without loss of information, which may work on (some) 32-bit computers, but fail on 64-bit computers with 64-bit pointers and 32-bit integers."
While the second quote is mostly talking about the term "word", it does talk a bit about different -bit systems.
In the 8-bit, 16-bit and 32-bit 'eras', the bit count was not directly relevant to the graphics being created. There is no direct connection; it was mostly a simple marketing device in those days.
'Bits' as a term are beginning to be important again simply because 32-bit processors soon won't support enough RAM for most people (a 4 GB ceiling, and 32-bit Windows leaves even less of it usable in practice, closer to 2 GB).
"It just really means more memory. If you filled a single integer up to its maximum capacity on a 32-bit system it would be considerably smaller than on a 64-bit system. [...]"
I'm not concerned with "what is a bit?" but "why do bits make graphics better?"
"More bits when talking consoles usually means 'more colours', as more different colours can be defined with a larger number of bits."
Higher bit color does mean more colors, but they never told you many specifics about the old consoles. Basically the NES was 8 bits but didn't have 8-bit color, and the SNES was 16 bits but didn't have 16-bit color.
Yeah, that's all it is. It RAISES the limits of the system. It's like a less restricted medium, but these days that doesn't really play as much of a role since the limit is so high. It does still matter for the system hardware, since a 32-bit system can only address a certain amount of RAM. Besides the bit depth of colour systems, there's also the bit width used for memory.
A Game-Centric Example: the 8-Bit Colour System
"8-bit color graphics is a method of storing image information in a computer's memory or in an image file, such that each pixel is represented by one 8-bit byte. The maximum number of colors that can be displayed at any one time is 256." Wikipedia.
So the limit is 256 colours.
So a practical illustration is colour banding: with only 256 colours available, smooth gradients break up into visible stripes.
The 32-Bit RAM Limitation
"The emergence of the 64-bit architecture effectively increases the memory ceiling to 2^64 addresses, equivalent to approximately 17.2 billion gigabytes, 16.8 million terabytes, or 16 exabytes of RAM. To put this in perspective, in the days when 4 MB of main memory was commonplace, the maximum memory ceiling of 2^32 addresses was about 1,000 times larger than typical memory configurations. Today, when over 2 GB of main memory is common, the ceiling of 2^64 addresses is about ten billion times larger, i.e., ten billion times more headroom than the 2^32 case." Wikipedia
"I'm not concerned with 'what is a bit?' but 'why do bits make graphics better?'"
If you're referring to the "8-bit graphics" of the NES and the "16-bit graphics" of the SNES, the short answer is they don't. Those numbers are in fact irrelevant to the graphics.
While the NES did have an 8-bit processor, it used two-bit sprites and tiles. These two-bit sprites / tiles allowed for four colours to be displayed at once, with other hardware limitations restricting the number of sprites that could be displayed at once. Here's N-finity's explanation, which to this date is still the best explanation of this stuff that I can find.
Don't bother registering for those forums to ask him any further questions about sprites though; he passed away in early 2007. :(