Does anyone know why bits made the graphics better?

#1  Edited By Godwind

It still puzzles me today.

#2  spilledmilkfactory

No idea. Is your profile icon from Izuna the Unemployed Ninja?
lol, a little off topic, but I'm curious...

#3  Edited By Godwind
spilledmilkfactory said:
"no idea. is your profile icon from Izuna the Unemployed Ninja?lol a little off topic but i'm curious..."
Arggggghhhhhh!!!!!! I came in here hoping someone would have an answer.
#4  Edited By Video_Game_King

I'm guessing bits meant more memory and power, and that made more things possible in general.

#5  Edited By Psyx2

Nope.
Wiki doesn't help much either.

#6  Edited By PureRok

It just really means more memory. If you filled a single integer up to its maximum capacity, that maximum would be considerably smaller on a 32-bit system than on a 64-bit system.

Quote from Wikipedia:

"The number 2,147,483,647 is also the maximum value for a 32-bitsigned integer in computing. It is therefore the maximum value for variables declared as in many programming languages running on popular CPUs, and the maximum possible score for many video games. The appearance of the number often reflects an error, overflow condition, or missing value."

And:
"The term word is used for a small group of bits which are handled simultaneously by processors of a particular architecture. The size of a word is thus CPU-specific. Many different word sizes have been used, including 6-, 8-, 12-, 16-, 18-, 24-, 32-, 36-, 39-, 48-, 60-, and 64-bit. Since it is architectural, the size of a word is usually set by the first CPU in a family, rather than the characteristics of a later compatible CPU. The meanings of terms derived from word, such as longword, doubleword, quadword, and halfword, also vary with the CPU and OS.

As of 2008 practically all new desktop processors are of the x86-64 family and capable of using 64-bit words, they are however often used in 32-bit mode. Embedded processors with 8- and 16-bit word size are still common. The 36-bit word length was common in the early days of computers.

One important cause of non-portability of software is the incorrect assumption that all computers have the same word size as the computer used by the programmer. For example, if a programmer using the C language incorrectly declares as int a variable that will be used to store values greater than 2^15 − 1, the program will fail on computers with 16-bit integers. That variable should have been declared as long, which has at least 32 bits on any computer. Programmers may also incorrectly assume that a pointer can be converted to an integer without loss of information, which may work on (some) 32-bit computers, but fail on 64-bit computers with 64-bit pointers and 32-bit integers."


While the second quote is mostly about the term "word", it does talk a bit about systems of different bit widths.
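
For anyone who wants to see that ceiling concretely, here's a minimal C sketch (my own illustration, not from the quote; it uses the fixed-width types from stdint.h) that prints the 32-bit maximum and the value just past it:

    #include <stdio.h>
    #include <stdint.h>

    int main(void) {
        int32_t max32 = INT32_MAX;   /* 2,147,483,647 -- the number from the quote */

        printf("32-bit signed max: %d\n", (int)max32);
        /* Overflowing a signed integer is undefined behaviour in C, so do
           the +1 in 64-bit arithmetic to show what lies past the ceiling. */
        printf("one past it:       %lld\n", (long long)max32 + 1);
        printf("64-bit signed max: %lld\n", (long long)INT64_MAX);
        return 0;
    }
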
#7  Edited By Diamond

In the 8-bit, 16-bit and 32-bit 'eras', bits were not directly relevant to the graphics being created. There was no direct connection; the bit count was used as a simple marketing device in those days.

'Bits' as a term is beginning to be important again, simply because 32-bit processors soon won't support enough RAM for most people (4GB, but Windows handles RAM worse, so effectively 2GB).
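
To make that 4GB figure concrete, here's a small C sketch (my own illustration, assuming nothing beyond standard C99) that derives it from the pointer width:

    #include <stdio.h>
    #include <stdint.h>

    int main(void) {
        /* Each pointer bit doubles the number of addressable bytes,
           so a 32-bit pointer tops out at 2^32 bytes = 4 GB. */
        uint64_t bytes32 = (uint64_t)1 << 32;
        printf("32-bit address space: %llu bytes (~4 GB)\n",
               (unsigned long long)bytes32);

        /* sizeof(void *) reports the pointer width of whatever machine
           runs this: 4 bytes on a 32-bit build, 8 on a 64-bit build. */
        printf("this build's pointers: %zu bytes\n", sizeof(void *));
        return 0;
    }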

#8  Edited By Godwind
PureRok said:
"It just really means more memory. If you filled a single integer up to it's maximum capacity on a 32-bit system it would be considerably smaller than a 64-bit system.

Quote from Wikipedia:
"The number 2,147,483,647 is also the maximum value for a 32-bitsigned integer in computing. It is therefore the maximum value for variables declared as in many programming languages running on popular CPUs, and the maximum possible score for many video games. The appearance of the number often reflects an error, overflow condition, or missing value."

And:
"The term word is used for a small group of bits which are handled simultaneously by processors of a particular architecture. The size of a word is thus CPU-specific. Many different word sizes have been used, including 6-, 8-, 12-, 16-, 18-, 24-, 32-, 36-, 39-, 48-, 60-, and 64-bit. Since it is architectural, the size of a word is usually set by the first CPU in a family, rather than the characteristics of a later compatible CPU. The meanings of terms derived from word, such as longword, doubleword, quadword, and halfword, also vary with the CPU and OS.

As of 2008 practically all new desktop processors are of the x86-64 family and capable of using 64-bit words, they are however often used in 32-bit mode. Embedded processors with 8- and 16-bit word size are still common. The 36-bit word length was common in the early days of computers.

One important cause of non-portability of software is the incorrect assumption that all computers have the same word size as the computer used by the programmer. For example, if a programmer using the C language incorrectly declares as a variable that will be used to store values greater than 2 − 1, the program will fail on computers with 16-bit integers. That variable should have been declared as , which has at least 32 bits on any computer. Programmers may also incorrectly assume that a pointer can be converted to an integer without loss of information, which may work on (some) 32-bit computers, but fail on 64-bit computers with 64-bit pointers and 32-bit integers."

While the second quote is mostly talking about the term "word", it does talk a bit about different -bit systems."

I'm not asking "What is a bit?" but "Why do bits make graphics better?"
#9  Edited By tmthomsen

More bits, when we're talking consoles, usually means "more colours", since more distinct colours can be defined with a larger number of bits.
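
A rough sketch of that doubling in C (just an illustration): every extra bit doubles the number of colours that can be defined.

    #include <stdio.h>

    int main(void) {
        /* With n bits per pixel you can define 2^n distinct colours. */
        int depths[] = { 1, 2, 4, 8, 16, 24 };
        for (int i = 0; i < 6; i++) {
            printf("%2d-bit colour: %lu colours\n",
                   depths[i], 1UL << depths[i]);
        }
        return 0;
    }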

#10  Edited By Diamond
TMThomsen said:
"More bits when talking consoles usually means "more colours" as more different colours can be defined with a larger number of bits."
Higher bit color does mean more colors, but the marketing never told you many specifics about the old consoles. Basically, the NES was 8-bit but didn't have 8-bit color, and the SNES was 16-bit but didn't have 16-bit color.
#11  Edited By Johnny5

Yeah, that's all it is. It RAISES the limits of the system. It's like a less restricted medium, but these days that doesn't really play as much of a role, since the limit is so high. It does play a role in the system hardware, since 32-bit can only address a certain amount of RAM. Besides the bits for colour systems, there's also the bit system for memory.

A Game-Centric Example: An 8-Bit Colour System
"8-bit color graphics is a method of storing image information in a computer's memory or in an image file, such that each pixel is represented by one 8-bit byte. The maximum number of colors that can be displayed at any one time is 256." Wikipedia.

So the limit is 256 colours.

A practical illustration of this is colour banding: with so few levels available, smooth gradients break up into visible stripes.
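
Here's a tiny C sketch of that banding (the ramp values are made up for illustration): it quantises a smooth 8-bit greyscale ramp down to 2 bits, and the flat runs in the output are exactly the stripes you'd see on screen.

    #include <stdio.h>
    #include <stdint.h>

    int main(void) {
        /* Take a smooth 8-bit greyscale ramp (256 levels) and quantise
           it to 2 bits (4 levels). Neighbouring inputs collapse onto
           the same output level -- those flat runs are the bands. */
        for (int x = 0; x < 256; x += 16) {
            uint8_t smooth = (uint8_t)x;
            uint8_t banded = (uint8_t)((smooth >> 6) * 85); /* 0, 85, 170, 255 */
            printf("input %3u -> 2-bit output %3u\n",
                   (unsigned)smooth, (unsigned)banded);
        }
        return 0;
    }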




32-Bit RAM Limitation
"The emergence of the 64-bit architecture effectively increases the memory ceiling to 2^64 addresses, equivalent to approximately 17.2 billion gigabytes, 16.8 million terabytes, or 16 exabytes of RAM. To put this in perspective, in the days when 4 MB of main memory was commonplace, the maximum memory ceiling of 2^32 addresses was about 1,000 times larger than typical memory configurations. Today, when over 2 GB of main memory is common, the ceiling of 2^64 addresses is about ten trillion times larger, i.e., ten billion times more headroom than the 2^32 case." Wikipedia.


#12  Edited By Johnny5

Well, you know what I have to say about that? Bloody bump, man...

#13  Edited By LordAndrew
Godwind said:
"I'm not concerned with "what is a bit?" but "Why bits makes graphics better?""
If you're referring to the "8-bit graphics" of the NES and the "16-bit graphics" of the SNES, the short answer is they don't. Those numbers are in fact irrelevant to the graphics.

While the NES did have an 8-bit processor, it used two-bit sprites and tiles. These two-bit sprites/tiles allowed four colours to be displayed at once, with other hardware limitations restricting the number of sprites that could be shown simultaneously. Here's N-finity's explanation, which to this day is still the best explanation of this stuff that I can find.
Don't bother registering for those forums to ask him any further questions about sprites though; he passed away in early 2007. :(
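
For the curious, that two-bit format is why NES tile data is stored as two bitplanes. Here's a small C sketch that decodes one 8x8 tile (the tile bytes are invented for illustration, not taken from a real game); each pixel's two bits select one of four palette entries:

    #include <stdio.h>
    #include <stdint.h>

    int main(void) {
        /* An NES tile is 8x8 pixels stored as two 8-byte bitplanes. */
        uint8_t plane0[8] = { 0x3C, 0x42, 0x81, 0x81, 0x81, 0x81, 0x42, 0x3C };
        uint8_t plane1[8] = { 0x00, 0x3C, 0x7E, 0x7E, 0x7E, 0x7E, 0x3C, 0x00 };

        for (int row = 0; row < 8; row++) {
            for (int col = 0; col < 8; col++) {
                int bit = 7 - col;
                /* One bit from each plane combines into a 2-bit value
                   (0-3) that picks one of four palette entries. */
                int pixel = ((plane0[row] >> bit) & 1)
                          | (((plane1[row] >> bit) & 1) << 1);
                putchar(".123"[pixel]);
            }
            putchar('\n');
        }
        return 0;
    }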