Help me understand Sony using GDDR5 RAM for the PS4.


#1  Edited By EricSmith

I know a bit about computer hardware. I build my own PCs, do some tech support, etc., etc. That said, I don't understand how or why Sony is using GDDR5 RAM for the PS4. GDDR5 is an offshoot of DDR3 that is manufactured solely for GPUs. As far as I know, GDDR5 RAM has very loose timings (high latency), even though its effective clock speeds are high, which would seem to make it a poor choice for system RAM. I have two questions:

Can you help me understand? I am finding my knowledge to be lacking.

Is Sony having RAM custom manufactured for the PS4?

Any help would be appreciated.


#2  Edited By MAGZine

The RAM is high-bandwidth, and is super close to the GPU/CPU, since they're combined into an APU.

Since there's physically less distance between the RAM and the processing unit, and since you can build a tighter interface between them (memory bandwidth is a modern-day bottleneck, and this helps combat it), you can accept the looser timings in exchange for otherwise much faster RAM, because eliminating transmission latency claws back some of the overall latency anyway.
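
To put some very rough numbers on that tradeoff (a back-of-envelope sketch in Python; the latency and bandwidth figures are illustrative assumptions, not actual PS4 specs): for the big, streaming transfers a game does every frame, bandwidth dominates and a few extra nanoseconds of latency barely register.

# Rough model: time to move one buffer = fixed latency + size / bandwidth.
# All numbers are illustrative assumptions, not measured PS4 figures.
def transfer_time_us(size_bytes, latency_ns, bandwidth_gb_s):
    return latency_ns / 1000.0 + size_bytes / (bandwidth_gb_s * 1e9) * 1e6

buf = 64 * 1024 * 1024                   # 64 MB of texture/vertex data
print(transfer_time_us(buf, 10, 176))    # tight timings, GDDR5-class bandwidth -> ~381 us
print(transfer_time_us(buf, 50, 176))    # much looser timings, same bandwidth  -> still ~381 us
print(transfer_time_us(buf, 10, 25.6))   # tight timings, DDR3-class bandwidth  -> ~2621 us

Loose timings mostly hurt lots of tiny, scattered accesses (a CPU-style pattern), which is why GDDR5 has traditionally lived on graphics cards rather than motherboards.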

All consoles use custom GPU/CPU/APUs, so I'm guessing that the RAM will be, too.

Sooty

Timings and RAM speeds make very little difference to gaming performance.


#4  Edited By Seppli

@ericsmith:

I just assume that the consulted developers know their needs, and that it in fact makes sense for Sony to use GDDR5 RAM for the system architecture their engineers have devised for the PS4.

It's the memory bandwidth that's supposedly spectacular, and way above what we've been seeing to date in our consumer hardware. The PS3 works with 25 GB/s memory bandwidth. Standard current gaming PCs have around 50 GB/s memory bandwidth. Only extremely specialized and expensive gaming PCs have comparable memory bandwidths - and no current games properly support such a setup.

That's why the PS4's architecture is a big deal. Games will be optimized for the PS4's memory bandwidth, and it will enable a quantum leap in game fidelity. The PS4's reported memory bandwidth is 176 GB/s, more than triple that of your average current gaming PC. It can shovel around massive amounts of data with unprecedented ease and speed.
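
For what it's worth, that headline number falls straight out of the bus width and the per-pin data rate (a quick sketch; the 5500 MT/s figure is the commonly quoted GDDR5 speed for the PS4, so treat it as reported rather than confirmed):

# Peak bandwidth = bus width in bytes * transfers per second
bus_width_bits = 256
data_rate_mt_s = 5500                    # GDDR5 at 5.5 Gbps per pin, as widely reported
bandwidth_gb_s = (bus_width_bits / 8) * data_rate_mt_s * 1e6 / 1e9
print(bandwidth_gb_s)                    # -> 176.0 GB/s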

I am a total layman, in case that isn't obvious, but I'm pretty confident in my assessment that Sony's hardware strategy is sound and will yield incredible gaming experiences.


#5  Edited By MAGZine

Oh, and this really helps with things like aniso/AA (anisotropic filtering and anti-aliasing), which are extremely bandwidth intensive.


#6  Edited By DaMisterChief

One is more expensive than the other

jgf

John Carmack says the design is good. So end of discussion I guess ;)


#8  Edited By Sooty

@jgf said:

John Carmack says the design is good. So end of discussion I guess ;)

That's what he said.

But will id actually bring any good games out? Probably not; they haven't since Quake III.

hippie_genocide

@jgf: That's pretty much the gold stamp of approval right there

mcain99

To me it sounds like Sony is taking a page out of integrated graphics solutions, in which the CPU and the integrated graphics use the same memory. In a PC that memory would be system RAM (DDR3); however, most add-in video cards use GDDR5, so Sony upgraded the shared memory to the same kind you would find on a modern video card. Therefore (in theory) you should get better performance between CPU and video operations, since they use the same memory. I also think this helps developers because it makes the PS4 more similar to a computer, but a better computer, because everything is tightly integrated, whereas a PC allows for upgrades at the price of performance.


#11  Edited By Bourbon_Warrior

CPU/GPU on the same chip with the GDDR5 RAM doing the heavy lifting. All a graphics card is, anyway, is a chipset and GDDR RAM on a board. This is super high-end spec; for example, the Nvidia GTX 690 has 4GB of GDDR5 RAM and that runs at $1000. I really don't know how they compare, but if those real-time demos are any indication this thing is bonkers.

Also, a tech company could be offering this to Sony very cheap just because they know they'll be guaranteed around 50-100 million units over the console's lifetime. That's a pretty decent contract to be tied to; I'm sure a lot of memory manufacturers fought over it.

Azteck

Doesn't it mean that possible bottlenecks between components are largely eliminated, since they all share resources?


#13  Edited By Moah2JD

@ericsmith: The reason Sony chose to build in GDDR5 is that it makes it easier for developers to create games on the PlayStation 4, since that is one of Sony's goals. Feedback from developers has been really positive about Sony's decision to create a platform that gives them the freedom to realize the visuals of their games the way they want to.


#14  Edited By TruthTellah

@moah2jd said:

@ericsmith: The reason Sony chose to build in GDDR5 is that it makes it easier for developers to create games on the PlayStation 4, since that is one of Sony's goals. Feedback from developers has been really positive about Sony's decision to create a platform that gives them the freedom to realize the visuals of their games the way they want to.

I uh... I know we've made a lot of jokes about "shills" around here lately, but am I wrong to be mildly suspicious of a commenter who created an account 3 minutes ago and whose first post is basically pure Sony PR in response to a 3-month-old thread?

I mean, I don't even necessarily disagree that Sony did it to appeal to developers, likely thanks to Mark Cerny's input, but the way it is being said just sounds like someone paid to say it.

Moah2JD

@truthtellah: lol. No, I'm not being paid to say this, and I don't have a job. I'm a 16-year-old guy who just goes to school and plays games when I'm home. I just wanted to share my knowledge and what I think is the reason behind Sony going with GDDR5. I'll admit it does seem a bit suspicious for an account made just 6 minutes ago to already be replying to a post, but I just felt like doing so. So yeah. :)


#16  Edited By Darji

The point is that you cannot look at it like a normal PC. Here is what Cerny said about the differences between the CPU/GPU of a console and a PC:

Just as an example…when the CPU and GPU exchange information in a generic PC, the CPU inputs information, and the GPU needs to read the information and clear the cache, initially. When returning the results, the GPU needs to clear the cache, then return the result to the CPU. We’ve created a cache bypass. The GPU can return the result using this bypass directly. By using this design, we can send data directly from the main memory to the GPU shader core. Essentially, we can bypass the GPU L1 and L2 cache. Of course, this isn’t just for data read, but also for write. Because of this, we have an extremely high bandwidth of 10GB/sec.

Also, we’ve also added a little tag to the L2 cache. We call this the VOLATILE tag. We are able to control data in the cache based on whether the data is marked with VOLATILE or not. If this tag is used, this data can be written directly to the memory. As a result, the entirety of the cache can be used efficiently for graphics processing.

This function allows for harmonization of graphics processing and computing, and allows for efficient function of both. Essentially “Harmony” in Japanese. We’re trying to replicate the SPU Runtime System (SPURS) of the PS3 by heavily customizing the cache and bus. SPURS is designed to virtualize and independently manage SPU resources. For the PS4 hardware, the GPU can also be used in an analogous manner as x86-64 to use resources at various levels. This idea has 8 pipes and each pipe(?) has 8 computation queues. Each queue can execute things such as physics computation middleware, and other proprietarily designed workflows. This, while simultaneously handling graphics processing.

This type of functionality isn’t used widely in the launch titles. However, I expect this to be used widely in many games throughout the life of the console and see this becoming an extremely important feature.

http://www.neogaf.com/forum/showthread.php?t=532077

As for why GDDR5:

Cerny also touched on the system’s unified architecture with 8GB of GDDR5, suggesting that it could help the PS4 trump a gaming PC in hertz-for-hertz performance. He said it was something that developers wanted, so they delivered. As he explained, a PC with 8GB of GPU memory would only be able to share about 1 percent of that memory on any given frame. It all comes down to a limitation of the speed of PCIe, he noted.

The GPU and CPU in the PS4 are on a single, custom chip created by AMD that is similar to an AMD APU. The memory isn’t on the same chip but a 256-bit bus lets it access RAM at 176GB/s, eliminating any sort of bottlenecks. He said the strategy was simply to use GDDR5 memory and make sure it had plenty of bandwidth.

http://www.techspot.com/news/52367-sonys-mark-cerny-discusses-why-the-ps4-will-use-an-x86-architecture.html
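
Cerny's "about 1 percent per frame" point is easy to sanity-check (a rough sketch; the PCIe generation and frame rate are my assumptions, not anything he specified):

# How much of an 8 GB card's VRAM could you refresh over PCIe in a single frame?
pcie_gb_s = 8.0                   # assume PCIe 2.0 x16 (~8 GB/s); PCIe 3.0 roughly doubles this
fps = 60                          # assume a 60 fps target
vram_gb = 8.0
per_frame_gb = pcie_gb_s / fps
print(per_frame_gb)               # ~0.13 GB per frame
print(per_frame_gb / vram_gb)     # ~1.7%, the same ballpark as Cerny's figure

On the PS4, the GPU reads that same unified pool at the full 176GB/s, so the per-frame working set isn't throttled by an expansion bus at all.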

Dauthi693

Going off the rumored specs of the X1 and guessing Microsoft knows what they're doing:

DDR3 won't bottleneck its 12 CUs much/at all.

As for the PS4 using GDDR5, I guess DDR3 didn't have the bandwidth to fully utilise its 18 CUs.

Either that or they felt it was a more sensible choice than Microsoft's path of using 32MB of eSRAM to help the DDR3. (Microsoft announced its box contains 5 billion transistors; ~1.6 billion of those are for the eSRAM.)
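
A rough bandwidth comparison backs that up (a sketch using the rumored figures from the time; the DDR3-2133 speed for the X1 is an assumption on my part):

# Peak bandwidth on a 256-bit bus for the two memory types
def peak_gb_s(bus_bits, mt_s):
    return bus_bits / 8 * mt_s * 1e6 / 1e9

print(peak_gb_s(256, 2133))       # DDR3-2133 -> ~68 GB/s (roughly the rumored X1 main memory figure)
print(peak_gb_s(256, 5500))       # GDDR5     -> 176 GB/s (PS4)

Which is presumably why the eSRAM is there: a small, very fast scratchpad to make up some of that gap for the most bandwidth-hungry data.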


#18  Edited By kagato

I have GDDR5 in my graphics card, and despite it being nearly 4 years old it still runs most things at high settings without any hiccups, so I have to imagine it's more of a future-proofing thing for Sony than tackling the current needs of the machine. What I imagine seeing is what is happening now: the Xbox 360 struggling with games like Far Cry and the PS3 not so much, due to a better (but slightly harder to use) architecture. This time around, both machines are relatively easy to program for, and the PS4 will have memory that will allow it to run games near the end of its cycle way better than the Xbox One can, though I'm sure the same games will appear on both. It's all guesswork just now since we haven't actually had anyone benchmark performance yet, but it should allow Sony a little more breathing room when designing the PS5, as their machine won't feel as dated as the other two.


#19  Edited By theodacourt

I assumed the main reason for it was the "suspend an entire game into the RAM while the console is switched off" type of thing.

tourgen

Right now on PC the main graphics bottleneck is transmitting data to GPU memory over the PCIe bus. This won't be an issue at all with the PS4. Its architecture is going to allow for some pretty cool, highly dynamic worlds.

crusader8463

Because 5 is a bigger number than 3 so it's obviously better. Simple math people.


#22  Edited By oraknabo

I'm going to assume the real reason is that they designed a bunch of benchmark apps somewhere in the course of development and ran them on a number of different hardware configurations and the one with GDDR5 performed the best.


#23 MattyFTM  Moderator

The bit I don't understand (as someone who's not super knowledgeable about the inner workings of computers) is why this isn't the same situation as the PS3. The PS3 had a wildly different architecture to PCs. This made it difficult to develop for, which led to a lot of that console's problems. This time round they're closer to a PC-style architecture than before, but rather than having separate graphics memory and RAM, they're having 8GB of GDDR5 shared between the two. Surely that, like the PS3 before it, leads to a different architecture that developers have to learn and makes development difficult?

Developers are all speaking highly of this thing and how easy it is to develop for, so I assume this is just my ignorance of not understanding how computers work.


#24  Edited By pweidman

LOL at how no one knows a specific answer to the TC's question. Everything here is referential at best.

Now, if Sony is being totally truthful about putting GDDR5 in their system because of developer requests, then I'm totally satisfied. I really hope that is the case. I know some devs have come right out and said how much they like the new architecture, so those have been great signs.

I think you have to question any tech choice Sony has made because of their poor choices with the PS3. MS has already shown they get how all this works for gaming and development, so I have complete confidence in their design. Hell, they're almost identical except for this RAM business, right?

Dark

@pweidman: Other than the RAM, the rest of the hardware is pretty samey between the two. People are questioning how much overhead the Xbone's task switching is going to put on the system; overall it may end up having less 'oomph' purely because of the amount of resources needed to keep task switching as fast as they want it.

Nivash

@mattyftm said:

The bit I don't understand (as someone who's not super knowledgeable about the inner workings of computers) is why this isn't the same situation as the PS3. The PS3 had a wildly different architecture to PCs. This made it difficult to develop for, which led to a lot of that console's problems. This time round they're closer to a PC-style architecture than before, but rather than having separate graphics memory and RAM, they're having 8GB of GDDR5 shared between the two. Surely that, like the PS3 before it, leads to a different architecture that developers have to learn and makes development difficult?

Developers are all speaking highly of this thing and how easy it is to develop for, so I assume this is just my ignorance of not understanding how computers work.

You are right in both ways: yes, GDDR5 as system memory is considerably different from a PC, and no, it will not make the console more difficult to develop for.

What Sony have said is that none of the launch titles actually take any real advantage of the GDDR5. This would mean that developers can just treat it as any old DDR3 system RAM for ease of development. Sony assumes that as developers get more familiar with the console and start tapping its potential, as happens in every cycle, they will be able to make real use of the GDDR5 and give the system an edge long-term.

This makes sense. The demos we have seen so far certainly look good, but they don't look appreciably better than the Xbone demos, and certainly not really any better than PC games running on high-end systems today, despite both of the latter running strictly on DDR3 system memory. So if there is a significant advantage to using GDDR5, we haven't seen it yet.

Switching to GDDR5 probably was a gamble when Sony did it, but it's hardly one now. It has no real downsides at this point: Sony managed to push it all the way up to 8 gigs and the PS4 is still $100 cheaper than the Xbone. It's pure gold: even if it turns out not to make that big of a difference, it's an excellent buzzword for marketing.

I'm not sure what the theoretical advantages of GDDR5 as system memory really are. It hasn't really been done before, so there's nothing to truly compare it to. As the graphics card market has moved on in the last few years, VRAM in general has taken a backseat to other features such as GPU clock speeds and number of cores. For example, the Radeon HD 7950 has 50% more VRAM than the GTX 670, but the Nvidia card still outperforms the AMD card in most games. More VRAM really only gives a true edge at very high resolutions (which the PS4 neither appears to target nor support at the current time) and in memory-heavy tasks like high levels of anti-aliasing, the latter being something that could actually differ between the two consoles even if most people probably wouldn't see that big of a difference. Really high levels of anti-aliasing provide diminishing returns as far as PC gamers are concerned, and it's usually the first thing to be scaled back in order to gain a better frame rate or a higher resolution.

But to be honest that's not really comparable.

If pushed I could think of one thing: it could allow the unit to off-load more assets into the RAM pool for better preloading, minimizing things such as texture pop-in, allowing greater draw distances, and in general providing the basis for larger sections of the game world to be truly dynamic. I think I've heard some developers say the same thing.
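
Just to give a feel for the scale there (a toy calculation; the per-texture size and the memory budgets are assumptions on my part, not published figures):

# Toy example: how many ~4 MB compressed textures can stay resident in memory?
texture_mb = 4                             # assumed average compressed texture size
last_gen_budget_mb = 256                   # assume roughly half of the PS3/360's 512 MB goes to assets
ps4_budget_mb = 5 * 1024                   # assume ~5 GB of the 8 GB ends up available to games
print(last_gen_budget_mb // texture_mb)    # -> 64 textures resident
print(ps4_budget_mb // texture_mb)         # -> 1280 textures resident

That kind of jump is what makes aggressive preloading and streaming a lot less painful.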