#1 Edited by Blu3V3nom07 (4136 posts) -
#2 Posted by falserelic (5270 posts) -

I think that D4 game is made by the people who made Deadly Premonition. I remember seeing the trailer awhile back, but the gameplay I wasn't digging much. Though it could have a charm like Deadly Premonition had going on.

#3 Posted by Blu3V3nom07 (4136 posts) -

@falserelic: Right, and that's sort of my point. The 2nd article I linked has a very interesting article that is all about why Xbox failed in Japan. And that eventually, 12 years later, developers will finally jump onto Xbox and churn out something interesting like D4 here. And it looks really-really good, which sort of is the point of that 1st article. And well some 1998 rap from the time is good too, to relate to the dangerous vibe at the time.. :D

#4 Posted by falserelic (5270 posts) -

@blu3v3nom07 said:

@falserelic: Right, and that's sort of my point. The 2nd article I linked has a very interesting article that is all about why Xbox failed in Japan. And that eventually, 12 years later, developers will finally jump onto Xbox and churn out something interesting like D4 here. And it looks really-really good, which sort of is the point of that 1st article. And well some 1998 rap from the time is good too, to relate to the dangerous vibe at the time.. :D

Yeah, hopefully Microsoft starts coming up with some interesting exclusive games instead of being heavily focused on releasing Kinect games that the majority of people dislike. Kinect was probably Microsoft's biggest downfall.

#5 Posted by The_Laughing_Man (13629 posts) -

What I get from that is that both systems will look more or less the same but get to that point in totally different ways.

#6 Edited by BurningStickMan (201 posts) -

@the_laughing_man said:

What I get from that is that both systems will look more or less the same but get to that point in totally different ways.

Same message I got. PS4 has a bigger pool of resources, One has more "tools" to help squeeze efficiency. PC keeps on truckin'.

#7 Posted by TruthTellah (8409 posts) -

There's a lot to chew on, but I like what I hear. It sounds like both consoles will have different ways to get similar effects. Though, we may notice some issues if developers don't account enough for these differences when porting between them. Definitely not the level of disparity in development we saw in the last generation at least.

It's going to be interesting to see how it really pans out.

#8 Posted by Colourful_Hippie (4328 posts) -

@the_laughing_man said:

What I get from that is that both systems will look more or less the same but get to that point in totally different ways.

Same message I got. PS4 has a bigger pool of resources, One has more "tools" to help squeeze efficiency. PC keeps on truckin'.

And Ryse (among other titles) is running at a lower resolution (1600x900) instead of 1080p, which is sad, but it is launch, so ehh, I guess.

#9 Posted by chiablo (892 posts) -

On cross-platform games, they will look and perform identically. Developers will always develop for the least powerful system and then port to the rest. The hardware is nearly identical between MS and Sony (compared to this generation), so we probably won't see a significant difference for another 10 years, when games reach an insane level of detail and fidelity that requires coders to milk the system for all it's worth.

#10 Edited by Korwin (2813 posts) -

The explanation of the unified pipeline between the cache and the system memory was useful; it will help in circumstances where not everything fits cleanly into ESRAM. However, you would typically only use that memory space for things like post-processing and shader work (low-memory-footprint procedural/computational stuff).

Also interesting to know that they were seeing bottlenecking on the CPU, which I've kind of suspected would be the case with both machines for some time now. Jaguar is AMD's answer to the higher-end Atoms being made by Intel and REALLY isn't what you would typically throw at a high-performance machine (any desktop CPU available at the moment will devour that thing whole). Sure, they've loaded it up with four times as many cores as you would get in your standard off-the-shelf Atom/Jaguar type machine, but you are still thread-performance limited at the end of the day; there's only so much to be gained by spinning off tasks to other cores when the architecture can only do so much.
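That diminishing-returns point is just Amdahl's law in action. A quick back-of-the-envelope sketch (the 30% serial fraction is an invented example, not a measured number):

```python
# Diminishing returns from more cores: Amdahl's law.
def speedup(cores, parallel_fraction):
    """Overall speedup when only `parallel_fraction` of a frame's work
    can be spread across `cores` cores; the rest stays serial."""
    serial = 1 - parallel_fraction
    return 1 / (serial + parallel_fraction / cores)

# A frame that is 30% serial tops out quickly, even with 8 Jaguar cores:
assert round(speedup(8, 0.7), 2) == 2.58
assert speedup(100, 0.7) < 1 / 0.3   # can never beat 1/serial, about 3.33x
```

Past a handful of cores the serial part of the frame dominates, which is why per-thread speed still matters so much on a CPU like Jaguar.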

Edit: Also, one thing that sticks out like a sore thumb is that Ryse isn't even a native 1080p game... at 30Hz. Seriously, is this all these things can give us?

#11 Edited by AlexGlass (688 posts) -
@blu3v3nom07 said:

Eurogamer: Digital Foundry vs. the Xbox One architects

I don't know what the fuck this stuff means. Do you know what the fuck it means? I don't fucking know.

That looks purdy.

Also interesting:

Yes. I finally got around to reading it so I can try to explain some of the more important points.

On ESRAM:

-It's split up into more than just the four 8MB memory blocks that appeared in the diagrams. Each of those is actually split into single 1MB blocks, and each memory block can be accessed in parallel. That means yes, you can read and write simultaneously to achieve that top bandwidth, as long as you are doing it to different blocks of your eSRAM. If a read and a write hit the same RAM block, it will top out at 140-150GB/s under real-life conditions, but that can obviously be optimized further by spreading access across the full 32MB.
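A toy model of that banking behavior (the block layout is simplified, and the contended figure is just the midpoint of the quoted 140-150GB/s range):

```python
# Toy model of ESRAM banking: read and write streams only run in
# parallel when they land in different 1MB blocks. Figures are the
# ones quoted in the interview discussion, rounded.

PEAK_GBPS = 204        # combined read+write across different blocks
CONTENDED_GBPS = 145   # rough midpoint when both hit the same block

def esram_bandwidth(read_block: int, write_block: int) -> int:
    """Achievable bandwidth for one read stream plus one write stream,
    depending on whether they target the same 1MB block."""
    if read_block == write_block:
        return CONTENDED_GBPS  # accesses serialize within a block
    return PEAK_GBPS           # different blocks service both at once

# Spreading buffers across the 32 blocks keeps you near the peak:
assert esram_bandwidth(3, 17) == PEAK_GBPS
assert esram_bandwidth(3, 3) == CONTENDED_GBPS
```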

-Coherency between eSRAM and DDR3, unlike the 360's eDRAM. Instead of being locked into stuffing your render target entirely into eSRAM, you can now split it between the two, as they have built-in coherence. This is big.

"Oh, absolutely. And you can even make it so that portions of our your render target that have very little overdraw... for example if you're doing a racing game and your sky has very little overdraw, you could stick those sub-sets of your resources into DDR to improve ESRAM utilisation," he says, while also explaining that custom formats have been implemented to get more out of that precious 32MB.

If anyone's ever used a 3D program like Blender, you can visualize this a bit like the way Blender renders with tiles, but for memory access.

In this case the frame is rendered one block at a time. If you actually pay attention to the process, the blocks rendering the transparent table, which uses ray tracing for transparencies and reflections, will render much, much slower than the blocks rendering a portion of the wall.

In gaming, the frame that is getting ready to be drawn is stored as a "render target" in your buffer (a portion of video RAM). But before this final frame is actually drawn, developers often divide it into multiple secondary buffers for different portions of the frame, then mix it all together and draw the final image.

That's what they're talking about here with regard to overdraw and memory storage. Overdraw mainly deals with overlapping objects and where the respective objects are stored, such as the portion of the image where part of my ray-traced table, leg, and floor overlap, which has more overdraw compared to something like the ceiling. If the player were to point the camera up, the ceiling would be composed of mainly one object and one texture. Since the two pools are now coherent and can be accessed in parallel, I can set the render target for the more intensive read/write portions in eSRAM, and my ceiling in DDR3, which won't require as much bandwidth, for best optimization. On the 360, you basically had to fit it all in eDRAM.
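To make the idea concrete, here's a rough sketch of that placement decision. The region names, sizes, and overdraw factors are invented for illustration; a real engine would measure these per target:

```python
# Greedy placement sketch: highest-overdraw render target regions claim
# the 32MB of ESRAM first; everything else spills to DDR3.

ESRAM_BUDGET_MB = 32

def place_render_targets(regions):
    """regions: list of (name, size_mb, overdraw_factor) tuples.
    Returns a dict mapping each region name to "ESRAM" or "DDR3"."""
    placement, used = {}, 0
    for name, size_mb, _ in sorted(regions, key=lambda r: -r[2]):
        if used + size_mb <= ESRAM_BUDGET_MB:
            placement[name] = "ESRAM"
            used += size_mb
        else:
            placement[name] = "DDR3"
    return placement

# Heavy-overdraw glass table goes to ESRAM; the flat sky goes to DDR3.
frame = [("glass_table", 16, 8.0), ("characters", 12, 4.5), ("sky", 20, 1.1)]
layout = place_render_targets(frame)
assert layout["glass_table"] == "ESRAM" and layout["sky"] == "DDR3"
```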

-They've also designed some custom render target formats, including upgrading the 360's 7e3 HDR format from 16-bit to 32-bit on X1. This is basically about bits per pixel, and how RGB color is stored using a floating-point format (decimals) rather than just integers (0-255). More accurate color information.
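A quick illustration of the integer-versus-float difference (generic 8-bit quantization, not the actual 7e3 encoding):

```python
# Why float render targets carry more color information: integer
# channels clip overbright values and quantize nearby ones together.

def to_8bit(value):
    """Store a linear color value in a classic 0-255 integer channel."""
    return max(0, min(255, round(value * 255)))

# Overbright HDR values (>1.0) clip in an integer channel...
assert to_8bit(1.75) == 255
# ...and nearby values collapse into the same code:
assert to_8bit(0.5) == to_8bit(0.501)
# A floating-point channel keeps both the overbright value and the
# fine step, at the cost of more bits per pixel.
```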

On the Data Move Engines:

"Imagine you've rendered to a depth buffer there in ESRAM. And now you're switching to another depth buffer. You may want to go and pull what is now a texture into DDR so that you can texture out of it later, and you're not doing tons of reads from that texture so it actually makes more sense for it to be in DDR. You can use the Move Engines to move these things asynchronously in concert with the GPU so the GPU isn't spending any time on the move. You've got the DMA engine doing it. Now the GPU can go on and immediately work on the next render target rather than simply move bits around."

So this basically ties into the explanation above: the DMEs move this data around from buffer to buffer, keeping the GPU free from having to spend processing time on these housekeeping tasks.
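In spirit it's like handing the copy off to a separate worker so the main pipeline never blocks on it. A toy sketch (names invented; a real DME is fixed-function hardware, not a software thread):

```python
# Asynchronous buffer move, in miniature: a "move engine" thread copies
# a depth buffer out of ESRAM into DDR3 while the "GPU" keeps working.

import threading

esram = {"depth0": b"\x00" * 16}   # pretend 16-byte depth buffer
ddr3, log = {}, []

def move_engine(key):
    ddr3[key] = esram.pop(key)     # DMA: relocate the buffer to DDR3
    log.append("moved " + key)

def gpu_render(target):
    log.append("rendered " + target)  # GPU never waits on the copy

dma = threading.Thread(target=move_engine, args=("depth0",))
dma.start()
gpu_render("depth1")               # proceeds immediately, in parallel
dma.join()

assert "depth0" in ddr3 and "depth0" not in esram
assert "rendered depth1" in log
```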

On the CU's in the GPU:

-Increasing the clock speed gained them more performance than adding more CUs: 12 CUs @ 853MHz > 14 CUs @ 800MHz. Not only because AMD CUs don't scale linearly, but because increasing the clock speed also netted them a performance increase everywhere else: shaders, eSRAM, and so on.
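Putting that trade-off into rough numbers, using the standard GCN figure of 64 ALUs per CU at 2 ops per clock (my arithmetic, not from the article):

```python
# The clock-vs-CUs trade-off: 12 CUs at 853MHz versus 14 CUs at 800MHz.
# Assumes the usual GCN rate of 64 ALUs per CU, 2 ops per ALU per clock.

def gflops(cus, clock_mhz):
    """Theoretical peak shader throughput in GFLOPS."""
    return cus * 64 * 2 * clock_mhz / 1000

upclocked = gflops(12, 853)   # the configuration that shipped
wider     = gflops(14, 800)   # the alternative they tested

assert round(upclocked) == 1310
assert round(wider) == 1434

# On paper the extra CUs win (~9% more raw ALU), but the clock bump
# also speeds up the front end, fill rate, and ESRAM by 6.625% each:
assert 853 / 800 == 1.06625
```

Which is why they argue the measured gain from the upclock beat the theoretical gain from two more CUs.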

In regards to CPU offloading:

"Interestingly, the biggest source of your frame-rate drops actually comes from the CPU, not the GPU," Goossen reveals. "Adding the margin on the CPU... we actually had titles that were losing frames largely because they were CPU-bound in terms of their core threads. In providing what looks like a very little boost, it's actually a very significant win for us in making sure that we get the steady frame-rates on our console."

Pretty straightforward. In addition, they mention that the data move engines and the SHAPE audio chip are both designed to further offload processing from the CPU.

On the GPU compute and eSRAM:

-As I've speculated before, there do appear to be certain advantages to having eSRAM when it comes to GPGPU.

"Exemplar ironically doesn't need much ALU. It's much more about the latency you have in terms of memory fetch, so this is kind of a natural evolution for us," he says. "It's like, OK, it's the memory system which is more important for some particular GPGPU workloads."

One example they gave is Exemplar, which is Kinect's skeletal tracking. The other developer comment in Edge mentioned procedural generation and ray tracing via parametric surfaces as other areas that would benefit from this setup.
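A crude way to see why latency, not ALU, is the lever for a workload like that (all numbers invented for illustration):

```python
# Rough model: a GPGPU kernel's time is dominated by whichever is
# larger, waiting on memory fetches or doing ALU math. For fetch-heavy,
# math-light workloads (Exemplar-style), cutting latency cuts the
# whole kernel time, and adding ALU does nothing.

def kernel_time_us(fetches, latency_ns, alu_ops, alu_ns_per_op=0.1):
    """Simplified kernel time in microseconds: memory and math overlap,
    so the slower of the two sets the pace."""
    return max(fetches * latency_ns, alu_ops * alu_ns_per_op) / 1000

# Lots of dependent fetches, very little ALU work:
slow_pool = kernel_time_us(fetches=50_000, latency_ns=100, alu_ops=10_000)
fast_pool = kernel_time_us(fetches=50_000, latency_ns=20, alu_ops=10_000)

assert fast_pool < slow_pool   # lower-latency memory wins outright
```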

I'm sure there will be a lot more talk about some of the additional advantages of MS's architecture with the X1.

#12 Posted by Hailinel (23676 posts) -

@blu3v3nom07 said:

@falserelic: Right, and that's sort of my point. The 2nd article I linked has a very interesting article that is all about why Xbox failed in Japan. And that eventually, 12 years later, developers will finally jump onto Xbox and churn out something interesting like D4 here. And it looks really-really good, which sort of is the point of that 1st article. And well some 1998 rap from the time is good too, to relate to the dangerous vibe at the time.. :D

Here's the thing, though. Microsoft tried this whole "interesting Japanese console exclusives" strategy on the 360, and it didn't work out so well. Sure, when games like Blue Dragon or Lost Odyssey came out, the console saw a mild sales bump in Japan, but it would immediately flatten again. And a lot of those exclusives ended up being ported to the PS3 or PC. A game like D4, which is Kinect-intensive, likely won't see release on other platforms, but D4 is also likely not going to be a system-seller, either here or in Japan.

#13 Posted by Sergio (2037 posts) -

It sounds like they had some designs that were inferior, and they chose to optimize in other areas to try to compensate. Now they have to go out and do spin control to try to salvage the Xbox One before launch, due to the PR nightmare they've had since the initial announcement.

#14 Edited by Seppli (10251 posts) -

Read the whole thing, and while I understand little of the specifics, it sounds like *look, it's not as bad as it's looking on paper - just wait 'til you see the games* and *trust us* to me. While that may likely be right, it still comes off as downplaying and spinning the truth, namely that they've built the less powerful, more expensive at retail hardware.

Obviously they've been playing catch-up since the 8GB of GDDR5 RAM reveal at Sony's PS4 event in New York in February. Obviously they've made some improvements, and that's cool - just don't make it sound like suddenly your shit is manna from heaven.

#15 Edited by AlexGlass (688 posts) -

@seppli said:

Read the whole thing, and while I understand little of the specifics, it sounds like *look, it's not as bad as it's looking on paper - just wait 'til you see the games* and *trust us* to me. While that may likely be right, it still comes off as downplaying and spinning the truth, namely that they've built the less powerful, more expensive at retail hardware.

Obviously they've been playing catch-up since the 8GB of GDDR5 RAM reveal at Sony's PS4 event in New York in February. Obviously they've made some improvements, and that's cool - just don't make it sound like suddenly your shit is manna from heaven.

They're not, and I don't think that's all they said.

There will be some advantages this architecture provides for certain graphical techniques. GDDR5 provides a large amount of bandwidth for a large pool of memory, which will obviously have its own advantages. But eSRAM provides even higher bandwidth for a much smaller pool, and that small pool of very fast on-chip RAM will prove useful for certain processes, such as GPU compute that requires a lot of memory reads and writes, frame buffer operations, etc. Not everything that makes videogame graphics look good involves moving assets from RAM over a large bus. A lot of things go into making your graphics look good, and resolution and textures are just one aspect of that.
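The two memory designs in ballpark numbers (the widely published figures; treat them as approximate):

```python
# One big fast pool (PS4) versus one big slow pool plus a tiny very
# fast scratchpad (Xbox One). Bandwidth figures are the commonly
# published peaks, not measured sustained rates.

PS4 = {"GDDR5": {"size_gb": 8, "bandwidth_gbps": 176}}
X1  = {"DDR3":  {"size_gb": 8, "bandwidth_gbps": 68},
       "ESRAM": {"size_gb": 32 / 1024, "bandwidth_gbps": 204}}

# The scratchpad out-runs GDDR5 but only fits buffers, not assets:
assert X1["ESRAM"]["bandwidth_gbps"] > PS4["GDDR5"]["bandwidth_gbps"]
assert X1["ESRAM"]["size_gb"] < 0.04      # just 32MB
# For everything that doesn't fit, the X1 falls back to DDR3:
assert PS4["GDDR5"]["bandwidth_gbps"] > X1["DDR3"]["bandwidth_gbps"]
```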

#16 Edited by jgf (381 posts) -

@seppli said:

Read the whole thing, and while I understand little of the specifics, it sounds like *look, it's not as bad as it's looking on paper - just wait 'til you see the games* and *trust us* to me. While that may likely be right, it still comes off as downplaying and spinning the truth, namely that they've built the less powerful, more expensive at retail hardware.

Obviously they've been playing catch-up since the 8 GB of DDR5 RAM reveal at Sony's PS4 event in New York in February. Obviously they've made some improvements, and that's cool - just don't make it sound like suddenly your shit is mana from heaven.

That's pretty much my conclusion as well. Sony got lucky with the GDDR5 gamble, while the Xbox One was already targeting 8GB of RAM in the early design stages. So they went with DDR3 and built a system around it to mitigate the memory performance problem. They chose on-die eSRAM, which takes up a lot of space on the chip, and in turn the GPU could not take up as much space as on the PS4.

The DDR3 + eSRAM choice was forced on the MS engineers by the constraints they were given: include Kinect, have 8GB of RAM, and stay below $500. They should just acknowledge that as a fact and stop spreading marketing nonsense. It was never a "games first" choice.