I still wish PGR 5 gets announced soon for next year.. This is nice and all but.. Le sigh.
But, with Bizarre Creations gone would it really be a PGR game?
You mean without the short-ass runs and 90-degree corners that never let you go faster than 50MPH? I'm in! I hated PGR for that reason.
If you're making an arcade racer, let me freaking race at 100MPH+ 80% of the time. I'm not interested in slowing down that much. Honestly, I'm not a big sim racing fan, but even Forza doesn't seem to be that bad about it, and its tracks seem much better suited for high-speed racing.
In addition, PGR's car physics always felt ULTRA heavy and bulky to me. Ridge Racer and Outrun FTW! :)
Read the whole thing, and while I understand little of the specifics, it sounds like *look, it's not as bad as it looks on paper - just wait 'til you see the games* and *trust us* to me. While that may well turn out to be right, it still comes off as downplaying and spinning the truth, namely that they've built the less powerful hardware that costs more at retail.
Obviously they've been playing catch-up since the 8 GB of GDDR5 RAM reveal at Sony's PS4 event in New York in February. Obviously they've made some improvements, and that's cool - just don't make it sound like suddenly your shit is manna from heaven.
They're not and I don't think that's all they said.
There will be some advantages this architecture provides to certain graphical techniques. GDDR5 provides a large amount of bandwidth for a large pool of memory, which obviously has its own advantages. eSRAM provides even higher bandwidth, but for a much smaller pool. That small pool of very fast on-chip RAM will prove useful for certain processes, such as GPU compute that requires a lot of memory reads and writes, frame buffer operations, etc. Not everything that makes videogame graphics look good comes down to moving assets around from RAM on a wide bus. There's a lot that goes into making your graphics look good, and resolution and textures are just one aspect of that.
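To make that trade-off concrete, here's a rough back-of-the-envelope sketch in Python. The bandwidth figures are placeholder assumptions I'm using for illustration, not official specs: the point is just that a bandwidth-bound pass over a buffer small enough to fit in the fast pool finishes faster there, even though the big pool holds far more.

```python
# Rough, illustrative arithmetic only -- the bandwidth figures below are
# placeholder assumptions, not official numbers.
FAST_POOL_BW_GBS = 200.0   # assumed bandwidth of the small on-chip pool
LARGE_POOL_BW_GBS = 68.0   # assumed bandwidth of the big DDR pool

def pass_time_ms(bytes_touched, bandwidth_gbs):
    """Time a purely bandwidth-bound pass would take, ignoring latency and compute."""
    return bytes_touched / (bandwidth_gbs * 1e9) * 1e3

# A 1080p RGBA16F render target read and written once per frame:
rt_bytes = 1920 * 1080 * 8    # ~16.6 MB -- small enough to fit in a 32MB pool
traffic = rt_bytes * 2        # one read plus one write

print(f"small fast pool: {pass_time_ms(traffic, FAST_POOL_BW_GBS):.3f} ms")
print(f"large slow pool: {pass_time_ms(traffic, LARGE_POOL_BW_GBS):.3f} ms")
```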
Thanks for posting the download link. It's true that YouTube compression kills a lot of the image quality from next-gen games. Oh and I wonder why they didn't show the cockpit of the Huayra in the video.
Yeah, we need to start petitioning YouTube to change their compression and frame rate. They really need to do something. I'm actually starting to wonder if the compressed videos are even 1080p. It's so bad, I'm starting to think they upscale everything from a compressed 720p video, or maybe even lower, and call it 1080p. Compression alone typically doesn't create the kind of clearly visible pixel blocks I always see in YouTube videos.
OP, I don't think most people understand the magnitude of what you just posted when you say Halo 4 PC streaming, because you merged it with two different topics. OK, so basically the idea that Azure is only going to be dedicated servers can officially be put to rest now.
Microsoft is building its own cloud gaming service. Company officials demonstrated a prototype of the service during an internal company meeting today. Sources familiar with the meeting revealed to The Verge that Microsoft demonstrated Halo 4 running on a Windows Phone and PC, both streaming the game from the cloud. We're told that the concept service runs smoothly on both devices, and that Microsoft has managed to reduce the latency on a Lumia 520 to just 45ms.
For the record, 45ms is basically incredible and that's on a phone, so I'm assuming we're talking cell phone networks and cell phone wireless.
Here's the Lumia 520:
45ms is less than typical LCD TV input lag, and half the latency that the newest cloud services running Nvidia's GRID servers can offer. And it's roughly 3x-5x less latency than OnLive. In other words, we're there.
At that latency, it's just a different world.
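To put 45ms in perspective, here's some quick frame-latency arithmetic in Python. The 45ms figure is from the report above; the TV lag and OnLive-class numbers are my own rough assumptions for comparison:

```python
# How many frames of delay a given latency amounts to at 60fps and 30fps.
frame_60 = 1000 / 60   # ~16.7 ms per frame at 60fps
frame_30 = 1000 / 30   # ~33.3 ms per frame at 30fps

for label, delay_ms in [("reported streaming latency", 45),
                        ("typical LCD TV input lag (assumed)", 50),
                        ("OnLive-class latency (assumed)", 150)]:
    print(f"{label}: {delay_ms} ms = "
          f"{delay_ms / frame_60:.1f} frames @60fps, "
          f"{delay_ms / frame_30:.1f} frames @30fps")
```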
Microsoft has been researching cloud streaming, especially for cell phones, for a long time now btw. You can read in-depth papers on it here and here. And not just for gaming, but for applications like MS Office, 3D design programs and many other uses.
Can't wait to see what they come up with as far as hybrid X1/server offloading goes. It will be amazing to see something like CloudLight implemented in X1 games. If they can reach that type of latency worldwide, or even 60ms, you could realistically run your entire real-time lighting engine on a server.
Well, as far as direct feed image quality goes, it seems the video is still encoded at 29.9fps, but the difference between the download version and the YouTube-compressed version is quite drastic. Glad to see MS finally doing something about it and putting it up on their media website.
Is it me, or did they just deactivate shadows in the XboxOne version or something?
If you look at the screen next to it, that one has shadows. From what I heard it's an alpha build or something.
Pretty sure they're still using Enlighten for BF4, which goes back to BF3 and is even something they ran on the 360 and PS3 as well. There's nothing fancy going on here. It's a mixture of shadow maps and dynamic shadows on all platforms, so I wouldn't expect all objects to cast dynamic shadows.
Which is pretty sad for 2013, but that's a different matter.
Yes. I finally got around to reading it so I can try to explain some of the more important points.
On ESRAM:
-It's split up into more than just four 8MB memory blocks as it appeared in the diagrams. Each of those 8MB blocks is actually split into eight 1MB blocks, and each memory block can be accessed in parallel. Which means that yes, you can do reads and writes simultaneously to achieve that top bandwidth, as long as you are doing them to different blocks of your eSRAM. If you hit the same RAM block for both the read and the write, it will top out at 140-150GB/s under real-life conditions, but that can obviously be further optimized by spreading access out across the full 32MB. (A toy sketch of the bank idea follows below.)
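Here's a toy way to picture that in Python. The bank layout and the logic are my own illustration of the general idea, not MS's actual hardware arbitration: a read and a write can overlap if they land in different 1MB modules, and get serialized if they hit the same one.

```python
# Toy model of banked on-chip memory: 32 banks of 1MB each.
# Purely illustrative -- not the real hardware arbitration logic.
NUM_BANKS = 32

def bank_of(address_mb):
    """Which 1MB bank a (megabyte-granular) address falls into."""
    return address_mb % NUM_BANKS

def can_overlap(read_addr_mb, write_addr_mb):
    """A read and a write proceed in parallel only if they hit different banks."""
    return bank_of(read_addr_mb) != bank_of(write_addr_mb)

# Read buffer A (banks 0-15) while writing buffer B (banks 16-31): both transfers overlap.
print(can_overlap(read_addr_mb=3, write_addr_mb=20))   # True  -> closer to peak bandwidth
# Both operations landing in the same bank serialize instead.
print(can_overlap(read_addr_mb=5, write_addr_mb=5))    # False -> reads and writes take turns
```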
-Coherency between eSRAM and DDR3, unlike the 360's eDRAM. Instead of being locked into stuffing your render target into just the eSRAM, you can now split it between the two, as they have built-in coherence. This is big.
"Oh, absolutely. And you can even make it so that portions of our your render target that have very little overdraw... for example if you're doing a racing game and your sky has very little overdraw, you could stick those sub-sets of your resources into DDR to improve ESRAM utilisation," he says, while also explaining that custom formats have been implemented to get more out of that precious 32MB.
If anyone's ever used a 3D program like Blender, you can sort of visualize this like the way Blender renders with tiles, but for memory access.
In that case the frame is rendered one block at a time. If you actually pay attention to the process, blocks which are rendering the transparent table, which uses ray tracing for transparencies and reflections, render much, much slower than the blocks which are rendering a portion of the wall.
In gaming, the frame that is getting ready to be drawn is stored as a "render target" in your buffer (a portion of video RAM). But before this final frame is actually drawn, developers often divide it into multiple secondary buffers for different portions of the frame, before they mix it all together and draw the final image.
They're talking about this in regards to overdraw and memory storage. Overdraw mainly deals with overlapping objects and where the respective objects are stored. For example, the portion of the image where part of my ray-traced table, its leg, and the floor overlap has more overdraw than something like the ceiling. If the player were to point the camera up, the ceiling would be composed of mainly one object and one texture. Since the two pools are now coherent and can be accessed in parallel, I can set the render target for the more intensive read/write portions in eSRAM, and my ceiling in DDR3, which won't require as much bandwidth, for the best optimization. On the 360, you basically had to fit it all in eDRAM.
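A minimal sketch of that placement decision, in Python. This is a hypothetical heuristic I made up to show the shape of the idea (the function, threshold, and target names are not real SDK calls): high-traffic, high-overdraw targets get the small fast pool until it's full, everything else goes to DDR.

```python
# Hypothetical placement heuristic -- names and threshold are made up for illustration.
ESRAM_BUDGET_MB = 32

def place_render_targets(targets):
    """Assign the highest-traffic render targets to the small fast pool until it's
    full; low-overdraw targets (sky, ceiling, etc.) go to the big DDR pool."""
    placement, used_mb = {}, 0
    for name, size_mb, est_overdraw in sorted(targets, key=lambda t: -t[2]):
        if est_overdraw > 1.5 and used_mb + size_mb <= ESRAM_BUDGET_MB:
            placement[name] = "ESRAM"
            used_mb += size_mb
        else:
            placement[name] = "DDR3"
    return placement

scene = [("gbuffer_albedo", 8, 3.0), ("transparents", 8, 4.5),
         ("sky", 8, 1.0), ("ceiling_pass", 8, 1.1)]
print(place_render_targets(scene))
# -> high-overdraw targets land in ESRAM, low-overdraw ones in DDR3
```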
-They've also designed some custom render target formats, including upgrading from 16-bit to 32-bit 7e3 HDR from the 360 to X1. This is basically in reference to bits per pixel, and how RGB color is stored using a floating point format (decimals) rather than just integers (0-255). More accurate color information.
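A generic illustration of why floating-point storage matters for HDR. This is not the actual 7e3 encoding, just the integer-vs-float idea: an 8-bit integer channel clamps at 1.0 and quantizes to 1/255 steps, while a float keeps bright values above 1.0 and more precision in the darks.

```python
# 8-bit integer storage clamps at 1.0 and quantizes to 1/255 steps;
# a float format keeps HDR highlights above 1.0 and fine detail near zero.
def store_as_8bit_int(value):
    return min(max(round(value * 255), 0), 255) / 255   # what comes back out

for v in [0.004, 0.5, 3.2]:   # dim pixel, mid-grey, HDR highlight
    print(f"{v:>5} -> int8: {store_as_8bit_int(v):.4f}   float: {float(v)}")
# 3.2 collapses to 1.0 in the integer format but survives as a float.
```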
On the Data Move Engines:
"Imagine you've rendered to a depth buffer there in ESRAM. And now you're switching to another depth buffer. You may want to go and pull what is now a texture into DDR so that you can texture out of it later, and you're not doing tons of reads from that texture so it actually makes more sense for it to be in DDR. You can use the Move Engines to move these things asynchronously in concert with the GPU so the GPU isn't spending any time on the move. You've got the DMA engine doing it. Now the GPU can go on and immediately work on the next render target rather than simply move bits around."
So this basically ties into the explanation above, where the DMEs are moving this data around from buffer to buffer, freeing the GPU from having to spend processing time on these housekeeping tasks.
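A minimal sketch of that "copy in the background while the main worker keeps going" idea, using a Python thread as a stand-in for a Move Engine. This is obviously nothing like the real hardware; it just shows the overlap the quote describes.

```python
import threading

def dma_move(src, dst):
    """Stand-in for a Move Engine: copies data without tying up the 'GPU'."""
    dst[:] = src[:]   # the copy happens on its own thread

depth_in_esram = bytearray(b"\x01" * 1024)   # finished depth buffer, now just a texture
copy_in_ddr = bytearray(1024)

# Kick off the move asynchronously...
mover = threading.Thread(target=dma_move, args=(depth_in_esram, copy_in_ddr))
mover.start()

# ...while the "GPU" immediately starts on the next render target.
next_target = [x * 2 for x in range(8)]      # stand-in for real rendering work

mover.join()                                  # transfer finished in the background
print(copy_in_ddr == depth_in_esram, next_target)
```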
On the CU's in the GPU:
-Increasing the clock speed gained them more performance than adding more CUs would have: 12 CUs @ 853MHz > 14 CUs @ 800MHz. Not only because AMD CUs don't scale linearly, but because increasing the clock speed also netted them an increase in performance in other areas, like the shaders, eSRAM, CPU, etc.
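For the raw ALU numbers (standard GCN math: 64 ALUs per CU, 2 ops per ALU per clock), more CUs would actually win on paper; MS's argument is that the clock bump speeds up everything else on the chip too, which the raw figure below doesn't capture.

```python
# Raw ALU throughput only -- ignores the non-CU parts of the chip that also
# benefit from a clock bump, which is exactly the point being made above.
def tflops(cus, mhz, alus_per_cu=64, ops_per_alu_per_clock=2):
    return cus * alus_per_cu * ops_per_alu_per_clock * mhz * 1e6 / 1e12

print(f"12 CUs @ 853MHz: {tflops(12, 853):.3f} TFLOPS")   # ~1.31
print(f"14 CUs @ 800MHz: {tflops(14, 800):.3f} TFLOPS")   # ~1.43
```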
In regards to CPU offloading:
"Interestingly, the biggest source of your frame-rate drops actually comes from the CPU, not the GPU," Goossen reveals. "Adding the margin on the CPU... we actually had titles that were losing frames largely because they were CPU-bound in terms of their core threads. In providing what looks like a very little boost, it's actually a very significant win for us in making sure that we get the steady frame-rates on our console."
Pretty straightforward. In addition, they mention how the Data Move Engines and SHAPE audio chip are both designed to further offload processing from the CPU.
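The arithmetic behind "CPU-bound frame drops" is simple. The per-frame workload numbers here are made up just to show the shape of the problem; the ~9% figure is roughly the 1.6GHz to 1.75GHz CPU bump.

```python
# If the CPU's core game threads can't finish inside the frame budget,
# the frame drops no matter how fast the GPU is.
frame_budget_ms = 1000 / 30   # ~33.3 ms at 30fps

cpu_work_ms = 35.0            # assumed per-frame CPU cost (sim, AI, draw submission)
gpu_work_ms = 28.0            # assumed per-frame GPU cost

print("frame drops:", max(cpu_work_ms, gpu_work_ms) > frame_budget_ms)   # True, CPU's fault

# A modest CPU clock bump (or offloading work to SHAPE/the DMEs) is enough:
cpu_work_ms /= 1.09           # ~9% faster CPU
print("frame drops after bump:", max(cpu_work_ms, gpu_work_ms) > frame_budget_ms)  # False
```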
On the GPU compute and eSRAM:
-As I have speculated before, there do appear to be certain advantages to having eSRAM when it comes to GPGPU.
"Exemplar ironically doesn't need much ALU. It's much more about the latency you have in terms of memory fetch, so this is kind of a natural evolution for us," he says. "It's like, OK, it's the memory system which is more important for some particular GPGPU workloads."
One example they gave is Exemplar, which is Kinect's skeletal animation tracking. The other developer comment in Edge mentioned procedural generation and ray tracing via parametric surfaces as other areas that would benefit from this setup.
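A rough structural illustration in Python of the distinction the quote is drawing: work where the ALUs are the limiter versus work where memory-fetch latency is the limiter (the latter being where very fast on-chip memory helps most). The code is illustrative only; actual GPGPU workloads obviously don't run as Python loops.

```python
import random

# ALU-bound work: lots of math per element, sequential and prefetch-friendly
# access, so raw compute throughput is the limiter.
def alu_heavy(n):
    return sum((x * 3 + 7) % 11 for x in range(n))

# Fetch-bound work: almost no math, but every step is a dependent, effectively
# random lookup (pointer chasing), so memory latency is the limiter instead.
def fetch_heavy(table, steps):
    idx = 0
    for _ in range(steps):
        idx = table[idx]   # can't start the next fetch until this one returns
    return idx

table = list(range(100_000))
random.shuffle(table)          # permutation, so every lookup stays in range
print(alu_heavy(100_000), fetch_heavy(table, 100_000))
```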
I'm sure there will be a lot more talk about some of the additional advantages of MS's architecture with the X1.
I saw it too. But we've heard so many other sources give positive impressions of Kinect that I don't think it's time to hang your hat on one report about one demo of an unfinished game.
We've seen KSR and the scanning tech at Gamescom, and it looked to work fine to me. I don't think Kotaku should pass judgement on the hardware based on one game.
What? 30 frames? Why not 60? We're talking next-gen, for fuck's sake.
We're talking current-gen PC hardware actually, and 1.3/1.8 TFLOPS. That's half the TFLOPS of a GTX 780 or 7970. They're consoles, not miracle machines. Just what do you expect out of these next-generation consoles? I accepted it a long time ago. They're not that hot.
Don't get me wrong, devs will still get great looking games out of them, the console development environment will help, and DR3 has a ton of enemies on screen and is basically an open world game, but they're modest machines at best.
In addition, I just highly doubt you're going to see too many open world games running at 60fps, even next next-generation. 10 years from now you'll still have 30fps games, just because some games benefit more from graphics, physics, or more enemies on screen than from frame rate. So unless we ever get to the point where developers have so much power that they barely use 50% of it (i.e. probably never), this trade-off will always be made.
If you want 60fps as a baseline, your best bet is cloud gaming. Turns out 60fps is beneficial to latency, so there devs might have a reason to shoot for 60 more consistently.
Excellent, and I never doubted it. The builds we have been seeing were from E3, so optimization, spec bumps, and improved software tools are more than enough to get the game running smoothly.