The Xbox One is Microsoft's third video game console. It was released on November 22, 2013 in 13 countries.

Xbox One CPU gets a 150 MHz bump from estimated specs

Sergio

@alexglass said:

4. Sure, but I don't think I ever talked about the X1 having an advantage in pure compute. That is well known. But it depends on how they are being used and what for. What I'm more interested in, however, is how the eSRAM, the tile/untile function of the move engines, and the compute units work together, and what kind of software techniques they're designed to accelerate. For example, voxel cone ray tracing is a pretty hot topic currently, and it was the driving force behind Unreal Engine 4 before they had to yank it out because mid-range PCs and next-gen consoles were not considered powerful enough in terms of pure TFLOPS to run it. But it's a technique that can benefit greatly from partially resident textures, which deal precisely with a lot of memory reads and writes. So if it turns out that MS designed the X1 to be suited to handle something like this, I think it's going to be a pretty big deal. It's going to have a much bigger impact on graphics than how many high-res textures you can stuff into your RAM compressed in their native resolution and move around through a large bus.

I have to wonder, though: if they felt that mid-range PCs aren't powerful enough to handle it, and maybe the PS4 doesn't do it as well as the Xbox One but the latter can pull it off, would developers put in the resources to do voxel cone ray tracing for one lone platform, depending on the effort it takes? I could expect first- and second-party developers making Xbox One exclusives to exploit every bit of power the system has, but a third-party developer making a multiplatform game will probably do what looks best on the most hardware out there.
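For anyone who hasn't seen the technique, here is a minimal CPU-side sketch of what cone tracing a prefiltered voxel grid means. Every name and the toy scene are invented for illustration; a real implementation runs in shaders against a sparse voxel octree or a mipmapped 3D texture.

    # Toy illustration of a single cone trace through a prefiltered voxel grid.
    import numpy as np

    def build_mips(grid):
        """Prefilter a dense occupancy grid into a mip chain by 2x2x2 averaging."""
        mips = [grid.astype(np.float32)]
        while mips[-1].shape[0] > 1:
            g = mips[-1]
            n = g.shape[0] // 2
            mips.append(g.reshape(n, 2, n, 2, n, 2).mean(axis=(1, 3, 5)))
        return mips

    def sample(mips, pos, level):
        """Point-sample the mip chain at a normalized position in [0, 1)^3."""
        g = mips[min(int(level), len(mips) - 1)]
        idx = np.clip((pos * g.shape[0]).astype(int), 0, g.shape[0] - 1)
        return g[tuple(idx)]

    def trace_cone(mips, origin, direction, aperture=0.3, max_dist=1.0):
        """Accumulate occlusion along a cone; a wider footprint means a coarser mip."""
        res = mips[0].shape[0]
        origin = np.asarray(origin, dtype=float)
        direction = np.asarray(direction, dtype=float)
        direction = direction / np.linalg.norm(direction)
        occlusion, dist = 0.0, 0.01
        while dist < max_dist and occlusion < 1.0:
            radius = aperture * dist                 # cone footprint grows with distance
            level = np.log2(max(radius * res, 1.0))  # footprint in texels -> mip level
            a = sample(mips, origin + dist * direction, level)
            occlusion += (1.0 - occlusion) * a       # front-to-back compositing
            dist += max(radius, 1.0 / res)           # step proportional to footprint
        return occlusion

    if __name__ == "__main__":
        grid = np.zeros((32, 32, 32))
        grid[20:24, 14:18, 14:18] = 1.0              # a single blocker in the toy scene
        mips = build_mips(grid)
        print("occlusion toward the blocker:", trace_cone(mips, (0.1, 0.5, 0.5), (1, 0, 0)))
        print("occlusion into empty space:  ", trace_cone(mips, (0.1, 0.5, 0.5), (0, 0, 1)))

The prefiltering is the whole trick: one coarse sample stands in for many fine ones as the cone widens, which is what makes a ray-tracing-like effect plausible on console-class GPUs.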


#103  Edited By AlexGlass

    @sergio said:

    @alexglass said:

4. Sure, but I don't think I ever talked about the X1 having an advantage in pure compute. That is well known. But it depends on how they are being used and what for. What I'm more interested in, however, is how the eSRAM, the tile/untile function of the move engines, and the compute units work together, and what kind of software techniques they're designed to accelerate. For example, voxel cone ray tracing is a pretty hot topic currently, and it was the driving force behind Unreal Engine 4 before they had to yank it out because mid-range PCs and next-gen consoles were not considered powerful enough in terms of pure TFLOPS to run it. But it's a technique that can benefit greatly from partially resident textures, which deal precisely with a lot of memory reads and writes. So if it turns out that MS designed the X1 to be suited to handle something like this, I think it's going to be a pretty big deal. It's going to have a much bigger impact on graphics than how many high-res textures you can stuff into your RAM compressed in their native resolution and move around through a large bus.

I have to wonder, though: if they felt that mid-range PCs aren't powerful enough to handle it, and maybe the PS4 doesn't do it as well as the Xbox One but the latter can pull it off, would developers put in the resources to do voxel cone ray tracing for one lone platform, depending on the effort it takes? I could expect first- and second-party developers making Xbox One exclusives to exploit every bit of power the system has, but a third-party developer making a multiplatform game will probably do what looks best on the most hardware out there.

Well, there are amateurs, one-man programmers, all over the net implementing voxel cone ray tracing in a couple of months. It doesn't seem like it's very hard.

The fact is, if you can get it up and running, why wouldn't a developer want to use it? It's superior in just about every way to current global illumination techniques, and it basically gives you ray tracing lite on consoles. Not to mention, once you get the engine up and running, all the effort and dev time that goes into faking it with rasterized graphics gets thrown out the window: shadow maps, reflection maps, specular maps, and so on. It should help development big time, not the other way around. It makes all the impossibly hard effects that require manpower and lots of work easy, and you get all the effects that just aren't currently possible without ray tracing.

Most of the demos are running on mid-range GPUs: anywhere from a GTX 660 without partially resident textures and without an octree implementation, which looks rough, to 7870s without octrees, which look fantastic.


The one key area that seems to make or break it, and that everyone who has attempted it talks about, is the need for partially resident textures to be able to handle the large 3D textures. Which is why I keep digging all over the place to find out just exactly how this architecture, this tile/untile feature of the data move engines, and the eSRAM all fit into place. MS is pushing PRT hard in DirectX, and the X1 definitely seems to be equipped for this. The only question, which I don't yet have an answer for, is just how well equipped is it? Is it powerful enough to do it justice, and is it enough to make the difference between a demo and an actual game running at a stable 30fps?

    If so, then that's a big deal.
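To make the partially resident textures idea concrete, here is a rough sketch of the bookkeeping that PRT and tiled resources boil down to: a huge virtual texture is split into fixed-size tiles, only the tiles actually sampled get mapped into a small pool of physical pages, and a miss triggers a stream-in request. The tile size, pool size, and all names are assumptions chosen to echo the 32 MB eSRAM figure, not anything MS has documented.

    # Illustrative page-residency bookkeeping for a partially resident (tiled) texture.
    # Real PRT support lives in the GPU and the graphics API; this only sketches the logic.
    TILE = 128          # tile edge in texels; a 128x128 tile of 32-bit texels is 64 KB
    POOL_TILES = 512    # physical pool, e.g. 32 MB / 64 KB = 512 resident tiles

    class PartiallyResidentTexture:
        def __init__(self, width, height):
            self.width, self.height = width, height
            self.resident = {}      # (tile_x, tile_y) -> slot in the physical pool
            self.lru = []           # least-recently-used order of resident tiles
            self.misses = 0

        def sample(self, u, v):
            """Return the pool slot backing texel (u, v), faulting the tile in if needed."""
            key = (int(u * self.width) // TILE, int(v * self.height) // TILE)
            if key not in self.resident:
                self.misses += 1                        # would kick off an async stream-in
                if len(self.resident) >= POOL_TILES:    # pool full: evict the coldest tile
                    slot = self.resident.pop(self.lru.pop(0))
                else:
                    slot = len(self.resident)
                self.resident[key] = slot
            self.lru = [k for k in self.lru if k != key] + [key]
            return self.resident[key]

    if __name__ == "__main__":
        tex = PartiallyResidentTexture(16384, 16384)    # a 16K x 16K virtual texture
        for i in range(10000):                          # sample a small on-screen region
            tex.sample(0.25 + (i % 100) / 4000.0, 0.5)
        print("resident tiles:", len(tex.resident), "of", POOL_TILES, "slots; misses:", tex.misses)

The open question in the post above is whether the move engines and eSRAM make that fault-and-stream path cheap enough to lean on every frame.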

#104  Edited By Sergio

    @alexglass: You didn't really answer the question and just did another of your "if so, then that's a big deal" suppositions.


#106  Edited By AlexGlass

    @sergio said:

    @alexglass: You didn't really answer the question and just did another of your "if so, then that's a big deal" suppositions.

I thought I did, and I explained why. Isn't that the same old argument, though? If developers won't take advantage of unique aspects of the X1 hardware, then why would they do it on the PS4?

    Besides, I don't care if it's a multiplatform game or not. Exclusives using this would be just fine by me. That's where I expect to see any major difference in games anyway.

I will say this much, though: it will be on PC this gen, so that might make a difference. PC will have both games and game engines running SVO cone tracing and ray tracing by the end of this upcoming console generation. I'll go out on a limb and say the first game to use it will be out within three years, tops.

#107  Edited By jgf

@the_laughing_man said:

@jgf said:

@the_laughing_man: Yeah, there is this fat dude next to you. He can't move fast, but he eats all the candy you give him in an instant. He keeps drawing nice pictures in return, though.

Will the games be pretty? Will they have a decent frame rate? That's all I care about.

    If you feed candy to the fat guy fast enough to keep him happy, games will look spectacular.

Sergio

@alexglass said:

@sergio said:

    @alexglass: You didn't really answer the question and just did another of your "if so, then that's a big deal" suppositions.

I thought I did, and I explained why. Isn't that the same old argument, though? If developers won't take advantage of unique aspects of the X1 hardware, then why would they do it on the PS4?

    Besides, I don't care if it's a multiplatform game or not. Exclusives using this would be just fine by me. That's where I expect to see any major difference in games anyway.

I will say this much, though: it will be on PC this gen, so that might make a difference. PC will have both games and game engines running SVO cone tracing and ray tracing by the end of this upcoming console generation. I'll go out on a limb and say the first game to use it will be out within three years, tops.

I don't expect them to go the extra mile to leverage anything especially unique in the PS4, because they certainly didn't with the PS3. First-party, late-generation games on it have blown any multiplatform (excluding PC) and Xbox 360 games out of the water, and I say that as someone who had primarily chosen the 360 as my multiplatform console of choice. However, Sony's approach of simplifying things means it's easier to leverage that power before trying to specialize anything.

Exclusives will look great on either machine. I still think PS4 exclusives will look better than Xbox One exclusives, though probably not by as noticeable a margin as between multiplatform games on each system.

Yes, I'd expect these techniques to be used sometime during this console generation, unless better techniques come along. That's simply because PC components will continue to improve, and mid-range components will be better than what we consider mid-range now, while the consoles remain the same as when they were released. I'm not going to make predictions on when that may be. At that point, there may be little to no difference in how games look.

#109  Edited By jgf

    @alexglass said:

1. Excuse me, I had them reversed. In which case, as you said, it could be worse, with only 10 GB/s of that being coherent. That's something that still needs to be clarified.

2. Yeah, of course texture streaming from a hard disk can result in pop-in as well, and that's a bigger bottleneck. But that's a different subject, and it's a far more obvious problem. On your calculations, your numbers add up, but from my understanding latency doesn't typically affect it in the manner you describe. I'm not qualified in any way to start guessing at numbers like that, but what I know is that it often results in missed clock cycles, which can cause your bandwidth to run less efficiently overall, because this request happens multiple times, not once, and you're not streaming a single 20 MB texture. So in your case, aren't you using ideal conditions? Again, I'm not saying that was the reason for the pop-in; it's just a question. It certainly appears to be more of a trend with PS4 games from what we have seen, so something is going on.

3. The rest of the world? As in, people who believe MS and Sony are spying on each other and trying to outdo or mitigate each other's advantages when designing chips, rather than having a particular goal in mind that they're trying to achieve? I'm OK with disagreeing with them. I think MS designed the X1 and DirectX from their inception to best take advantage of one another, and I believe the eSRAM isn't there to deal with streaming a whole bunch of stuff in bulk, as opposed to the main RAM bandwidth.

1. Or it could be equal at 30 GB/s. Somehow you always seem to forget to mention that.

2. If you don't stream textures from the hard disk, where are you streaming them from? The numbers are better than ideal conditions, because I simply left the latency of DDR3 and eSRAM out of the picture, so in practice they would be worse for the Xone. Also, if you are streaming more than a 20 MB texture, things get worse for the Xone as well, because the faster part of its memory is only 32 MB. So in reality, things like high-res textures will always go into the DDR3.

The main reason for pop-in is, imho, draw distance and streaming. How do you conclude that the PS4 suffers more from this issue? I can't recall any severe pop-in in the game demos we saw. Also, pop-in is much more likely to happen in a huge open-world scenario, like GTA, Saints Row, or Infamous, where the whole world is too huge to fit into memory or be drawn completely at any time. You won't see much pop-in in an on-rails shooter where the space is confined and camera angles are known beforehand, like in Ryse. I don't recall that we have seen any open-world game on Xone hardware yet (I don't even know if we have seen anything running on Xone hardware), so talking about pop-in issues is meaningless at this time.

I would even go so far as to say that it is theoretically impossible for the Xone's memory to be better suited to streaming than the PS4's. You only write textures to memory when you stream, and you write them now because they were too big to fit into memory beforehand (or the hard drive was too slow to finish loading them as the game starts; load a new level in Rage and you'll see what I mean). So you're left with the 68 GB/s write speed of DDR3 (if you used the eSRAM, it would be full in about 0.5 ms with a single texture or two) vs. the 176 GB/s of GDDR5. Of course, that's all theoretical mumbo jumbo, as the real bottleneck is the hard drive, and that thing is utterly slow compared to any type of RAM.

Hell, even if you only used the 10 GB/s cache-coherent Onion bus in the PS4, the whole memory would be full in under a second. Assuming 60fps, you can stream about 170 MB of new textures per frame. Noticeable pop-in occurs when it takes at least half a second until the stuff appears on screen; that's 30 frames, or 5 GB of data, with a 10 GB/s bus. At full GDDR5 speed the entire RAM is filled in about 0.05 seconds, or *ninja edit* three frames at 60fps (the back-of-envelope sketch after this post runs the same numbers). It's just ridiculous to claim that pop-in or streaming issues occur because of the memory configuration of either of these two consoles.

3. DDR3 is too slow for a decent graphics card. That's just a fact; no recent card uses it. If you want to mitigate that, you have to do something. MS went with eSRAM. It's as simple as that.
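The back-of-envelope sketch referenced above. It only restates the arithmetic in this post using the peak figures quoted in the thread; latency, contention, and the hard drive (the real bottleneck) are ignored, so these are best-case numbers, not measurements.

    # Streaming budgets from the peak bandwidth figures quoted in this thread.
    FPS = 60
    RAM_GB = 8.0        # total unified memory on either console

    paths_gb_per_s = {
        "PS4 GDDR5 peak":           176.0,
        "PS4 'Onion' coherent bus":  10.0,
        "X1 DDR3 peak":              68.0,
    }

    for name, bw in paths_gb_per_s.items():
        mb_per_frame = bw * 1024.0 / FPS   # new data movable in one 60 fps frame
        gb_per_half_s = bw * 0.5           # budget inside a half-second pop-in window
        fill_ram_s = RAM_GB / bw           # time to rewrite the entire 8 GB
        print(f"{name:26s} {mb_per_frame:7.0f} MB/frame, "
              f"{gb_per_half_s:5.1f} GB per 0.5 s, fills 8 GB in {fill_ram_s:.2f} s")

Run as-is it reproduces the figures above: roughly 170 MB per frame over the 10 GB/s coherent bus, about 1.1 GB per frame over DDR3, and around 0.05 s to rewrite all 8 GB at full GDDR5 speed.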

The_Laughing_Man

    @jgf said:

    @the_laughing_man said:

    @jgf said:

@the_laughing_man: Yeah, there is this fat dude next to you. He can't move fast, but he eats all the candy you give him in an instant. He keeps drawing nice pictures in return, though.

Will the games be pretty? Will they have a decent frame rate? That's all I care about.

    If you feed candy to the fat guy fast enough to keep him happy, games will look spectacular.

That's all I want. I'm happy, and I wanna play Watch Dogs, AC4, and BF4.

#111  Edited By AlexGlass

    @jgf said:

1. Or it could be equal at 30 GB/s. Somehow you always seem to forget to mention that.

2. If you don't stream textures from the hard disk, where are you streaming them from? The numbers are better than ideal conditions, because I simply left the latency of DDR3 and eSRAM out of the picture, so in practice they would be worse for the Xone. Also, if you are streaming more than a 20 MB texture, things get worse for the Xone as well, because the faster part of its memory is only 32 MB. So in reality, things like high-res textures will always go into the DDR3.

The main reason for pop-in is, imho, draw distance and streaming. How do you conclude that the PS4 suffers more from this issue? I can't recall any severe pop-in in the game demos we saw. Also, pop-in is much more likely to happen in a huge open-world scenario, like GTA, Saints Row, or Infamous, where the whole world is too huge to fit into memory or be drawn completely at any time. You won't see much pop-in in an on-rails shooter where the space is confined and camera angles are known beforehand, like in Ryse. I don't recall that we have seen any open-world game on Xone hardware yet (I don't even know if we have seen anything running on Xone hardware), so talking about pop-in issues is meaningless at this time.

I would even go so far as to say that it is theoretically impossible for the Xone's memory to be better suited to streaming than the PS4's. You only write textures to memory when you stream, and you write them now because they were too big to fit into memory beforehand (or the hard drive was too slow to finish loading them as the game starts; load a new level in Rage and you'll see what I mean). So you're left with the 68 GB/s write speed of DDR3 (if you used the eSRAM, it would be full in about 0.5 ms with a single texture or two) vs. the 176 GB/s of GDDR5. Of course, that's all theoretical mumbo jumbo, as the real bottleneck is the hard drive, and that thing is utterly slow compared to any type of RAM.

Hell, even if you only used the 10 GB/s cache-coherent Onion bus in the PS4, the whole memory would be full in under a second. Assuming 60fps, you can stream about 170 MB of new textures per frame. Noticeable pop-in occurs when it takes at least half a second until the stuff appears on screen; that's 30 frames, or 5 GB of data, with a 10 GB/s bus. At full GDDR5 speed the entire RAM is filled in about 0.05 seconds, or *ninja edit* three frames at 60fps. It's just ridiculous to claim that pop-in or streaming issues occur because of the memory configuration of either of these two consoles.

3. DDR3 is too slow for a decent graphics card. That's just a fact; no recent card uses it. If you want to mitigate that, you have to do something. MS went with eSRAM. It's as simple as that.

1. But we don't know. My question is: why would they mark one side coherent and not the other if they both were? What we do know is that 10 GB/s is coherent. If things change, we'll change the argument, but for the time being, this is what we have.

2. I'm referring to a normal gaming situation. You're pulling hundreds of textures, all at different times and different mip-map levels, out of RAM to the GPU, depending on what you need to show each and every frame. Again, I'm not really qualified to talk about this area in relation to actual timings, but I would imagine that each time you pull a texture or one mip-map, you're making a memory access request. You don't make one request and get all 100, especially since each individual texture would need to be requested asynchronously. And not just textures, but different LOD models as a whole, and that's typically the area where you notice pop-in. So you're potentially making hundreds or thousands of requests each and every second, which is where latency might play a role (the sketch after this post shows the shape of that argument). Also, I don't think Infamous is any more open-world than Dead Rising 3 from what I saw. You can compare the two on this issue.

It's theoretically very possible when dealing with partially resident textures.

3. DDR3 seems to be working just fine in the X1 launch games we've seen. And once again, I reiterate, that's not the only reason. We know for a fact that MS is big on the X1's operating system features and Kinect. They have a very sophisticated approach with two OSes tied together by a hypervisor, which will certainly benefit from DDR3. You keep forgetting that unlike a PC, these consoles use it as unified RAM, and GDDR5 has yet to be used for applications in any serious way. And despite both GPUs supporting PRT, so far the only one we've heard pushing the feature has been MS, and the only built-in hardware support we have confirmed for tiling/untiling has been the data move engines. Again, things can always change as we find out more about the PS4, but what we do know for a fact is that MS took the additional step in this direction. So to simply pass it off as just a way to mitigate bandwidth, when we already know there are quite a few things MS had in mind from the get-go with the X1 that fit this setup, and perhaps some very exciting techniques that will drive graphics into the future which might benefit from this approach, is short-sighted.
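The sketch referenced in point 2 above. How much per-request latency actually erodes effective bandwidth depends on how many requests are overlapped, which is something GPUs are built to do aggressively. The peak figure, request size, and latency below are invented placeholders used to show the shape of the argument, not measured numbers for either console.

    # Shape of the "latency eats bandwidth" argument: effective throughput for a
    # stream of small requests, each paying a fixed latency unless it overlaps with
    # other requests already in flight. All figures are invented placeholders, NOT
    # measured numbers for DDR3, GDDR5, or eSRAM on either console.
    def effective_gb_per_s(peak_gb_per_s, request_kb, latency_ns, in_flight):
        bytes_per_req = request_kb * 1024
        transfer_s = bytes_per_req / (peak_gb_per_s * 1e9)   # time spent on the bus
        exposed_s = (latency_ns * 1e-9) / in_flight          # latency hidden by overlap
        return bytes_per_req / (transfer_s + exposed_s) / 1e9

    if __name__ == "__main__":
        for in_flight in (1, 8, 64):
            eff = effective_gb_per_s(peak_gb_per_s=176.0, request_kb=64,
                                     latency_ns=300, in_flight=in_flight)
            print(f"{in_flight:3d} requests in flight -> ~{eff:5.1f} GB/s effective "
                  f"out of a 176 GB/s peak")

With enough requests in flight the exposed latency shrinks toward zero, which is part of why GPU memory systems emphasize throughput and outstanding requests over raw latency.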
