The Xbox One is Microsoft's third video game console. It was released on November 22, 2013 in 13 countries.

Xbox One CPU gets a 150MHz bump from estimated specs

    #51  Edited By Sergio

    @jgf said:

@sergio: I bet MS has access to some PS4 devkits, just as Sony has access to Xone devkits. Perhaps through third-party involvement. There is so much money on the line, it would be stupid not to check out the competition. They can't admit it for legal reasons, but I would be surprised if no one at MS had laid their hands on a PS4 devkit.

    That would break NDAs and open people to lawsuits.

As a programmer myself, there are things we tend to do to make sure we don't open the companies we work for to any lawsuits. I don't think Microsoft or Sony would sanction trying to get the other's devkits prior to launch. Not even a "we'll just turn our backs here and pretend you aren't doing what we told you not to do." No doubt once they are publicly available, they would get a hold of the other's released hardware.

There's a difference between telling a colleague or friend an anecdote about how the two compare, versus actually giving them closed-door access to the system to compare for themselves.

ProfessorEss

After the money MS lost on the RRoD, I'm almost more worried that the Xbox One chips will freeze due to overcooling.

    BRoD?

    #53  Edited By AlexGlass

    @jgf said:

    @alexglass said:

The problem isn't just not revealing the source, though, but rather the statement itself. It's not self-evident, which is what it would need to be to be taken seriously with an anonymous source. It would need to be verifiable, by including logical explanations, context, or references to other verifiable facts or numbers.

As it was made, it's just meaningless junk; it does nothing to clarify the issue everyone's talking about, but rather throws more questions and confusion into the mix.

I pretty much told you my logical explanations for why those performance gaps may actually have been observed. Those explanations were not included in the rumored statements, though. But have you read the Edge article? It pretty much supports my assumptions and the rumors. They are a well-respected source.

    So while I think these rumors are true, I agree with you that they cannot be used to determine the absolute performance gap. The only things I read from those rumors are:

    • PS4 is more powerful - how much exactly we don't know yet.
    • Xbox is harder to program/optimize for.
    • When programmed with minimal effort, the performance gap is significant.
    • The spec numbers lead me to believe that the gap should be smaller when both systems are fully utilized.

    Your logical explanations are fine. Theirs were not. Edge does the same thing and I wouldn't be surprised in the least to find out the same source is responsible for all of these rumors being spread right now.

Edge also starts off their article with the PS4 being 50% more powerful... but then goes on to talk specifically about 50% more ALU computational power (the only thing the 50% number can be pinned on) and memory reads. So yeah, it's misleading to apply it across the board, since it's just not true and it's strictly referring to those two areas. At least when Gamespot reported the story, they had the integrity to add "key areas" and not just regurgitate the bullshit Edge printed.

Then they mention an anonymous dev who claims working with eSRAM is a "pain in the ass", and that unoptimized code runs faster on the PS4. I find it very surprising that it's such a pain in the ass, since devs have been working with similar architecture for 8 years; unless it's an indie developer who's used to working on PC hardware and dumped his code directly onto each dev kit. And if that's the case, what's the point of releasing unoptimized stats? I would hope they don't plan on ever going gold with unoptimized code on the Xbox One.

Yeah, since the PS4's architecture is almost identical to its PC counterpart, taking PC code and dumping it on the PS4 will perform a lot better than dumping that code on the X1. But no self-respecting developer will ever do that for the Xbox One. After all, no one did this for the PS3 last generation either. They dealt with Cell.

If by harder you mean as hard as the 360, sure, but the 360, by all accounts, just isn't that hard a platform to program for. We're not talking an insane mountain of complexity like Cell was. We're just talking about 32MB of eSRAM that will have to be programmed for the most taxing, redundant processes in graphics that chew up bandwidth, and by this point most devs should know very well what those are. And the Xbox One still has a unified memory architecture as well.

In addition, the X1 probably offers some benefits in other areas, such as the SHAPE audio chip being able to process everything on-chip, versus devs having to use the PS4's CPU and program it to handle some of the 3D digital sound effects. So that part will likely be easier to deal with on the X1. And on the subject of the CPU: whatever deficiencies the PS4 has from this, along with the speed of the CPU and the smaller bandwidth on the PS4's CPU, developers will also have to take the time to program those parts on the PS4's GPGPU to maintain parity.

Sergio

While I don't think the PS4 is 50% more powerful, I still find bringing up the possible small performance advantage of the Xbox One's CPU over the PS4's naive at best, disingenuous at worst, as if it could make up the difference in their graphical prowess.

Most things that the CPU is used for will most likely be capped by the lowest common denominator. Graphically, they can still choose to go high-end for a PC version and scale down for consoles. The PS4 will be impacted by this less than the Xbox One. By how much? Who knows right now. Could be the frames per second, the resolution, or other graphical effects.

jgf

@sergio: Maybe you're right. I'm no lawyer and I haven't read the NDAs. I don't know what exactly is forbidden and what is allowed. I would guess it pretty much forbids you to talk to the media. Doing tests in your own room should be fine. Hey, they are even handing the devkits out to indie developers. So you just have to find a single guy who is willing to run some tests for you. If I were working for Sony or MS and wasn't allowed to peek at the competitor's system, my curiosity would kill me ;)

    #56  Edited By The_Laughing_Man

@jgf: They are sending out the kits to indies at the end of this month.

jgf

@the_laughing_man: Isn't that just a special program that starts then? Sending them out for free or so? How was Jonathan Blow able to talk about his experience with the PS4 if he didn't have a devkit yet?

The_Laughing_Man

@jgf: I am talking about the Xbox One.

Syed117

It's easy to call someone a shill when people are as biased as the Sony fans on this and every forum.

There are countless threads on the PS4 forum, started by the same people over and over, about how amazing every little thing Sony does is. Those same people then come to the Xbox forum and start negative Xbox threads. It's ridiculous.

Yeah, alexglass is a bit more aggressive and wants to go into crazy amounts of detail, but that's not much different from what the Sony fans do, except for the fact that they don't go into any detail at all. They blindly repeat everything everyone else is saying and then call others shills. We don't know exactly what these consoles can do. Any child on the internet can scream about GDDR5 and TFLOPS. Is the PS4 shaping up to be more powerful? Sure, but no one on an internet forum who isn't a developer working with both consoles understands how these machines work.

I really hoped the Giant Bomb community would be better than the trash on NeoGAF, but it's almost the same people with 12-year-old playground mentalities fighting over stuff they didn't seem to care about a generation ago.

Sony fans spent an entire generation claiming that differences in frame rate didn't matter to them. It didn't matter that the vast majority of multiplatform games to this day look and run better on the Xbox 360. None of that mattered then, but now it's the biggest deal in the history of video games. They don't give a shit about performance and facts unless they suit what they want to believe and who they want to support. If all these so-called enthusiasts really cared, they would have played almost all those multiplatform games on an Xbox. Now they are crying about how great the PS4 is and how much better multiplatform games might perform on it.

The only reason Sony fans have been so aggressive this time around is that they carried that chip on their shoulder for an entire generation: a generation dictated by superior ports and a superior online service on the Xbox side. Now that the ports situation might swing the other way, they are out for blood. Finally time to turn those tables and rail on about the same things that caused that severe insecurity for so long.

You would think people posting on an internet forum would be true enthusiasts: people who care about games more than the average person, people who understand that you need all the platforms to play all the games. Instead you get the same idiotic mentalities and brand loyalty that high school kids have. Both platforms will have great games, and to play all those games you need both platforms. I love hearing people say how much they love Forza or even Dead Rising, and how great they look, and how they don't really like any of the PS4 launch games, but they're still getting a PS4 at launch. Why? Because that's what everyone else seems to be doing.

    #61  Edited By LiquidPrince

    http://m.youtube.com/watch?v=pxmtncgURRY

    Apparently the PS4 still leads by quite a lot.

The_Laughing_Man

    Why have we not seen anything running on final PS4 consoles?

AlexGlass

@the_laughing_man said:

Why have we not seen anything running on final PS4 consoles?

    I don't think we've seen anything running on final X1 consoles either. It's been devkits on both sides.

The_Laughing_Man

@alexglass: PAX had Ryse on consoles from what I heard.

    #70  Edited By AlexGlass

@the_laughing_man said:

@alexglass: PAX had Ryse on consoles from what I heard.

Yeah, but I think it was still a dev kit if you go off of MS's recent statements. The X1 dev kits look like a retail X1.

    "Xbox One architecture is much more complex than what any single figure can convey. It was designed with balanced performance in mind, and we think the games we continue to show running on near-final hardware demonstrate that performance," the statement continued. "In the end, we’ll let the consoles and their games speak for themselves.”

They made it a point to state it's near-final hardware, so that leads me to believe they're all still dev kits. Which is not a bad thing, since it's quite possible the builds they've shown are running on dev kits that don't have the upgrades made since E3.

And that doesn't necessarily mean anything's up with the PS4 either. The benefit MS had is that, by having such a big box, it was probably really easy to just stick their dev kits in it, while Sony needed to refine the hardware to fit in their production box, and their dev kits are easier to house in PC towers for now.

    #71  Edited By jgf

@alexglass said:

Then they mention an anonymous dev who claims working with eSRAM is a "pain in the ass", and that unoptimized code runs faster on the PS4. I find it very surprising that it's such a pain in the ass, since devs have been working with similar architecture for 8 years; unless it's an indie developer who's used to working on PC hardware and dumped his code directly onto each dev kit. And if that's the case, what's the point of releasing unoptimized stats? I would hope they don't plan on ever going gold with unoptimized code on the Xbox One.

Yeah, since the PS4's architecture is almost identical to its PC counterpart, taking PC code and dumping it on the PS4 will perform a lot better than dumping that code on the X1. But no self-respecting developer will ever do that for the Xbox One. After all, no one did this for the PS3 last generation either. They dealt with Cell.

If by harder you mean as hard as the 360, sure, but the 360, by all accounts, just isn't that hard a platform to program for. We're not talking an insane mountain of complexity like Cell was. We're just talking about 32MB of eSRAM that will have to be programmed for the most taxing, redundant processes in graphics that chew up bandwidth, and by this point most devs should know very well what those are. And the Xbox One still has a unified memory architecture as well.

In addition, the X1 probably offers some benefits in other areas, such as the SHAPE audio chip being able to process everything on-chip, versus devs having to use the PS4's CPU and program it to handle some of the 3D digital sound effects. So that part will likely be easier to deal with on the X1. And on the subject of the CPU: whatever deficiencies the PS4 has from this, along with the speed of the CPU and the smaller bandwidth on the PS4's CPU, developers will also have to take the time to program those parts on the PS4's GPGPU to maintain parity.

I pretty much agree with you on most of those points. In general, the rumors we got were blown way out of proportion. I always said that those big gaps probably stem from early ports. No way a game as big as CoD will come out on Xone without extensively using the eSRAM. How hard is it to optimize for Xone? I have no clue; I can only look at the rumors and the specs. Rumors say that at the moment the libraries are shaky, and the specs show me that you've got 2 types of memory, DDR3 and eSRAM. The goal is clearly to get the most out of the small portion of fast eSRAM. That definitely sounds harder than the PS4 with only a single huge chunk of memory. Is the Xone harder to optimize for than the 360? Currently it's very likely so, because the libraries are not so mature and the hardware is new.

Regarding audio, I have no real idea how much compute time and how many resources are usually needed for audio. I always guessed that graphics are the big chunk and audio is only a small part of the workload. If you have some more detailed info on this, I would be interested.

AlexGlass

    @jgf said:

    @alexglass said:

Then they mention an anonymous dev who claims working with eSRAM is a "pain in the ass", and that unoptimized code runs faster on the PS4. I find it very surprising that it's such a pain in the ass, since devs have been working with similar architecture for 8 years; unless it's an indie developer who's used to working on PC hardware and dumped his code directly onto each dev kit. And if that's the case, what's the point of releasing unoptimized stats? I would hope they don't plan on ever going gold with unoptimized code on the Xbox One.

Yeah, since the PS4's architecture is almost identical to its PC counterpart, taking PC code and dumping it on the PS4 will perform a lot better than dumping that code on the X1. But no self-respecting developer will ever do that for the Xbox One. After all, no one did this for the PS3 last generation either. They dealt with Cell.

If by harder you mean as hard as the 360, sure, but the 360, by all accounts, just isn't that hard a platform to program for. We're not talking an insane mountain of complexity like Cell was. We're just talking about 32MB of eSRAM that will have to be programmed for the most taxing, redundant processes in graphics that chew up bandwidth, and by this point most devs should know very well what those are. And the Xbox One still has a unified memory architecture as well.

In addition, the X1 probably offers some benefits in other areas, such as the SHAPE audio chip being able to process everything on-chip, versus devs having to use the PS4's CPU and program it to handle some of the 3D digital sound effects. So that part will likely be easier to deal with on the X1. And on the subject of the CPU: whatever deficiencies the PS4 has from this, along with the speed of the CPU and the smaller bandwidth on the PS4's CPU, developers will also have to take the time to program those parts on the PS4's GPGPU to maintain parity.

I pretty much agree with you on most of those points. In general, the rumors we got were blown way out of proportion. I always said that those big gaps probably stem from early ports. No way a game as big as CoD will come out on Xone without extensively using the eSRAM. How hard is it to optimize for Xone? I have no clue; I can only look at the rumors and the specs. Rumors say that at the moment the libraries are shaky, and the specs show me that you've got 2 types of memory, DDR3 and eSRAM. The goal is clearly to get the most out of the small portion of fast eSRAM. That definitely sounds harder than the PS4 with only a single huge chunk of memory. Is the Xone harder to optimize for than the 360? Currently it's very likely so, because the libraries are not so mature and the hardware is new.

Regarding audio, I have no real idea how much compute time and how many resources are usually needed for audio. I always guessed that graphics are the big chunk and audio is only a small part of the workload. If you have some more detailed info on this, I would be interested.

Are you talking about development time? I don't think I've ever seen exact figures anywhere on something like that, especially for audio. Actually, I don't think I've ever seen a broken-down description of the different parts of development in terms of specific time frames. The most you ever hear is how long it takes to do a port. It seems like something that's impossible to determine, and it would vary greatly from developer to developer, game to game.

jgf

@alexglass: No, by workload for audio I was referring to the workload for the system/CPU, not the development time. Sorry if that was misleading.

AlexGlass

    @jgf said:

@alexglass: No, by workload for audio I was referring to the workload for the system/CPU, not the development time. Sorry if that was misleading.

Oh, OK. Same answer. Nobody really ever talks about audio. All I've heard is that SHAPE is about the equivalent of a CPU core. No idea how much of that is for voice recognition and Kinect, and how much DSP effects like echo, reverb or 3D audio actually require. No idea how it breaks down so you could make a comparison.

    #75  Edited By Sinusoidal

    @alexglass said:

Edge also starts off their article with the PS4 being 50% more powerful... but then goes on to talk specifically about 50% more ALU computational power (the only thing the 50% number can be pinned on) and memory reads. So yeah, it's misleading to apply it across the board, since it's just not true and it's strictly referring to those two areas. At least when Gamespot reported the story, they had the integrity to add "key areas" and not just regurgitate the bullshit Edge printed.

    The same "bullshit" you based this long-winded piece of speculation on?

    Just which is the Edge article: justification for two pages of technical mumbo jumbo proving the X1 is faster in some niche cases, or "bullshit"?

The_Laughing_Man

@sinusoidal said:

@alexglass said:

Edge also starts off their article with the PS4 being 50% more powerful... but then goes on to talk specifically about 50% more ALU computational power (the only thing the 50% number can be pinned on) and memory reads. So yeah, it's misleading to apply it across the board, since it's just not true and it's strictly referring to those two areas. At least when Gamespot reported the story, they had the integrity to add "key areas" and not just regurgitate the bullshit Edge printed.

    The same "bullshit" you based this long-winded piece of speculation on?

    Just which is the Edge article: justification for two pages of technical mumbo jumbo proving the X1 is faster in some niche cases, or "bullshit"?

    Also, if someone's going to delete this post this time, could they at least let me know why? I find it kind of suspicious that the first one disappeared without any notification at all.

Don't start something, dude.

Also, a few places have called Edge out on that 50% bullshit, Time Tech being one.

Sinusoidal

@the_laughing_man said:

Don't start something, dude.

Also, a few places have called Edge out on that 50% bullshit, Time Tech being one.

I'm not "starting" anything. I'm asking a legitimate question. In that other, locked thread, @alexglass uses bits of that article as the basis for a huge opinion piece. Here he calls it "bullshit". I would genuinely like to know his real opinion of the article, because it sure seems like anything it says in favor of the PS4 being more powerful is "bullshit", while the one mention of the X1 having an edge is worth a two-to-three-page speculation piece on how it could be faster.

Also, to be clear, I am in no way endorsing one console over the other. I don't plan on buying either one until a great deal of time has passed and they've proved themselves.

    #79  Edited By AlexGlass

    @sinusoidal said:

    @alexglass said:

Edge also starts off their article with the PS4 being 50% more powerful... but then goes on to talk specifically about 50% more ALU computational power (the only thing the 50% number can be pinned on) and memory reads. So yeah, it's misleading to apply it across the board, since it's just not true and it's strictly referring to those two areas. At least when Gamespot reported the story, they had the integrity to add "key areas" and not just regurgitate the bullshit Edge printed.

    The same "bullshit" you based this long-winded piece of speculation on?

    Just which is the Edge article: justification for two pages of technical mumbo jumbo proving the X1 is faster in some niche cases, or "bullshit"?

    Also, if someone's going to delete this post this time, could they at least let me know why? I find it kind of suspicious that the first one disappeared without any notification at all.

    What about it is "bullshit"?

    And I'm guessing it has something to do with the baseless "shill" accusation which you had enough sense to leave out this time.

AlexGlass

@sinusoidal said:

@the_laughing_man said:

Don't start something, dude.

Also, a few places have called Edge out on that 50% bullshit, Time Tech being one.

I'm not "starting" anything. I'm asking a legitimate question. In that other, locked thread, @alexglass uses bits of that article as the basis for a huge opinion piece. Here he calls it "bullshit". I would genuinely like to know his real opinion of the article, because it sure seems like anything it says in favor of the PS4 being more powerful is "bullshit", while the one mention of the X1 having an edge is worth a two-to-three-page speculation piece on how it could be faster.

Also, to be clear, I am in no way endorsing one console over the other. I don't plan on buying either one until a great deal of time has passed and they've proved themselves.

I've already explained what I believe is bullshit: taking a specific area such as the GPU, where the PS4 has a nearly 50% increase in compute units, and turning it into a blanket statement that "the PS4 is 50% faster" overall.

It's just not supported by any of the specs we have. At most the PS4 has a 40-50% advantage in GPU. Outside of RAM and GPU, all the other hardware points in favor of the X1: CPU, CPU bandwidth, audio chip, eSRAM, data move engines.

Even Edge specifically talks about those components, and it's the only thing they're addressing, but they have a headline meant to draw clicks that overgeneralizes and, frankly, as Albert put it, overstates the difference. If a video game console were nothing but a GPU and main RAM, then it would be true. But that's not the case. And you don't need a degree in computer science to figure that one out.

    #81  Edited By Sinusoidal

@alexglass said:

What about it is "bullshit"?

    And I'm guessing it has something to do with the baseless "shill" accusation which you had enough sense to leave out this time.

    According to you, the part where it said that the PS4 was 50% more powerful than the X1 is "bullshit". But when some anonymous dev quoted in the same article said that the X1 might be better at "ray-tracing via parametric surfaces", well, that was worth three pages of highly technical speculation on how it might be true, despite you admittedly not even knowing what "ray-tracing via parametric surfaces" even is.

    Disclaimer: I do not support either console. I am not attempting to incite console wars. I do not necessarily support any of this Edge article. I just hate hypocrisy.

As for being a shill: well, something like 95% of your 500-something posts are praise for the X1. You often use Microsoft's buzzwords of the day. Since you obviously saw my first post here before it was deleted, it's possible you also had something to do with its deletion. Reporting dissent to mods for deletion is not an uncommon tactic for ORM (Online Reputation Management) people, who do exist and have been known to work for Microsoft.

    Disclaimer: This is not an accusation, just some observations.

    #83  Edited By AlexGlass

    @sinusoidal said:

    @alexglass said:

    What about it is "bullshit"?

    And I'm guessing it has something to do with the baseless "shill" accusation which you had enough sense to leave out this time.

    According to you, the part where it said that the PS4 was 50% more powerful than the X1 is "bullshit". But when some anonymous dev quoted in the same article said that the X1 might be better at "ray-tracing via parametric surfaces", well, that was worth three pages of highly technical speculation on how it might be true, despite you admittedly not even knowing what "ray-tracing via parametric surfaces" even is.

    Disclaimer: I do not support either console. I am not attempting to incite console wars. I do not necessarily support any of this Edge article. I just hate hypocrisy.

As for being a shill: well, something like 95% of your 500-something posts are praise for the X1. You often use Microsoft's buzzwords of the day. Since you obviously saw my first post here before it was deleted, it's possible you also had something to do with its deletion. Reporting dissent to mods for deletion is not an uncommon tactic for ORM (Online Reputation Management) people, who do exist and have been known to work for Microsoft.

    Disclaimer: This is not an accusation, just some observations.

Read again. "The part" I agree with, though I don't believe it's quite 50%, due to the X1 GPU being a bit faster. Probably more like 40%, which is what that developer also stated: 40-50%. That's not bullshit.

And it has less to do with me not knowing what "ray tracing via parametric surfaces" is, and more with the fact that no one I have ever heard speak on this or similar software techniques has ever used that phrase.

The phrase used is "ray tracing parametric surfaces". When you say "via" it sounds something like "the road drives on the car" as opposed to "the car drives on the road". That's why I'm not sure what he's referring to. He's either referring to something not typically used, or the editor didn't quote him accurately, because it just doesn't make a lot of sense. Or he's a foreigner and got mistranslated.

    As far as I know, you ray trace parametric surfaces. As in a verb.

Sinusoidal

@alexglass said:

As far as I know, you ray trace parametric surfaces. As in a verb.

Something neither console has the balls to do at any rate meaningful to game development. Meaning even if the X1 does it faster, per the anonymous developer in the Edge article and your humongous speculation piece from the locked thread, it still doesn't do it fast enough to make any use of it.

In any case, I still can't see why you dismiss one piece of information in that article as "bullshit", then base a small research paper on the next. It seems to me you are being somewhat less than objective.

    #85  Edited By AlexGlass

    @sinusoidal said:

    @alexglass said:

    As far as I know, you ray trace parametric surfaces. As in a verb.

Something neither console has the balls to do at any rate meaningful to game development. Meaning even if the X1 does it faster, per the anonymous developer in the Edge article and your humongous speculation piece from the locked thread, it still doesn't do it fast enough to make any use of it.

In any case, I still can't see why you dismiss one piece of information in that article as "bullshit", then base a small research paper on the next. It seems to me you are being somewhat less than objective.

You're right, which is why I discounted it: if it were true, it would be one of those "holy shit" moments and would make huge waves.

Unless what he means by that is ray tracing via voxel cone tracing... a cone could be considered a parametric surface. That's why I pointed to that paper by Cyril. Voxel cone tracing is ray tracing done cheap: instead of individual rays, it uses a cone for approximation.

Now I have no idea how much it drops the power requirement compared to traditional ray tracing, but if either one of these consoles is capable of doing this... then forget which one is faster... holy shit! And that Edge editor obviously has little clue what he's writing, because if that was me on the phone, I would have paused and said, "Excuse me, could you say that again?"

And again, I've already explained to you which parts of the article I think are bullshit. I never generalized as you do to begin with; I think Edge generalized. And I made no claims about whether what the developer is saying regarding procedural generation is true or false. I'm just trying to figure out what he's referring to and how it may relate to the architecture.

jgf

@alexglass said:

I've already explained what I believe is bullshit: taking a specific area such as the GPU, where the PS4 has a nearly 50% increase in compute units, and turning it into a blanket statement that "the PS4 is 50% faster" overall.

It's just not supported by any of the specs we have. At most the PS4 has a 40-50% advantage in GPU. Outside of RAM and GPU, all the other hardware points in favor of the X1: CPU, CPU bandwidth, audio chip, eSRAM, data move engines.

To be fair, the GPU is the most important part for the performance of games. Just look at what happens when you upgrade your PC with a new graphics card vs. when you put a better CPU in it. Putting a 50% faster graphics card in your PC will have far more effect than putting a 50% faster CPU in there. So this is not just "bullshit"; the GPU is a very important part, and when it's faster it's a big deal.

I would rather say that claiming the Xone is ahead in "all the other" departments is far-fetched. We don't know the final CPU speed of the PS4 yet; as it stands it's 1.6GHz vs 1.75GHz, so roughly a 9-10% faster CPU. CPU bandwidth is the same AFAIK: the Xone has 30GB/s and the PS4 too, at least if you count the additional 10GB/s of the Onion bus. The PS4 has an audio chip too; granted, it may be true that the Xone's chip is better, but what does that mean? Will it help run games smoother, and if so, how much? That's a bit too much speculation for me, pinned on a vaguely "better" audio chip. Counting the eSRAM as an advantage cuts both ways. It's there to mitigate the otherwise far too slow DDR3 memory, so I wouldn't count the eSRAM+DDR3 combo as an advantage over GDDR5. It leaves some wiggle room for future optimizations though, and the PS4 doesn't have one, so in that way you may count it as an advantage. But it's questionable at the very least. Finally, the data move engines may count as an advantage; I don't know exactly what they do (asynchronous transfers, I guess?) and I also don't know if the PS4 has something similar, but I'll grant you the benefit of the doubt there.
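To put rough numbers on the CPU point (a quick sketch using only the rumored figures above; nothing here is confirmed spec):

```python
# CPU clock gap from the rumored figures: 1.75GHz (Xone) vs 1.6GHz (PS4).
xone_ghz, ps4_ghz = 1.75, 1.6
print(f"Xone CPU clock advantage: {(xone_ghz / ps4_ghz - 1) * 100:.1f}%")  # ~9.4%

# Coherent CPU bandwidth, GB/s: the Xone's 30GB/s vs the PS4's Onion bus
# (10GB/s, explicitly coherent) plus its 20GB/s CPU bus, whose coherence
# is exactly the open question raised in this thread.
print(f"PS4 coherent bandwidth: 10-{10 + 20} GB/s vs Xone: 30 GB/s")
```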

Yet you don't mention specific advantages of the PS4, e.g. no Kinect and a leaner operating system. Kinect has an appeal of its own for some people, but performance-wise it's an additional task that the system always needs to dedicate compute time to. They have built in special chips to minimize the overhead, but it's still something to consider.

jgf

@alexglass said:

You're right, which is why I discounted it: if it were true, it would be one of those "holy shit" moments and would make huge waves.

Unless what he means by that is ray tracing via voxel cone tracing... a cone could be considered a parametric surface. That's why I pointed to that paper by Cyril. Voxel cone tracing is ray tracing done cheap: instead of individual rays, it uses a cone for approximation.

Now I have no idea how much it drops the power requirement compared to traditional ray tracing, but if either one of these consoles is capable of doing this... then forget which one is faster... holy shit! And that Edge editor obviously has little clue what he's writing, because if that was me on the phone, I would have paused and said, "Excuse me, could you say that again?"

The quote was that one developer "thinks the Xone could be faster" at these specific tasks. It seems reasonable, because of the latency advantage and the many reads/writes involved. But that does not mean the Xone is the only console capable of computing these things; both can.

The Xone may be faster there, but he didn't even mention to what degree. After all, it could be that it's only 1% faster, or he misjudged and the PS4 is faster. Even if the Xone were faster at these tasks by a significant amount (say 50%), we don't know how that translates to overall game performance. On the other hand, we know quite well how a faster GPU translates to game performance.

    #88  Edited By AlexGlass

    @jgf said:

    @alexglass said:

I've already explained what I believe is bullshit: taking a specific area such as the GPU, where the PS4 has a nearly 50% increase in compute units, and turning it into a blanket statement that "the PS4 is 50% faster" overall.

It's just not supported by any of the specs we have. At most the PS4 has a 40-50% advantage in GPU. Outside of RAM and GPU, all the other hardware points in favor of the X1: CPU, CPU bandwidth, audio chip, eSRAM, data move engines.

To be fair, the GPU is the most important part for the performance of games. Just look at what happens when you upgrade your PC with a new graphics card vs. when you put a better CPU in it. Putting a 50% faster graphics card in your PC will have far more effect than putting a 50% faster CPU in there. So this is not just "bullshit"; the GPU is a very important part, and when it's faster it's a big deal.

I would rather say that claiming the Xone is ahead in "all the other" departments is far-fetched. We don't know the final CPU speed of the PS4 yet; as it stands it's 1.6GHz vs 1.75GHz, so roughly a 9-10% faster CPU. CPU bandwidth is the same AFAIK: the Xone has 30GB/s and the PS4 too, at least if you count the additional 10GB/s of the Onion bus. The PS4 has an audio chip too; granted, it may be true that the Xone's chip is better, but what does that mean? Will it help run games smoother, and if so, how much? That's a bit too much speculation for me, pinned on a vaguely "better" audio chip. Counting the eSRAM as an advantage cuts both ways. It's there to mitigate the otherwise far too slow DDR3 memory, so I wouldn't count the eSRAM+DDR3 combo as an advantage over GDDR5. It leaves some wiggle room for future optimizations though, and the PS4 doesn't have one, so in that way you may count it as an advantage. But it's questionable at the very least. Finally, the data move engines may count as an advantage; I don't know exactly what they do (asynchronous transfers, I guess?) and I also don't know if the PS4 has something similar, but I'll grant you the benefit of the doubt there.

Yet you don't mention specific advantages of the PS4, e.g. no Kinect and a leaner operating system. Kinect has an appeal of its own for some people, but performance-wise it's an additional task that the system always needs to dedicate compute time to. They have built in special chips to minimize the overhead, but it's still something to consider.

I can only go based on the numbers we have, so if the specs turn out to be different, things change. I also think most of these quotes are probably from developers who don't actually have dev kits with the upgraded tools or specs MS has rolled out.

For example, 50% more CUs leading to 50% more computational power was actually accurate prior to the GPU upgrade, when both GPUs were running at 800MHz. It's 12 vs 18. But that was months ago. Now it still has 50% more CUs, but that doesn't quite lead to 50% more computational power: the 12 CUs in the X1 run at 853MHz and the 18 CUs in the PS4 run at 800MHz. That is still a significant advantage in favor of the PS4, though. So are ROPs and pixel fill rate.
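To put rough numbers on that (a back-of-the-envelope sketch; the 64 lanes per CU and 2 FLOPs per lane per clock are the standard GCN figures, not something from the quotes above):

```python
# Back-of-the-envelope GPU compute comparison from the quoted CU counts and clocks.
# Assumes GCN-style CUs: 64 shader lanes per CU, 2 FLOPs (one FMA) per lane per clock.
LANES_PER_CU, FLOPS_PER_LANE = 64, 2

def gflops(cus: int, clock_ghz: float) -> float:
    return cus * LANES_PER_CU * FLOPS_PER_LANE * clock_ghz

x1  = gflops(12, 0.853)   # ~1310 GFLOPS
ps4 = gflops(18, 0.800)   # ~1843 GFLOPS
print(f"PS4 compute advantage: {(ps4 / x1 - 1) * 100:.0f}%")   # ~41%, not quite 50%
```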

But on the CPU end, based on what we have, the X1's CPU is faster and has a coherent 30GB/s of bandwidth vs 20GB/s on the PS4. That's a small advantage in favor of the X1.

Bandwidth isn't just top speed; it's acceleration plus top speed. Here's where again the PS4 has an advantage, but it's not all one-sided. GDDR5 is better at top speed; DDR3 has tighter timings and better latency, the acceleration. People want to say latency isn't a big deal with GDDR5 anymore, but we'll see. So far we have seen more PS4 games suffer from pop-in than X1 games: Infamous, Drive Club's reflections, Killzone. When you're talking about pop-in from loading on demand, that's usually a RAM issue. It's taking too long to stream and swap the objects in. If the PS4 software tools are more developed, isn't it at all odd that multiple games from multiple developers are displaying these issues? What's causing it? Are they all writing bad code? Maybe it's an API issue, who knows, but it could also have something to do with this.

Now when it comes to eSRAM, it doesn't interest me as much that it mitigates bandwidth. The amount it mitigates, if we're talking about moving assets, is so minuscule that it won't matter. The PS4 still has a huge advantage in bandwidth to main RAM.

You want to talk about PS4 advantages? More bandwidth means you can load a lot more assets at once. But since they both have roughly an equal amount of usable RAM, around 4-5GB, this really should only lead to being able to transfer a larger number of higher-res textures or objects through bandwidth. In addition to that, though, there are a lot of other things in game development that are very important.

But that's not what makes the eSRAM important or interesting to me. To understand why, we have to ask ourselves what RAM and bandwidth are targeting in terms of actual game assets or techniques. Well, they're mainly responsible for storing textures and assets, characters, objects, etc. Right? So more RAM means more textures and assets. But that's not what eSRAM's primary purpose is. It's on-chip and it's capable of offering blistering speeds for a small amount of RAM. It doesn't seem like it would be built for that.

So how can this be useful? We have a lot of new software development techniques that, it appears to me, could favor the eSRAM setup.

Procedural textures. We know for a fact they don't require lots of RAM or lots of bandwidth. They require calculations. Now, knowing what I know about procedural textures, and I have been keeping up with this for months, there is no doubt in my mind they will play a huge role this generation. Search for Allegorithmic's Substance engine, and go look at how many studios bought this middleware for next-gen development. Partial resident textures. Again, we've seen examples where a texture as large as 3GB can be stored in 16MB of RAM. Graphine was on stage with Microsoft showing off their Granite PRT middleware at the DirectX 11.2 Build presentation. We now have compute units. Now, bandwidth may be important here, but do they need a lot, or are they more suited to fast-access on-chip eSRAM? Some of these may be helped by an on-chip scratchpad.
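As a rough illustration of how a "3GB texture in 16MB" can work (a sketch of the tile-residency idea only, not any particular middleware's API; the 64KB tile size is the DirectX 11.2 tiled-resource granularity, everything else is invented for the example):

```python
# Partial resident textures: a huge virtual texture is split into fixed-size
# tiles, and only the tiles actually being sampled live in a small physical pool.
TILE_BYTES    = 64 * 1024        # DX11.2 tiled-resource tile size
VIRTUAL_BYTES = 3 * 1024**3      # the 3GB virtual texture from the example above
POOL_BYTES    = 16 * 1024**2     # the 16MB of physical RAM actually spent

total_tiles = VIRTUAL_BYTES // TILE_BYTES    # 49152 tiles exist virtually...
pool_slots  = POOL_BYTES // TILE_BYTES       # ...but only 256 are resident at once

resident: dict[int, int] = {}    # virtual tile id -> physical pool slot

def touch(tile_id: int) -> int:
    """Return the pool slot for a tile, streaming it in (evicting the oldest
    resident tile) if needed. Real code would DMA the tile's texels here."""
    if tile_id not in resident:
        if len(resident) >= pool_slots:
            oldest = next(iter(resident))    # crude FIFO eviction for the sketch
            slot = resident.pop(oldest)
        else:
            slot = len(resident)
        resident[tile_id] = slot
    return resident[tile_id]

print(f"{total_tiles} virtual tiles, {pool_slots} resident at once")
```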

So I'm particularly interested in finding out what exactly the eSRAM's purpose is as it relates to software techniques, because I don't believe for a second MS put it in the X1 to mitigate main RAM bandwidth, when it's obvious that's not its strong point in the first place. They see what predominant techniques are taking over development. They also know what type of features they're working on for DirectX 11.2.

For example, during the Xbox and PS2 days it was multi-texturing in a single pass: both the GC and Xbox had it, and the PS2 didn't. The PS2 could push around a whole bunch of flat-shaded polygons, but it dropped like a rock in a pond when it came to textures. It made a huge difference in graphics. From bump mapping to displacement to multi-texture effects, that ended up becoming a very important difference. It wasn't so much raw power that produced the big visual gap between Xbox and PS2 games; in large part, it was that: multi-texturing in a single pass. That ended up being one of the largest defining factors between Xbox and PS2 games.

So my questions are: does the eSRAM provide any benefits to procedural textures and procedural generation? We know procedural textures just aren't going to care very much about bandwidth. They're tiny. But they care more about compute. Now, if I'm doing compute on my GPU, and it turns out I have to go back and forth to RAM, I think I'd much rather have blistering fast on-chip eSRAM to write to and read from.

    How do the move engines and eSRAM benefit partial resident resources? Once again, they take up little space, and streaming tiles isn't going to require a whole lot of bandwidth. But when it comes to streaming things from RAM, latency will play a bigger role than top speed. Low latency will probably affect texture pop-in more.

    How does the eSRAM affect GPGPU?

Because to me, everything is pointing towards those areas making as big an impact on the way games will look this generation as resolution and the number of unique assets you can store and stream. I've said it before and I'll say it again: specs and raw power don't decide graphics. It's the software techniques of the times, and which chips are better suited to running them. They go hand in hand, and oftentimes manufacturers will build chips to suit particular techniques that developers are using.

Finally, and this one is a bit off-topic here, but to me Kinect is not a disadvantage. I feel the exact opposite. How can you possibly see a system that isn't capable of running its entire line-up of games as something advantageous? A system where voice recognition and gesture recognition is an accessory dropped to the side? I think that's actually going to end up creating the largest rift. One console is capable of giving you both a Wii-like and a core gaming experience. And one isn't. In addition, voice recognition and gesture recognition will probably end up affecting core games in a big way by the end of this gen. Now that it's standard for an entire generation, it's going to make its way into core games, and who knows what type of games that's going to lead to. That's a big advantage in favor of the X1 if you ask me. HUGE. That's a much bigger impact than a small difference in graphics. And I'd also hold off on labeling the PS4 as having "a leaner operating system". It wasn't too long ago that the article came out saying the PS4 will also reserve a large part of its RAM for its OS and related apps. And considering how many non-gaming-related things it's also trying to do, we'll see how that pans out.

Sergio

Based on the facts actually out there, and not suppositions, the one advantage the Xbox One really has in terms of multiplatform games (which is what really matters in this conversation, since people will still get whichever console has the exclusives they want to play) is the announcement of dedicated servers for multiplayer games. This will remain so unless Sony or third-party publishers decide to roll out dedicated servers for games on the PS4.

    A slightly faster CPU isn't going to grant any noticeable advantages in multiplatform games, while better graphics capabilities may.

    #90  Edited By jgf
    @alexglass said:

But on the CPU end, based on what we have, the X1's CPU is faster and has a coherent 30GB/s of bandwidth vs 20GB/s on the PS4. That's a small advantage in favor of the X1.

Bandwidth isn't just top speed; it's acceleration plus top speed. Here's where again the PS4 has an advantage, but it's not all one-sided. GDDR5 is better at top speed; DDR3 has tighter timings and better latency, the acceleration. People want to say latency isn't a big deal with GDDR5 anymore, but we'll see. So far we have seen more PS4 games suffer from pop-in than X1 games: Infamous, Drive Club's reflections, Killzone. When you're talking about pop-in from loading on demand, that's usually a RAM issue. It's taking too long to stream and swap the objects in. If the PS4 software tools are more developed, isn't it at all odd that multiple games from multiple developers are displaying these issues? What's causing it? Are they all writing bad code? Maybe it's an API issue, who knows, but it could also have something to do with this.

You want to talk about PS4 advantages? More bandwidth means you can load a lot more assets at once. But since they both have roughly an equal amount of usable RAM, around 4-5GB, this really should only lead to being able to transfer a larger number of higher-res textures or objects through bandwidth. In addition to that, though, there are a lot of other things in game development that are very important.

But that's not what makes the eSRAM important or interesting to me. To understand why, we have to ask ourselves what RAM and bandwidth are targeting in terms of actual game assets or techniques. Well, they're mainly responsible for storing textures and assets, characters, objects, etc. Right? So more RAM means more textures and assets. But that's not what eSRAM's primary purpose is. It's on-chip and it's capable of offering blistering speeds for a small amount of RAM. It doesn't seem like it would be built for that.

So how can this be useful? We have a lot of new software development techniques that, it appears to me, could favor the eSRAM setup.

Procedural textures. We know for a fact they don't require lots of RAM or lots of bandwidth. They require calculations. Now, knowing what I know about procedural textures, and I have been keeping up with this for months, there is no doubt in my mind they will play a huge role this generation. Search for Allegorithmic's Substance engine, and go look at how many studios bought this middleware for next-gen development. Partial resident textures. Again, we've seen examples where a texture as large as 3GB can be stored in 16MB of RAM. Graphine was on stage with Microsoft showing off their Granite PRT middleware at the DirectX 11.2 Build presentation. We now have compute units. Now, bandwidth may be important here, but do they need a lot, or are they more suited to fast-access on-chip eSRAM? Some of these may be helped by an on-chip scratchpad.

So I'm particularly interested in finding out what exactly the eSRAM's purpose is as it relates to software techniques, because I don't believe for a second MS put it in the X1 to mitigate main RAM bandwidth, when it's obvious that's not its strong point in the first place. They see what predominant techniques are taking over development. They also know what type of features they're working on for DirectX 11.2.

So my questions are: does the eSRAM provide any benefits to procedural textures and procedural generation? We know procedural textures just aren't going to care very much about bandwidth. They're tiny. But they care more about compute. Now, if I'm doing compute on my GPU, and it turns out I have to go back and forth to RAM, I think I'd much rather have blistering fast on-chip eSRAM to write to and read from.

    How do the move engines and eSRAM benefit partial resident resources? Once again, they take up little space, and streaming tiles isn't going to require a whole lot of bandwidth. But when it comes to streaming things from RAM, latency will play a bigger role than top speed. Low latency will probably affect texture pop-in more.

    How does the eSRAM affect GPGPU?

Because to me, everything is pointing towards those areas making as big an impact on the way games will look this generation as resolution and the number of unique assets you can store and stream. I've said it before and I'll say it again: specs and raw power don't decide graphics. It's the software techniques of the times, and which chips are better suited to running them. They go hand in hand, and oftentimes manufacturers will build chips to suit particular techniques that developers are using.

Man, your post is just too long; sorry, but I have to leave some points out. I just don't have the time to answer all of them, so I'll try to answer the most interesting/controversial ones.

So let me structure your post into 4 main topics: (1) the CPU cache-coherent bandwidth thing, (2) pop-in/streaming and latency, (3) procedural textures and eSRAM, (4) GPGPU and eSRAM.

(1) Can you explain to me how you arrive at the number that the PS4 has 20GB/s cache-coherent bandwidth? It's got a 20GB/s bus from the CPUs plus the Onion(+) bus with 10GB/s. The Onion bus is explicitly built for coherence, so we've got at least 10GB/s. For the other bus I'm not sure, but it either is or is not coherent; it certainly is not 50% coherent. So the PS4 has either 10GB/s or 30GB/s of coherent bandwidth for the CPU.

(2) You mentioned high RAM latency as the potential cause for pop-in issues. Then you also mention that pop-in occurs when textures have to be streamed into memory and are not ready before the scene is rendered. This problem occurs when you have too little GPU memory to hold the textures you want to draw, so you read them from disk and stream them to memory. I don't have to tell you that the disk is the bottleneck here and not the memory, or do I? But let's just assume we have some magical source of high-speed texture material waiting to fill our memory at maximal speed (perhaps procedurally generated stuff). Even in this case latency is not the issue; it's bandwidth. Say we've got a high-quality 20MB texture we want to copy to memory. We issue the write command, then we wait for the go (aka latency), and then we write at maximal speed.

Let's do this with GDDR5: its bandwidth is 176GB/s = 180224MB/s, so we copy/stream the texture in about 0.00011 sec = 0.11 ms ≈ 111000ns. To this we add the initial setup time, aka latency, for the write operation. Generally these times are in the very low double-digit ns range (http://en.wikipedia.org/wiki/CAS_latency), but as I have no real numbers, let's assume 1000ns for the sake of it. Then we are left with about 112000ns.

Now with DDR3: we've got 68GB/s = 69632MB/s, so we copy our 20MB in about 0.00029 sec = 0.29 ms ≈ 287000ns. Even if we assume no latency at all for DDR3, 112000ns << 287000ns.

What if we load it into eSRAM? In the rare case that we have 20MB of our 32MB of eSRAM to spare for a texture, we could copy it there at about 109GB/s. Note that we cannot profit from the simultaneous 109GB/s read speed, because we simply want to copy stuff to memory; we also can't use the additional 68GB/s from DDR3. So we also have a perfect example of why the roughly 287GB/s peak is so theoretical. We copy our 20MB to eSRAM at 109GB/s = 111616MB/s in about 0.00018 sec = 0.18 ms ≈ 179000ns. Again, even with zero latency, 179000ns >> 112000ns.
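The same arithmetic in a few lines (using only the peak-bandwidth figures above; latency is ignored since, as shown, it's noise at these sizes):

```python
# Time to copy a 20MB texture at each pool's peak bandwidth, latency ignored.
TEXTURE_MB = 20

pools_gb_s = {
    "GDDR5 (PS4)":         176,
    "DDR3 (Xone)":          68,
    "eSRAM (Xone, write)": 109,
}

for name, gb_s in pools_gb_s.items():
    ms = TEXTURE_MB / (gb_s * 1024) * 1000   # GB/s -> MB/s, then seconds -> ms
    print(f"{name}: {ms:.2f} ms")            # ~0.11, ~0.29 and ~0.18 ms
```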

    (3) You may not believe for a second that MS added the eSRAM to mitigate DDR3's lack of bandwidth, but the rest of the world says that it's been added for exactly that reason. So that's a bold statement to make. It has additional benefits besides mitigating the bandwidth problem, though. Procedurally generated textures sound like they could favour eSRAM, but that does not mean that (a) they cannot be done with GDDR5 and (b) they are going to be extensively used in games. Their description sounds like you basically trade computation power for lower memory usage.
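
    As a toy illustration of that compute-for-memory trade, a Python sketch of a procedural texture boiled down to its essence (the formula is invented for the example, nothing like a real Substance material):

        import math

        def procedural_texel(u, v):
            """Toy procedural texture: a few sines instead of stored data.
            Costs ALU work per sample but occupies no texture memory."""
            value = (0.5
                     + 0.25 * math.sin(40 * u) * math.sin(40 * v)
                     + 0.25 * math.sin(13 * (u + v)))
            return min(1.0, max(0.0, value))

        # Evaluated at 4096x4096 and stored as RGBA8 this would occupy ~64 MB;
        # as a formula it is a handful of bytes of code.
        print(procedural_texel(0.25, 0.75))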

    (4) As you constantly come up with GPGPU and the benefits of eSRAM, may I remind you that the PS4 has 8 ACEs with 64 queues in contrast to the 2 ACEs with 16 queues of the Xone. And if we believe Cerny, the reason for this huge number of asynchronous compute engines is specifically to support stuff like GPGPU. So when speaking about the advantages of the Xone in this area, you should not forget to mention this "tiny" detail.
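
    To spell out those queue counts (the numbers come straight from the figures above; the gloss on what they buy you is my reading):

        # 8 ACEs x 8 queues vs 2 ACEs x 8 queues, per the figures above.
        PS4_COMPUTE_QUEUES = 8 * 8    # 64
        XONE_COMPUTE_QUEUES = 2 * 8   # 16

        # More queues don't add FLOPS; they give the scheduler more
        # independent GPGPU job streams to fill idle shader cores with.
        print(PS4_COMPUTE_QUEUES, XONE_COMPUTE_QUEUES)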

    Avatar image for the_laughing_man
    The_Laughing_Man

    13807

    Forum Posts

    7460

    Wiki Points

    0

    Followers

    Reviews: 1

    User Lists: 0

    Please translate for people who aren't computer-minded. Use candy for reference please.

    Avatar image for sinusoidal
    Sinusoidal

    3608

    Forum Posts

    20

    Wiki Points

    0

    Followers

    Reviews: 0

    User Lists: 0

    And one isn't. In addition, voice recognition and gesture recognition will probably end up affecting core games in a big way by the end of this gen.

    People have been predicting that voice recognition is the next big thing since the days of Star Trek TNG. Probably earlier. The fact remains: too many different people have too many different voices, and there are too many variables involved in detecting them, for voice recognition to ever be accurate and quick enough in a game setting.

    Also concerning those "core" gamers as you put it: most don't have the space for Kinect.

    I find this article particularly relevant: http://www.cracked.com/blog/5-things-every-game-company-gets-wrong-about-gamers/

    Avatar image for jgf
    jgf

    404

    Forum Posts

    14

    Wiki Points

    0

    Followers

    Reviews: 0

    User Lists: 1

    #93  Edited By jgf

    @the_laughing_man: Ok here we go: Would you rather have a big pile of candy next to your bed, or a tiny amount in your hand and the rest hidden in the closet?

    Avatar image for vanick
    Vanick

    333

    Forum Posts

    0

    Wiki Points

    0

    Followers

    Reviews: 0

    User Lists: 0

    @sinusoidal: Thanks for linking to that Cracked article. It seemed pretty spot on.

    Avatar image for the_laughing_man
    The_Laughing_Man

    13807

    Forum Posts

    7460

    Wiki Points

    0

    Followers

    Reviews: 1

    User Lists: 0

    @jgf: Do I have to share the candy?

    Avatar image for jgf
    jgf

    404

    Forum Posts

    14

    Wiki Points

    0

    Followers

    Reviews: 0

    User Lists: 1

    #97  Edited By jgf

    @the_laughing_man: Yeah there is this fat dude next to you. He can't move fast, but he eats all the candy you give to him in an instant. He keeps drawing nice pictures in return though.

    Avatar image for the_laughing_man
    The_Laughing_Man

    13807

    Forum Posts

    7460

    Wiki Points

    0

    Followers

    Reviews: 1

    User Lists: 0

    @jgf said:

    @the_laughing_man: Yeah there is this fat dude next to you. He can't move fast, but he eats all the candy you give to him in an instant. He keeps drawing nice pictures in return though.

    Will the games be pretty? Will they have a decent frame rate? That's all I care about.

    Avatar image for extomar
    EXTomar

    5047

    Forum Posts

    4

    Wiki Points

    0

    Followers

    Reviews: 0

    User Lists: 0

    Or another way to think about it: this is why people think "PCs are hard". It's hard to tell which machine performs better from the stats on paper alone.

    Avatar image for alexglass
    AlexGlass

    704

    Forum Posts

    5

    Wiki Points

    0

    Followers

    Reviews: 0

    User Lists: 0

    #100  Edited By AlexGlass

    @jgf said:
    @alexglass said:

    Man, your post is just too long; sorry, but I have to leave some points out - I just don't have the time to answer all of them. I'll try to answer the most interesting/controversial ones.

    So let me structure your post into 4 main topics: (1) the CPU cache coherent bandwidth thing, (2) pop-in/streaming and latency, (3) procedural textures and eSRAM, (4) GPGPU and eSRAM.

    (1) Can you explain to me how you arrive at the number that PS4 has 20gb/sec cache coherent bandwidth? It's got a 20gb/sec bus from the CPUs plus the Onion(+) bus with 10gb/sec. The Onion bus is explicitly built for coherence, so we've got at least 10gb/sec. For the other bus I'm not sure; it either is or is not coherent - it certainly is not 50% coherent. So the PS4 has either 10gb/sec or 30gb/sec coherent bandwidth for the CPU.

    (2) You mentioned high RAM latency as the potential cause for pop-in issues. Then you also mention that pop-in occurs when textures have to be streamed into memory and are not ready before the scene is rendered. This problem occurs when you have too little GPU memory to hold the textures you want to draw, so you read them from disk and stream them to memory. I don't have to tell you that the disk is the bottleneck here and not the memory, or do I? But let's just assume we have some magical source of high-speed texture material waiting to fill our memory at full speed (perhaps procedurally generated stuff). Even in this case latency is not the issue, it's bandwidth. Say we've got a high-quality 20mb texture we want to copy to memory. We issue the write command, then we wait for the go (aka latency), and then we write at full speed.

    Let's do this with GDDR5: its bandwidth is 176gb/sec = 180224mb/sec, so we copy/stream the texture in about 0.00011 sec = 0.11 ms = 110000ns. Then we add to this the initial setup time, aka latency, for the write operation. Generally these times are in the very low double-digit ns range (http://en.wikipedia.org/wiki/CAS_latency), but as I have no real numbers let's assume 1000ns for the sake of it. Then we are left with 111000ns.

    Now with DDR3: we've got 68gb/sec = 69632mb/sec, so we copy our 20mb in about 0.00029sec = 0.29ms = 290000ns. Even if we assume no latency at all for DDR3, 111000ns << 290000ns.

    What if we load it to eSRAM? In the rare case that we have 20mb of our 32mb of eSRAM to spare for a texture, we could copy it there at about 109gb/sec. Note that we cannot benefit from the simultaneous 109gb/sec read speed, because we simply want to copy stuff to memory; we also can't use the additional 68gb/sec from DDR3. So we also have a perfect example of why the roughly 287gb/sec peak figure is so theoretical. We copy our 20mb to eSRAM at 109gb/sec = 111616mb/sec in about 0.00018sec = 0.18ms = 180000ns. Again, even with zero latency, 180000ns >> 111000ns.

    (3) You may not believe for a second that MS added the eSRAM to mitigate DDR3's lack of bandwidth, but the rest of the world says that it's been added for exactly that reason. So that's a bold statement to make. It has additional benefits besides mitigating the bandwidth problem, though. Procedurally generated textures sound like they could favour eSRAM, but that does not mean that (a) they cannot be done with GDDR5 and (b) they are going to be extensively used in games. Their description sounds like you basically trade computation power for lower memory usage.

    (4) As you constantly come up with GPGPU and the benefits of eSRAM, may I remind you that the PS4 has 8 ACEs with 64 queues in contrast to the 2 ACEs with 16 queues of the Xone. And if we believe Cerny, the reason for this huge number of asynchronous compute engines is specifically to support stuff like GPGPU. So when speaking about the advantages of the Xone in this area, you should not forget to mention this "tiny" detail.

    1. Excuse me, I had them reversed. In which case, as you said, it could be worse, with only 10GB/s of that being coherent. That's something that still needs to be clarified.

    2. Yeah, of course texture streaming from a hard disk can result in pop-in as well, and that's a bigger bottleneck, but it's a different subject and a far more obvious problem. On your calculations: your numbers add up, but from my understanding latency doesn't typically come into play in the manner you describe. I'm not qualified to start guessing at numbers like that, but what I do know is that latency often results in missed clock cycles, which can make your bandwidth run less efficiently overall, because the request happens many times, not once, and you're not streaming a single 20mb texture. So in your case, aren't you using ideal conditions? Again, I'm not saying that was the reason for the pop-in; it's just a question. It certainly appears to be more of a trend with PS4 games from what we've seen, so something is going on.
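
    Here's a rough sketch of the effect I mean, assuming every memory request pays a fixed setup cost; the 500ns latency and the request sizes are invented purely for illustration:

        def effective_bandwidth_gbps(request_kb, peak_gbps, latency_ns):
            """Achieved throughput when each request pays a fixed setup cost."""
            transfer_ns = request_kb * 1024 / (peak_gbps * 1024**3) * 1e9
            return peak_gbps * transfer_ns / (transfer_ns + latency_ns)

        # Same peak and latency, varying request size.
        for size_kb in (4, 64, 1024):
            gbps = effective_bandwidth_gbps(size_kb, 176, 500)
            print(size_kb, "KB ->", round(gbps, 1), "GB/s")
        # Small scattered requests achieve a fraction of peak; big streaming
        # copies approach it. That's how latency "costs" you bandwidth.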

    3. The rest of the world? As in people who believe MS and Sony are spying on each other and trying to outdo or mitigate each other's advantages when designing chips, rather than having a particular goal in mind that they're trying to achieve? I'm OK with disagreeing with them. I think MS designed the X1 and DirectX from their inception to best take advantage of one another, and I believe the eSRAM isn't there to stream a whole bunch of stuff in bulk the way the main RAM bandwidth is. Just like the 360's embedded memory was designed to give it specific advantages in the case of AA, I believe that with the additional features added to the X1's eSRAM they're targeting specific software techniques that they're also implementing in DirectX.

    As far as procedural textures being a big deal, I'm willing to bet these major publishers aren't licensing middleware like this unless they plan on using it a lot:

    Substance’s new compatibility with Xbox One extends the reach of a technology that is already integrated into 3D development tools like Autodesk 3ds Max and Maya, and industry-standard game engines like Unity 3D, Unreal Engine 3, and UDK. Noteworthy Substance proponents include: Microsoft Game Studios®, Electronic Arts, Activision, Ubisoft, Square-Enix, 2K Games and Namco-Bandai. http://www.prweb.com/releases/2013/5/prweb10756353.htm

    And of course it will also be available on PS4.

    4. Sure, but I don't think I ever claimed the X1 has an advantage in pure compute - the PS4's edge there is well known. It depends on how the compute units are being used and what for. What I'm more interested in, however, is how the eSRAM, the tile/untile move engines, and the compute units work together, and what kind of software techniques they're designed to accelerate. For example, voxel cone tracing is a pretty hot topic currently, and it was the driving force behind Unreal Engine 4 before they had to yank it out because mid-range PCs and next-gen consoles weren't considered powerful enough in terms of pure TFLOPS to run it. But it's a technique that can benefit greatly from partially resident textures, which involve precisely the kind of many small memory reads and writes we've been discussing. So if it turns out that MS designed the X1 to be suited to handling something like this, I think it's going to be a pretty big deal. It's going to have a much bigger impact on graphics than how many high-res textures you can stuff into your RAM at native resolution and move around over a wide bus.
