In the same article in which Edge talks about the PS4's power advantages over the X1, they also included a quote from a developer that I find quite interesting and that didn't get the attention it deserves.
Xbox One does, however, boast superior performance to PS4 in other ways. “Let’s say you are using procedural generation or raytracing via parametric surfaces – that is, using a lot of memory writes and not much texturing or ALU – Xbox One will likely be faster,” said one developer.
I previously wrote a post on here about MS's DirectX 11.2 presentation on partially resident resources and how the eSRAM and data move engines may play a role. I don't claim to be an expert on this subject; this is just my own personal research and understanding, so you be the judge. That said, this quote caught my interest, because it's the first time I've heard a developer allegedly mention something that could point toward the connection I, along with a few others, had been speculating about.
I've also wondered whether the eSRAM could provide the X1 GPU with additional benefits when it comes to GPGPU, and whether it could perhaps act in a fashion similar to how an L2 or L3 cache acts for a CPU.
Now here we have a quote that specifically mentions procedural generation and ray tracing via parametric surfaces as likely being faster on the X1. The question is: what are they, and why?
First, what are they? I'm not exactly sure whether they mean procedural generation in general (covering both meshes, as with tessellation, and textures) or procedural textures only. In either case, one thing that is certain is that any time you're talking about procedural generation, you're talking about an algorithm and the computational power to solve that algorithm.
The difference between applying a traditional image texture on an object and a texture created using a procedural algorithm, as far as hardware is concerned, is that of storage and bandwidth vs computational power and bandwidth/latency.
-An image texture will take up more memory. Your texture is typically stored in RAM at its native resolution. It will also require a certain amount of bandwidth, significantly more than a procedural texture, to make its way to the GPU. Obviously, since the PS4 is equipped with GDDR5 and more bandwidth, it's better suited to storing and moving around more, or higher-resolution, textures.
-A procedural texture is stored in memory as an algorithm: a mathematical equation or formula. It takes very little memory and very little bandwidth; it's just a math formula waiting to be calculated. But it does need to be processed and calculated in real time, to turn it into what looks like an image texture at some point before it can be displayed in a frame of animation. So it requires less memory and less bandwidth, but more computational power.
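To make the trade-off concrete, here is a toy sketch of both sides of the comparison. The formula (a checkerboard mixed with a ripple) is my own illustrative example, not anything from the article or a real engine; the point is simply that the procedural version occupies essentially no storage and is paid for in math per pixel instead.

```python
# Storage vs. compute: a stored image texture vs. a procedural one computed
# on demand. The pattern below is a toy example, not a real engine's formula.
import math

def procedural_texel(u, v):
    """Compute a texel purely from a formula; nothing is read from memory."""
    checker = (int(u * 8) + int(v * 8)) % 2          # 8x8 checkerboard
    ripple = 0.5 + 0.5 * math.sin(40.0 * math.hypot(u - 0.5, v - 0.5))
    return 0.7 * checker + 0.3 * ripple              # blend the two patterns

# Resident-memory cost of the two approaches for one 1024x1024 RGBA texture:
image_bytes = 1024 * 1024 * 4     # ~4 MB stored in RAM and moved over the bus
procedural_bytes = 0              # just the formula above, evaluated per pixel

print(image_bytes, procedural_bytes)
print(procedural_texel(0.25, 0.25))
```

In a real renderer the formula would run in a pixel shader, but the balance is the same: almost no memory traffic, more ALU work per sample.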
Similar principles apply to procedural generation of meshes, tessellation, or adaptive loading on demand, though that has more to do with displaying a high-polygon object up close and a simpler object further away, while handling the phases in between dynamically using an algorithm. Now, it's curious that even though the PS4's GPU is known to have more compute units, we have a developer saying these techniques will likely run faster on the Xbox One.
The reason could be that some sort of low-latency cache may prove useful for these calculations, and the one advantage the X1 has here is its eSRAM. You don't need a lot of memory to store a procedural algorithm. What you may need is a very fast, low-latency scratchpad that the GPU can quickly access back and forth to perform its calculations or pull algorithms from. The eSRAM's 32MB is more than sufficient to store a whole bunch of them, or to be used as a scratchpad.
Ray-tracing via parametric surfaces.
Of equal interest is that they talk about ray-tracing via parametric surfaces. Let me preface this by saying it's been a while since I took Calculus 3, and I can't remember the first rule about parametric surfaces, but what I can tell you is that they're talking about curved surfaces. Tessellation is likely based on calculating parametric surfaces.
So simpler models further away, more detailed up close.
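Here is a rough sketch of what that means in practice: the surface lives in memory as a formula P(u, v), and how finely you sample u and v (the tessellation level) can be chosen per frame from the viewing distance. The sphere parameterization and the LOD rule are my own illustrative choices, not anything from the Edge article.

```python
# Tessellating a parametric surface with distance-based level of detail.
# The sphere formula and the LOD rule are toy choices for illustration.
import math

def sphere_point(u, v, radius=1.0):
    """Parametric sphere: (u, v) in [0, 1] maps to a point on the surface."""
    theta = u * 2.0 * math.pi      # longitude
    phi = v * math.pi              # latitude
    return (radius * math.sin(phi) * math.cos(theta),
            radius * math.sin(phi) * math.sin(theta),
            radius * math.cos(phi))

def tessellate(distance, max_segments=64):
    """Closer objects get more segments; farther ones get fewer."""
    segments = max(4, int(max_segments / max(distance, 1.0)))
    return [sphere_point(i / segments, j / segments)
            for j in range(segments + 1)
            for i in range(segments + 1)]

near = tessellate(distance=1.0)    # dense mesh up close
far = tessellate(distance=16.0)    # coarse mesh far away
print(len(near), len(far))
```

The same formula yields thousands of vertices up close and a couple dozen far away, which is exactly the "simpler further away, detailed up close" behavior, paid for in per-frame computation rather than stored geometry.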
Now, I couldn't give you a confident layman's explanation of what this developer means by "ray tracing via parametric surfaces," because I'm not sure myself, but I highly doubt they're talking about ray tracing as it's commonly known.
Having said that, I might be able to offer some clues. While digging around for the origins of the partially resident texture tech that AMD introduced in their GPUs, I stumbled upon Cyril Crassin's PhD thesis on GigaVoxels, which is the inspiration behind the tech used in Unreal Engine 4 as well as the hardware support in AMD's GPUs.
The first thing that caught my attention is that the limitations described there, which seemingly stem from using a software approach, pretty much match the limitations Microsoft listed on their slides in the DirectX 11.2 presentation: limitations MS was able to remove by moving partially resident resources into hardware, a standard feature on the X1.
From the paper:
There are several cons to the GigaVoxels approach:
Slow octree traversal. The octree is an indirect mechanism for sampling the actual scene information stored in the bricks. When a ray needs to get information from the scene, it must first traverse down the octree. Once it finds the correct node it can sample from the associated brick. This process can be slow and it would be nicer to sample from the voxel data directly.
Non-automatic linear interpolation between mip-map levels. Since all bricks of different LODs are stored in the same memory pool, we are not taking advantage of mip-maps. Therefore we cannot make use of hardware quadrilinear interpolation.
Redundant memory use. Each brick must contain information about neighboring voxels, which takes up a lot of memory. For example, the number of voxels inside an 8*8*8 brick is 512 and 169 of these are redundant neighbor voxels. This is a 33% increase in memory.
The filtering process is encumbered by determining which neighbors are shared among the bricks.
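To make the first limitation concrete, here's a toy sketch of why sampling through an octree is indirect. This is my own simplification of the idea, not code from the thesis: every lookup has to walk from the root down to a leaf before it can touch the brick data.

```python
# Toy octree: internal nodes have 8 children; leaves hold a "brick" payload.
# Sampling a point means descending level by level, which is the indirection
# the thesis calls "slow octree traversal". A simplification for illustration.
class Node:
    def __init__(self, children=None, brick=None):
        self.children = children   # list of 8 child Nodes, or None at a leaf
        self.brick = brick         # payload sampled once the leaf is reached

def sample(node, x, y, z):
    """Descend one level per iteration; coordinates are in [0, 1)."""
    steps = 0
    while node.children is not None:
        # Pick the octant from the high bit of each coordinate, then rescale
        # the coordinates into the chosen half-cube.
        octant = (int(x >= 0.5) << 2) | (int(y >= 0.5) << 1) | int(z >= 0.5)
        x, y, z = (x * 2) % 1.0, (y * 2) % 1.0, (z * 2) % 1.0
        node = node.children[octant]
        steps += 1
    return node.brick, steps       # payload, plus how many levels were walked

# Two-level tree: root -> 8 leaves, each brick labelled by its octant index.
root = Node(children=[Node(brick=i) for i in range(8)])
print(sample(root, 0.75, 0.25, 0.75))
```

With a deep tree, every single texture fetch along a ray pays that walk, which is exactly the cost hardware page tables (partially resident resources) are meant to absorb.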
From MS's DirectX 11.2 presentation:
So basically, supporting partially resident textures in hardware removes those limitations that the software approach had and that held developers back.
What was more intriguing to me in that paper, however, and what I did not mention at the time, is that he, and later Nvidia, wasn't just using the GigaVoxels approach for texturing; they were initially researching voxel cone tracing, which can extend beyond textures to lighting and global illumination.
In one sentence, the GigaVoxels technique performs adaptive level-of-detail selection of lighting and material information during the ray-marching of a sparse voxelized scene. It is made up of several stages:
First, the scene is converted from triangles to voxels in a two-step process:
Check if the triangle's plane intersects the voxel.
Rasterize the triangle along its dominant axis (the one that results in the largest projected area) and check if the resulting fragments intersect the 2D projection of the voxel.
Interestingly enough, Microsoft also mentioned shadow maps in their presentation, and made it a point to refer to this as partially resident resources, which implies usage that extends beyond just partially resident texturing.
I wanted to mention this relationship earlier, but I decided to wait and see whether I'd have something more to go on before starting up some other crazy speculation or rumor without enough evidence to draw such conclusions. Now we have a developer specifically mentioning this area in regard to the X1's abilities.
There certainly appears to be something to it, but more clarification is needed, as I cannot deduce with certainty that what this developer calls ray-tracing "via" parametric surfaces has some relationship to cone tracing using GigaVoxels, which is what Cyril was describing for use in lighting and global illumination. The choice of words is throwing me off. However, with different buzzwords being used for very similar techniques depending on which developer you talk to, I certainly wouldn't be surprised if they're related. Perhaps someone else can chime in.