#1 Edited by AlexGlass (688 posts) -

This is possible thanks to the hardware-based tiled resources support added in DX11.2. The video has a lot of dry technical talk in the beginning, but they demonstrate it with a couple of videos as well. The first is a demo of Mars that starts at 17:30.

http://channel9.msdn.com/Events/Build/2013/4-063

So how does it work in layman's terms? Typically in a video game, whenever a textured object is displayed, say a high-resolution brick texture for a road, the entire texture is compressed, loaded into RAM, and takes up its full size there. Textures are by far the biggest RAM consumers compared to everything else that needs to be stored in memory.

With partially resident textures, or tiled resources as Microsoft likes to call them, the texture is split up into smaller tiles, allowing you to load only the tiles necessary for display at a particular detail level. So for our stretching road, the portion that is 50 feet from the player's camera doesn't need anywhere near the detail of the portion 1 foot away.
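As a rough sketch of the idea (the tile size, distance model, and function here are purely illustrative, not how D3D actually selects tiles):

```python
import math

def mip_level_for_distance(distance_ft, full_detail_ft=1.0):
    """Each doubling of distance drops one mip level, so a tile twice
    as far away needs only a quarter as many texels."""
    if distance_ft <= full_detail_ft:
        return 0  # closest tiles get the full-resolution mip
    return int(math.log2(distance_ft / full_detail_ft))

# The road 1 foot from the camera gets mip 0 (full detail);
# the stretch 50 feet away can get by with a much coarser mip.
print(mip_level_for_distance(1))   # 0
print(mip_level_for_distance(50))  # 5
```

Only the tiles at the required mip level actually get streamed in, which is where the memory savings come from.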

In the presentation, Mars is textured using two textures: A 1GB diffuse map and a 2GB normal map for a total of 3GBs of textures.

Using tiled textures they were able to texture the same scene using only 16MB of RAM.
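The arithmetic works out if you assume D3D's fixed 64KB tile size (my own back-of-envelope numbers, not from the talk):

```python
KB, MB, GB = 1024, 1024**2, 1024**3

texture_data = 3 * GB   # 1GB diffuse + 2GB normal map
tile_size = 64 * KB     # D3D tiled resources use fixed 64KB tiles
pool = 16 * MB          # resident tile pool used in the demo

resident_tiles = pool // tile_size       # tiles that fit in the pool
total_tiles = texture_data // tile_size  # tiles in the full data set

print(resident_tiles)   # 256
print(total_tiles)      # 49152
print(f"{pool / texture_data:.2%}")  # 0.52% of the data resident at once
```

So a few hundred resident tiles stand in for tens of thousands, because at any moment the camera only needs a tiny slice of the full data set at full detail.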

Visualization of how the texture is tiled. Magenta represents the highest-detail tiles.

Comparison.

If you were to clamp all tiles at the lowest resolution, still using only 16MB of RAM. Tiled textures off.

Tiled textures on. This allows for the highest possible detail when zoomed in, still using only 16MB of RAM.

While texture tiling has been done in software before, it had certain limitations, and moving it to hardware removes them. Without going into all the technical details, the result is impressive enough that texture data which previously took up 3GB of RAM can live in only 16MB! Not only is that a drastic reduction in size, it also allows more detailed worlds than before, since developers now effectively have far more texture storage available.

Beyond textures, the technique can also be applied to other areas such as shadows via shadow mapping, which is presumably why Microsoft prefers to call this partially resident resources. It's demonstrated in the video at 22:40.

Using only 16MB. Tiled resources turned off.

Using only 16MB. Tiled resources on.

Middleware developer Graphine Software also got on stage and demonstrated Granite, their middleware for streaming tiled textures, which has been integrated into Divinity: Dragon Commander by Larian. The video they demoed at the conference can be found on their website.

For the X1 this is particularly important, since by those numbers the 32MB of eSRAM could theoretically hold the equivalent of up to 6GB worth of tiled textures. Couple the eSRAM's ultra-fast bandwidth with tiled-texture streaming middleware like Granite, and the eSRAM just became orders of magnitude more important for your next-gen gaming. Between software developments such as this and the implications of the data move engines with LZ encode/decode compression capabilities for making cloud gaming practical on common broadband connections, Microsoft's design choice of going with embedded eSRAM for the Xbox One is beginning to make a lot more sense. Pretty amazing.
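For what it's worth, the "6GB worth" figure is just the demo's ratio scaled up to 32MB (speculative arithmetic, mirroring the reasoning above, not an official spec):

```python
MB, GB = 1024**2, 1024**3

demo_data = 3 * GB       # full texture set from the Build demo
demo_resident = 16 * MB  # RAM it actually needed
esram = 32 * MB          # X1's eSRAM

ratio = demo_data / demo_resident  # working-set reduction in the demo
print(ratio)               # 192.0
print(esram * ratio / GB)  # 6.0 -> "6GB worth" of addressable texture data
```

Obviously the real ratio depends entirely on the scene and how aggressively detail falls off with distance; 192x is just what this particular demo achieved.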

#2 Posted by Akeldama (4217 posts) -

How many transistors?

#3 Posted by jimmyfenix (3688 posts) -
#4 Posted by Blu3V3nom07 (4037 posts) -

Very cool stuff.

#5 Posted by Cirdain (2957 posts) -

@alexglass: Oh I love me some dirty dirty tech demos, RAW SON!!!

#6 Posted by Syed117 (388 posts) -

That's a lot to take in but it's pretty cool stuff.

#7 Posted by Kidavenger (3380 posts) -

So pixels are making a comeback. It saves RAM, but it seems like it would be a lot harder on the CPU, especially once you animate the world.

#8 Edited by Eujin (1294 posts) -

@alexglass That's awesome. While it'll technically add some amount of latency, this type of middleware could mean that game textures won't be limited to the DDR3 bandwidth limit of <69GB/s, and will instead get a little less than the eSRAM's 102GB/s (a little less due to the latency involved in the middleware).

That's an awesome thing. I hope Microsoft implements some sort of base middleware solution for this into the devkit, assuming it's simple enough to move around.

#9 Edited by Slaegar (629 posts) -

This is nonsense. This has been done forever. eSRAM may be able to do it a bit faster, but GDDR5 is obviously also going to be faster than DDR3. They can do a lot with that eSRAM, but 32MB of textures will always look like 32MB of textures.

http://en.wikipedia.org/wiki/Level_of_detail

Objects have varying LOD quality until they go beyond the draw distance

http://en.wikipedia.org/wiki/Draw_distance

http://en.wikipedia.org/wiki/Distance_fog

I'm gonna go get a screenshot for this and throw it in.

My screenshots didn't show the whole picture (GET IT) so I stole this one instead while googling LOD

Open-world games work well as an example because there is no way an entire countryside could be loaded at once in its full glory. Almost all games use LOD to some extent, though. If you've ever played a game, run too close to a wall, and watched it load a prettier version, that was the intended effect of LOD, just too slow.

More edit: I remembered that you can fly in Skyrim. Console commands are the best

What Eujin said below me about it potentially being middleware is pretty cool. The 360 actually had a small GPU framebuffer for similar tasks, but if it becomes easier for developers to do this, all the better for everyone.

#10 Posted by eskimo (461 posts) -

Another day, another post from Alexglass spruiking the Xbox. Do you get a sales commission or something?

#11 Posted by EXTomar (4133 posts) -

But how many voxels?

#12 Posted by The_Laughing_Man (13629 posts) -

So this all means????

#13 Posted by Eujin (1294 posts) -

@the_laughing_man: I originally thought it meant that they had found a way to do high-detail LOD textures (i.e. what you see when close up to a character/object) in smaller chunks, therefore being able to use the eSRAM for even the largest textures direct to the system. However, after Slaegar's post, I did some more digging.

What this appears to actually be is that they found a way to do standard LOD loading on the eSRAM, but it doesn't appear to skip standard RAM at all. It's still loading the huge close-up LOD textures in chunks into main RAM and then on to rendering. The eSRAM is just helping do it in small, fast-loading, uncompressed chunks, so it still loads into main RAM faster than normal with less (or no) compression, but doesn't give quite as much of a performance boost as direct eSRAM to GPU.

That was just from some googling around hardware tiled resources and DX11.2. I could be misinterpreting, as I'm still watching through the Build videos, but based off https://en.wikipedia.org/wiki/Tiled_rendering and other resources, it just appears that someone's created middleware for this rather than devs needing to write their own code. Even the PS Vita has hardware-accelerated tiled rendering, it seems.

#14 Posted by The_Laughing_Man (13629 posts) -
#15 Edited by Eujin (1294 posts) -

@the_laughing_man: Most definitely good. With the caveat that it's not exclusive to Microsoft, although it sounds like they've optimized DX 11.2 to not need to compress as much when doing this. Unknown if that puts it better off than what you'd be dealing with when coding directly for the hardware in assembly or with OpenGL, but I'd definitely say it's good regardless.

#16 Edited by AlexGlass (688 posts) -

@slaegar said:

What is nonsense? DirectX 11.2 is the first DirectX to support hardware-based tiled resources. I'm not sure you understood what they're presenting.

DirectX 11.2 promises to bring a host of new features such as the new "Tiled Resources" technique that will (unsurprisingly) be exclusive to the Xbox One and Windows 8.1.

"Tiled Resources" allows for significant enhancement of in-game textures by making it possible to simultaneously access GPU and traditional RAM memory and create a single large buffer where large textures can be stored.

http://www.tomshardware.com/news/DirectX-11.2-Tiled-Resources-Xbox,23322.html

1. LOD is a technique primarily for reducing the polygon load, and it works on entire objects.

2. This was previously done in software for textures, and that's usually how it's still done today. Doing it in software has certain limitations. It also typically requires your CPU to do the compression and tiling before the tiles are ready to be used.

3. This is about moving it to hardware. The increase in efficiency comes from the fact that it's no longer software-based, which removes the limitations of having to perform manual filtering, implement bilinear and anisotropic filtering yourself, and duplicate border regions for your tiles. All of these go away with a hardware-based approach.
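The border-duplication cost in point 3 is easy to put rough numbers on (the tile and border sizes here are illustrative; real software tilers vary):

```python
def border_overhead(tile_edge, border):
    """Fraction of extra texels a software tiler must store because each
    tile carries duplicated border texels, so filtering can sample across
    tile edges without hardware support."""
    padded = (tile_edge + 2 * border) ** 2
    return padded / tile_edge ** 2 - 1.0

# Illustrative numbers: 128x128 tiles with a 4-texel border on each side.
print(f"{border_overhead(128, 4):.1%}")  # 12.9% more data per tile
```

Hardware tiled resources filter across tile boundaries natively, so that entire overhead simply disappears.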

@the_laughing_man:

While LOD and tiled resources are similar in concept, Slaegar is comparing apples and oranges while not quite understanding the important factor here is the fact it's being supported in hardware.

The presentation wasn't about the Granite middleware so that's not the important factor to take away from that. It was about Microsoft's DX 11.2 hardware support features. They're moving tiled resources to a hardware-based implementation and comparing it to the old software-based approach.

Graphine was just there to demo their tech, basically stating they will support Microsoft's partially resident resources as well, while also talking about the difference between hardware and software implementations.

"Hardware tiled resources offer several improvements over the shader based method available in Granite 1.8. Especially when using high-quality filtering modes such as full anisotropic trilinear filtering with a high degree of anisotropy (>= 8x) there are clear advantages. Firstly, the shader can be simplified, reducing the instruction count from around 30 to 10-15 shader instructions in the case of anisotropic filtering. Secondly, since no overlap needs to be used on tile borders cache use and compression can be improved. Finally, streaming throughput is improved by 33% as no mipmaps have to be generated for uploaded tiles"

Of course Granite 2.0 still has full support for shader emulation on older APIs and hardware. This makes using tiled resources in multi-platform games or engines very easy. If there is hardware support Granite will use it; if not, it will automatically fall back to a shader-based implementation.

In the case of the Xbox One it does go straight from eSRAM to the GPU. MS has actually been using this type of EDRAM/eSRAM approach since the 360. For example, beginning with titles as early as Kameo, it used the 360's EDRAM and GPU only for particle effects without ever touching the CPU. This frees up the CPU and CPU/RAM/GPU bandwidth to do other processes.

The way the tech works is that it needs to evict unused tiles and re-load them as needed, or quickly replace lower-res tiles with higher-res tiles and vice versa, very fast. The eSRAM is basically a scratchpad for this, built on chip, so the big advantage is that you don't have to deal with the latency of GPU/RAM access or tie up resources going back and forth. You also don't have to deal with the bottlenecks of a software implementation, and it doesn't eat up additional CPU resources. I think the point you are missing here is that there's no need for your GPU to go back to your RAM pool if you can do this within the confines of the eSRAM. The eSRAM is the dedicated hardware for tiled resources, and DirectX 11.2 contains the APIs to take advantage of it.
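That evict-and-reload behaviour is essentially a small cache of resident tiles. A toy sketch of the idea (the tile names and pool size are made up for illustration; real residency management lives in the driver/engine, not in code like this):

```python
from collections import OrderedDict

class TilePool:
    """Toy model of a resident-tile pool: requesting a tile that isn't
    resident evicts the least-recently-used one to make room."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.tiles = OrderedDict()  # tile id -> placeholder tile data

    def request(self, tile_id):
        if tile_id in self.tiles:
            self.tiles.move_to_end(tile_id)  # mark as recently used
            return "hit"
        if len(self.tiles) >= self.capacity:
            self.tiles.popitem(last=False)   # evict the LRU tile
        self.tiles[tile_id] = object()       # "stream in" the new tile
        return "miss"

pool = TilePool(capacity=2)
print(pool.request("road_mip0_t0"))  # miss (streamed in)
print(pool.request("road_mip0_t1"))  # miss
print(pool.request("road_mip0_t0"))  # hit
print(pool.request("road_mip5_t9"))  # miss, evicts road_mip0_t1
```

The point of keeping this pool in on-chip eSRAM rather than main RAM is that every one of those hits and replacements avoids a round trip over the main memory bus.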

Maybe the bold now makes it more clear ;)

#17 Edited by Eujin (1294 posts) -

@alexglass: To reiterate, this is actually a standard feature in most GPUs since early 2012, including the Vita and AMD's newest GPUs/APUs (used in both the XBO and PS4), so it is supported on both platforms and it's not something only Microsoft has. What is awesome is that Microsoft has implemented APIs in DirectX 11.2 so that developers don't have to write their own implementation from scratch.

I've finished watching the Granite demos and the Build videos, and one thing I would definitely stress is that it appears the Granite engine is what allows this to happen without mipmapping, not the DX 11.2 API calls. The DX 11.2 API calls allow this direct-to-RAM/GPU usage of hardware-based tiled resources on PCs (and the XBO, as it uses DX 11.2), but mipmapping (load pauses) could still occur if they don't have the workarounds. (Unsure, because we do not know if the XBO uses pure DX 11.2 or a cooked version with some XBO-specific tools.)

A neat thing is that Microsoft picked up on this as a way to work around some memory limitations it had on the 360, so it used the 10MB of eDRAM there to do something similar to what they're doing with the 32MB of eSRAM on the XBO. It just appears they've now worked the baseline of that workaround into the API calls for DX 11.2.

Again, to be clear, this is definitely a good thing. It is always better for developers to have easier ways to utilize the hardware to better effect. I just want to clarify that this is not something hardware-wise that is exclusive to the XBO; it is in fact available on all upcoming platforms and some current ones. The current version of OpenGL has options that enable this same type of API shortcut, which is what is used on the PS4 (and I believe the Wii U, though I don't think the Wii U's GPU actually has the capability). You can find this information by searching for Hardware Tiled Rendering, or PRT, on Google.

Also: Alexglass, did you post this exact same thread on IGN? http://www.ign.com/boards/threads/x1-esram-directx-11-2-from-32mb-to-6gb-worth-of-textures.453263349/

Edit: You know what, I rethought this, and I can totally understand wanting to get the word out on new tech stuff. It was just a little weird that it was verbatim. I only brought it up because it was on the first page of results while I was looking up info on the PRT/Hardware Based Tiled Rendering

#18 Edited by WickedFather (1720 posts) -

This looks like the kind of shit that the xbone would try to sound important about.

#19 Posted by Eujin (1294 posts) -

@wickedfather: It's actually a big deal from an API side of things. It appears that OpenGL has had it for about 1.5 years now (early 2012; it was added after AMD announced hardware PRT support for their next line of GPUs/APUs back in Dec 2011), while DX10 and 11 did not. The fact that they're adding it is a great thing.

Not everything has to be spin just because it's a positive.

#20 Posted by The_Laughing_Man (13629 posts) -

@eujin said:

@the_laughing_man: Most definitely good. With the caveat that it's not exclusive to Microsoft, although it sounds like they've optimized DX 11.2 to not need to compress as much when doing this. Unknown if that puts it better off than what you'd be dealing with when coding directly for the hardware in assembly or with OpenGL, but I'd definitely say it's good regardless.

Isnt the DX11.2 thing the reason BF4 is supposed to be nicer on Xboxone?

#21 Posted by Eujin (1294 posts) -

@the_laughing_man: News to me. The only thing I've heard about Battlefield 4 is that it's going to have Kinect support on XBO.

#22 Edited by JerkDaNerd_ (2 posts) -

@WickedFather

It's a big deal and a real "game changer," I believe. I think given the chance, most developers would highly prefer tiled rendering to what we use today. The problem with tiled rendering was that there wasn't any hardware support for it. Which is why I give mad respect to John Carmack for what he was trying to do with RAGE (virtual texturing, or software tiling). I scolded him when I purchased RAGE, but ever since AMD's announcement of hardware support for PRT beginning with the HD 7970, and after going more in depth on what he was trying to do, I popped in RAGE again and noticed the dynamic textures across the whole map... and of course some "pop-ins".

I respect Microsoft's architecture panel for applying what is, IMO, an ingenious method for graphics... and FINALLY. It has been disappointing to see the level of detail in graphics with overly powerful GPUs, but still no dynamism and detail in actual gameplay. We can now make every bit of these already powerful GPUs count for almost every single detail.

#23 Posted by The_Laughing_Man (13629 posts) -

@eujin said:

@the_laughing_man: News to me. The only thing I've heard about Battlefield 4 is that it's going to have Kinect support on XBO.

Something about how BF4 uses DX

#24 Posted by jArmAhead (129 posts) -

@slaegar said:

This is nonsense. This has been done forever. eSRAM may be able to do it a bit faster, but GDDR5 is obviously also going to be faster than DDR3. They can do a lot with that eSRAM, but 32MB of textures will always look like 32MB of textures.

http://en.wikipedia.org/wiki/Level_of_detail

Objects have varying LOD quality until they go beyond the draw distance

http://en.wikipedia.org/wiki/Draw_distance

http://en.wikipedia.org/wiki/Distance_fog

I'm gonna go get a screenshot for this and throw it in.

My screenshots didn't show the whole picture (GET IT) so I stole this one instead while googling LOD

Open-world games work well as an example because there is no way an entire countryside could be loaded at once in its full glory. Almost all games use LOD to some extent, though. If you've ever played a game, run too close to a wall, and watched it load a prettier version, that was the intended effect of LOD, just too slow.

More edit: I remembered that you can fly in Skyrim. Console commands are the best

What Eujin said below me about it potentially being middleware is pretty cool. The 360 actually had a small GPU framebuffer for similar tasks, but if it becomes easier for developers to do this, all the better for everyone.

This is a lot more than simple LOD. Skyrim used about as much memory as it could, and then some. As far as I know it doesn't use software or hardware tiling. Just look at the shadows; that alone should tell you we aren't talking about the same thing. The concept is similar, but not at all the same. This is a new technique to make LOD much more memory-efficient. When you got close to an object in Skyrim, it changed the object's appearance, not just the appearance of the pixels on your screen; the computer loaded the entire asset. With this, only small parts of assets would be loaded, which means much smaller memory usage. It also potentially means smoother LOD transitions, but we'll see about that.
This kind of stuff isn't brand new, but it's newer than LOD, which seems to be what you think this is.
People should really understand that this isn't "look, LOD," but rather "look, a clever way of doing LOD that has been made much easier, simpler, and more accessible."
Also, posting "wah, you must be a secret marketer because you posted without farting on the Xbone like a 3-year-old" should really be like a one-hour ban. I'm a little sick of it. This is interesting shit to those of us who watch things like 4 hours of Carmack talking nerdy. And yes, it's a positive. Being related to the Xbox One, which by all accounts is shaping up to be a great hardware box with a pretty impressive launch lineup (for a launch lineup, anyway), does not make something shitty or not worth sharing. This kind of stuff needs to stop. I get you guys all want to be the cool cynical assholes like the staff tend to be on occasion, but it's a little old and it just makes you look like idiots.

Now, if only we could get those crazy Bohemia boys to start using this...

#25 Edited by AlexGlass (688 posts) -

@eujin said:

@alexglass: To reiterate, this is actually a standard feature in most GPUs since early 2012, including the Vita and AMD's newest GPUs/APUs (used in both the XBO and PS4), so it is supported on both platforms and it's not something only Microsoft has. What is awesome is that Microsoft has implemented APIs in DirectX 11.2 so that developers don't have to write their own implementation from scratch.

I've finished watching the Granite demos and the Build videos, and one thing I would definitely stress is that it appears the Granite engine is what allows this to happen without mipmapping, not the DX 11.2 API calls. The DX 11.2 API calls allow this direct-to-RAM/GPU usage of hardware-based tiled resources on PCs (and the XBO, as it uses DX 11.2), but mipmapping (load pauses) could still occur if they don't have the workarounds. (Unsure, because we do not know if the XBO uses pure DX 11.2 or a cooked version with some XBO-specific tools.)

A neat thing is that Microsoft picked up on this as a way to work around some memory limitations it had on the 360, so it used the 10MB of eDRAM there to do something similar to what they're doing with the 32MB of eSRAM on the XBO. It just appears they've now worked the baseline of that workaround into the API calls for DX 11.2.

Again, to be clear, this is definitely a good thing. It is always better for developers to have easier ways to utilize the hardware to better effect. I just want to clarify that this is not something hardware-wise that is exclusive to the XBO; it is in fact available on all upcoming platforms and some current ones. The current version of OpenGL has options that enable this same type of API shortcut, which is what is used on the PS4 (and I believe the Wii U, though I don't think the Wii U's GPU actually has the capability). You can find this information by searching for Hardware Tiled Rendering, or PRT, on Google.

Also: Alexglass, did you post this exact same thread on IGN? http://www.ign.com/boards/threads/x1-esram-directx-11-2-from-32mb-to-6gb-worth-of-textures.453263349/

Edit: You know what, I rethought this, and I can totally understand wanting to get the word out on new tech stuff. It was just a little weird that it was verbatim. I only brought it up because it was on the first page of results while I was looking up info on PRT/Hardware-Based Tiled Rendering

Yeah I did. I have an account there with the same name.

I'm not entirely sure if and how the PS4 supports the hardware implementation of PRT; the difference would be that it would need part of its RAM and RAM/GPU bandwidth to emulate it while dealing with GPU/RAM latency. Which you can also do in the X1's DDR3. That's actually the equivalent comparison. There are bigger implications in the case of the X1's eSRAM.

And of course this is especially useful for large-terrain megatexturing. John Carmack's implementation allowed textures up to 32GB in size, so I'm also pretty sure it doesn't have to be stored entirely in RAM, since that would be impossible. Nevertheless, the more exciting implications are the possibility of combining tiled textures and the cloud. Developers could go crazy since they wouldn't have to store these massive textures on a disc. They could either offer them as downloadable content, or, more exciting, I would imagine the possibility of actually streaming the tiles straight from the cloud in real time, thanks to the LZ encode/decode capabilities of the data move engines, straight to the eSRAM to be fed into the GPU. In addition, and this is just speculation on my part, if they already have a way of tiling procedural textures, or at least converting them in real time to megatextures in the cloud, this would open up some interesting possibilities. Using the cloud to process your procedural textures for free rather than depending on your CPU would be pretty freaking cool. There hasn't been enough info on the development of procedural textures and how the tech has advanced in practical applications. I'm not sure if tiling procedurals is possible, or if real-time conversion from procedurals to a tileable format is possible yet.

However I will say that the eSRAM and data move engines are not simply a workaround for bandwidth, and to label them as such (which I have seen numerous times) is disingenuous to the X1's design. They have specifically equipped it with this for applications beyond mitigating bandwidth limitations: tiled textures, shadow mapping, cloud offloading, cloud streaming, and of course a very fast scratchpad for the GPU. This will become more evident as new tools and techniques come out detailing how it's being taken advantage of. Simply put, eSRAM is superior to both GDDR5 and DDR3 for certain applications. That's why it's there. Not just to boost bandwidth.

#26 Posted by ch3burashka (4916 posts) -

@akeldama said:

How many transistors?

You're gonna have to get two jobs to afford all those transistors.

#27 Posted by Eujin (1294 posts) -

@alexglass: Carmack's interpretation was an awesome improvement to doing huge textures through software rather than hardware, as Rage started development well before it was being added at the hardware level. So I don't know if his code will transfer over to this.

As far as PRT goes, the hardware-level implementation of this functionality is not in the memory controller (which is unique to the XBO due to its RAM configuration) but in the main APU itself (according to AMD's talk on Jaguar-based APUs), which is shared between the PS4 and XBO. This news appears to speak to the fact that Microsoft has implemented API calls in DirectX to utilize this hardware directly, skipping overhead that developers have had to handle themselves within DX10 and DX11, as opposed to other APIs (OpenGL).

I can't speak to cloud usage, as developers are probably not ready to require online connectivity for their non-MP-focused games, but remote cloud storage and potential processing is great for the entire industry in the long run, even if it can't really be used for rendering things that need real-time changes (i.e. it could definitely be used for architecture/geometry shadows, but probably not player-model shadows). So I'm excited to see where that goes.

I also want to mention, I looked up Data Move Engines based off your mentioning them above. Data Move Engines are just Microsoft's internal terms for the Direct Memory Access functions of the GPU/APU. They're fixed function DMA built into Jaguar. They're a hardware function that is a part of most GPU/CPUs nowadays, and not unique to the XBO, and in fact this type of DMA access is shared between the XBO and PS4 as they use the same APU.

#28 Edited by AlexGlass (688 posts) -

@eujin said:

@alexglass: Carmack's interpretation was an awesome improvement to doing huge textures through software rather than hardware, as Rage started development well before it was being added at the hardware level. So I don't know if his code will transfer over to this.

As far as PRT goes, the hardware-level implementation of this functionality is not in the memory controller (which is unique to the XBO due to its RAM configuration) but in the main APU itself (according to AMD's talk on Jaguar-based APUs), which is shared between the PS4 and XBO. This news appears to speak to the fact that Microsoft has implemented API calls in DirectX to utilize this hardware directly, skipping overhead that developers have had to handle themselves within DX10 and DX11, as opposed to other APIs (OpenGL).

I can't speak as far as cloud usage, as developers are probably not ready yet to require online connectivity for their non-MP focused games, but remote cloud storage and potential processing is great for the entire industry, in the long run, even if it can't really be used for rendering things that need real-time changes. (IE: It could definitely be used for architecture/geometry shadows, but probably not player model shadows). So I'm excited to see where that goes.

I also want to mention, I looked up Data Move Engines based off your mentioning them above. Data Move Engines are just Microsoft's internal terms for the Direct Memory Access functions of the GPU/APU. They're fixed function DMA built into Jaguar. They're a hardware function that is a part of most GPU/CPUs nowadays, and not unique to the XBO, and in fact this type of DMA access is shared between the XBO and PS4 as they use the same APU.

I don't see any reason why Carmack couldn't implement his own hardware-level version if he wanted, but this topic is specifically about the difference between software-based tiling like Carmack's and hardware-based tiling.

Nobody said the hardware-level implementation is in the memory controller, although the DMEs are designed for the specific workload of tiling/untiling textures.

However, that still has nothing to do with the point that this technique is a perfect fit to leverage the 32MB of on-chip eSRAM. There is a difference between storing your tiles in RAM and in on-chip eSRAM. I'm pretty sure there is no additional memory on the APU itself that could hold 16 or 32MB of tiles, with the exception of the X1's eSRAM.

The DMEs are not built into the Jaguar, though perhaps you meant the APU. The Jaguar, eSRAM, DMEs, and GPU are all on the same die; that's not quite the same thing. The DMEs are free to operate independently of the CPU. Depending on which released diagram you go by it looks a bit different, but in both cases the idea behind the DMEs is to take load off the Jaguar.

In addition to architectural differences, the data move engines on the X1 differ from the DMA on the PS4, and I mentioned this in the other thread. For one, the X1 has four of them, each with slightly different capabilities, as noted in the diagram. One distinct difference in the context of that thread is that the PS4's DMA doesn't support compression, just decompression. This makes a difference when you're talking about two-way transfers with compression, and it's an important distinction especially if you plan on doing real-time offloading to and from a server.

While the PS4 has its own advantages with additional CUs and raw horsepower, it can't possibly emulate a missing slab of silicon memory, missing hardware compression capabilities built into the silicon, or a different hardware configuration to match some of the unique advantages the X1 has. They both have their advantages. Sony gave the PS4 more raw power and more programmable compute units, but their functions are different. It's going to have to rely on some software emulation, whether by tasking the CPU or dedicating some of its CUs (which would naturally take away from its GPU capabilities), if developers want to match some of the capabilities the X1 gets from the on-chip eSRAM and its dedicated compression hardware. The X1 has dedicated hardware for specific applications, and vice versa. And emulation is never going to compete with dedicated hardware, while also taxing your processors to implement.

#29 Posted by Humanity (8015 posts) -

Pretty funny how people are so anxious to shit all over this because it is associated with the XBO instead of seeing the merits of evolving technology.

But yah LOL transistors right?

#30 Edited by TruthTellah (7690 posts) -

@humanity said:

Pretty funny how people are so anxious to shit all over this because it is associated with the XBO instead of seeing the merits of evolving technology.

But yah LOL transistors right?

I think part of that mocking is based around how Microsoft themselves have been promoting the Xbox One, and I'd at least agree with being critical of their focus on flash over the real substance that is compelling. There continue to be tangible positives for the Xbox One, but they've sold it rather poorly, infamously promoting just how many transistors it has.

This right here is a cool explanation of evolving tech, and while going into this length of detail isn't necessary for public presentations, they could certainly be doing a better job focusing on things like this over meaningless jargon about the cloud and other buzzwords.

#31 Edited by Humanity (8015 posts) -

@truthtellah: you play a great devil's advocate, but it doesn't make it any less idiotic that console wars are still as strong as ever.

#32 Posted by EXTomar (4133 posts) -

The thing is, though, this technology has been around since the 90s, which isn't surprising. In particular, I have seen algorithms that use sorting to maximize tiled rendering demonstrated directly. So it isn't a question of whether it works, but of acting (again) like this changes everything because someone (re?)implemented it on modern hardware.

I see value in this running on "special interest boards" that can dive into the strengths and weaknesses (hint: there is plenty of geometry with textures that can't use this), but coming to GB and acting like this is big stuff because Microsoft now puts it in something makes me sad.

#33 Edited by AlexGlass (688 posts) -

@extomar said:

The thing is, though, this technology has been around since the 90s, which isn't surprising. In particular, I have seen algorithms that use sorting to maximize tiled rendering demonstrated directly. So it isn't a question of whether it works, but of acting (again) like this changes everything because someone (re?)implemented it on modern hardware.

I see value in this running on "special interest boards" that can dive into the strengths and weaknesses (hint: there is plenty of geometry with textures that can't use this), but coming to GB and acting like this is big stuff because Microsoft now puts it in something makes me sad.

Yeah I think I'm beginning to see a definitive trend in your posts.

Quantum Computing has "been around" for some time as well. Doesn't really mean shit considering you and I can't buy one and play games on it, does it?

Yeah, as a matter of fact, it usually does change things. Anytime experimental tech or ideas actually reach the point where they can have practical applications and be introduced in mass-market products people can actually use, it changes things. Touch screens have been around forever too, and I don't think that takes anything away from the way the iPhone revolutionized the tech and the market.

Your attempts at constantly downplaying anything Xbox related are pretty freaking weak.

#34 Edited by leftie68 (214 posts) -

Seriously, this means NOTHING to anyone but the hardcore technical hobbyists. I am not going to pretend I know what the hell this means (and I hope it means great things for video games PERIOD), but don't blame others for taking anything you post with a grain of salt. I am not saying you are wrong, but you activated your profile on June 15 (2 days after E3) and 90% of your posts have been nothing but Xbox-praising opinion pieces (you have to admit that is unbelievably fishy).

Hey, we get it, you are excited for the Xbox One... great man, good for you, but if you plan on continuing to make these posts, don't get your underwear in a bunch every time someone posts something contrary to your opinion.

#35 Edited by AlexGlass (688 posts) -

@leftie68 said:

Seriously, this means NOTHING to anyone but the hardcore technical hobbyists. I am not going to pretend I know what the hell this means (and I hope it means great things for video games PERIOD), but don't blame others for taking anything you post with a grain of salt. I am not saying you are wrong, but you activated your profile on June 15 (2 days after E3) and 90% of your posts have been nothing but Xbox-praising opinion pieces (you have to admit that is unbelievably fishy).

Hey, we get it, you are excited for the Xbox One... great man, good for you, but if you plan on continuing to make these posts, don't get your underwear in a bunch every time someone posts something contrary to your opinion.

That's probably why I'm excited and you are not.

But really? I thought the entire internet knew about the importance of RAM. We've been hearing about it for months now. I don't think you need to be a technical hobbyist to understand the importance of being able to use 32MB of RAM to display scenes that otherwise would have taken up to 6GB of textures.
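To put numbers on that kind of claim, here's a back-of-the-envelope sketch. The texture size and visible tile count are hypothetical figures of my own, though the 64KB tile size matches what DX11.2 tiled resources use:

```python
# Back-of-the-envelope sketch (hypothetical numbers, not the demo's exact
# assets): a large texture fully resident vs. only the visible 64KB tiles.

TILE_BYTES = 64 * 1024  # D3D11.2 tiled resources use fixed 64KB tiles

def full_texture_bytes(width, height, bytes_per_texel):
    """Memory needed if the whole texture must be resident."""
    return width * height * bytes_per_texel

def resident_bytes(visible_tiles):
    """Memory needed if only the tiles the camera can see are resident."""
    return visible_tiles * TILE_BYTES

full = full_texture_bytes(16384, 16384, 4)  # 1 GiB of uncompressed RGBA
partial = resident_bytes(256)               # say ~256 tiles actually visible
print(full // 2**20, "MiB full vs", partial // 2**20, "MiB resident")
```

With those assumed numbers the ratio comes out to 1024 MiB versus 16 MiB, which is the same order of savings the Mars demo shows.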

And it's not my opinion. It's a fact. That's the issue. It's not really up for debate that this is a great thing no matter how some try to spin it.

PS: If "you don't know what the hell it means", then why join the thread, not bother to read the explanation of what it all means, and then post and take sides in an argument between people discussing it?

#36 Posted by tourgen (4260 posts) -

@eujin said:

@the_laughing_man: Most definitely good. With caveats that it's not exclusive to Microsoft, although it sounds like they've optimized DX 11.2 to not need to compress as much when doing this. Unknown if that puts it better off than what you'd be dealing with when coding directly for hardware with assembly or with OpenGL, but I'd definitely say it's good regardless.

I think OpenGL is capable of streaming LODs and texture tiles through the GL_ARB_sparse_texture extension (the GL analogue of partial residency). There's also the new OpenGL 4.4 "bindless" texture extension, GL_ARB_bindless_texture, and I think NVidia has its own NV_bindless_texture, though those address binding overhead rather than residency. I'm no expert though, just do some GL coding as a hobby.

Generally speaking anything do-able in DX is also do-able in GL. They both talk to the same hardware. It might be a little easier in one than the other though. The graphics drivers might be more optimized for one over the other as well. *cough* AMD *cough* shit opengl drivers.
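For what it's worth, the residency idea behind those extensions can be modeled in a few lines. This is a plain-Python conceptual sketch of commit-on-demand pages, not the actual GL or D3D API:

```python
# Conceptual model of sparse/tiled residency (NOT the real GL_ARB_sparse_texture
# or D3D11.2 API): only committed pages cost memory; the rest of the huge
# virtual texture costs nothing until you touch it.

class SparseTexture:
    PAGE_BYTES = 64 * 1024

    def __init__(self):
        self.resident = set()  # set of (tile_x, tile_y) pages committed

    def commit(self, page):
        """Analogous to committing a tile's backing memory."""
        self.resident.add(page)

    def release(self, page):
        """Analogous to decommitting a tile no longer needed."""
        self.resident.discard(page)

    def resident_bytes(self):
        return len(self.resident) * self.PAGE_BYTES

tex = SparseTexture()
for page in [(0, 0), (0, 1), (5, 7)]:
    tex.commit(page)
tex.release((5, 7))
print(tex.resident_bytes())  # 131072 (two 64KB pages)
```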

#37 Posted by JerkDaNerd_ (2 posts) -

@tourgen:

Generally speaking anything do-able in DX is also do-able in GL. They both talk to the same hardware. It might be a little easier in one than the other though. The graphics drivers might be more optimized for one over the other as well. *cough* AMD *cough* shit opengl drivers.

It's not so much that it's "a little easier..."; the point is that it works better on one than the other, the Xbox One to be exact. Xbox One's architectural design is built for tiled rendering and, surprisingly, for cloud computing. The PS4 is simply built for raw power, and it seems as if that was Sony's only answer, which Microsoft could have pulled off if that was the only option.

So it seems like Microsoft did its homework on all things graphics and built a great console. This technology, along with the cloud and some interesting IPs, is probably the main reason I'm getting an Xbox One.

#38 Edited by AlexGlass (688 posts) -

@jerkdanerd_ said:

It's not so much that it's "a little easier..."; the point is that it works better on one than the other, the Xbox One to be exact. Xbox One's architectural design is built for tiled rendering and, surprisingly, for cloud computing. The PS4 is simply built for raw power, and it seems as if that was Sony's only answer, which Microsoft could have pulled off if that was the only option.

So it seems like Microsoft did its homework on all things graphics and built a great console. This technology, along with the cloud and some interesting IPs, is probably the main reason I'm getting an Xbox One.

Interestingly, there's an interview with HipHopGamer and Chris Doran regarding the X1 and PS4, and apparently both consoles start to chug when using 5-6GB of RAM at 60fps. So developers have no trouble filling up that RAM, and the consoles haven't even launched. So yeah, MS did do its homework, and just like with the 360, they're using the eSRAM to address some of the most common bottlenecks in game development.

Dr. Chris Doran reveals that the PlayStation 4 and Xbox One RAM situation still isn't finalized on either system, but CURRENTLY when you hit the 5 or 6 gig mark you start to receive a lot of pull-back per frame, meaning the full 8 gigs is not available for use when it comes to development.

http://www.youtube.com/watch?v=UCU_rUK-Iks&feature=youtu.be

It would be interesting to know the impact MS's DX 11.2 will have on their streaming texture tech. If Epic decides to marry its streaming tech with MS's hardware-based implementation of partial resident textures, not only should it save a lot of that RAM, bandwidth and CPU, but it could probably fix some of the issues that plagued the Unreal 3 engine.

If anyone remembers playing Unreal engine games like Borderlands or Bioshock Infinite, texture pop-in due to streaming textures was a common annoyance. Hopefully hardware-based partial resident textures will not only help alleviate RAM and CPU usage but also fix this. With it being hardware-based, one would expect much faster and more solid implementations.
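The usual way streamers mask pop-in is to serve coarse mips first, so something plausible is always on screen while the detailed tiles stream in. A toy sketch of that priority scheme (my own illustration, not Epic's actual streaming code):

```python
import heapq

# Toy streaming queue: requests are served coarsest-mip-first (higher mip
# level = lower resolution), so a blurry-but-present tile always lands
# before its sharp replacement. This ordering is what masks pop-in.

def stream_order(requests):
    """requests: list of (mip_level, tile_id); higher mip = coarser."""
    heap = [(-mip, tile) for mip, tile in requests]  # negate: coarse first
    heapq.heapify(heap)
    order = []
    while heap:
        mip, tile = heapq.heappop(heap)
        order.append((-mip, tile))
    return order

reqs = [(0, "road_near"), (3, "road_far"), (1, "wall")]
print(stream_order(reqs))  # the coarse mip-3 request is served first
```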

I don't believe Epic has implemented this in their engine yet, nor have any of the launch games like Battlefield 4.

#39 Posted by flippyandnod (349 posts) -

The memory savings on the given demo are not representative of the savings which will occur in the real world. The reason is that developers are already careful not to load super high resolution maps for textures that are far out in the distance and that not every scene rendered has a single object that is both very close to you on the left side of the screen and far from you on the right (note both demos do this).

In short, you will only reduce the amount of texture memory used to render a scene if you assume that you did it in the dumbest way possible in the bad case and if there are surfaces that are simultaneously very close to the eye in one part of the screen and far away in another. If either of these aren't true, then your savings will drop precipitously.

In short, you're saying the same stuff people said when downloadable games were limited to 50MB on 360. People said that it was okay, because 360 had a new super-duper texture compression (procedural texturing, see here) that was going to make 50MB more than enough.

It was never true, but those staunch defenders just didn't know how procedural textures worked anyway, so they assumed they were going to fix everything. It wasn't true then, and it isn't now.

Most textures on Xbox One will be stored in main memory, and all that entails.

It'll be interesting to see if this does affect pop-in on UE or such. The problem with it isn't something that can be fixed with DMA or rendering speedups, but simply that the game has to get the textures in from disk quicker. This will require some texture loading strategy improvements on the part of games, more so than any kinds of changes in the hardware.

Just to ask, you did notice that the listed transfer rates for compressed data on Xbox One are awful, right? It says the LZ decompressor has a bandwidth of only about 200MB/sec. This compares to a regular transfer done by the CPU or GPU in the multiple gigabytes per second. I don't think the LZ decompressor is going to be useful for textures or anything in RAM, maybe more for stuff coming over the net or off the Blu-ray drive (maybe the hard drive in a pinch).
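The arithmetic behind that point, using the 200MB/sec figure as stated and a ballpark assumption of my own for the raw copy rate:

```python
# Rough arithmetic: at ~200 MB/s the LZ engine keeps up with optical-disc
# or network rates, but is two orders of magnitude slower than a plain
# memory copy (the ~20 GB/s figure here is an assumed ballpark, not a spec).

def seconds_to_move(megabytes, mb_per_sec):
    return megabytes / mb_per_sec

asset_mb = 512
lz = seconds_to_move(asset_mb, 200)        # hardware LZ decompressor
copy = seconds_to_move(asset_mb, 20_000)   # raw GPU/DMA copy
print(f"LZ: {lz:.2f}s, raw copy: {copy:.3f}s")
```

So a 512MB chunk takes over two and a half seconds through the decompressor but a fraction of a second as a raw copy, which supports the "use it for disc/network data, not hot RAM data" reading.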

#40 Edited by AlexGlass (688 posts) -

@flippyandnod said:

The memory savings on the given demo are not representative of the savings which will occur in the real world. The reason is that developers are already careful not to load super high resolution maps for textures that are far out in the distance and that not every scene rendered has a single object that is both very close to you on the left side of the screen and far from you on the right (note both demos do this).

In short, you will only reduce the amount of texture memory used to render a scene if you assume that you did it in the dumbest way possible in the bad case and if there are surfaces that are simultaneously very close to the eye in one part of the screen and far away in another. If either of these aren't true, then your savings will drop precipitously.

In short, you're saying the same stuff people said when downloadable games were limited to 50MB on 360. People said that it was okay, because 360 had a new super-duper texture compression (procedural texturing, see here) that was going to make 50MB more than enough.

It was never true, but those staunch defenders just didn't know how procedural textures worked anyway, so they assumed they were going to fix everything. It wasn't true then, and it isn't now.

Most textures on Xbox One will be stored in main memory, and all that entails.

It'll be interesting to see if this does affect pop-in on UE or such. The problem with it isn't something that can be fixed with DMA or rendering speedups, but simply that the game has to get the textures in from disk quicker. This will require some texture loading strategy improvements on the part of games, more so than any kinds of changes in the hardware.

Just to ask, you did notice that the listed transfer rates for compressed data on Xbox One are awful, right? It says the LZ decompressor has a bandwidth of only about 200MB/sec. This compares to a regular transfer done by the CPU or GPU in the multiple gigabytes per second. I don't think the LZ decompressor is going to be useful for textures or anything in RAM, maybe more for stuff coming over the net or off the Blu-ray drive (maybe the hard drive in a pinch).

Nah.

This has nothing to do with your correlation and I'm not saying anything along the lines of what you think I'm saying.

You've just labeled people and studios like John Carmack, Epic, Crytek and many, many others as developers who apparently use textures in the dumbest possible way.

And lastly, procedural textures have little to do with texture compression in the manner you're referring to, though they do save a lot of RAM. There's not much to compress, and they really aren't going to be much use for tiling unless they are converted to static textures. A procedural texture is an algorithm, a mathematical formula, a string of code, not a picture or a texture to be compressed. You're misinformed. The texture you see created through procedural generation came from a line of code calculated by your CPU on the fly, in real time.

So it's true that they take up a microscopic amount of RAM compared to regular textures, since they don't physically exist in RAM, and they will be a big part of next-generation gaming now that there's hopefully enough CPU and GPU compute power to run the algorithms. That, along with middleware such as Substance, which most of the major studios from EA to Ubisoft to MS Studios have purchased, will pretty much guarantee their inevitable takeover of a large portion of texture usage in video games. (That would be Allegorithmic's Substance, the company participating in the article you linked to.)
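To illustrate the point that a procedural texture is just a function: this toy checkerboard generator (purely my own example) is a few bytes of code, yet it can stand in for megabytes of stored texels.

```python
# A tiny procedural texture: the "texture" is this function. Texels exist
# only at the moment they are evaluated; nothing is stored in RAM.

def checker_texel(x, y, tile=32):
    """Procedural checkerboard: greyscale value for texel (x, y)."""
    return 255 if ((x // tile) + (y // tile)) % 2 == 0 else 0

# Storing a 4096x4096 greyscale texture would cost ~16 MiB; generating the
# same pattern on demand costs only the function itself.
row = [checker_texel(x, 0) for x in range(64)]
print(row[:4], row[32:36])  # first tile is white (255), second is black (0)
```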

However you came to understand the argument that XBLA games would be fine despite a 50MB limit because of procedural textures... you either read it wrong or you need to go back and correct whoever told you that. Tell them they're full of it. I don't even know how it could have been twisted in the manner you're describing. The only way to take advantage of procedural textures' RAM savings is to create your game using a lot of procedural textures. If the original game didn't use them, it just doesn't have them. I can't think of many XBLA titles that fall into that category, or why anyone would claim that XBLA titles, especially older games being re-released, would ever benefit from procedural textures when they were never made with them.

But that's not what was being discussed in the opening post.

#41 Posted by flippyandnod (349 posts) -

You've just labeled people and studios like John Carmack, Epic and Crytek and many, many other studios, as developers who apparently use textures in the dumbest possible way.

No I haven't, you have. I said that you can assume most developers (such as those you list) use textures well. And thus they won't get a 95% reduction in texture size from this optimization. If you think a game is going to save 95% of texture space, then you are also saying the developer who wrote it is a poor developer, because that'd be the only way there would be this much savings to make.

Since I assume that most developers aren't that lousy, I know the actual memory savings will be much smaller than portrayed here.

As to the procedural section, I appreciate that you understand procedural textures. However, I think you are showing an inability to understand larger concepts. The concept in question being my analogy. I didn't say tiled textures work like procedural textures. I said that you are showing the same sort of overestimation of the savings from tiled textures that others showed for procedural textures back in the RoboBlitz days.

Many people then didn't understand how procedural textures worked, so they thought that procedural textures would produce massive savings in texture sizes (on disk, not in RAM) in all games. But as you understand, this isn't the case. If the game isn't suited to employ procedural textures, it would not realize those savings.

And what I'm saying is it's the same case here. If a developer isn't dumb enough to load the highest resolution textures for patches in the distance in the first place, then this technique cannot save memory in its more automatic fashion by not loading them.
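The distance-based mip choice described here can be sketched as a simple rule of thumb. This is my own toy model (one mip drop per doubling of distance), not any engine's actual heuristic:

```python
import math

# Toy distance-based mip selection: each doubling of distance halves the
# on-screen texel density, so we can drop one mip level per doubling.

def mip_for_distance(distance, base_distance=1.0, max_mip=10):
    """Pick a mip level for a surface patch at the given distance."""
    if distance <= base_distance:
        return 0  # close up: full-resolution mip
    return min(max_mip, int(math.log2(distance / base_distance)))

print([mip_for_distance(d) for d in (1, 2, 4, 50)])  # [0, 1, 2, 5]
```

A smart software streamer already does something like this per object; the hardware tiled-resource path just applies the same idea automatically per 64KB tile.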

And as to suggesting I correct those people, I did. But they didn't arrive at their conclusions in the first place due to factual information, so my countering their assumptions produced no result in their final determination. They continued to delude themselves about how no future downloadable game would need big textures because procedural textures were going to fix everything. And this continued for the "true believers" until the 50MB limit was lifted. Much like the people who said 1080P was not important (including MS own statement that "1080P doesn't matter this generation") believed that until 360 got 1080P output. And for that matter how PS3 "true believers" frequently said that in-game XMB (menus) wasn't important until PS3 got it.

For the record, even though the amount of textures which can be held in eSRAM on Xbox One will be rather limited, I just don't think it'll really matter. The system has good bandwidth to all RAM. Few games come close to the limit of the performance of systems; they only optimize until they hit their fps goals anyway. So developers will have to put in a few more days of optimization on their algorithms for Xbox One at the end of the product cycle. They'll do so and everything will virtually always come out fine.

It seems like a lot of excitement over nothing to me. I don't play bandwidth, I play games. And the games will be great on Xbox One.

#42 Edited by AlexGlass (688 posts) -

@flippyandnod said:

You've just labeled people and studios like John Carmack, Epic and Crytek and many, many other studios, as developers who apparently use textures in the dumbest possible way.

No I haven't, you have. I said that you can assume most developers (such as those you list) use textures well. And thus they won't get a 95% reduction in texture size from this optimization. If you think a game is going to save 95% of texture space, then you are also saying the developer who wrote it is a poor developer, because that'd be the only way there would be this much savings to make.

Since I assume that most developers aren't that lousy, I know the actual memory savings will be much smaller than portrayed here.

As to the procedural section, I appreciate that you understand procedural textures. However, I think you are showing an inability to understand larger concepts. The concept in question being my analogy. I didn't say tiled textures work like procedural textures. I said that you are showing the same sort of overestimation of the savings from tiled textures that others showed for procedural textures back in the RoboBlitz days.

Many people then didn't understand how procedural textures worked, so they thought that procedural textures would produce massive savings in texture sizes (on disk, not in RAM) in all games. But as you understand, this isn't the case. If the game isn't suited to employ procedural textures, it would not realize those savings.

And what I'm saying is it's the same case here. If a developer isn't dumb enough to load the highest resolution textures for patches in the distance in the first place, then this technique cannot save memory in its more automatic fashion by not loading them.

And as to suggesting I correct those people, I did. But they didn't arrive at their conclusions in the first place due to factual information, so my countering their assumptions produced no result in their final determination. They continued to delude themselves about how no future downloadable game would need big textures because procedural textures were going to fix everything. And this continued for the "true believers" until the 50MB limit was lifted. Much like the people who said 1080P was not important (including MS own statement that "1080P doesn't matter this generation") believed that until 360 got 1080P output. And for that matter how PS3 "true believers" frequently said that in-game XMB (menus) wasn't important until PS3 got it.

For the record, even though the amount of textures which can be held in eSRAM on Xbox One will be rather limited, I just don't think it'll really matter. The system has good bandwidth to all RAM. Few games come close to the limit of the performance of systems; they only optimize until they hit their fps goals anyway. So developers will have to put in a few more days of optimization on their algorithms for Xbox One at the end of the product cycle. They'll do so and everything will virtually always come out fine.

It seems like a lot of excitement over nothing to me. I don't play bandwidth, I play games. And the games will be great on Xbox One.

I understood very well what you are describing. You're suggesting level design is the alternative solution for texture compression or saving RAM. And that developers who design levels properly would not benefit from this, as opposed to taking advantage of a free software and hardware technique that frees up level design. How in the world did you draw the conclusion that tiled resources are not as beneficial to developers with intelligent level design?

I was trying to give you an out and let you think that through more. If you think the people who made this apparent correlation to you, about procedural textures being a reason XBLA games didn't have to worry about a 50MB limit, are silly, imagine how silly it sounds to me when you're making the argument that only developers who are poor level designers would see major benefits from tiled resources.

You may or may not have understood the concept of tiled textures. But you clearly don't understand how games are currently being textured by developers and what limitations developers themselves have to deal with. Judging by your bolded statement. And you clearly don't understand that the ability to display GBs of textures by only using MBs of RAM would offer the same advantage to everyone. And that level design is not a solution and is no substitute for tiled resources when it comes to efficiently using RAM.

And yes you did call guys like John Carmack dumb, because prior to tiled resources and texture streaming, that's exactly what they had to do. There was no other way but to store the entire texture in RAM and display that entire texture, at its native resolution, all at once, irrespective of level design. That's why he wrote his own engine that took advantage of partial resident textures in Rage. And out of all developers, Doom creator John Carmack would have had little issue putting walls up everywhere he had a texture seam to mask those limitations, and yet he's one of the first who went out and wrote his own software and engine support for it.

Imagine: if the creator of Doom saw enough benefit in megatexturing to go out and write his own engine, what would the guys from Avalanche, Rockstar, Bethesda, Bungie, and basically any major developer of last generation whose games don't take place in a cubicle have to say about this? PRT would absolutely benefit your biggest, brightest, best developers in the games industry, the ones who make your biggest, best games. From GTA to Skyrim to Just Cause to Battlefield 4, they would all see serious benefits from partial resident textures. In fact a lot of them are already using some of their own implementations. Guys like Epic have developed their own software solutions and are even researching their use in completely different areas.

Now that you are a bit more informed, consider the fact that the OP was originally about moving from software-based tiled resources to hardware-based, and that the benefits described came from removing those limitations by going to hardware. Can you understand now how every single one of those guys is probably extremely grateful for something like this and very excited to get their hands on it?

#43 Posted by flippyandnod (349 posts) -

I understood very well what you are describing. You're suggesting level design is the alternative solution for texture compression or saving RAM.

It was clear before that you didn't understand what I was describing by your incorrect assertion I was saying prominent developers are bad developers. And it's even more clear now that you read my text which describes loading mip maps intelligently based upon distances to objects and say I'm talking about redesigning levels.

If you think the people who made this apparent correlation to you about procedural textures being a reason for XBLA not having to worry about 50mb of RAM are silly

Not 50MB of RAM. I was referring to the 50MB size limitation (file size) on XBLA games. Not RAM usage. I explained this twice, and you still didn't get it.

And you clearly don't understand that the ability to display GBs of textures by only using MBs of RAM would offer the same advantage to everyone.

It would not. Because good developers are already doing what is described here. They just aren't getting it automatically from DirectX or hardware. So it won't offer any advantage to them.

And yes you did call guys like John Carmack dumb, because prior to tiled resources and texture streaming, that's exactly what they had to do. There was no other way but to store the entire texture in RAM and display that entire texture, at its native resolution, all at once, irrespective of level design.

You look like a complete idiot trying to tell someone else what they were saying. Stick to what you know, which is your position, and stop trying to tell me mine. As to there being no other way but to store the entire texture in RAM, this was not true. It wasn't true then, it's not true now. You can tessellate the surface and use textures that tile the surface in software. You then decide which levels of resolution to load for each mipmap.

In fact a lot of them are already using some of their own implementations. Guys like Epic have developed their own software solutions and are even researching their use in completely different areas.

It's very odd you mention this yourself while denying it when I say it. This was one of the two reasons I said the stated amount of memory savings would not be realized by good developers. You deny that and then try to throw it in my face lower down about how I should know this is done. Bizarre.

Can you understand now how every single one of those guys is probably extremely grateful for something like this and very excited to get their hands on it?

No, I really can't because the good developers are already doing this in software. They designed their engines to do it and those who license the engines get the benefits too. It happening in DirectX or hardware doesn't add anything for them because they can get the same results with even less work by not modifying their engine at all!

Now you say the post was about moving from software techniques to hardware. If that's the case, you did an awful job explaining yourself, because you repeatedly explain how this technology will allow developers to do in only 32MB of eSRAM what required 6GB before.

Now that I see you explain you were saying the same thing I am, which is that MS now offers techniques similar to those already being used by developers in their DX 11.2/Xbox One package, I certainly cannot disagree. With this new DX 11.2/Xbox One hardware, developers will be able to do in 32MB of video RAM (eSRAM in this case) what they could have done with 32MB of video RAM (eSRAM in this case) before. And since HD games really seem to need more than 32MB of video RAM, that means games are not going to be fitting all their textures into 32MB of eSRAM on Xbox One.

#44 Edited by AlexGlass (688 posts) -

I was trying to be nice to you and not embarrass you but it seems you had to take it to the next level so...

@flippyandnod said: It was clear before that you didn't understand what I was describing by your incorrect assertion I was saying prominent developers are bad developers. And it's even more clear now that you read my text which describes loading mip maps intelligently based upon distances to objects and say I'm talking about redesigning levels.

What the fuck does loading mipmaps intelligently have to do with HARDWARE-BASED TILED TEXTURING? How the fuck is that an alternative solution? Mipmap levels have to do with scalability. Are you that dense, or do you always get yourself into arguments over terminology you don't understand? You think what you are saying now is any less preposterous and ridiculous? You still haven't grasped that even with different mipmap levels you are still loading entire textures into RAM. As opposed to tiles, in a tile pool, reserved in hardware. Regardless of the fact that developers are using different mipmap levels, they're still storing the entire texture at its native resolution in RAM, and they're also pre-computing the mipmap levels for the entire texture, which also have to be stored in RAM. Mipmapping can actually add to the amount of RAM you use. It doesn't relieve it!
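That last claim is easy to verify with some quick arithmetic: a full mip chain converges to 4/3 of the base level, so mipmapping adds roughly a third more RAM on top of the base texture rather than saving any.

```python
# Checking the claim above: summing every mip level of a texture.
# Each level is a quarter the size of the previous one, so the chain
# approaches the geometric series 1 + 1/4 + 1/16 + ... = 4/3 of the base.

def mip_chain_bytes(width, height, bytes_per_texel):
    total = 0
    while width >= 1 and height >= 1:
        total += width * height * bytes_per_texel
        width //= 2
        height //= 2
    return total

base = 1024 * 1024 * 4                 # 4 MiB base level (RGBA)
full = mip_chain_bytes(1024, 1024, 4)  # base plus all mips
print(f"overhead: {100 * (full - base) / base:.1f}%")  # ~33.3%
```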

@flippyandnod said:Not 50MB of RAM. I was referring to the 50MB size limitation (file size) on XBLA games. Not RAM usage. I explained this twice, and you still didn't get it.

Yes I understood very well. Don't get hung up on the fact I misused the word RAM instead of disk size. It should be pretty clear I got it just fine in my original reply to this nonsense. If you want to call that a win for yourself, pat yourself on the back. That's about all you got.

@flippyandnod said:It would not. Because good developers are already doing what is described here. They just aren't getting it automatically from DirectX or hardware. So it won't offer any advantage to them.

Let me get this right. In a thread describing the advantages of hardware-based tiled texturing over software-based implementations, you are STILL trying to make this stupid argument? Everybody is going to see advantages from this. Very few developers even attempted software-based tiled texturing last generation. It wasn't practical.

@flippyandnod said: You look like a complete idiot trying to tell someone else what they were saying. Stick to what you know, which is your position, and stop trying to tell me mine. As to there being no other way but to store the entire texture in RAM, this was not true. It wasn't true then, it's not true now. You can tessellate the surface and use textures that tile the surface in software. You then decide which levels of resolution to load for each mipmap.

Not only have you resorted to direct insults, but you went and dug yourself into a new hole by bringing up yet another technique you don't understand. Tessellation refers to subdividing geometry into finer primitives, often driven by displacement maps stored as textures. And while tessellation is distance-dependent, that still has nothing - NOTHING - to do with our discussion. Just like your other attempts, it isn't a fucking alternative to tiling textures. It just isn't. If I look like a complete idiot, it's because I'm spitting your own words back at you.

@flippyandnod said: It's very odd you mention this yourself while denying it when I say it. This was one of the two reasons I said the stated amount of memory savings would not be realized by good developers. You deny that and then try to throw it in my face lower down about how I should know this is done. Bizarre.

Yes, and every time you reiterate that, it makes you sound dumber and dumber to everyone who takes the time to read it. Because now you've dug your heels in even harder after I tried to explain to you how stupid that sounds.

@flippyandnod said: No, I really can't because the good developers are already doing this in software. They designed their engines to do it and those who license the engines get the benefits too. It happening in DirectX or hardware doesn't add anything for them because they can get the same results with even less work by not modifying their engine at all!

Now you say the post was about moving from software techniques to hardware. If that's the case, you did an awful job explaining yourself, because you repeatedly explain how this technology will allow developers to do in only 32MB of eSRAM what required 6GB before.

Now that I see you explain that you were saying the same thing I am (that MS now offers techniques similar to those already being used by developers in their DX 11.2/Xbox One package), I certainly cannot disagree. With this new DX 11.2/Xbox One hardware, developers will be able to do in 32MB of video RAM (eSRAM in this case) what they could have done with 32MB of video RAM (eSRAM in this case) before. And since HD games really seem to need more than 32MB of video RAM, that means games are not going to be fitting all their textures into 32MB of eSRAM on Xbox One.

Thank you. You just proved that however smart you think you are, you wrote entire books and didn't even understand the basic concept being talked about in the opening post of this thread. Something your average forumer already got in the replies above you. Any developer that uses a software implementation of this would immediately see benefits from having dedicated freaking hardware for it. Not just to the technique itself: since they no longer have to emulate it, all the resources needed to emulate it in software are freed up as well.

You are not explaining the same thing I am. In fact you have proven you have little to no knowledge on the matter. You are confused, you mix terms and conflate terminology you believe sounds similar, and you are continuing to display your half-assed understanding of graphics techniques, which gets exposed every time you try to downplay something unique to X1's capabilities. In the process of deluding yourself into thinking you're on to something, you call me an idiot. Next time you feel like going to the length of emphasizing your points with an insult, I suggest you make sure you got your facts down pat, locked tight, took the time to read the opening post and subject discussed, did your homework, and you actually know what the hell you are talking about.

Oh and one more thing, you may want to look for some FACTUAL evidence of just how many developers actually used software based tilable textures last generation. Come back to me, when you figure this one out and have some proof. And please try telling them you believe they are not going to be able to realize the benefits of getting dedicated hardware to do this, as a bad developer would. I'm sure I'll be able to confirm it, as the sound of their laughter will be heard all the way from where I'm sitting.

#45 Posted by TruthTellah (7690 posts) -
#46 Edited by AlexGlass (688 posts) -

Yeah, I probably could have made my point with a gif, but I think he's actually interested in learning about some of these things despite his attempts to downplay X1 implementations. I regret raising the tone of the discussion, but it's especially unnerving when someone uses insults as emphasis to cover up that they're in over their head.

Having said that, I'm sorry flippyandnod if I was too harsh. But my suggestion stands that you really need to do some more research on what mipmapping is used for and what partial resident textures are, as well as the extent to which software implementations of this tech were used last generation, and actually watch the video in the OP to understand the advantages of moving to hardware.

#47 Edited by leebmx (1876 posts) -

@leftie68 said:

Seriously, this means NOTHING to everybody but the hardcore technical hobbyists. I am not going to pretend I know what the hell this means (and I hope it means great things for video games PERIOD), but don't blame others for taking anything you post with a grain of salt. I am not saying you are wrong, but you activated your profile on June 15 (2 days after E3) and 90% of your posts have been nothing but Xbox praising opinion pieces (you have to admit that is unbelievably fishy).

Hey, we get it, you are excited for the Xbox One... great man, good for you, but if you plan on continuing to make these posts, don't get your underwear in a bunch every time someone posts something contrary to your opinion.

Yeah, I asked him whether he worked for Microsoft or Xbox and he refused to answer my question as well. He also avoided all the other points I made in the thread. The depth of the technical information he claims to show and the number of Xbox-praising threads do look really suspect; add to this when he joined, and no avatar (that always makes me wary for some reason), and it looks odd.

Although if he does work for Microsoft and is trying to convert people to the Xbox, being rude and patronising to anyone who has a different opinion is probably not helping.

EDIT: Also from @eujin Also: Alexglass, did you post this exact same thread on IGN? http://www.ign.com/boards/threads/x1-esram-directx-11-2-from-32mb-to-6gb-worth-of-textures.453263349/

hmmmmm....?

#48 Edited by AlexGlass (688 posts) -

@leebmx said:

What points? I don't work for MS. I post on multiple websites. I'm sure I'm not the only one.

I'm "rude" or sometimes respond sarcastically to those who make accusations of me working for MS, who think they know what they're talking about while claiming I don't without actually backing it up, or who make direct insults because they're losing an argument.

You expect graciousness?

PS: I'm flattered though.

#49 Edited by flippyandnod (349 posts) -

Irrelevant of the fact developers are using different mipmap levels, they're still storing the entire texture in its native resolution in RAM, and they're also pre-computing the mipmap levels for the entire texture. And those also have to be stored in RAM. Mipmapping can actually add to the amount of RAM you use. It doesn't relieve it!

Absolutely mipmapping can add to the amount of RAM you use. But if you load only the levels you need for a given scene, it can reduce it too. For the far away parts of the scene you don't load the highest resolution version. So no, you aren't storing the entire texture at its highest available resolution in RAM (especially video RAM). Not if you're a good developer. Every engine already has code to determine how high a texture resolution to load in case it's on a PC with a small amount of video RAM; I assure you it's not at all difficult to modify that code further to decide whether to load the next higher resolution of a texture or not.
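A sketch of what that kind of mip-level streaming looks like. The distance-to-mip rule and the texture dimensions here are illustrative assumptions (real engines use screen-space texel density), but the memory comparison is the point:

```python
# Sketch of mip-level streaming: load only the mip levels a scene needs
# rather than the whole chain. The distance->mip rule is a hypothetical
# simplification: one coarser mip per doubling of distance.
import math

def needed_mip(distance):
    """Pick a mip level for a surface at this distance (illustrative rule)."""
    return max(0, int(math.log2(max(distance, 1.0))))

def bytes_for_levels(width, height, levels, bytes_per_texel=4):
    """Memory for only the mip levels in `levels` (a set of level indices)."""
    total = 0
    for lv in levels:
        w, h = max(width >> lv, 1), max(height >> lv, 1)
        total += w * h * bytes_per_texel
    return total

# A scene where this texture is only ever sampled at 8, 20, and 50 units away:
levels = {needed_mip(d) for d in (8, 20, 50)}       # {3, 4, 5}
full = bytes_for_levels(4096, 4096, set(range(13))) # whole 4096x4096 chain
streamed = bytes_for_levels(4096, 4096, levels)     # only the levels in view
print(full // streamed)                             # large reduction factor
```

If nothing in the scene needs mip 0, the full-resolution level (which dominates the footprint) never has to be loaded at all.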

So good developers will break up their textures into smaller textures (call them tiles if you wish) and load the textures they need: up to a high resolution texture for the near stuff, and not as much for the far stuff. If a developer does this, then the DX 11.2 solution, which does a similar thing but without the developer having to put it into their engine, gains far less memory savings.
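For what it's worth, the software version of this is just bookkeeping over which tiles the camera can actually sample. A minimal sketch (the tile size, texture size, and sampled regions are made-up illustrative numbers):

```python
# Sketch of software texture tiling: split a big texture into fixed-size
# tiles and mark resident only the tiles that sampled regions touch.
TILE = 128  # texels per tile edge (illustrative choice)

def visible_tiles(sampled_regions):
    """Return the set of (tx, ty) tile coords touched by sampled texel rects."""
    resident = set()
    for (x0, y0, x1, y1) in sampled_regions:  # inclusive texel rectangles
        for ty in range(y0 // TILE, y1 // TILE + 1):
            for tx in range(x0 // TILE, x1 // TILE + 1):
                resident.add((tx, ty))
    return resident

# The camera only samples two small regions of a 4096x4096 texture:
regions = [(0, 0, 300, 300), (2000, 2000, 2100, 2100)]
resident = visible_tiles(regions)
total_tiles = (4096 // TILE) ** 2  # 1024 tiles in the whole texture
print(len(resident), "of", total_tiles, "tiles resident")
```

Only a handful of the 1024 tiles end up in memory. The hardware feature does this residency tracking (plus the page-table mapping) for you instead of the engine doing it by hand.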

Let me get this right. In a post describing the advantages of hardware-based tiled texturing over software-based implementations, you are STILL trying to make this stupid argument? Everybody is going to see advantages from this. Very few developers even attempted software-based tiled texturing last generation. It wasn't practical.

Nonsense. Rarely are large surfaces even covered with a single texture as you see here. Most objects are naturally broken up and if they aren't naturally broken up, then they will break it up anyway. They don't call it tiling, but it's the same thing.

Not only have you resorted to direct insults, but you went and dug yourself into a new hole by bringing up another technique you don't understand. Tessellation refers to the process of creating 3D geometry from textures. And while tessellation is distance dependent, that still has nothing - NOTHING - to do with our discussion. Just like your other attempts, it isn't a fucking alternative to tiling textures. It just isn't. If I look like a complete idiot, it's because I'm spitting back your own words at you.

That's not a direct insult. A direct insult would be "you are an idiot". I am referring to the negative impression you leave upon others, not an attribute of you. It was a veiled insult though, no question about that.

But more importantly: no, tessellation is not the process of creating 3D geometry from textures! Geometry is geometry and textures are how you paint them. You don't make geometry from textures.

Learn something:

http://en.wikipedia.org/wiki/Tessellation_(computer_graphics)

If you can't be bothered to read it or can't understand it, at least note the word "texture" doesn't even appear on the page.
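A minimal illustration of what tessellation actually does, i.e. subdivide geometry, with no texture data anywhere in sight. This is a toy one-level midpoint subdivision, not any particular hardware tessellator's scheme:

```python
# One level of midpoint subdivision: split each edge of a triangle at its
# midpoint, producing 4 smaller triangles. Pure vertex math, no textures.

def midpoint(a, b):
    return tuple((ai + bi) / 2 for ai, bi in zip(a, b))

def tessellate(tri):
    """1 triangle in -> 4 triangles out."""
    a, b, c = tri
    ab, bc, ca = midpoint(a, b), midpoint(b, c), midpoint(c, a)
    return [(a, ab, ca), (ab, b, bc), (ca, bc, c), (ab, bc, ca)]

tris = tessellate(((0.0, 0.0), (1.0, 0.0), (0.0, 1.0)))
print(len(tris))  # 4
```

Repeat the process per level of detail and you get distance-dependent geometric density, which is the actual job of a tessellator.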

Any developer that uses software implementation of this, would immediately see benefits from having dedicated freaking hardware for it. Not just to the technique itself, but considering they no longer have to emulate it it relieves all the resources necessary to emulate this in software.

Again, no, you don't get it. Maybe you aren't a developer? If you go up to a developer and say "Hey, you know that thing you already have working? Now you can rewrite it again, because there is hardware for it on one platform. Oh, and don't forget to keep the old way working too, because 2/3rds of the platforms you'll be shipping on won't have the new feature." This is not a benefit to a developer. If they already have it working on all 3 platforms, they will simply ignore the new feature because it's more work to use it.

The only case they would find using the new hardware feature a boon is if they couldn't make their software work because the hardware platform was too slow. They likely would use the new feature then, but they won't be thrilled about how they were cornered into doing it by hardware that didn't perform well enough to do it the way the other two platforms could manage it.

That's how developers work. Like I said before, they only optimize to a goal. If it has a high enough frame rate already, they will not go and seek out more work for themselves to do, especially at the end of the project where frame rate issues become apparent. And when work is thrust upon them because they can't meet a goal they do not laugh about how great it is to have more work to do to meet ship. They might do so if they were on a project which somehow found itself with tons of extra time in the schedule, but these sort of projects are about as common as unicorns.

Next time you feel like going to the length of emphasizing your points with an insult, I suggest you make sure you got your facts down pat, locked tight, took the time to read the opening post and subject discussed, did your homework, and you actually know what the hell you are talking about.

Pretty hilarious you posted this in the same message where you attacked me without having your facts about tessellation down pat.

Oh and one more thing, you may want to look for some FACTUAL evidence of just how many developers actually used software based tilable textures last generation. Come back to me, when you figure this one out and have some proof.

I'm not your monkey. When you baldly assert that you are going to declare that you will consider yourself correct unless I prove to you otherwise, it changes nothing for anyone but you. I am neither required nor enticed to act. If you want to educate yourself so as to improve your knowledge or to improve the impression you leave upon others it is up to you to do it.

#50 Edited by Zaccheus (1771 posts) -

Oh and one more thing, you may want to look for some FACTUAL evidence of just how many developers actually used software based tilable textures last generation. Come back to me, when you figure this one out and have some proof.

I'm not your monkey. When you baldly assert that you are going to declare that you will consider yourself correct unless I prove to you otherwise, it changes nothing for anyone but you. I am neither required nor enticed to act. If you want to educate yourself so as to improve your knowledge or to improve the impression you leave upon others it is up to you to do it.

I don't even know what you people are talking about, but this sentiment really hit me. "I will assert this as fact based upon nothing and it's your fucking job to prove me wrong!" So typical internet douchebaggery.