TRega123's forum posts

  • 20 results
  • 1
  • 2
#1 Edited by TRega123 (29 posts) -

@Hunkulese said:

@TRega123 You'd think a software developer would have a better understanding of how RAM works. 4GB is more than enough for a dedicated gaming console and will be for the foreseeable future. 8 would be overkill and 16 would be absurd. It doesn't work exactly like this, but 4GB in a console is almost equivalent to 16GB in a PC. Look at what's out now. Consoles have 512MB and perform as well as, if not better than, a PC with 2GB. All the hardware and numbers you're familiar with for PCs mean next to nothing when dealing with a console.

Yeah... we software developers have no idea what we're talking about. Apparently neither does the CEO of Crytek, who says he wants 32GB of RAM in the next-gen consoles. Since we software developers clearly have no idea what we're talking about, please ignore us and trust random people on internet forums instead. After all, they know how to open their task manager and read their memory utilization!

If you really want to know how memory utilization works, you can read my previous post:

@TRega123 said:

@Demoskinos said:

@TRega123 Uh, because they really don't need it. Look what they have done with 512MB. It's amazing The Witcher 2 runs on the 360 at all. You also aren't running an operating system, so pretty much all of that memory can be utilized.

Game consoles run an operating system which is getting larger, acquiring more responsibilities, and running more background tasks. Years ago the operating system was the primary reason for the difference in PC and console RAM utilization, but that is no longer true. The primary reason now is the different paradigms for application memory management. On consoles, the applications (i.e. games) are still solely responsible for painstakingly managing their own memory, which causes debugging nightmares and drives up development costs. PC applications more commonly use garbage collection (code that manages memory for them), including the generational JavaScript garbage collector in your web browser, which makes rich websites like Giant Bomb possible.

Garbage collection in a nutshell (yes, I wrote this in its entirety-- I clearly know too much about how modern applications manage memory):

When a small object is allocated, it is written to a contiguous block of available memory in the heap as a generation 0 object. Background threads periodically walk the heap building a memory graph, freeing objects that are no longer reachable (some collectors combine reference counting with cycle detection for this) and marking survivors as generation 1. This is called a mark-and-sweep pass. Less frequently, a generation 1 collector walks the heap, freeing unused generation 1 objects and marking survivors as generation 2, and so on; for most garbage collectors, two generations are enough. 'Object churn' (i.e. the constant freeing and allocating of objects) causes the heap to become fragmented, and once it becomes difficult to find contiguous blocks for new allocations, the garbage collector performs heap compaction, which is a very expensive operation. To avoid heap compaction, the application must be well tuned so that most non-permanent objects die in generation 0. A well-tuned application also stays below about 80% of the total memory available, leaving room to churn objects and greatly reducing the likelihood of heap compaction.

What about when a large object is allocated? Garbage collectors commonly treat large objects differently, often giving them their own heap called the large object heap. Large objects also require a contiguous block of available memory; however, unlike the standard heap, the large object heap is not compacted. If no contiguous block is available for your large object, you are simply out of memory. This isn't as bad as it sounds, because in a well-tuned application large objects should be infrequent or interchangeable-- such as an uncompressed high-resolution texture, which can be freed and have its memory block reallocated to another texture of the same resolution and size (all uncompressed RGB textures of the same resolution occupy the same number of bytes). This is called object pooling.

If that was too long to read (or you really don't care about the details), I can sum it up in one sentence: pushing past roughly 80% memory utilization under garbage collection triggers heap compaction, which will significantly slow down your application, whether it is productivity software or a game.

#2 Edited by TRega123 (29 posts) -

@Demoskinos said:

@TRega123 Uh, because they really don't need it. Look what they have done with 512MB. It's amazing The Witcher 2 runs on the 360 at all. You also aren't running an operating system, so pretty much all of that memory can be utilized.

Game consoles run an operating system which is getting larger, acquiring more responsibilities, and running more background tasks. Years ago the operating system was the primary reason for the difference in PC and console RAM utilization, but that is no longer true. The primary reason now is the different paradigms for application memory management. On consoles, the applications (i.e. games) are still solely responsible for painstakingly managing their own memory, which causes debugging nightmares and drives up development costs. PC applications more commonly use garbage collection (code that manages memory for them), including the generational JavaScript garbage collector in your web browser, which makes rich websites like Giant Bomb possible.

Garbage collection in a nutshell (yes, I wrote this in its entirety-- I clearly know too much about how modern applications manage memory):

When a small object is allocated, it is written to a contiguous block of available memory in the heap as a generation 0 object. Background threads periodically walk the heap building a memory graph, freeing objects that are no longer reachable (some collectors combine reference counting with cycle detection for this) and marking survivors as generation 1. This is called a mark-and-sweep pass. Less frequently, a generation 1 collector walks the heap, freeing unused generation 1 objects and marking survivors as generation 2, and so on; for most garbage collectors, two generations are enough. 'Object churn' (i.e. the constant freeing and allocating of objects) causes the heap to become fragmented, and once it becomes difficult to find contiguous blocks for new allocations, the garbage collector performs heap compaction, which is a very expensive operation. To avoid heap compaction, the application must be well tuned so that most non-permanent objects die in generation 0. A well-tuned application also stays below about 80% of the total memory available, leaving room to churn objects and greatly reducing the likelihood of heap compaction.
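To make the generational machinery concrete: CPython happens to ship a generational cycle collector that you can poke at through its standard gc module. A purely illustrative sketch in Python (nothing console-specific here):

```python
import gc

# CPython's collector is generational: new objects start in generation 0,
# and survivors of a sweep are promoted to generations 1 and then 2.
print(gc.get_threshold())  # allocation thresholds that trigger each generation

keep = []
for i in range(10_000):
    temp = [i] * 8         # short-lived: freed almost immediately
    if i % 1000 == 0:
        keep.append(temp)  # still referenced: survives and gets promoted
collected = gc.collect(0)  # run only the generation-0 collector
print(gc.get_count())      # per-generation object counts after the sweep
```

Running this prints the collector's thresholds (typically (700, 10, 10)) and then the per-generation counts; a console runtime would tune numbers like these aggressively.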

What about when a large object is allocated? Garbage collectors commonly treat large objects differently, often giving them their own heap called the large object heap. Large objects also require a contiguous block of available memory; however, unlike the standard heap, the large object heap is not compacted. If no contiguous block is available for your large object, you are simply out of memory. This isn't as bad as it sounds, because in a well-tuned application large objects should be infrequent or interchangeable-- such as an uncompressed high-resolution texture, which can be freed and have its memory block reallocated to another texture of the same resolution and size (all uncompressed RGB textures of the same resolution occupy the same number of bytes). This is called object pooling.
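An object pool fits in a few lines. This is a hypothetical Python toy (a real engine would do this in C++ over raw buffers), but the principle is the same: same-sized blocks are recycled instead of freed, so fragmentation never forces a compaction:

```python
# A minimal object pool: large buffers are reused instead of freed,
# so the allocator never has to hunt for a fresh contiguous block.
class TexturePool:
    def __init__(self, width, height, channels=3):
        self.size = width * height * channels  # every texture: same byte size
        self._free = []

    def acquire(self):
        # Reuse a released buffer when possible; allocate only on a miss.
        return self._free.pop() if self._free else bytearray(self.size)

    def release(self, buf):
        self._free.append(buf)

pool = TexturePool(1024, 1024)
a = pool.acquire()
pool.release(a)
b = pool.acquire()
assert b is a  # the exact same memory block was recycled
```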

If that was too long to read (or you really don't care about the details), I can sum it up in one sentence: pushing past roughly 80% memory utilization under garbage collection triggers heap compaction, which will significantly slow down your application, whether it is productivity software or a game.

#3 Edited by TRega123 (29 posts) -

@DeathsWind said:

Using a cross compiler on the development side and shipping C++ will allow easier development without losing the speed and low hardware requirements of C++.

This is true. A lot of games already have components written in languages other than C++ (Python is popular, especially as glue code), which are then cross-compiled. That is the current state of the industry; however, I do not believe it will always be the case.

I work in the web industry writing software for one of the largest websites on the internet. At an industry conference recently, I met some of the people who work at Facebook. They cross-compile PHP to C++ (a tool called HipHop) and deploy it to their production environment using BitTorrent; however, they are now writing a PHP bytecode compiler and virtual machine (HHVM) which is more manageable and easier to develop and debug. They are migrating away from direct cross compilation and instead compiling to an intermediary. I believe virtual machines are the future of software, as their overhead is small compared to what today's processors are capable of. Moreover, virtual machines and garbage collectors have improved immensely, and in some cases are better at managing memory than the average programmer; just a few years ago, for example, the Java Virtual Machine was significantly slower than it is today. This transition will not happen overnight, but it will happen, starting at the outer edges of games (AI programming, sound programming, event programming).
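The bytecode-plus-VM shape is easy to illustrate. Here is a deliberately tiny stack-based interpreter in Python (a toy of my own invention, not anyone's actual VM); real VMs like HHVM or the JVM are this same dispatch loop scaled up with JIT compilation and garbage collection:

```python
# A toy stack-based VM: a program is a flat list of opcodes and operands
# (the "bytecode" intermediary), executed by a simple dispatch loop.
PUSH, ADD, MUL = 0, 1, 2

def run(bytecode):
    stack = []
    it = iter(bytecode)
    for op in it:
        if op == PUSH:
            stack.append(next(it))   # next item is the literal operand
        elif op == ADD:
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == MUL:
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
    return stack.pop()

# (2 + 3) * 4, compiled by hand into bytecode
program = [PUSH, 2, PUSH, 3, ADD, PUSH, 4, MUL]
print(run(program))  # 20
```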

You're already seeing garbage collection on consoles this generation with C# on the Xbox 360; however, you will see a lot more garbage collection in mainstream games next-gen.

#4 Edited by TRega123 (29 posts) -

Depends on how serious you are. If you are trying to make a serious website, I would recommend any of the below options--

ASP.NET - This is my personal favorite. The only downside is that it's from Microsoft, so it requires a Windows server to host it.

Django - Giant Bomb was built on this web framework, so you are familiar with what it is capable of. It does sadden me that they are moving away from Django, as I believe it is the web framework with the brightest future.

Ruby on Rails - Even though I don't care much for it, you cannot argue with popularity. Twitter is one of the largest websites on this framework.

#5 Edited by TRega123 (29 posts) -

@DeF said:

@TRega123 said:

@canucks23 said:

@musclerider said:

@TRega123 said:

Is there ever an instance where more memory (assuming the memory is of the same speed and quality) is not better?

When you're a company who wants to sell hardware at a reasonable price and/or margin?

/thread

Memory is cheap, and by the time these consoles are actually being manufactured, 16GB (8x2) of memory will cost about $65 versus $40 for 8GB (4x2)-- a difference of only $25. Even if my estimate is off and the difference is $35, nearly doubling the price of the memory, I would gladly pay Sony and Microsoft extra if it meant I got 16GB of RAM. Over the lifetime of the console, the price of memory will gradually decline; a common corollary of Moore's law is that it roughly halves every 18 months.

Console RAM != PC RAM

Also, closed hardware systems don't need as much RAM as a PC to run the same thing, because their hardware is a known factor. PCs need 2+ gigs just to run Windows and all the other stuff running in the background, and only then start using the RAM for games.

You have completely unreasonable and unrealistic expectations. Also, infinite RAM at some point becomes useless when the rest of the system can't do anything with it. Hardware engineering is better left to hardware engineers.

Hardware engineering is better left to hardware engineers? Great, then I'm well qualified. Here are the schematics for the integrated circuits of the first CPU I ever designed. It's an 8-bit processor capable of handling a limited MIPS instruction set. I was 19, and a sophomore in college, when I made it. Just because computer hardware is esoteric and magical to most people doesn't mean that applies to everyone you encounter. Stop making assumptions about people you don't know.

Images from left to right-- Top-down view of CPU, Arithmetic Logic Unit, ALU Integrated Circuit, Adder, Control Unit, 32-Byte Register, 8-Byte Register, 8-Bit Multiplexer
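For anyone curious what the adder in those schematics computes: it's a ripple-carry design, eight one-bit full adders chained through their carry lines. A behavioral sketch in Python (purely illustrative; obviously not the gate-level schematic itself):

```python
# One-bit full adder: sum and carry-out from two bits plus a carry-in.
def full_adder(a, b, cin):
    s = a ^ b ^ cin
    cout = (a & b) | (cin & (a ^ b))
    return s, cout

# 8-bit ripple-carry adder: chain eight full adders through the carry line.
def add8(x, y):
    result, carry = 0, 0
    for bit in range(8):
        a, b = (x >> bit) & 1, (y >> bit) & 1
        s, carry = full_adder(a, b, carry)
        result |= s << bit
    return result & 0xFF, carry  # 8-bit sum plus the final carry-out

print(add8(200, 100))  # (44, 1): 300 wraps to 44 with the carry-out set
```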

Now to respond to your message...

As a matter of fact, console RAM is the same as PC RAM. It's not as if there are special factories that manufacture RAM exclusively for consoles; the hardware is exactly the same. But I don't think that is what you meant. I think you meant that PCs utilize RAM differently from consoles, and while I would agree, I believe that difference is quickly diminishing. Consoles are beginning to have larger operating systems with more background tasks, especially as they continue to transition towards being a media center-- i.e. the one device you need in your living room. Remember when you had a GPS, calculator, handheld gaming system, phone, etc.? Now you just have a smartphone, right? The same thing is happening to consoles.

Now for the reason PCs currently require more memory than consoles. Years ago this was due to the operating system; however, consoles now have growing operating systems of their own, so that is no longer the case. Most of the reason you need more memory on a PC is a different philosophy of memory management. Consoles still require all applications to manage their own memory, while many PC applications (including your web browser's JavaScript engine, which enables most of the rich internet content you enjoy) have sophisticated garbage collectors, oftentimes generational ones. While the game loop and graphics engine will continue to be dominated by C++ and C (i.e. programming languages closer to the metal), more of a game's workload will be written in friendlier languages which are in fact garbage collected, and this different philosophy of memory management will require significantly more memory.

PS: I'm not posting this to be a jerk. I just want you to realize that not everyone on the internet is an 'armchair hardware expert.' As a matter of fact, I wouldn't even consider myself an expert. That said, I think I am a little more knowledgeable than most, considering that I've spent months, if not years, of my life designing computer circuitry.

#6 Posted by TRega123 (29 posts) -

Oh geez. I was just posting musings about the rumored next-gen specs, and somehow it turned into a flame war while I was at work. I will reply to some of the comments in this thread throughout the night, starting with the people who replied to me directly...

#7 Edited by TRega123 (29 posts) -

@demonknightinuyasha said:

I was reading something about RAM manufacturers' revenue being up but their profit being down, so while RAM is super cheap right now for you and me, that seems to be due more to price wars than anything else. This may indicate that the cost per unit makes it unrealistic if they feel it's not actually needed. In the end, the last consoles had 512MB of RAM, so the new ones having gigabytes is going to feel like light years ahead, even if high-end gaming computers are now putting in 16GB+ of RAM. At the end of the day, it was probably some dude sitting in a room crunching a bunch of numbers who decided the benefit didn't justify the cost.

Hell, my computer was pretty old, and I finally just upgraded and I didn't even go super crazy (previously had a 1.8 GHz dual core opteron and 2GB ram, just upgraded to a phenom x4 and 8 GB ram) and while my upgrade was budget as hell, it's still made a world of difference compared to before. There are also plenty of people that have a console but don't have a gaming PC, so they only have the experience of the previous console to compare to.

So in the end I think RAM is like Pie: of course there's room for more, but at some point you look at it and look at the cost and determine it's not worth it.

This is true. Of course, the Orbis and Durango specs are only rumors-- they're probably not even final, and number crunchers will determine the end results. Those results may be 16GB or 8GB, but I seriously hope they are not the rumored 4GB. That would be abysmal. Historically, consoles have never had enough memory, and the lack of it ends up limiting the potential of the console. Perhaps the time is right to eliminate the bottleneck.

#8 Posted by TRega123 (29 posts) -

@canucks23 said:

@musclerider said:

@TRega123 said:

Is there ever an instance where more memory (assuming the memory is of the same speed and quality) is not better?

When you're a company who wants to sell hardware at a reasonable price and/or margin?

/thread

Memory is cheap, and by the time these consoles are actually being manufactured, 16GB (8x2) of memory will cost about $65 versus $40 for 8GB (4x2)-- a difference of only $25. Even if my estimate is off and the difference is $35, nearly doubling the price of the memory, I would gladly pay Sony and Microsoft extra if it meant I got 16GB of RAM. Over the lifetime of the console, the price of memory will gradually decline; a common corollary of Moore's law is that it roughly halves every 18 months.

#9 Edited by TRega123 (29 posts) -

Have gaming consoles ever had enough RAM? When the 360 and PlayStation 3 launched, 512MB of memory sounded like a whole lot, and memory was expensive. Since then, memory has become one of the cheapest components in a PC, and I cannot understand why Sony and Microsoft appear to be going cheap on RAM again.

I'm sure we've all read the rumors that the next-gen consoles will have between 4 and 8GB of memory. This is bananas! While 8GB is certainly better than 4, we should have at least 16GB of memory. Why?

  • The more memory you have at your disposal, the shorter your load times will be, because you can cache assets rather than streaming them from the disk.
  • Having more memory available would reduce the cost of a game's development, as it would let developers experiment more with garbage-collected programming languages rather than painstakingly developing with the RAII idiom in C++ (or shipping memory leaks). I think the game loop and graphics engine will still be in C++; however, things like AI, events, sound, and the parts of the game which are not rendering graphics can begin moving from C++ to Python.
  • Having more memory available means you can dedicate more of it to the operating system, so consoles can do more things. Microsoft keeps touting how it wants to be the media hub in your living room; multitasking between a game and something else will be especially difficult unless there is a lot of RAM available.
  • And yes, of course I think games could be larger and better if developers had more memory to work with.
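On the first point, the cache-instead-of-stream idea is essentially an LRU cache keyed by asset path. A hypothetical Python sketch (names like AssetCache are my own invention, not any real engine's API):

```python
from collections import OrderedDict

# Keep recently used assets resident in RAM, evicting the least-recently
# used ones only when a memory budget is exceeded.
class AssetCache:
    def __init__(self, capacity_bytes):
        self.capacity = capacity_bytes
        self.used = 0
        self._cache = OrderedDict()

    def get(self, path, load_from_disk):
        if path in self._cache:
            self._cache.move_to_end(path)    # cache hit: no disk read
            return self._cache[path]
        data = load_from_disk(path)          # cache miss: stream from disk
        self._cache[path] = data
        self.used += len(data)
        while self.used > self.capacity:     # evict least-recently-used
            _, old = self._cache.popitem(last=False)
            self.used -= len(old)
        return data

cache = AssetCache(capacity_bytes=1024)
loads = []
fake_disk = lambda p: loads.append(p) or b"x" * 100
cache.get("tex.png", fake_disk)
cache.get("tex.png", fake_disk)
assert loads == ["tex.png"]  # the second access was served from RAM
```

The bigger the capacity, the more assets stay resident and the fewer disk reads (i.e. load screens) you eat.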

Is there ever an instance where more memory (assuming the memory is of the same speed and quality) is not better? Perhaps I'm living in a bubble because I am a software developer who writes code for servers with 256GB of memory, and while I don't think the Xbox 720 and PS4 should have 256GB, they should have a sizable enough amount that developers do not have to worry about it as much. This may not be obvious now, but 1-2 years after the consoles are released, we will be wishing we had more memory. Historically, consoles have never had enough RAM.

#10 Posted by TRega123 (29 posts) -

I was playing Catherine, and noticed one of the song names in the jukebox was "Jouji Washington"-- Jouji being Japanese for George. Does this mean Catherine is a George Washington related game? I think it should at least have an honorable mention in the George Washington related titles list.
