#1 Edited by HarbinLights (194 posts) -

Or is it now possible?

It used to be said that you should never try to future proof your PC, because it isn't possible: thanks to Moore's Law, your PC will just become obsolete and your money wasted. Now it's starting to look like that may not be true for much longer. CPUs in particular have really stagnated. The CPU I upgraded to recently is not all that much better than the i5 2500K I upgraded from, and that concerns me.

We're fast approaching the physical limits of how far we can shrink transistors, where it's no longer an engineering issue but an issue of the laws of the universe. From the look of things, we're going to hit either a paradigm shift or a stagnation around 2025. Of course, I don't have a crystal ball or anything, but unless I'm mistaken it looks like we'll hit 5 nm mainstream chips by 2025 at the earliest.

And after that, well, we'll pretty much have to find something else to do, like "3D" chip design or putting more and more cores on chips, or we're stuck where we are. I'm no expert on economics, but that also probably means prices stop dropping, since hardware no longer goes obsolete.

So what I'm saying is, maybe I should be getting the best CPU on the market right now, for gaming and in general. Usually that would be a waste of money, but I'm starting to wonder if that's the case anymore. The whole rule about "don't future proof your PC" is based on Moore's Law, and that law is ending soon.

#2 Posted by Wandrecanada (1008 posts) -

The next pushes in innovation on chip tech will likely be in multi-threading. That involves software-side evolution instead of using brute-force clock increases to gain performance. GPU offloading essentially did this a while ago, so I think we should expect to see something in that regard when it comes to next-gen game engines (think PhysX but for other things).

This means we will probably be looking at how expanding the number of cores will enable some rigs to process more and more stuff simultaneously at the same clock speeds.

Reducing the power needed for processing also reduces heat production, allowing more data to flow through more channels without melting important parts and without making those parts more expensive (see gold's properties).

There are a ton more technical considerations, but I think it's best to assume that you will see more and more engines support more and more threads, requiring more and more simultaneous cores per machine. That's likely where you will find the evolution of computing. I'm pretty sure AMD hedged that bet too (I run an old 2500K i5 as well, and AMD has me eyeing their stuff because of its multi-thread support).
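
To make the "more cores, same clock" idea concrete, here's a minimal C++ sketch of splitting independent per-entity work across worker threads. Everything in it (the Entity struct, the update rule, the counts) is made up purely for illustration; it isn't from any real engine.

    // Minimal illustration of spreading per-entity update work across CPU cores.
    // Everything here (Entity, the update rule, the counts) is made up for the example.
    #include <algorithm>
    #include <cstddef>
    #include <functional>
    #include <thread>
    #include <vector>

    struct Entity {
        float x = 0.0f;
        float vx = 1.0f;
    };

    // Advance one slice of the entities by dt. Each slice is independent,
    // so slices can run on separate cores at the same clock speed.
    void updateRange(std::vector<Entity>& entities, std::size_t begin,
                     std::size_t end, float dt) {
        for (std::size_t i = begin; i < end; ++i) {
            entities[i].x += entities[i].vx * dt;
        }
    }

    int main() {
        std::vector<Entity> entities(100000);
        const float dt = 1.0f / 60.0f;  // one 60 fps frame

        const unsigned workers = std::max(1u, std::thread::hardware_concurrency());
        const std::size_t chunk = entities.size() / workers;
        std::vector<std::thread> pool;

        for (unsigned w = 0; w < workers; ++w) {
            const std::size_t begin = w * chunk;
            const std::size_t end = (w + 1 == workers) ? entities.size() : begin + chunk;
            pool.emplace_back(updateRange, std::ref(entities), begin, end, dt);
        }
        for (auto& t : pool) {
            t.join();  // the frame can't finish until every slice is done
        }
    }

Built with something like g++ -std=c++11 -pthread, the slices scale with core count precisely because no slice depends on another; the catch, as others point out below, is that a lot of game logic isn't that independent.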

#3 Edited by TheRealSeaman (133 posts) -

If you upgraded to a recent CPU it will crush the 2600k. Maybe just not in games, which is less to do with the quality of the CPU and more to do with the simple fact that games don't get the same gains that rendering workloads do.

Skylake/Kaby/Coffee all get big gains and the higher core 8700k runs circles around Sandy Bridge for stuff like video editing and batch processing.

#4 Edited by John1912 (2504 posts) -

Now is a better time than it ever has been, but technology will progress and new methods will come out. Moore's Law is a road block. It has always seemed best to me to buy upper mid-tier CPUs/GPUs for quality and longevity. You just want a CPU that is not a huge bottleneck. I upgraded to an i7-4790 from an i5-2400 like 4 years ago? I probably won't be upgrading again until I literally run into games I can't play on it, which will probably be another 5+ years. The GPU is far more important. Quick Google: PC Gamer rates the i5-8400 as one of the best for value/performance. Not bad at $180. If I had to upgrade and my motherboard supported that i5, I'd probably give it strong consideration.

You are just not going to see a huge performance boost from a CPU these days, especially once Windows gets bogged down with items loading on boot. It's crazy how much faster a clean install is for all your programs.

#6 Posted by frytup (1231 posts) -

Multi-core/multi-threaded CPUs have already been around for years, and most games really don't take advantage of them. Will game programmers actually start putting a significant effort into coding for them? No idea, but I'm going to say probably not enough of an effort that it's going to matter much.

If all you're using your PC for is gaming, don't worry about it. Buy the best GPU you can afford and a good CPU with what's left. You should be fine for years.

#7 Posted by Ozzie (546 posts) -

This is just my personal opinion, but I'd say we're more than a few years away from that being true. Granted, it might take a while for us to hit those limits because, as you get smaller, these things become harder to make and take more research to learn how to make reliably. So that might gain you something in the "buy now" argument.

But until we hit the point where things actually level out, you'll have developers just relying on chip manufacturers to keep pushing the limits so they don't have to remake their engines to really take advantage of things like multiple cores. Also, beyond some new 3D technology, there are still advances that can be made regardless of transistor sizes. Intel can add new features to their architecture that increase efficiency. If you look at Hyper-Threading, that's something that increased performance but wasn't dependent on going from 22 nm to 14 nm. (Technically, I'm sure that's not 100% true, since more real estate means you can do fancier things without increasing the overall chip size, but you get the idea.)

#8 Edited by Seikenfreak (1482 posts) -

I'm still running the i7 2600k system I built like 6 years ago or something. Feels like forever. I used to have to do a whole new setup every 3-4 years, so it feels like we've already been there to me.

It's still kickin ass too. I do a small amount of rendering with Premiere but the times never bothered me. I just do something else for 30 mins.

#10 Edited by TheRealSeaman (133 posts) -

@seikenfreak said:

I'm still running the i7 2600k system I built like 6 years ago or something. Feels like forever. I used to have to do a whole new setup every 3-4 years, so it feels like we've already been there to me.

It's still kickin ass too. I do a small amount of rendering with Premiere but the times never bothered me. I just do something else for 30 mins.

The only real reason to upgrade a 2600k is for a lot of rendering, very high refresh rate monitors and some newer features like USB-C and better I/O support.

I'd say you're good.

#11 Posted by Bane (878 posts) -

@seikenfreak Same here. Mine's got an i7-3770 from 2013. Based on past experience I should be thinking about replacing it, but I don't see any reason to right now. I did upgrade to a 1080 Ti recently though. With their powers combined they've crushed every game I've played at 1440p.

My current plan is to build a new system for Star Citizen. I tried one of their earlier alphas and it had horrible performance. Just terrible. That could've been the alpha code, who knows.

#12 Edited by lovcol (67 posts) -

Multi-threading in games is hard because they rely on things happening in a strict order in real-time. Some subsystems like physics handling can often be offloaded to a separate thread, and it's not rare to see two or three threads being in use. Stuff like video encoding is comparatively easy to do multi-threaded since the encoded frames are indexed and can be rearranged after the fact.

It's probably less about programmer effort and more about all the checks required to maintain the order of things eating up most of the benefit of multiple threads. I'm guessing it would also introduce a lot more bugs.
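
For anyone who wants a rough number on that intuition, here's a back-of-the-envelope Amdahl's law sketch in C++. The 30% parallel fraction is an assumption picked purely for illustration, not a measurement from any real game:

    // Amdahl's law: if only a fraction p of a frame's work can run in parallel,
    // extra cores stop helping quickly. p = 0.30 is an assumed, illustrative value.
    #include <cstdio>
    #include <initializer_list>

    // Theoretical speedup on n cores when a fraction p of the work parallelizes.
    double amdahl(double p, int n) {
        return 1.0 / ((1.0 - p) + p / n);
    }

    int main() {
        const double p = 0.30;  // assumed parallelizable share of a game frame
        for (int cores : {1, 2, 4, 8, 16}) {
            std::printf("%2d cores -> %.2fx speedup\n", cores, amdahl(p, cores));
        }
        // Prints roughly 1.00x, 1.18x, 1.29x, 1.36x, 1.39x: even 16 cores can't
        // beat ~1.43x while 70% of the frame has to stay strictly in order.
    }

Flip the assumption around (say 80-90% of the frame parallelizes) and extra cores start paying off, which is basically the bet the heavily multi-threaded engines mentioned in this thread are making.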

#13 Edited by MonkeyKing1969 (7437 posts) -

I think we are in a transition period. While the science and R&D has been done to get down to sub-5 nm with current tech, there are diminishing returns on getting smaller than 5 nm... or so the experts say. My guess is the road from 10 nm to 7 nm to 5 nm will be slow. It might be eight to ten years from 10 nm being common to 5 nm being common. What will happen is the chip package will get bigger and get stacked higher after 7 nm. We already see big packages like Threadripper, but I think what is next is stacking. But unless we can get the size down it's a dead end, because of heat... you can't stack high if the package overheats. So what we will see is bigger die packages, and then ever smaller low-nm lithography with stacking, all done while keeping everything below 90°C.

All of the above will happen because there are several strategies being developed by semiconductor manufacturers to create ever-smaller microprocessors and memory cells; however, there is no clear direction, and there is negligible capacity to make anything for consumers with a new process. That does not mean you will be able to future proof a machine; it just means progress will be slow.

The fact is we are OVERDUE for a totally new type of processor, a totally new computer architecture and instruction set, and software to run on it. So it won't be future proofing you need to worry about; it will be hoping there are adequate "tools" to get your old data working on the new conception of computers.

#14 Posted by FacelessVixen (2504 posts) -

I've been thinking about this when I noticed this thread yesterday morning. But I can't really form a thought other than this:

Future proofing for gaming PCs at any point in time is, at best, well intended but flawed, and at worst, a dumb excuse to spend money for either, more performance than you'll actually make use of, or for bragging rights of which I'd say whoop-de-fucking-doo too unless said person is a content creator. As far as I know, most games are fine with four cores in the 3.5GHz to 4GHz range (give or take a specific chip's instructions per cycle efficiency), and having more just helps with streaming and recording gameplay at the moment.

Seriously, I doubt that gamers will need a Kaby Lake-X or Threadripper class chip for many years to come as hexa-core i5's and Ryzen 5's are already more than enough for just playing games.

#15 Posted by soimadeanaccount (601 posts) -

With CPU maybe, with GPU no.

Games have been utilizing multiple threads/cores in recent years. From an economic sense, it is probably multicore for the end user. Maybe that older i7 finally gets its chance to seriously shine over that older i5. For better or worse, gaming might not be the most intensive "intended" usage of a PC.

Much of it is software driven; if games continue to explore the multicore/multithread path, then who knows what the demand and outcome of that is going to be.

However, GPUs have been gaining steady performance increases year after year, and it seems like games are targeted as such also, being primarily GPU focused. Now, whether these increases are driver driven... that's a whole different discussion altogether.

#16 Posted by cikame (2652 posts) -

My PC is hanging tough at the moment, but some of the games coming in the next year or so have me worried; it really just boils down to optimization.
I have an i7-6700K and a gtx980i I got two and a half years ago, and nothing I've played has pushed it yet. I've heard that AC: Origins was very hard on resources, but that was purely due to it using hardware in really inefficient ways; I haven't tried it and don't think I'll be picking it up anyway (AC purist). Far Cry 5 looks expensive to run, but I'm disappointed the game wasn't more different, so I've avoided that one too.
The specs of the rig running the Cyberpunk demo weren't all that crazy, so I'm confident I can run that. The Crew 2 runs fine at 60fps for me, DMC5 will probably run fine, Ace Combat 7 might pose a problem with all its cloud tech, the Yakuza games are starting to come to PC and I'm sure they'll run no problem, Forza Horizon 4 might be tricky but could run OK for me if it's a more solid port than 3 was, and I'm guessing Anthem will run OK, but I've no interest in playing it...

I think I'm alright for another year or two, but again, it really depends on optimization.

#17 Posted by Vortextk (914 posts) -

I wanted to say something long, but nah. Computer upgrades for gaming are definitely slower these days, in terms of big hardware purchases for noticeable framerate boosts. If a person has the money and their system isn't good enough for what they want (or they want to upgrade the resolution/refresh rate of their display as well), that person should probably go ahead and upgrade to something modern, and they will be set for probably longer than people were 10 years ago.

Either they figure out some new tech for the silicon or they devise something brand new. We probably won't see a new series of graphics cards every 12 months, or new processors significantly raising their clock speeds, ever again (but they will, eventually, find something new if we're not a nuked-out hell hole by that point).

#18 Posted by haneybd87 (330 posts) -

Considering how long it’s been since the GTX 1080 came out it might be worth waiting for the 1180/2080 or whatever they plan on calling it.

#20 Edited by Wandrecanada (1008 posts) -

Seems like a lot of folk here are talking about games that run high on the graphics spectrum. GPUs pretty much have that taken care of to the point that even dual GPUs aren't a hard requirement for VR rigs.

The bottleneck is now happening in the actual mechanics of the sim. GPUs can't offset this issue, which is why I refer to multi-threading as the next big deal. We've offloaded that processing to its own core, but we've only just started looking at splitting some of the processes between multiple CPU cores. If you want to see how your rig holds up against those bottlenecks, you may have to test against larger strategy sims that feature high graphics, or in some cases open-world games that run off just your PC but simulate a physics engine.

Stellaris does this on larger maps and even Civ 6 to some degree. Space Engineers is another that comes to mind. These are the types of games that will truly test your rig's effectiveness when dealing with massive scale physics engine sims. Games that model physics and run large scale operations while you're not there to see them. These are also the games that are pushing multi-threading.

At this point most modern titles can be run off a machine with a 7 year old processor but using a GTX 1080. It's when you start dealing with real time sim or strategy games that you will find the bottleneck noticeable without a newer CPU.

#21 Edited by haneybd87 (330 posts) -

Sorry but a 7 year old processor won’t cut it at this point. My 3820 would be 6.5 years old at this point and even 1.5 years ago before I upgraded it wasn’t cutting it anymore. My 6700K has made a very large difference in many games, especially the open world types and other calculation heavy games such as Civ.

#22 Edited by Wandrecanada (1008 posts) -

@haneybd87: Yeah, but that makes my point. Your new core has a slower clock, but better threading support with hyperthreading gives it an edge in larger games like open world and heavy calculation games like Civ.

It's not the clock, it's the threading tech that improves your performance (the 6700K is when they added hyperthreading to the i7 chips).

And I can run most modern games at extreme settings with a 2500K and a GTX 1080. I get slowdown in Civ and games that use a lot of background math not involving graphics. FPS games with tiny maps and high graphics are a breeze to run at 120 frames.

#25 Posted by haneybd87 (330 posts) -

@wandrecanada: The 3820 doesn’t have a faster clock than the 6700K though, and they both have hyperthreading support. Hyperthreading has been around for quite a while now.

Anyways I’m sure your 2500K can run games, but something new will run them much more smoothly.

#26 Edited by Vortextk (914 posts) -

@haneybd87: I find "much more" to be "I mean... maybe, on certain titles, doing certain things." The point, I think, is that a 5 year old processor can chug along fine in the vast majority of games, and that you would not see huge changes in all games with an upgrade. Take my 3570K, for example. (It IS more necessary for breaking the 60fps barrier on a high refresh monitor.)

#27 Edited by haneybd87 (330 posts) -

@vortextk: I saw a frame rate bump in just about everything though. In some games it was only like 5fps, but in some games it was more like 60fps. This was using the same GPU, a 980ti. Now, there are a few other factors to consider with this. First is the change in architecture you get by going to a new CPU: you're eliminating some bottlenecks in the motherboard itself. The changes in memory can make a difference too.

Anyways, I think you'd see a change across the board, some games not so much as others, but with the size of game worlds these days I think more often than not the difference will be big. That's not even taking into consideration other factors such as load times in games, other programs, and Windows itself. It feels good to have everything be nice and snappy.

Anyways, back to the point of the OP: I think you could upgrade everything but your GPU right now and feel pretty good about it, and just wait for the next gen of Nvidia cards to upgrade the GPU. You could wait until later this year for CPUs that have Meltdown/Spectre protections built in, but I'm not sure the code-based mitigations really make that much of a difference to new CPUs anyways. I for one haven't seen a difference on my 6700K.

#28 Posted by frytup (1231 posts) -

@wandrecanada said:

@haneybd87: Yeah, but that makes my point. Your new core has a slower clock, but better threading support with hyperthreading gives it an edge in larger games like open world and heavy calculation games like Civ.

It's not the clock, it's the threading tech that improves your performance (the 6700K is when they added hyperthreading to the i7 chips).

That's not accurate. Hyperthreading has been in i7 chips for almost a decade.

The i7-870 is the first CPU I owned with HT.

#29 Posted by HarbinLights (194 posts) -

@therealseaman said:

If you upgraded to a recent CPU it will crush the 2600k. Maybe just not in games, which is less to do with the quality of the CPU and more to do with the simple fact that games don't get the same gains that rendering workloads do.

Skylake/Kaby/Coffee all get big gains and the higher core 8700k runs circles around Sandy Bridge for stuff like video editing and batch processing.

If most games stick to single-core performance and don't quickly learn how to better program for the many cores and the multithreading that Intel and especially AMD are working towards, then for a gamer like me, you can indeed future proof pretty well at the moment.

I've also seen smaller gaming performance gains than I had hoped from upgrading from DDR3 to DDR4.

As someone who doesn't do video editing and mostly uses a PC for playing video games, having a new CPU hasn't felt like a big upgrade. As for the technology in general, CPUs are still improving thanks to a paradigm shift to more cores, and eventually in the future it'll be "3D chips" trying to pack in more transistors. But once we hit the physical limits of silicon, and eventually the laws of the universe themselves (quantum computers aren't good for classical PC tasks like gaming), I do wonder how cheaply people will be able to continue improving computers. Currently, CPUs keep getting cheaper and better because die shrinks allow fabs to put more transistors on a chip for less money and less power consumption. At what point are the economics of that going to crash? I would like to see a bright future where CPUs keep improving indefinitely, but that seems pretty shaky right now. Nothing is guaranteed. Even the smartest engineering minds feel that way about Moore's Law right now.

I'd like to be a Kurzweil-style techno-optimist and just assume they'll find a way. But people probably felt that way about cars, until cars started hardly getting better with every year and improvements to automobile design became marginal. I fear that computers may become the next car: improving, but at a very slow, glacial pace. There doesn't seem to be anything we know about the science and engineering of PC components that would warrant a high degree of optimism that we can keep finding loopholes around the laws of physics and make computers supremely better than they are now.

But maybe I'm wrong and we're due for a technological singularity in a few decades. That would be nice.

#30 Posted by cliffordbanes (122 posts) -

A delidded i7 8700K or i7 8086K overclocked to 5+ GHz will probably last you a long time. You'll probably have to upgrade your GPU sooner.

Some unrealistic scenarios where it wouldn't last as long would be if:

  • a. RAM prices dropped and games started requiring 64GB or 128GB of memory for ultra detail textures,
  • b. a new kind of faster memory that requires you to get a new motherboard and CPU is released,
  • c. game engines or middleware started utilizing a lot more cores and 8+ cores became the norm for ultra settings with a lot more AI and better physics,
  • d. VR or raytracing became more popular and there were VR- or raytracing-specific hardware features in CPUs,
  • e. the new GPU Intel says they are working on for 2020 (this is not speculation, they have announced it) turned out to require a new CPU/mobo/memory.

Perhaps none of these scenarios are realistic since developers would most likely be constrained by console hardware, mainstream PC desktops and laptops when designing games. Very few devs seem to target the high-end PC enthusiasts exclusively these days.

The next-gen consoles are rumored to be using AMD Ryzen CPUs and AMD Navi GPUs. Neither Ryzen nor Navi is targeting the ultra high-end. With Navi, AMD is aiming at GeForce 1080 performance at $250 instead of trying to beat the next generation of ultra high-end Nvidia cards.

#31 Posted by AlisterCat (8047 posts) -

@haneybd87: I have a 2600K and I can run AC Origins at 1440p on Ultra at 60fps, mostly due to a 1070. It's not amazing, but it can get the job done even if it does hold things back a bit. Civ is especially CPU demanding, true.

Also, I suggest we create Peter Moore's Law.

#32 Posted by haneybd87 (330 posts) -

@alistercat: That’s 1 game and I’m assuming you’re talking about a capped frame rate? What about frame rates higher than 60? You will see gains.

Anyways I think you’re not being quite truthful about your graphics settings or your frame rate in AC:O. I’m running a 980ti which is on par with the 1070 and have a 6700K and I had to make a few compromises in the graphics settings to get a solid 60 at 1080p even, to do the same at 1440p would have to be more like medium-high. Even then there are situations here and there where my frame rate will drop a bit. It’s a very demanding game.

#33 Edited by Gamer_152 (14730 posts) -

I think that the technological power of your PC is down to much more than what kind of CPU you have or how many transistors can fit onto a chip. I also disagree with the idea that the argument you can't future proof your PC was just down to the persistence of Moore's Law. It's long been recognised that thinking any piece of technology you own won't be superseded in some way is naive, transistors or not. We can't tell the future, and there are new areas of hardware development like quantum computing emerging right now where we are likely only touching the surface of what they can yield. We have definitely not reached the end of the history of the desktop PC.

#34 Posted by HarbinLights (194 posts) -

@facelessvixen said:

I've been thinking about this when I noticed this thread yesterday morning. But I can't really form a thought other than this:

Future proofing for gaming PCs at any point in time is, at best, well intended but flawed, and at worst, a dumb excuse to spend money for either, more performance than you'll actually make use of, or for bragging rights of which I'd say whoop-de-fucking-doo too unless said person is a content creator. As far as I know, most games are fine with four cores in the 3.5GHz to 4GHz range (give or take a specific chip's instructions per cycle efficiency), and having more just helps with streaming and recording gameplay at the moment.

Seriously, I doubt that gamers will need a Kaby Lake-X or Threadripper class chip for many years to come as hexa-core i5's and Ryzen 5's are already more than enough for just playing games.

At what point do you consider PC spending at the moment wasteful? Is getting an Intel Core i7-8700K wasteful?

I was considering upgrading to one for better performance in VR and emulators for new systems (like better performance in Breath of the Wild), going enthusiast with cooling in order to get a good overclock, and then sticking with that for several years and only swapping out GPUs. I've wanted a 1080 Ti for a while now, but am considering waiting a month or two longer to see if the rumors about the next GTX series are true, and instead saving my money for an 1180.

But I also figure at this point that a GTX 1180 will probably last me quite a few years, especially were I to get a Ti.

#35 Posted by DeanoXD (776 posts) -

In the last year I upgraded from an i3-6100/GTX 950 to a Ryzen 1500X OC'd to 3.9 GHz and a GTX 1060, and of the games I play, none of them run below 60 fps at 1080p. I don't see myself growing out of the 1500X for years; a graphics card upgrade will be the only thing needed to stay competitive in games over the next few years.

#37 Posted by Corvak (1963 posts) -

The insidiousness of CPU marketing means even tiny increases are going to get marketed like crazy. Same goes for the Extreme/i9 chips, where you could get the same game performance out of a good i5 and be cushioned by about $600 in cash. In many cases it's only streamers playing and streaming on the same PC that hit their limits, and typically it's actually better to have a second PC handle the streaming than to buy any of the super-powered CPUs out there, because if all it's doing is running OBS/XSplit, almost any off-the-shelf budget PC is going to be up to the task. That's at least until 4K/60 streaming becomes commonplace, which probably won't happen until the big American ISPs stop forcing their users to pay data cap blood money. Current HD streaming pulls about 1GB/hr of video.
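
If you run the numbers on that 1GB/hr figure (just arithmetic, not a measurement of any particular service), it comes out to a pretty modest average bitrate:

    // Converting "about 1 GB per hour" of HD video into an average bitrate.
    // Pure arithmetic on the figure quoted above; no service was measured here.
    #include <cstdio>

    int main() {
        const double gigabytes_per_hour = 1.0;
        const double bits_per_hour = gigabytes_per_hour * 1e9 * 8.0;     // GB -> bits
        const double megabits_per_second = bits_per_hour / 3600.0 / 1e6; // per hour -> per second
        std::printf("%.1f GB/hr is about %.1f Mbps on average\n",
                    gigabytes_per_hour, megabits_per_second);
        // Roughly 2.2 Mbps, so a 4K/60 stream at several times that rate would
        // chew through a monthly data cap a lot faster.
    }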

AMD is up to the same tricks with its Ryzen and Threadripper variants, though I've been out of AMD for so long I'm not even sure how the different models line up outside of price.

I think it's still going to hold true that you should build your gaming PC with the $300 CPU and the $400 GPU and you will generally get 3 years of service out of it. A lot of marketing goes into convincing people building gaming PCs to overpay for future proofing, and most of the time it's done more for bragging rights than actual framerate boosts.

#38 Edited by Eurobum (487 posts) -

@harbinlights: The basic concept is true. From ultrathin laptops and tablets to consoles, there is a lot that has held back PC performance standards. The 3D transistor or FinFET revolution that Intel's 22nm process brought has been the last big jump; the namesake fins are 40 Si atoms wide, and there is basically little left to cut down. Following that, we get TSMC and GlobalFoundries using this tech and Intel basically iterating on this technology with 16/14/12 nm, which at this point have just become marketing terms. Intel has promised a huge jump with 10nm but for whatever reason is stalling / free falling.

Without free performance from node shrinks, innovation is in the hands of engineers rather than scientists. But we will get at least one last breakthrough at some point. We have yet to reach the 7nm baseline goalpost.

Phones stalled too, and we got bigger screens, more RAM, more GPU, and gimmicks like facial recognition. AI is supposedly the way forward. For me, interfaces (PCIe, NVMe, M.2, USB, DP) have always been the biggest issue and headache with future proofing; DDR5 is also looming somewhere.

#39 Posted by AlisterCat (8047 posts) -

@haneybd87: I wasn't saying that it runs perfectly on such old hardware by any means but from the benchmarks I looked up a more modern processor only added about 5 frames. I wasn't lying either, but I didn't give a full performance breakdown or anything. I get 60fps most of the time, and about 45 in the big cities. It's not a solid 60 by any means, but it's definitely acceptable.

#41 Posted by haneybd87 (330 posts) -

@alistercat: Big cities are most of the game though. Anyways if it were me, 45fps wouldn’t be acceptable. I’d rather have a locked 30 in that situation or just lower the graphics until I got a steady 60.

#42 Posted by FacelessVixen (2504 posts) -

@facelessvixen said:

I've been thinking about this when I noticed this thread yesterday morning. But I can't really form a thought other than this:

Future proofing for gaming PCs at any point in time is, at best, well intended but flawed, and at worst, a dumb excuse to spend money for either, more performance than you'll actually make use of, or for bragging rights of which I'd say whoop-de-fucking-doo too unless said person is a content creator. As far as I know, most games are fine with four cores in the 3.5GHz to 4GHz range (give or take a specific chip's instructions per cycle efficiency), and having more just helps with streaming and recording gameplay at the moment.

Seriously, I doubt that gamers will need a Kaby Lake-X or Threadripper class chip for many years to come as hexa-core i5's and Ryzen 5's are already more than enough for just playing games.

@harbinlights said:

At what point do you consider PC spending at the moment wasteful? Is getting an Intel Core i7-8700K wasteful?

I was considering upgrading to one for better performance in VR and emulators for new systems (like better performance in Breath of the Wild), going enthusiast with cooling in order to get a good overclock, and then sticking with that for several years and only swapping out GPUs. I've wanted a 1080 Ti for a while now, but am considering waiting a month or two longer to see if the rumors about the next GTX series are true, and instead saving my money for an 1180.

But I also figure at this point that a GTX 1180 will probably last me quite a few years, especially were I to get a Ti.

I can rationalize getting an i7 or a Ryzen 7 chip to better facilitate a GPU playing games at 2160p and at frame rates above 60, for the sake of balancing a high-end CPU with a high-end GPU, RAM, monitor, and so on. The general consensus from a quick "best CPUs for 4K gaming" Google search is that current i5s and Ryzen 5s are good enough, but the i7s and Ryzen 7s will have a longer tail and perform better with emulators, since those are more CPU than GPU bound. But I'll leave it to you to sift through those Gamers Nexus benchmarks since I don't have first-hand experience with high-end hardware. I'm more of a mid-range person.

But, if we're still talking about future proofing, I question whether going high-end for that purpose is worth the expense. As some people have mentioned, software is also a factor. Maybe the next Unreal, Frostbite, CryEngine and so on, including middleware, APIs, and DRM, will all be optimized for current consumer-level hardware, or maybe they won't and people in any current performance margin will have to upgrade anyway to get a specific feature. Who knows but the respective development teams and people who look significantly more into this than I do. As a consumer who knows a few things about tech without being obsessed with it, I don't like making these guesses; in fact, I absolutely hate it. So I'd rather just limit my foresight to the foreseeable future of what hardware and software has been announced at various press conferences and trade shows to come out in up to three years.

#43 Posted by haneybd87 (330 posts) -

@facelessvixen: “Maybe the next Unreal, Frostbite, CryEngine and so on, including middleware, APIs, and DRM, will all be optimized for current consumer level hardware, ”

People have been asking this question for decades and it never really seems to happen. The requirements for games just go up and up with every new API or engine because they add all kinds of new, power intensive effects.

#44 Edited by FacelessVixen (2504 posts) -

@haneybd87: That's true, and something that I wasn't really thinking about at the moment since things like more advanced shadows, textures, lighting, anti-aliasing and so on are more GPU oriented features, not necessarily CPU which is what the OP is worrying about. So if I was reaching, that's fine. I'll take that L.

[joke]...Unless 128-bit computing will be a thing in three years and Unreal 5 needs a 128-bit CPU. In that case, I told you so.[/joke]

#45 Posted by saispag (133 posts) -

If you are not price conscious, buy a 1080 or 1080Ti now.

If you are price conscious, wait until the 11 series cards come out and buy a cheaper 1080 or 1080Ti then.

Should do you 5 - 10 years minimum based on the rate of card releases and graphical improvements etc.

#46 Posted by HarbinLights (194 posts) -

@saispag said:

If you are not price conscious, buy a 1080 or 1080Ti now.

If you are price conscious, wait until the 11 series cards come out and buy a cheaper 1080 or 1080Ti then.

Should do you 5 - 10 years minimum based on the rate of card releases and graphical improvements etc.

Not sure if it is a good idea, but my current plan is to wait 2 more months and see if next-gen GPUs are out or even announced. If they are, I'm going to go for an 1180 Ti and a build based around it, aimed at getting the best performance in VR apps. If not, I may upgrade from my 1060 to either a 1070 or a 1080 to hold me over for a bit longer, and hopefully get fewer framerate drops in VRChat and actually be able to handle streaming VRChat. Last time I tried that, my computer came to a halt and I had to restart it, and my framerate was so bad I was sick for a good 6 hours. My HMD turned into a hangover simulator the longer I had it on.

#48 Posted by haneybd87 (330 posts) -

@harbinlights: 1180 Ti probably won’t come out until a year or so after the 1180.

#49 Posted by csl316 (14928 posts) -

How many bits are we at these days? Are games more photorealistic than Mortal Kombat II yet?