Polymath820

CPU and GPU overclocking does sweet f a.


So tired of people saying "Ohhh, overclock your CPU." I am not overclocking for a 3.7% performance increase; it's not worth the risk and not worth the gain.

http://lifehacker.com/5846435/can-i-dramatically-improve-pc-gaming-performance-without-buying-new-hardware

Really? A 3.7% performance increase for taking a 2.9GHz CPU to 3.8GHz, nearly 4GHz?

GPU? Nope, an 8.7% performance increase? So really, how much value is there in overclocking? Null, zip, zilch.

So both tweaks combined, if you are getting 32 FPS, is about 4.56 frames extra... Really, that's worse than bad.
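
For what it's worth, here is a quick back-of-the-envelope check of the article's figures, assuming the CPU and GPU gains simply compound (a simplification; the real result depends on where the bottleneck sits):

```python
# Quick check of the article's numbers, assuming the gains compound multiplicatively.
cpu_gain = 0.037      # 3.7% from the CPU overclock
gpu_gain = 0.087      # 8.7% from the GPU overclock
baseline_fps = 32

combined = (1 + cpu_gain) * (1 + gpu_gain) - 1
print(f"Combined gain: {combined:.1%}")                                   # ~12.7%
print(f"{baseline_fps} FPS -> ~{baseline_fps * (1 + combined):.1f} FPS")  # ~36.1 FPS
```

Either way it lands in the same ballpark: roughly four extra frames on a 32 FPS baseline.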


lol, just because someone has a certain experience doesn't mean it applies to the rest of the world. I notice a significant increase on both my old and new systems going from stock speeds to 4.2GHz or so.

Not worth the risk? What risk? You obviously haven't had much experience in OC'ing, otherwise you would realise that the only way to damage parts is by overvolting, which you don't need to do if you are only trying to achieve a mild overclock. CPUs are full of fail-safes these days; why do you think a computer shuts down or BSODs when something is wrong - it's protecting itself.

null zip zilch? how bout null = execvm "scripts\youdontknowwhatuonabout.sqf"


Did you not notice what clock speed that CPU was at? 2.9GHz ramped up to 3.8GHz, an increase of nearly 1GHz, and it only showed a slight increase in 3DMark, from 6375 to 6549. It is marginal at best. The numbers do not lie: overall a 3.7% increase, so if your baseline frame rate is 35, overclocking only gives you approximately 1.3 frames extra.

Frankly, big jumps in CPU speed are water under the bridge; from 1980 to 1990 we saw a massive increase in CPU speed and performance, around 300%, not the 12% and 20% we see now. More cores != faster system, and faster clock speed != faster system / better frame rates. OC'ing is just a complete waste of time and defeats the purpose of an Intel CPU anyway, which is to get good performance out of a chip that is power-efficient. It is risky, and it reduces the life of the CPU considerably, because you are running it at higher voltages and higher clock speeds than it was designed for. Higher clock speed results in higher current draw, and from higher current draw you generate more heat. And it's a horrible relationship: the higher the temperature of a semiconductor, the less efficient it is, due to the resistivity curve getting sharper and opposing current within the CPU.
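
To put rough numbers on the heat side, the usual first-order model for CPU dynamic power is P ≈ C·V²·f, so raising frequency and voltage together compounds quickly. A minimal sketch with made-up ratios, not measurements of any particular chip:

```python
# First-order CPU dynamic power model: P ~ C * V^2 * f.
# Illustrative ratios only; the switched capacitance C cancels out when comparing to stock.
def relative_dynamic_power(voltage_ratio, frequency_ratio):
    """Dynamic power relative to stock for the given voltage and frequency ratios."""
    return voltage_ratio ** 2 * frequency_ratio

# Example: 2.9 GHz -> 3.8 GHz (+31% clock) with a hypothetical +10% voltage bump.
print(f"~{relative_dynamic_power(1.10, 3.8 / 2.9):.2f}x the stock dynamic power")  # ~1.59x
```

Leakage current also rises with temperature, which is part of the feedback loop described above.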

And I know this for a fact, as I have spoken to many people with overclocked systems. They live a lot shorter lives.

I am also into HPC technology.

To be frank, I kind of knew the RAM clock speed would be affecting the game, because the CPU has to come down to the speed of the RAM and synchronise with it to transfer data. So with a 1600MHz RAM stick, your 3.8GHz CPU is forced to equalise with the RAM speed at 1.6GHz, therefore decreasing the overall transfer speed. I also suspected it was something to do with either bandwidth or clock speed. Seeing as Arma 3 is "real-time rendering", higher clock speeds and faster transfer of data in and out of components should result in a higher frame rate, as the "on-the-fly processing" is what counts, not the raw RAM or the raw GDDR5 video RAM.
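
Whether the core literally slows down or just stalls while it waits on memory, the bandwidth side is easy to put numbers on. The theoretical peak for DDR3 is a standard calculation (real-world throughput is lower):

```python
# Theoretical peak DRAM bandwidth = transfers per second * bytes per transfer * channels.
def peak_bandwidth_gb_s(mega_transfers_per_s, bus_bytes=8, channels=2):
    return mega_transfers_per_s * 1e6 * bus_bytes * channels / 1e9

print(f"DDR3-1600, dual channel: {peak_bandwidth_gb_s(1600):.1f} GB/s")  # 25.6 GB/s
print(f"DDR3-1333, dual channel: {peak_bandwidth_gb_s(1333):.1f} GB/s")  # ~21.3 GB/s
```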

Maybe I'll wait for DDR4 and Broadwell chips to come out; I certainly hope Intel does not release a "BGA socket only" line, *shudders*, I like my LGA... Also, sadly, you say "faster RAM", but not all motherboards are compatible with RAM that runs at 2.8GHz. They just are not, and you have to get "compatible" overclocked RAM for it to work; you can't just buy a stick or two of OC'd RAM, put it into your motherboard and expect it to work. My motherboard, like many other people's, is locked to a speed, in my case 1333MHz RAM, so even if I put in one or two 2.8GHz RAM sticks I will never see that speed, as the motherboard is non-compatible.

You need to check your motherboard's documentation to find out which RAM is compatible.

http://www.msi.com/file/test_report/TR10_2489.pdf

If you don't use compatible RAM, this happens: my current RAM is locked at 667MHz. Corsair sells select RAM sticks that are "non-compatible". What's worse, your chipset, in my case a B3 Intel P35, has to be compatible with the RAM as well.


Far out dude, if you're planning on keeping your CPU for 10-15 years then yeah, you have a slight point. In 5 years the current chips will be worth sfa, so why are you worried about a chip degrading (which it won't until you go past a certain voltage, which is pretty high mind you).

Anyway, I am in no way trying to get you to be an overclocker, I just have a problem with you coming on here and telling lies. The truth is, if people want to OC their CPU they will see benefits in this game, mainly if the CPU is decent to begin with. Whether the game is optimized or not is not the issue.


The chips are worth a lot, particularly given that I know what metals the chips are made of. I don't like joining the cyclical consumption of electronics; just look at the environmental disasters that computer graveyards are, thanks to developed countries. I prefer to look after my PC and maintain it. I am still debating whether or not I want a new CPU / RAM; I do Blender Cycles renders and a few other GPU-intensive things, such as image compositing. I mean, why spend $450 on a CPU when, comparing an i7 to an i5 side by side, there is only about a 12% improvement of the i7 over the i5? Video processing and workstation-oriented tasks are best for an i7; games, not so much. Telling lies? Not even remotely. I have even read a definitive article in PCAuthority benchmarking components; there is always only a marginal improvement in frame rate, and it is not worth the hassle. And if you call me poor:

"A fool and his money are soon parted" Not only that but from 2011 1% decrease in PC component sales, 2013 14% decrease based on current predictions computer electronics industry is heading for a market crash.


I did not see it mentioned, but CPUs have 2-3 levels of cache for a reason, and that is to AVOID getting data from RAM, because RAM is slower.

I think the third-level cache (L3) works off the Northbridge or HT-link speed, but the rest run at the same speed as the CPU.

There's a lot of prediction etc. going on so that data doesn't have to be loaded from RAM, and the efficiency is somewhere close to 90% (of predictions being correct).

If I have understood it correctly, programmers need to keep this in mind, and badly written code might wreak havoc on cache predictions (which leads to cache misses).

This might be old information, but to give you all an idea:

http://en.wikipedia.org/wiki/Branch_predictor
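
To see why that ~90% figure matters, the classic average-memory-access-time formula gives a feel for the cost of misses. The cycle counts below are illustrative ballpark values, not figures for any specific CPU:

```python
# Average memory access time (AMAT) = hit_time + miss_rate * miss_penalty.
# Single cache level plus DRAM, with rough cycle counts purely for illustration.
def amat_cycles(hit_time, miss_rate, miss_penalty):
    return hit_time + miss_rate * miss_penalty

CACHE_HIT, DRAM_PENALTY = 4, 200   # cycles, order-of-magnitude figures
print(f"90% hit rate: ~{amat_cycles(CACHE_HIT, 0.10, DRAM_PENALTY):.0f} cycles per access")  # ~24
print(f"99% hit rate: ~{amat_cycles(CACHE_HIT, 0.01, DRAM_PENALTY):.0f} cycles per access")  # ~6
```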


I agree, overclocking is a dying thing of the '90s and overhyped by gamers today... 5% differences...

Usually not worth it considering you will need cooling.


Yeah, I know about the L1, L2 and L3 caches, but they are very small, and the reason they are so fast is that they are built into the CPU die itself. But I don't think loading 512KB into an L1 cache is going to make much of a difference.

Something to note: ExtremeTech is a valid source, as are ieee.org and phys.org.

http://www.extremetech.com/computing/116561-the-death-of-cpu-scaling-from-one-core-to-many-and-why-were-still-stuck

http://www.mcorewire.com/2012/03/02/the-death-of-cpu-scaling-from-one-core-to-many-%E2%80%94-and-why-we%E2%80%99re-still-stuck/

More and more cores make the task more and more complex. Even though we have more cores, we have a real problem: before data can be processed it has to be "serialised". All the work going through a CPU is either broken into threads, with each core given a thread, or you have the case of a GPU, which has thousands of cores all doing things at once. The complexity keeps going up and up the more cores you add, and the more hell it is for programmers. As the article says, there was a 300% processor speed increase from 1980 to 1990, so it's not the chips; the programmers haven't caught up yet. Sure, CPU process sizes in nanometres keep decreasing, but that only increases transistor "density", which is "supposed" to increase speed. I don't know about you, but I question that statement.

I'm waiting patiently to see how AMD's APUs perform.

On a side note, yeah Sneakson, you overclock, and a lot of the time with overclocking you will spend money on a cooler. The funny thing is, people buy water-cooling, and even though methyl ethylene glycol is added to improve heat conduction, water still has the limitation of its heat capacity; that stays the same. And radiators are just downright inefficient. Then you've got nitty-gritty details such as the type of material used for the radiator and its overall surface area, e.g. the cooling fins, which are normally made out of copper or aluminium as they conduct heat well. Then you've got another problem: blowing air over the radiator. In an ideal environment, to get maximum cooling efficiency you would want a high CFM (cubic feet per minute; in metric, cubic metres per unit of time), but high-CFM / high-static-pressure fans are loud.
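
For the airflow side, the heat a fan can actually carry away follows the usual Q = ṁ·cp·ΔT relation. A rough sketch with standard air properties and idealised mixing; real cases do worse:

```python
# Heat carried away by airflow: Q = mass_flow * cp_air * delta_T (ideal, perfect mixing).
CFM_TO_M3_S = 0.000471947   # 1 cubic foot per minute in cubic metres per second
AIR_DENSITY = 1.2           # kg/m^3 near room temperature
CP_AIR = 1005.0             # J/(kg*K)

def heat_removed_watts(cfm, delta_t_kelvin):
    mass_flow = cfm * CFM_TO_M3_S * AIR_DENSITY   # kg/s
    return mass_flow * CP_AIR * delta_t_kelvin

# e.g. a 60 CFM fan with exhaust air 10 K warmer than the intake:
print(f"~{heat_removed_watts(60, 10):.0f} W")   # ~340 W, an ideal upper bound
```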

Saw on some overclocker forum that a guy used Peltier units to cool the water. Thermoelectric coolers.


I wouldn't say water-cooling is inefficient, especially custom water-cooling. Cost-inefficient maybe, but you can get good clocks.

I mean how is air-cooling any better, except being cheaper?


Right.

3DMark11

Extreme preset, stock i5 2500K and a stock Asus GTX570 DC2:

[3DMark11 score screenshot]

Extreme preset, oc'd i5 2500K @ 4.7GHz and an oc'd Asus GTX570 DC2 @850MHz core:

[3DMark11 score screenshot]

~13% gain.

Performance preset, stock i5 2500K and a stock Asus GTX570 DC2:

[3DMark11 score screenshot]

Performance preset, oc'd i5 2500K @ 4.7GHz and an oc'd Asus GTX570 DC2 @850MHz core:

[3DMark11 score screenshot]

~15% gain.

With modern OC tools you achieve that basically with the flick of a switch, if you don't want to or don't know how to fine-tune your system manually.

I've yet to experience modern hardware failure due to overclocking; modern hardware usually just shuts down if the user tries to do something stupid. Just keep voltages within reasonable limits and you're good to go.


Since this is a forum for Arma and not 3DMark, I tested overclocking's effect on Arma on my system.

A simple AI-heavy test scenario had an average fps of 26.46 at a 3.02GHz CPU clock.

Overclocking to 3.53GHz (+16.8%) gave 29.30 fps. That's a +10.7% increase in performance.

GPU load was around 50% throughout the test, so that shouldn't be a factor (like it isn't in most systems, because A3 is more CPU-heavy and you can always ease the GPU load with settings).

I'm guessing that if I had a 2.9GHz processor OC'd to 3.8GHz, the gain in fps would be about 35 -> 43.
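
Roughly how that extrapolation works out, assuming fps scales with CPU clock at the same ratio observed in the test above (a crude linear assumption, nothing more):

```python
# Observed above: a +16.8% clock gave +10.7% fps,
# i.e. fps grew at roughly 0.64x the rate the clock did.
observed_scaling = 0.107 / 0.168            # ~0.64

clock_gain = 3.8 / 2.9 - 1                  # the 2.9 -> 3.8 GHz case, ~31%
fps_gain = clock_gain * observed_scaling    # crude linear extrapolation
print(f"35 fps -> ~{35 * (1 + fps_gain):.0f} fps (+{fps_gain:.0%})")  # ~42 fps, ~+20%
```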

So my subjective conclusion is "Ohhh over-clock your CPU". Especially if you can do it with a flick of a switch like I did.


Overclocking my i5-3570K from 3.4 GHz to 4.5 GHz gave me a ~25% FPS boost. I invested about €30 in a proper CPU cooler (HR-02 Macho) to make sure the CPU won't overheat when stressed. Base performance was already (relatively) high. The €/% ratio was almost 1, which can be considered extremely good cost-effectiveness.
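
Roughly how those numbers break down, counting only the cooler as the cost of the overclock (just the arithmetic, using the figures quoted above):

```python
# i5-3570K example above: 3.4 -> 4.5 GHz, ~25% more FPS, ~EUR 30 spent on a cooler.
clock_gain = 4.5 / 3.4 - 1            # ~32% higher clock
fps_gain_percent = 25
cooler_cost_eur = 30

print(f"Clock increase: {clock_gain:.0%}")
print(f"Cost per % of FPS gained: ~{cooler_cost_eur / fps_gain_percent:.2f} EUR")  # ~1.20 EUR
```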


Did you not read what I said? In the 1980s to 1990 there was a "300%" performance increase from chips. Now we have this silly "doubling law" (Moore's Law). 25% is mediocre, 15% is mediocre and 30% is mediocre when all is said and done. I remember reading that in the 1980s overclocking was much more involved than just "soft overclocking", because when all is said and done, soft is still soft.

Overclockers then had to physically remove the clock oscillator chip and replace it with a faster one.

And again, overclocking is dangerous. CPUs use very low voltages, and changing them is not exactly safe, because the more voltage you add, the higher the current throughput, and higher current, if it hits dangerous levels, results in more resistance. The other thing is that the current drawn in PC circuits has to be very small: there is the relationship resistance = resistivity × length / area (R = ρ × l / A), so resistance decreases with a larger cross-sectional area, and PCs are already under a fair amount of strain with their tiny wires and electroplated traces. There are so many things that can go wrong.

P.S: http://resources.schoolscience.co.uk/CDA/16plus/copelech2pg1.html

As stated, the resistivity of the material increases at higher temperatures. The potential difference (P.D.) in DC circuits, namely PCs, is effectively constant, so the consistent square-wave voltage peaks will result in more heat. And even then, for the "ultra-overclocking" groups using liquid nitrogen, silicon has a nasty habit of misbehaving if you get it too cold: the CPU gets "cold bugs" and crashes, somewhere in the order of -185°C. I'd overclock, but... the risks are too high and the performance gains are minimal, even if a 20% increase is the difference between playable and unplayable.
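
For reference, the usual way to quantify that for the metal side (interconnects and traces) is the linear temperature-coefficient model R(T) = R₀·(1 + α·ΔT). A small sketch using copper's coefficient; transistor behaviour itself is more complicated than this:

```python
# Linear model for a metal conductor: R(T) = R0 * (1 + alpha * deltaT).
# Copper's coefficient is roughly 0.0039 per degree C; semiconductors behave differently.
ALPHA_COPPER = 0.0039

def resistance_ratio(delta_t_celsius):
    return 1 + ALPHA_COPPER * delta_t_celsius

# A part running 40 C hotter under an overclock:
print(f"~{resistance_ratio(40):.2f}x the resistance of the cooler case")  # ~1.16x
```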

My CPU is already clocked with a 200 × 33 multiplier and a maximum power peak of 118W for 1.0 seconds. It is not recommended to change or tweak this value, because longer high-wattage peaks will generate more heat, as it is already over the TDP of 95W, and I have been told by multiple computer stores: DO NOT overclock, ever. As I said, overclocking for a 25% performance boost is no use. We are fighting a losing battle, as expressed by Amdahl's Law: bolting more cores together does not mean faster performance as a whole. The CPU is going to be limited by the number of physical CPUs / cores, and by the time you get up to a 2000-core count you lose more and more of the parallel portion of the code, not to mention that a lot of the programming has to be, as I stated in other posts, "serialised". So even if you speed up the clock, you are still going to be limited by the programming and by the serialisation of the code needed to do operations which would otherwise be parallel. And Lifehacker also states that they overclocked a 2.9GHz CPU to 3.8GHz with a 3.7% increase in 3DMark11.
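
Since Amdahl's Law keeps coming up, this is what it actually predicts once part of the work is serial. The 90% parallel fraction below is just an example figure, not a measurement of any real game or engine:

```python
# Amdahl's Law: speedup(n) = 1 / ((1 - p) + p / n), where p is the parallel fraction.
def amdahl_speedup(parallel_fraction, cores):
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / cores)

p = 0.90   # assume 90% of the workload parallelises; the rest stays serial
for cores in (2, 4, 8, 64, 2000):
    print(f"{cores:>4} cores: {amdahl_speedup(p, cores):.2f}x")
# The speedup can never exceed 1 / (1 - p) = 10x, no matter how many cores you add.
```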


I didn't quite get the point either. Could you clarify your point a bit please?

Here's a quote from the same article:

Overclocking your CPU

[---]

Results: 3.7% Bump (in 3DMark). Overclocking the CPU brought our score up from 6375 to 6549. It's a pretty good increase (2.7%), and with a better video card in the system to try and eliminate those bottlenecks we talked about, it pushed the score even farther with a 3.7% increase. So, while it won't cause an insane performance jump, it can certainly make a game on the brink of playability more useful—in fact, I've seen it do so. Now, this all depends on the game—certain games are more CPU heavy, and will see a bigger performance increase than others from a good CPU overclock (Starcraft 2, Crysis, and Grand Theft Auto IV leap to mind). In addition, remember those bottlenecks—if your CPU is the weak link in the chain, overclocking it will raise performance significantly, especially in areas where the game slows down because there are too many things on-screen (think big towns in games like World of Warcraft). So your results will vary, but the bottom line is that it can definitely be worth the time and effort.

The performance increase in CPU-dependent and CPU-heavy games like Arma (WoW is a very light game to run compared to Arma) can be, and is, much bigger than the performance gain in artificial tests like 3DMark. The main issue in Arma is that both of the most resource-heavy operations (DirectX draw calls and the main AI calculation) are processed in the main thread. This leads to a situation where the main thread becomes a bottleneck and other parts of the hardware are used less than would be possible, as everything depends on the main thread and can't proceed before it gets the necessary information from the main thread.

When the CPU is overclocked, the main-thread bottleneck becomes smaller and the usage of the other parts of the hardware increases, as they get the critical information their work depends on faster. This kind of "echo" effect causes a positive chain reaction, which leads to a multiplied performance gain compared to artificial performance tests like 3DMark.

Artificial tests like 3DMark are modularized, which means that each component like the CPU and GPU has almost 100% usage during the tests; and because there aren't any dependencies between components, the performance gains are much smaller than in actual applications like games. Because of the sim-like nature of the game, Arma has very many dependencies in the code, which cause the CPU dependency and, furthermore, the main-thread bottleneck, and thus make the overclocking totally worth it.

TL;DR: Overclocking is definitely worth it in Arma, as long as you know what you're doing (or, if you're inexperienced in overclocking, as long as you follow the guidance of newbie guides). Most new motherboards also have a single-click mild OC button in UEFI (the replacement for BIOS) that overclocks the CPU a little and gives you a performance boost, especially in games like Arma.
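
As a toy illustration of that main-thread argument (invented numbers, not engine profiling), a CPU-bound frame tracks the CPU clock almost linearly as long as the GPU isn't the limiting side:

```python
# Toy frame-time model: a serial main thread, some work spread over the other cores,
# and a GPU stage that does not speed up when the CPU is overclocked.
# All millisecond figures are invented for illustration, not profiled from the engine.
def fps(main_ms, parallel_ms, gpu_ms, cores, cpu_clock_multiplier):
    cpu_ms = (main_ms + parallel_ms / max(cores - 1, 1)) / cpu_clock_multiplier
    return 1000.0 / max(cpu_ms, gpu_ms)   # the slower side sets the frame rate

base = fps(30.0, 16.0, gpu_ms=18.0, cores=4, cpu_clock_multiplier=1.00)
oc   = fps(30.0, 16.0, gpu_ms=18.0, cores=4, cpu_clock_multiplier=1.17)
print(f"{base:.1f} -> {oc:.1f} fps (+{oc / base - 1:.0%})")   # nearly the full +17% shows up
```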


I call bullshit. Overclocking will do nothing for a lot of users.


It would be nice to see arguments and facts that support your statement. Opinion alone means (well, should mean) nothing, as a human being is an irrational creature, and thus opinions without arguments say nothing about the truth value of a given statement.

I call bullshit. Overclocking will do nothing for a lot of users.

Experience tells otherwise, reread Ezcoo's post you quoted.

It would be nice to see arguments and facts that support your statement. Opinion alone means (well, should mean) nothing, as a human being is an irrational creature, and thus opinions without arguments say nothing about the truth value of a given statement.

While it's not hard fact, depending on the heat levels and heat tolerance of the actual CPU, you may achieve a high overclock but get worse performance due to errors. This is why overclocking isn't some blanket fix for poor performance. I know that with the two Phenom II processors I've had, a 940 BE and now a 965 BE, going over 3.6GHz actually caused a large drop in performance, especially in ArmA. Granted, I technically get "better" performance by not overclocking as much, but I still suffer from the fact that ArmA is programmed in a very single-threaded and monolithic way. Even i7s at 4.5GHz suffer from the fact that ArmA is single-threaded and monolithic in nature.

Also, some people don't like overclocking, as it does wear your hardware down faster by running it out of spec. You can't just say "Hey, run everything out of spec!" as an explanation for problems inherent to the game.


Excessive stress testing of the CPU after overclocking, with programs like Prime95 and Intel Burn Test, is a standard measure whose purpose is to verify that the CPU can work without errors at the higher clocks. This is strongly emphasized in overclocking guides. If the CPU calculations produce errors, the guides advise you to lower the voltages and/or clocks. Errors during gameplay are a result of not following the guides and general guidelines of overclocking, not of the overclocking itself.

Note that DirectX draw calls don't support multithreading, and that's something the Arma/RV developers can't affect. Only Microsoft (as the developer of DirectX) can change it. Thankfully, it's going to change in the future.

I didn't tell anyone to overclock their CPU in my posts, but rather explained why I personally find it a good option, basing my opinion on facts. You're right in that overclocking decreases the lifetime of your components, but whether it decreases it so much that it would matter is a whole different question.

For example, it's estimated that modern CPUs last over 15 years at default clocks, and overclocking would decrease the lifetime by 1 to 5 years. So even if you botch the OC a bit (by not following the guides) and the lifetime of the CPU drops to about 10 years, I don't think it would matter much. Not many of us will be using the same PC in 2024, not for gaming at least. It would be like playing today on a PC that was built 10 years ago; how many of us regularly use a 10-year-old PC today? Not many, I think. It's true that the progress with CPUs is remarkably slower today than it was 10 years ago, but I still think no one will be using a 10-year-old PC for regular gaming in 2024.

Overclocking doesn't fix everything, of course. The main-thread bottleneck is still there. But should overclocking not be an option just because it's not perfect? Nope - it is still an option, because it still makes the situation better. It's the common logical fallacy of perfectionism: if a solution is not perfect, it's out of the question, even if it is better than the original one.


You don't get a uniform % increase across all applications; it will vary for each piece of software you try. Applications that are limited by the performance of one core (like Arma 3) will benefit significantly.

So take your hyperbole and leave.


St.Jimmy, Ezcoo and greenfist refute the thread starter with facts.

@Windies

I think the question was: how much does overclocking the GPU/CPU help in Arma 3? There are risks if there is no proper cooling and with a lack of knowledge (lifetime/throttling etc.). The point of tips like overclocking is not to excuse anything but to HELP raise performance. That's very simple to understand, I think.


For example, I top out at 48°C under Prime95 load @ 3.8GHz with a Phenom II 965, a Rifle cooler and a Delta 3400rpm 120mm fan, inside a CM Storm Sniper case with 3x200mm, 1x140mm and 1x120mm fans. That's the Tjunction temp, which is well under the 62°C maximum temp for Phenom IIs. It's the simple fact that Phenom IIs in general love to run cool: the cooler they run, the faster they run, and you eventually reach a point where you can still safely overclock further but are actually hurting your performance to a degree. It's not just about safety but about the tolerances of the CPUs and hardware themselves, and how even operating within a safe margin while overclocking can still give poor results. Even with Intels, there's a point where, even though you're within safety margins, overclocking more does nothing. Overclocking is not some magical panacea for poor performance in ArmA, and running things "out of spec" should never become the norm as a workaround for poor optimization and coding.

As far as synthetic benchmarks are concerned, they're shit for measuring real-world performance. They're great if you wanna throw a number around to boast about your system, but for measuring how something will actually run in ArmA or Battlefield or CoD or whatever, they're shit.


And what exactly do you want to say? That overclocking has limits? Or that Arma 3 benchmarks are better for Arma 3 than synthetic benchmarks? Hell..... YES! :rolleyes:
