McArcher

Low GPU load in game (single-GPU video card)


Hello, I have got a question about in-game GPU load.

I have an AMD Phenom II X3 running at 3500 MHz and an HD 4890 overclocked to 900/4400.

Two days ago I had some time to play ArmA 2, so I decided to measure all the temperatures, fan speeds and clocks in my system via RivaTuner and its plugins.

I was very surprised to see an in-game GPU load of only about 28-50%. It occasionally exceeded 50%, but very rarely. (CPU load was also well below 100%.) GPU load hit 100% only in the intro scenes (the main menu with a 3D world as a background). Is this "great optimization" for the hardware? I could understand 50% GPU load if Crossfire weren't working, but I have a single-GPU card, so this seems very strange to me. Please comment.
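For anyone repeating the measurement: rather than eyeballing the on-screen numbers, it helps to log samples and summarize them afterwards. A minimal sketch in Python, assuming a hypothetical two-column (timestamp, GPU load %) CSV export; RivaTuner's real log format differs.

```python
# Sketch: summarize GPU-load samples exported from a monitoring tool.
# The CSV layout here (timestamp, gpu_load) is a hypothetical example,
# not RivaTuner's actual export format.
import csv
import io

def summarize_gpu_load(csv_text):
    """Return (min, max, average, fraction of samples at >= 90% load)."""
    loads = [float(row[1]) for row in csv.reader(io.StringIO(csv_text))]
    busy = sum(1 for v in loads if v >= 90.0) / len(loads)
    return min(loads), max(loads), sum(loads) / len(loads), busy

log = "0.0,28\n0.5,41\n1.0,50\n1.5,100\n2.0,35\n"
lo, hi, avg, busy = summarize_gpu_load(log)
print(lo, hi, avg, busy)  # 28.0 100.0 50.8 0.2
```

A summary like this makes it easy to see whether the card was starved most of the session or only in bursts.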

P.S. Latest BIOS, and latest drivers for the motherboard, video card (Catalyst 9.11), sound card, DirectX and so on.


well sorry to say that.....

in before the lock!

Someone will answer; you would have been better off quoting yourself and replying with 'Anyone?'


Are you amazed at the overclock, or what?

Hint: I'm running all stock, and my PC doesn't break a sweat playing ArmA 2.


I want to see higher FPS. But it's low even though the video card is not working at 100%, so FPS could be higher if the game's code were optimized... IMHO.


The game has a lot of area and entities; it's going to be hard on any processor/GPU. I don't know just how much the game loads onto the CPU/GPU or into memory, or whether it loads everything as a constant stream or bit by bit. Either way, it's still much more than many other games load up.

My comp runs it quite well, sometimes with the odd glitch, but that's (I think) down to memory.


Then why do other PC games always use nearly 100% of the GPU? I just don't understand it...


pass?

Might be down to the ballistics scripts etc. Most games have a relatively simple point-and-shoot model: regardless of distance, the bullet hits where the crosshairs were. This game is a little more in-depth, with dynamic wind etc.; it also has BDC (bullet drop compensation) over distance.

Each fired round is being processed, each AI is being processed, and the weather is being processed along with grass movement, tree sway, rain and wind; indeed, almost everything you see in the real world.
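As a rough illustration of the per-round cost, here's what integrating a single bullet's flight might look like. This is a toy point-mass model with made-up drag constants, not the engine's actual ballistics code.

```python
# Sketch: the per-round work a ballistics simulation does each tick.
# Simplified point-mass model with linear drag; the constants and the
# integration scheme here are illustrative, not the engine's.

def simulate_round(muzzle_velocity, drag_coeff=0.005, dt=0.01, g=9.81):
    """Integrate one bullet until it drops 2 m below the muzzle line.
    Returns (distance_travelled_m, flight_time_s)."""
    x, y = 0.0, 0.0
    vx, vy = muzzle_velocity, 0.0
    t = 0.0
    while y > -2.0:
        vx -= drag_coeff * vx * dt      # horizontal drag slows the round
        vy -= g * dt                    # gravity produces the bullet drop
        x += vx * dt
        y += vy * dt
        t += dt
    return x, t

dist, t = simulate_round(900.0)
print(round(dist), round(t, 2))
```

Multiply that loop by every round in flight, every frame, and it's easy to see why this costs more CPU than a hitscan check.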

If you can, set your video card back to stock settings and try it that way; sometimes OC'd machines struggle more than stock-tuned ones.

pass?

Might be down to the ballistics scripts etc. Most games have a relatively simple point-and-shoot model: regardless of distance, the bullet hits where the crosshairs were. This game is a little more in-depth, with dynamic wind etc.; it also has BDC (bullet drop compensation) over distance.

Each fired round is being processed, each AI is being processed, and the weather is being processed along with grass movement, tree sway, rain and wind; indeed, almost everything you see in the real world.

If you can, set your video card back to stock settings and try it that way; sometimes OC'd machines struggle more than stock-tuned ones.

That's why we have multicore CPUs: to process all of these simultaneously ;)


Yes, but at the same time, if a stock machine runs smoothly where an OC'd machine doesn't, it's usually because the stock machine runs cooler and therefore processes things better.

8 times out of 10 it's the machine, not the program. I ran heavily OC'd in the past and only ran into problems. I run mine stock now, and it's a slower machine than yours with only 2 GB of RAM; it runs almost flawlessly.


An interesting article.

So, all that means my video card is too fast for this game to use all of its processing units, so there are periods when no data is sent to the video card because the CPU is busy processing other things (e.g. AI)?

P.S. If I had an 8-core CPU (for example), could I increase the view distance and load my GPU more heavily (to 100%)?


No, the opposite in fact. What that article describes is that there's a limit to the usefulness of multiple cores, and that, to avoid the overhead involved in passing large amounts of data to a separate rendering thread, rendering is still largely executed in the primary thread, with separable routines handed out to additional cores.
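A minimal sketch of that scheme, with invented names and workloads: the primary thread keeps the draw itself, while separable per-chunk routines (culling here) are farmed out to a pool.

```python
# Sketch of the scheme the article describes: the main thread keeps the
# render loop, while separable jobs (fake visibility culling here) go
# to a worker pool. Names and workloads are invented for illustration.
from concurrent.futures import ThreadPoolExecutor

def cull_objects(chunk):
    # stand-in for a separable per-chunk routine
    return [obj for obj in chunk if obj % 2 == 0]

def render_frame(visible):
    return f"drew {len(visible)} objects"

scene = list(range(100))
chunks = [scene[i:i + 25] for i in range(0, 100, 25)]

with ThreadPoolExecutor(max_workers=3) as pool:
    # workers run the separable routine in parallel...
    results = pool.map(cull_objects, chunks)
    visible = [obj for chunk in results for obj in chunk]

# ...while the primary thread keeps the rendering itself
print(render_frame(visible))  # drew 50 objects
```

Note that the main thread still has to gather the results before it can draw, which is exactly the data-handover cost the article is talking about.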


There are some strange things in the engine, for example the following: the more AI there are, the lower the CPU usage. Is it possible to log GPU usage on Nvidia cards?

@Colt

Overclocking has nothing to do with the GPU usage. But it's true that problems can be caused by overclocking (overheating effects or whatever).

sorry for my bad english :)

There are some strange things in the engine, for example the following: the more AI there are, the lower the CPU usage. Is it possible to log GPU usage on Nvidia cards?

@Colt

Overclocking has nothing to do with the GPU usage. But it's true that problems can be caused by overclocking (overheating effects or whatever).

1. I think RivaTuner can measure the load of every modern GPU. There's also a statistics server in it, and it can show any data on screen (OSD): CPU/GPU/memory loads and various other things. Just search in the plugins section; some plugins are available on the internet, just Google for them.

2. Overheating is not a problem. I tested my card under the OCCT GPU test, when the VRM was heating to over 120 °C! I had to increase the fan speed in the settings to bring the temperature down to 115 °C, while the hottest part of the GPU itself, the memory controller, was operating at 98 °C according to RivaTuner; all of this was stable for more than 2 hours. Then I ran ArmA 2 and saw a maximum VRM temperature of about 65-70 °C, and was shocked by such a low number :) So if it could survive OCCT, overheating is not a problem.

By the way, overclocking is never bad, except in cases where the hardware intentionally lowers its clocks when overheating, like the HD 59xx video cards according to AMD, but I don't have one and don't know how that works... I monitored the GPU and video-memory clocks; they were stable all the time, even when I "used my card as an oven" :D

There are some strange things in the engine, for example the following: the more AI there are, the lower the CPU usage. Is it possible to log GPU usage on Nvidia cards?

Yes, I noticed this and other "anomalies" too.

One is the constant low GPU usage, and with all due respect, this has nothing to do with multi-CPU/multi-threading per se (in the case of the low GPU usage), speaking now in the context of the developer-blog article.

I monitored/logged my ATI HD 4890 Toxic 1 GB (heavily overclocked from the factory) during gaming, and it actually rarely got over 75% usage...

Then I played different games, like GTA 4 and such, and the usage was always near 100%; very strange indeed.

Another thing I noticed (I have a watt meter on the power line into the PC) is that even though ArmA 2 claims to use multiple CPUs and whatnot, the power consumption is always much lower with ArmA 2 than with, for example, GTA 4.

Of course this saves money and produces less heat, but might it not be possible that ArmA 2 doesn't exploit the full potential of today's hardware?

Somehow I have that feeling... :cool:


As I interpret that article, if your CPU, or the core that runs the primary thread (which, as noted above, runs rendering), is choked, it will not be able to keep your GPU in work. There is therefore a direct correlation between CPU usage and GPU usage, and either may have a negative influence on framerate. Without trying to say whether the article is correct or not, it points out that a determination to keep all cores busy (as evidenced in your GTA IV comparison) may prove an exercise in futility because of the overhead involved in sharing data between threads (see the possible 20%-only note).
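The CPU/GPU correlation can be seen with a toy model: if the two stages overlap, the slower one sets the frame time, and the GPU is busy only for its own share of it. The numbers below are illustrative, not measurements.

```python
# Sketch: why a CPU-bound frame leaves the GPU partly idle. Toy numbers.
def frame_stats(cpu_ms, gpu_ms):
    frame_ms = max(cpu_ms, gpu_ms)          # stages overlap; slowest wins
    fps = 1000.0 / frame_ms
    gpu_util = 100.0 * gpu_ms / frame_ms    # GPU busy only part of each frame
    return round(fps, 1), round(gpu_util)

print(frame_stats(33.0, 10.0))   # CPU-bound frame: GPU sits near 30% busy
print(frame_stats(10.0, 33.0))   # GPU-bound frame: GPU pegged at 100%
```

In the CPU-bound case, a faster GPU changes nothing: the frame time, and therefore the FPS, is set entirely by the CPU side.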

1. I think RivaTuner can measure the load of every modern GPU. There's also a statistics server in it, and it can show any data on screen (OSD): CPU/GPU/memory loads and various other things. Just search in the plugins section; some plugins are available on the internet, just Google for them.

2. Overheating is not a problem. I tested my card under the OCCT GPU test, when the VRM was heating to over 120 °C! I had to increase the fan speed in the settings to bring the temperature down to 115 °C, while the hottest part of the GPU itself, the memory controller, was operating at 98 °C according to RivaTuner; all of this was stable for more than 2 hours. Then I ran ArmA 2 and saw a maximum VRM temperature of about 65-70 °C, and was shocked by such a low number :) So if it could survive OCCT, overheating is not a problem.

By the way, overclocking is never bad, except in cases where the hardware intentionally lowers its clocks when overheating, like the HD 59xx video cards according to AMD, but I don't have one and don't know how that works... I monitored the GPU and video-memory clocks; they were stable all the time, even when I "used my card as an oven" :D

You are a crazy git. Running a card at 120 °C is a death sentence for it. One piece of advice I can give you is to start saving cash for a new card, because in a few months' time it will die, if not earlier.

I had the same with mine. When I bought it, it ran at 60 °C under full load; a couple of months later it was running at 65 °C, and the temperature kept rising each month. (It began to rise after I played Crysis without RivaTuner; back then the drivers didn't raise the fan speed automatically, so I burned it a bit.)

Eventually I had temps of 100-105 °C, but it was still running OK; I "solved" the issue by removing the metal shroud. Temps dropped back to 60 °C at full load, and after about four months my card dropped dead. I could still use the desktop and browse the internet, but as soon as I ran any 3D application I got a blue screen, a restart, and then yellow artifacts all over my monitor that looked like some sort of maze, with a maximum resolution of 640x480 and no video hardware detected in Device Manager.

So I returned the card and was told that my VRAM had been damaged. Guess from what?

So yeah, don't play with temperatures.

Edited by Lamerinio

As I interpret that article, if your CPU, or the core that runs the primary thread (which, as noted above, runs rendering), is choked, it will not be able to keep your GPU in work. There is therefore a direct correlation between CPU usage and GPU usage, and either may have a negative influence on framerate. Without trying to say whether the article is correct or not, it points out that a determination to keep all cores busy (as evidenced in your GTA IV comparison) may prove an exercise in futility because of the overhead involved in sharing data between threads (see the possible 20%-only note).

No, what I said was that the GPU on my graphics card (a single-GPU card) is only rarely over 75% utilized in ArmA, while in most other games it's at the 100% level.

I have a 3.4 GHz quad-core, by the way, and I saw ArmA 2 rarely, if ever, take more than 35-40% of my available CPU power (which would mean at most about 1.5 cores in use).

Alright, now you have the information :D

Alright, now you have the information :D

Same information I had before: your GPU is not fully utilised, nor are all four cores of your CPU; I get it. All of which is consistent with the information in Suma's article: program flow and rendering are still largely dependent on a single core, and the rationale for that is the principal point of the article. I also assume that the reason he decided to post this now is the numerous posts like yours wondering why not everything is maxed out. It's all in the article; have another read.

You are a crazy git. Running a card at 120 °C is a death sentence for it.

1. It was a very heavy synthetic test by OCCT (no such load occurs in real games).

2. It was the temperature of VRM phase 3, the hottest part of a video card's PCB, and this VRM always heats up a lot, even at stock clocks and voltages! I'll give an example so I'm not telling fairy tales: in the OCCT GPU test, my 4890 without any OC (850/3900 and 1.3125 V on the GPU) reached exactly 100 °C at 3 minutes 0 seconds into the test, and 102 °C at 5:00... I've read somewhere that VRMs heat to over 100 °C and that this is normal for them. It's ATI/AMD's problem; my card is under warranty, and if it burns out I'll ask for a new one. It's not my problem.

---------- Post added at 08:42 PM ---------- Previous post was at 08:24 PM ----------

One piece of advice I can give you is to start saving cash for a new card

ArmA 2 can't even max out my 4890; I think on a 5870 it would use only 25% of the GPU :D:D:D

Or maybe you meant that it will burn out and I will buy a new one? The warranty period is long...

---------- Post added at 08:48 PM ---------- Previous post was at 08:42 PM ----------

And... I don't believe that the rendering code cannot be optimized further or divided into threads. AI pathfinding and rendering can run simultaneously: the AI keeps searching for a better route while another CPU core sends info to the GPU to render the current positions of objects. If an AI hasn't found its route yet, it keeps searching while the other cores render the next frame of the world with the other objects. Meanwhile, other AIs or humans travel across the world and we render them; they have already found their routes. We are not in DOS, and the CPU is multicore :yay:
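The overlap being proposed can be sketched like this (a toy model, not how the engine actually schedules work): pathfinding runs on another core while the render loop keeps drawing the last known positions and picks up the route when it's ready.

```python
# Sketch of the proposed overlap: AI pathfinding runs in the background
# while the main loop keeps rendering the last known position. Names
# and workloads are invented for illustration.
from concurrent.futures import ThreadPoolExecutor

def find_path(start, goal):
    # stand-in for an expensive search; returns waypoints
    return [start, (start + goal) // 2, goal]

frames = []
with ThreadPoolExecutor(max_workers=1) as ai_core:
    pending = ai_core.submit(find_path, 0, 10)   # AI searches in background
    position = 0
    for _ in range(3):                           # render loop does not wait
        frames.append(f"rendered unit at {position}")
    path = pending.result()                      # pick up the route when done
    position = path[-1]

print(len(frames), path)  # 3 [0, 5, 10]
```

This works when the renderer can tolerate reading stale positions for a frame or two; the reply below points at the cases where it can't.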

Edited by McArcher

1. It was a very heavy synthetic test by OCCT (no such load occurs in real games).

2. It was the temperature of VRM phase 3, the hottest part of a video card's PCB, and this VRM always heats up a lot, even at stock clocks and voltages! I'll give an example so I'm not telling fairy tales: in the OCCT GPU test, my 4890 without any OC (850/3900 and 1.3125 V on the GPU) reached exactly 100 °C at 3 minutes 0 seconds into the test, and 102 °C at 5:00... I've read somewhere that VRMs heat to over 100 °C and that this is normal for them. It's ATI/AMD's problem; my card is under warranty, and if it burns out I'll ask for a new one. It's not my problem.

---------- Post added at 08:42 PM ---------- Previous post was at 08:24 PM ----------

ArmA 2 can't even max out my 4890; I think on a 5870 it would use only 25% of the GPU :D:D:D

Or maybe you meant that it will burn out and I will buy a new one? The warranty period is long...

100 °C is way too high, and it's not normal. At 105 °C the card will start to clock itself down to prevent any further heating, and after that it will shut itself off, which results in a system crash. After a reboot it will work like before, all OK.

What I was saying is that once you reach the card's maximum temperature, which is around 105 °C or so, and cook it a bit at those temps, the card's lifetime starts ticking down. Once you've fried it at high temps, it will start to die slowly, no matter what temperatures you play at afterwards.

It's good if you've got a lifetime warranty; if not, I'll laugh once you pass your warranty period and the card dies. :D

You'd better start monitoring your temps on a daily basis and see whether they are rising or not.


Don't get hung up on % usage... A GPU getting to 100 °C is rare now, with all the downclocking kicking in at 75 °C or 85 °C; that's why you see issues, the downclocking. (Really, blow the dust out, turn up the fans, or better yet hack the BIOS so ATI and Nvidia don't choke our $350+ investment!) Also, try different programs to check the percentage; I have seen it change between Everest, RivaTuner and GPU-Z, and with different drivers. But if it is low, you need to log it, not alt-tab out and in (though RivaTuner has the history). Anyhow, I can get 99% on all four GPUs I have...

I don't believe that the rendering code cannot be optimized further or divided into threads. AI pathfinding and rendering can run simultaneously: the AI keeps searching for a better route while another CPU core sends info to the GPU to render the current positions of objects. If an AI hasn't found its route yet, it keeps searching while the other cores render the next frame of the world with the other objects. Meanwhile, other AIs or humans travel across the world and we render them; they have already found their routes. We are not in DOS, and the CPU is multicore :yay:

Talk to any game developer and you will find that you are wrong. Many threads are co-dependent and cannot afford any delay between them, even though on the surface they appear to be unrelated. If the threads cannot be rejoined at the precise moment they are needed, the game will crash.
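The co-dependence can be illustrated with a toy two-thread frame loop: the simulation and the renderer must rejoin at a fixed point every frame before either may continue, which is exactly the kind of synchronization that limits how freely work can be spread across cores. This is an invented example, not engine code.

```python
# Sketch of co-dependent threads rejoining every frame: the renderer
# may only read the state after the sim has written it, so both threads
# meet at a barrier twice per frame. Toy example with invented names.
import threading

results = []
barrier = threading.Barrier(2)   # sim thread + render thread
state = {"positions": 0}

def simulate(frames):
    for _ in range(frames):
        state["positions"] += 1  # write this frame's simulation output
        barrier.wait()           # rejoin: render may now read the state
        barrier.wait()           # wait until render has consumed it

def render(frames):
    for _ in range(frames):
        barrier.wait()           # wait for the sim to finish the frame
        results.append(state["positions"])
        barrier.wait()           # release the sim into the next frame

t1 = threading.Thread(target=simulate, args=(3,))
t2 = threading.Thread(target=render, args=(3,))
t1.start(); t2.start(); t1.join(); t2.join()
print(results)  # [1, 2, 3]
```

Every one of those waits is time a core spends doing nothing, which is one reason adding cores doesn't linearly raise CPU (or GPU) utilization.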

