Low CPU utilization & Low FPS
And you still misunderstand the issue. It's not about some magical or mythical "problem-solving" utilization number. It's about the fact that when the engine is doing things that put the hardware under load, be it render load, physics load, AI load or whatever, utilization DROPS. That in itself is symptomatic of a problem within the engine, not of the hardware, and not of the hardware being too weak or too slow to run the software/engine.

It would be like flooring the accelerator and watching the RPM gauge drop, slowly or all at once, while getting little to no power output. It's a symptom of a problem.

Obviously people are having problems, otherwise this thread wouldn't exist, would it?
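To make that concrete, here's a toy C++ sketch (purely an illustration, not Arma's actual engine code) of a frame with a serial simulation phase followed by a parallel render phase. As the serial phase grows, the worker cores sit idle for more of each frame, so total CPU utilization drops at the same time as fps; exactly the "RPM gauge falling under full throttle" effect:

```cpp
// Toy model, not Arma's actual code: each "frame" is a serial phase
// (simulation on the main thread) followed by a parallel phase (render
// jobs on all cores). As the serial phase grows, the workers sit idle
// for more of each frame, so total CPU utilization DROPS while fps drops.
#include <algorithm>
#include <chrono>
#include <cstdio>
#include <thread>
#include <vector>

using clk = std::chrono::steady_clock;

// Busy-spin for roughly `ms` milliseconds (stands in for real work).
void burn(double ms) {
    auto end = clk::now() + std::chrono::duration<double, std::milli>(ms);
    while (clk::now() < end) { /* spin */ }
}

int main() {
    const unsigned cores = std::max(2u, std::thread::hardware_concurrency());
    for (double serial_ms : {2.0, 10.0, 30.0}) {   // growing "AI/sim" load
        const double parallel_ms = 4.0;            // fixed render-job size
        auto t0 = clk::now();
        for (int frame = 0; frame < 20; ++frame) {
            burn(serial_ms);                       // serial phase: 1 core busy
            std::vector<std::thread> jobs;         // parallel phase: all cores
            for (unsigned i = 0; i < cores; ++i)
                jobs.emplace_back(burn, parallel_ms);
            for (auto& j : jobs) j.join();
        }
        double total_ms = std::chrono::duration<double, std::milli>(clk::now() - t0).count();
        double busy_core_ms = 20 * (serial_ms + parallel_ms * cores);
        std::printf("serial %4.0f ms/frame -> %6.1f fps, ~%4.1f%% total CPU\n",
                    serial_ms, 20.0 * 1000.0 / total_ms,
                    100.0 * busy_core_ms / (total_ms * cores));
    }
}
```

The absolute numbers depend on your core count, but the direction is the point: on an 8-core machine the printed utilization falls from roughly 70% toward 25% as the serial load triples, even though the "game" is doing strictly more work.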

I understand the issue, as I have it myself, but it seems the issue won't get resolved the classical way, i.e. through major changes to the engine.

They keep adding more and more nice content, which is great in itself but doesn't solve any issues, and since we're already through all the little CPU/GPU optimizations, the only option left seems to be making things less hard on the CPU.

I can only guess what the main thread is doing or waiting for, but if AI-related things are part of it, then it might make sense to reduce some parameters or just "dumb down" the AI a bit.

:)


Find the place where you're getting 15 fps -- tweak there.

The placES where I'm getting 15 fps... there isn't much that can be done settings-wise on my end to improve performance. That is, in fact, part of the problem.

---------- Post added at 01:14 ---------- Previous post was at 00:54 ----------

Maybe the devs should implement profile options to dumb the AI down, as it seems the AI parameter calculations are the main cause of the bad performance.

I can reproduce this issue without AI quite easily. AI is just another thing that compounds the issue (because with every additional thing you ask it to do once it has already hit its cap, you are just adding to the problem).

Dumbing things down shouldn't be considered an acceptable solution, in my opinion. We've had enough of things getting trimmed down and out. A solution would be to actually fix it after all these years. Or at least come clean and stop engaging in misleading marketing like calling it a "brand new" engine. (In my opinion.)


I can reproduce this issue without AI quite easily. AI is just another thing that compounds the issue (because with every additional thing you ask it to do once it has already hit its cap, you are just adding to the problem).

Dumbing things down shouldn't be considered an acceptable solution, in my opinion. We've had enough of things getting trimmed down and out. A solution would be to actually fix it after all these years. Or at least come clean and stop engaging in misleading marketing like calling it a "brand new" engine. (In my opinion.)

How much better does a mission run if you take all the AI out and create something like a human-only deathmatch scenario?

:confused:

It's about the fact that when the engine is doing things that put the hardware under load, be it render load, physics load, AI load or whatever, utilization DROPS. That in itself is symptomatic of a problem within the engine, not of the hardware, and not of the hardware being too weak or too slow to run the software/engine.

Yep. I made a thread a couple of years ago about Arma 2 and the same behavior: the more AI, the lower the CPU utilization. Highest utilization in the empty editor (lol), and in EVERY other scenario (MP/AI/scripts) lower utilization. Like maverick96 and others said...

It's been such an obvious thing since Arma 2 that I have to assume it's well known to the developers but they can't fix it. That's why I tried to compensate with a watercooled, heatspreader-modified 3570K @ 4.9 GHz and 2666 MHz RAM... and failed in scenarios with 100+ AI :p

I understand the issue, as I have it myself, but it seems the issue won't get resolved the classical way, i.e. through major changes to the engine.

They keep adding more and more nice content, which is great in itself but doesn't solve any issues, and since we're already through all the little CPU/GPU optimizations, the only option left seems to be making things less hard on the CPU.

I can only guess what the main thread is doing or waiting for, but if AI-related things are part of it, then it might make sense to reduce some parameters or just "dumb down" the AI a bit.

:)

Then I misunderstood you and mistook the roll-eyes emote in your post as sarcasm. My apologies. I think it's a combination of AI processing, memory limitations which cause issues with the number of objects displayed, and the simple fact that the engine is extremely tied to the main thread due to the scripting engine and the extent to which the game makes use of scripting. I don't think it's any one thing; I think it's a lot of medium-sized problems in combination.

---------- Post added at 10:31 ---------- Previous post was at 10:23 ----------

Yep. I made a thread a couple of years ago about Arma 2 and the same behavior: the more AI, the lower the CPU utilization. Highest utilization in the empty editor (lol), and in EVERY other scenario (MP/AI/scripts) lower utilization. Like maverick96 and others said...

It's been such an obvious thing since Arma 2 that I have to assume it's well known to the developers but they can't fix it. That's why I tried to compensate with a watercooled, heatspreader-modified 3570K @ 4.9 GHz and 2666 MHz RAM... and failed in scenarios with 100+ AI :p

I believe it's fixable, but the work required is probably a very large task. Eventually it's going to come to a point where the engine cannot do what the developers want it to do in its current state, and it will more than likely require a very large rewrite of a good portion of the engine. It's either that, or keep changing your goals to fit the limitations of the engine, which is generally what happens.


I think Bohemia should invest in a Mantle renderer. And I say this while I don't even have an AMD video card right now! The Arma series' weak point seems to be the CPU time spent drawing objects on screen: it's pretty clear if you just rotate your view between a beach or the ground and a forest or a city, and compare the fps/CPU use. That suggests a lot of draw-call overhead on the CPU, which is the thing Mantle improves the most.
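For what it's worth, the draw-call argument is easy to model. Here is a toy sketch; `submit_draw_call` below is a hypothetical stand-in for fixed per-call driver overhead, not a real D3D or Mantle function:

```cpp
// Toy sketch of why draw-call count hurts the CPU. submit_draw_call() is a
// hypothetical stand-in for fixed per-call driver/runtime overhead (state
// validation etc.), not a real D3D function. Drawing 10,000 objects one by
// one burns far more CPU than 100 batched draws for the same GPU workload.
#include <chrono>
#include <cstdio>

volatile long sink = 0;                    // defeat the optimizer
void submit_draw_call() { for (int i = 0; i < 20000; ++i) sink = sink + i; }

double cpu_ms_for(int calls) {
    auto t0 = std::chrono::steady_clock::now();
    for (int i = 0; i < calls; ++i) submit_draw_call();
    return std::chrono::duration<double, std::milli>(
        std::chrono::steady_clock::now() - t0).count();
}

int main() {
    std::printf("10000 individual draws: %7.2f ms of CPU per frame\n", cpu_ms_for(10000));
    std::printf("  100 batched draws:    %7.2f ms of CPU per frame\n", cpu_ms_for(100));
}
```

The forest/city case is the 10,000-individual-draws row: the per-call cost is paid on the CPU whether or not the GPU is anywhere near its limit.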

Then I misunderstood you and mistook the roll-eyes emote in your post as sarcasm. My apologies. I think it's a combination of AI processing, memory limitations which cause issues with the number of objects displayed, and the simple fact that the engine is extremely tied to the main thread due to the scripting engine and the extent to which the game makes use of scripting. I don't think it's any one thing; I think it's a lot of medium-sized problems in combination.

---------- Post added at 10:31 ---------- Previous post was at 10:23 ----------

I believe it's fixable, but the work required is probably a very large task. Eventually it's going to come to a point where the engine cannot do what the developers want it to do in its current state, and it will more than likely require a very large rewrite of a good portion of the engine. It's either that, or keep changing your goals to fit the limitations of the engine, which is generally what happens.

Thanks for sharing your thoughts!

It's really frustrating... I can't even remember how long we've been waiting now for the "wonder" patch.

:(

---------- Post added at 22:06 ---------- Previous post was at 22:02 ----------

I think Bohemia should invest in a Mantle renderer. And I say this while I don't even have an AMD video card right now! The Arma series' weak point seems to be the CPU time spent drawing objects on screen: it's pretty clear if you just rotate your view between a beach or the ground and a forest or a city, and compare the fps/CPU use. That suggests a lot of draw-call overhead on the CPU, which is the thing Mantle improves the most.

Wouldn't it be better to move such things entirely to the GPU? Wouldn't things improve as well if PhysX could be moved to the GPU too?

:confused:


I believe the issue is due to threading not being aggressive enough, something only the engine team can work on.

From what I can tell, the main process utilises the other cores by using helper threads to offload work from the main thread. It seems the main thread may be starved of work because it has to wait for tasks on other threads to complete, or perhaps because it doesn't share the workload out to its helper threads as much as it should/could.

This is why I believe a CPU with efficient per-core processing (Sandy/Ivy/Haswell etc.) is better to have than one with more cores; that primary thread uses all it can get.

My multithreading experience is low, so I may be completely wrong. It would be nice to see members of the engine team open up about this topic in more detail.
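For illustration, the helper-thread pattern I mean looks roughly like the sketch below (my assumption about the design, not the engine's real code): the main thread offloads chunks of work, does its own share, then has to block until every helper finishes before the frame can complete.

```cpp
// Rough sketch of the "helper thread" pattern described above (an assumed
// design, not the engine's actual code). If too little is offloaded, or
// the helpers finish late, the main thread idles at the waits below.
#include <cstdio>
#include <future>
#include <numeric>
#include <vector>

// Stand-in for a chunk of offloadable per-frame work (e.g. one AI group).
long simulate_group(int size) {
    std::vector<long> v(size, 1);
    return std::accumulate(v.begin(), v.end(), 0L);
}

int main() {
    // Offload four groups to helper threads at the start of the frame.
    std::vector<std::future<long>> helpers;
    for (int g = 0; g < 4; ++g)
        helpers.push_back(std::async(std::launch::async, simulate_group, 1'000'000));

    long total = simulate_group(1'000'000);  // the main thread's own share

    // The frame cannot end until all helpers report back; any time spent
    // here is the main thread *waiting*, which shows up as low CPU use.
    for (auto& h : helpers) total += h.get();
    std::printf("frame result: %ld\n", total);
}
```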


While in game I only see 20fps on High settings at 1080p, and my CPU & GPU usage seem really low (see image below):

[screenshot: CPU & GPU usage]

System spec:

OS: Windows 8.1

Mobo: MSI 990FXA-GD80

CPU: AMD FX8350 (8-Core) 4.3GHz

GPU: EVGA 780 Classified 3GB

RAM: 16GB Corsair 1600MHz

PSU: Corsair TX750

I am currently trawling through the sticky at the top of this forum, but it's over 250 pages so it will take an eternity to read all of it.

I'm new to Arma, and just for reference, this PC can max out BF4 on Ultra @ 1080p at 120 fps, so either Arma 3 is really poorly optimised or there are some Arma-specific tweaks I need to do?



5% CPU usage seems excessively low (unless that usage readout is from one of the cores other than the main one). Normally folks will have the main core working pretty hard, and the others working little to not at all.

Have you double-checked your power settings? Is that 5% usage with Arma up and running? If so, I'd have to suspect it is reading the usage off one of the threads other than your primary one. I don't have Windows 8, but there should be a way to see the usage across all of your cores/threads in that performance tab. What I see in that picture, though, looks like idle usage.
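If Task Manager's single number is ambiguous, something like the Windows PDH sketch below (error handling omitted) prints per-logical-core load, which makes a pegged main core obvious even when the total looks idle:

```cpp
// Windows-only sketch using the PDH performance-counter API: prints the
// load of each logical core so one busy core + seven idle ones doesn't
// get averaged away. Link against pdh.lib; error handling omitted.
#include <windows.h>
#include <pdh.h>
#include <cstdio>
#include <vector>

int main() {
    PDH_HQUERY query = nullptr;
    PDH_HCOUNTER counter = nullptr;
    PdhOpenQueryW(nullptr, 0, &query);
    // The wildcard gives one counter instance per logical core (+ "_Total").
    PdhAddEnglishCounterW(query, L"\\Processor(*)\\% Processor Time", 0, &counter);
    PdhCollectQueryData(query);
    Sleep(1000);                 // % counters need two samples a second apart
    PdhCollectQueryData(query);

    DWORD bytes = 0, count = 0;  // first call just asks for the buffer size
    PdhGetFormattedCounterArrayW(counter, PDH_FMT_DOUBLE, &bytes, &count, nullptr);
    std::vector<BYTE> buffer(bytes);
    auto items = reinterpret_cast<PPDH_FMT_COUNTERVALUE_ITEM_W>(buffer.data());
    PdhGetFormattedCounterArrayW(counter, PDH_FMT_DOUBLE, &bytes, &count, items);

    for (DWORD i = 0; i < count; ++i)
        wprintf(L"core %-6s %5.1f%%\n", items[i].szName, items[i].FmtValue.doubleValue);
    PdhCloseQuery(query);
}
```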

5% CPU usage seems excessively low (unless that usage readout is from one of the cores other than the main one). Normally folks will have the main core working pretty hard, and the others working little to not at all.

Have you double-checked your power settings? Is that 5% usage with Arma up and running? If so, I'd have to suspect it is reading the usage off one of the threads other than your primary one. I don't have Windows 8, but there should be a way to see the usage across all of your cores/threads in that performance tab. What I see in that picture, though, looks like idle usage.

As above, the 5% was from just as I exited the game. Will try again with the multiple-core view open.

---------- Post added at 18:10 ---------- Previous post was at 17:53 ----------

OK, another screenshot below. TBH, with the graphics settings I've selected at 1080p I don't think I'm asking too much of my hardware. It's not the best spec in the world, but I'm only asking it to push out 1080p, and as said it happily maxes out Battlefield 4 at 80-120 fps all day long. (While playing BF4, CPU usage is around 50% and GPU usage is 99%, as expected.)

System spec:

OS: Windows 8.1

Mobo: MSI 990FXA-GD80

CPU: AMD FX8350 (8-Core) 4.3GHz

GPU: EVGA 780 Classified 3GB

RAM: 16GB Corsair 1600MHz

PSU: Corsair TX750

Game settings:

SAMPLING - 100%

TEXTURE - HIGH

OBJECTS - HIGH

TERRAIN - HIGH

SHADOW - HIGH

PARTICLES - HIGH

CLOUD - HIGH

PIP - HIGH

HDR - STANDARD

DYNAMIC LIGHTS - HIGH

OVERALL VIS - 3800

OBJECT - 3200

SHADOW - 50

DISPLAY - FULLSCREEN

RES - 1080P

ASPECT - 16:9

V-SYNC - DISABLED

INTERFACE - LARGE

BLOOM - 0

RADIAL BLUR - 0

ROTATION BLUR - 0

DOF - 100

SSAO - HIGH

CAUSTICS - DISABLED

FSAA - X2

ATOC - ALL TREES AND GRASS

PPAA - DISABLED

ANISO. FILTERING - STANDARD

Fraps shows 20-25fps while hanging around in a base with a handful of soldiers at ease and nothing going on, and you can see CPU/GPU usage below...

GPU sits at around 20%

CPU sits at around 30% utilization (where you see that 5% below, it's usually around 28% while playing Arma 3)

[screenshot: CPU & GPU usage while in game]



I feel your pain, man.

Based on your settings list, I would recommend turning SSAO off, as it takes a pretty good bite out of fps (in non-bottlenecking areas of the game; it probably won't make much of a difference either way in the areas that do crap out your usage).

Your object view distance is pretty high in my experience... I generally try to keep mine at 2k or less. Overall view distance makes less of a difference, and auto-detect puts me at 3800 on that one as well. But object view distance is something you can definitely notice a difference from lowering. I typically roll with it around 1800-2000, and my overall view at 3k-3800 or so.

Other than that, I think your settings are in a very reasonable place. Really, if you follow those suggestions, you should be able to turn some of that other stuff up and not notice any fps hit, based on your spec. I've not been an AMD guy for a long time now, so don't take it as apples-to-apples, but your GPU is comparable to mine: I have a Superclocked EVGA GTX 780. There is no difference between it and my previous 580 in the bottlenecking areas of the game. But in the rural areas and so on that don't crap out usage, I could either take a higher fps or roll with higher overall settings.

I generally try to find the sweet spot in my settings that gives the most GPU usage (in the non-crap parts of the game) before it starts to negatively impact fps, and then back down a touch from there.

SSAO and object view distance will typically impact your fps noticeably (in my experience). That mostly goes out the window when the game starts crapping on your usage, though. Peace.

---------- Post added at 13:34 ---------- Previous post was at 13:32 ----------

How much better does a mission run if you take all the AI out and create something like a human-only deathmatch scenario?

:confused:

Like I said, I don't think AI is a "cause" per se... it's just something that compounds the actual issue, because you're asking the CPU to do more. But yes... layering in AI makes things worse.

I feel your pain, man.

Based on your settings list, I would recommend turning SSAO off, as it takes a pretty good bite out of fps (in non-bottlenecking areas of the game; it probably won't make much of a difference either way in the areas that do crap out your usage).

Your object view distance is pretty high in my experience... I generally try to keep mine at 2k or less. Overall view distance makes less of a difference, and auto-detect puts me at 3800 on that one as well. But object view distance is something you can definitely notice a difference from lowering. I typically roll with it around 1800-2000, and my overall view at 3k-3800 or so.

Other than that, I think your settings are in a very reasonable place. Really, if you follow those suggestions, you should be able to turn some of that other stuff up and not notice any fps hit, based on your spec. I've not been an AMD guy for a long time now, so don't take it as apples-to-apples, but your GPU is comparable to mine: I have a Superclocked EVGA GTX 780. There is no difference between it and my previous 580 in the bottlenecking areas of the game. But in the rural areas and so on that don't crap out usage, I could either take a higher fps or roll with higher overall settings.

I generally try to find the sweet spot in my settings that gives the most GPU usage (in the non-crap parts of the game) before it starts to negatively impact fps, and then back down a touch from there.

SSAO and object view distance will typically impact your fps noticeably (in my experience). That mostly goes out the window when the game starts crapping on your usage, though. Peace.

---------- Post added at 13:34 ---------- Previous post was at 13:32 ----------

Like I said, I don't think AI is a "cause" per se... it's just something that compounds the actual issue, because you're asking the CPU to do more. But yes... layering in AI makes things worse.

Thanks, will give that a go :-)

I think Bohemia should invest in a Mantle renderer. And I say this while I don't even have an AMD video card right now! The Arma series' weak point seems to be the CPU time spent drawing objects on screen: it's pretty clear if you just rotate your view between a beach or the ground and a forest or a city, and compare the fps/CPU use. That suggests a lot of draw-call overhead on the CPU, which is the thing Mantle improves the most.

Nice dream; now consider that it's available for only a limited number of GPU models, and nobody has rolled anything out on it yet...

If Mantle were available for anything DX11.x/OGL 4.x capable, then it would be easier.

Nice dream; now consider that it's available for only a limited number of GPU models, and nobody has rolled anything out on it yet...

If Mantle were available for anything DX11.x/OGL 4.x capable, then it would be easier.

Oxide Games did, at APU13: [embedded video]

And AMD said Mantle will be open, so others can make their own drivers, if they're willing.

Give it a few months to see more demos and related tech. Or, being a developer studio yourselves, contact AMD and see if they can 'lend' you the Mantle beta API and documentation. ;)



Is Mantle the practical equivalent of, say, 3dfx Glide? Basically just an API like DirectX?

Is Mantle the practical equivalent of, say, 3dfx Glide? Basically just an API like DirectX?

I believe so.

---------- Post added at 09:15 ---------- Previous post was at 09:00 ----------

I feel your pain, man.

Based on your settings list, I would recommend turning SSAO off, as it takes a pretty good bite out of fps (in non-bottlenecking areas of the game; it probably won't make much of a difference either way in the areas that do crap out your usage).

Your object view distance is pretty high in my experience... I generally try to keep mine at 2k or less. Overall view distance makes less of a difference, and auto-detect puts me at 3800 on that one as well. But object view distance is something you can definitely notice a difference from lowering. I typically roll with it around 1800-2000, and my overall view at 3k-3800 or so.

Other than that, I think your settings are in a very reasonable place. Really, if you follow those suggestions, you should be able to turn some of that other stuff up and not notice any fps hit, based on your spec. I've not been an AMD guy for a long time now, so don't take it as apples-to-apples, but your GPU is comparable to mine: I have a Superclocked EVGA GTX 780. There is no difference between it and my previous 580 in the bottlenecking areas of the game. But in the rural areas and so on that don't crap out usage, I could either take a higher fps or roll with higher overall settings.

I generally try to find the sweet spot in my settings that gives the most GPU usage (in the non-crap parts of the game) before it starts to negatively impact fps, and then back down a touch from there.

SSAO and object view distance will typically impact your fps noticeably (in my experience). That mostly goes out the window when the game starts crapping on your usage, though. Peace.

---------- Post added at 13:34 ---------- Previous post was at 13:32 ----------

Like I said, I don't think AI is a "cause" per se... it's just something that compounds the actual issue, because you're asking the CPU to do more. But yes... layering in AI makes things worse.

OK, tried SSAO off, view distance 1800 and object distance 1600, and Fraps is still showing 20-30 fps with nothing going on, then 10-17 fps in a firefight :-( Not good.


Nice dream; now consider that it's available for only a limited number of GPU models, and nobody has rolled anything out on it yet...

If Mantle were available for anything DX11.x/OGL 4.x capable, then it would be easier.

In bottlenecking situations (lots of AI: small single-player scenarios and above) I get the same minimum fps with an R9 290 @ 1200 MHz as with my old GTX 570. Captain Obvious is breathing down my neck :p


^ This. Some people seem to forget that the most important performance problem is in bottlenecking situations with a lot of AI, not when the editor map is empty or the AI isn't involved in a fight. Regardless of your GPU, as our friend above found, your computer will bottleneck at the same level.


Oxide Games did, at APU13: [embedded video]

And AMD said Mantle will be open, so others can make their own drivers, if they're willing.

Give it a few months to see more demos and related tech. Or, being a developer studio yourselves, contact AMD and see if they can 'lend' you the Mantle beta API and documentation. ;)

You missed my point. Atm it's available for maybe 3% of DX11.1+ GPUs, hence unimportant to deploy for unless you're paid to do so...

Now, if it were available for all AMD DX11.x cards, or hell, even DX10.x, then it would become relevant faster.

You missed my point. Atm it's available for maybe 3% of DX11.1+ GPUs, hence unimportant to deploy for unless you're paid to do so...

Now, if it were available for all AMD DX11.x cards, or hell, even DX10.x, then it would become relevant faster.

AFAIK it has nothing to do with DX; it won't use DX, it's a whole separate API: Mantle. It's about whether or not the card supports GCN (every AMD card since the 7xxx series), and those cards are pretty much what you need for any newer game anyway (full HD at medium settings or above). AMD has also said that NVIDIA or others could make it work for their cards if they want to, but I don't know how that would work.

Mantle pretty much eliminates the overhead of serial batching on the CPU's first core; it can use truly non-serialized, multithreaded batches to send data to the GPU and theoretically make use of all cores if needed. In the presentation they say even the latest available GPU becomes the bottleneck, even when they underclock the FX-8350 to half its clock. Not to mention that draw calls took almost a third of the time they take with DX.
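Conceptually, the model is something like the sketch below (hypothetical types and a hypothetical submit call, NOT the real Mantle API): every thread records its own command buffer in parallel, and only the final, cheap submit is serialized, instead of funnelling every draw through one driver thread:

```cpp
// Conceptual sketch of Mantle-style parallel command recording.
// CommandBuffer and the submit call are hypothetical stand-ins,
// not the actual Mantle API.
#include <thread>
#include <vector>

struct DrawCommand { int object_id; };                   // hypothetical
struct CommandBuffer { std::vector<DrawCommand> cmds; }; // hypothetical

// Record draws for one slice of the scene; no global lock needed.
void record(CommandBuffer& cb, int first, int count) {
    for (int obj = first; obj < first + count; ++obj)
        cb.cmds.push_back({obj});   // "encode draw call" in parallel
}

int main() {
    const int threads = 4, per_thread = 10000;
    std::vector<CommandBuffer> buffers(threads);
    std::vector<std::thread> workers;
    for (int t = 0; t < threads; ++t)
        workers.emplace_back(record, std::ref(buffers[t]),
                             t * per_thread, per_thread);
    for (auto& w : workers) w.join();

    // The only serialized step: hand the finished buffers to the queue.
    // gpu_queue.submit(buffers);  // hypothetical; stands in for the API call
}
```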


AFAIK it has nothing to do with DX; it won't use DX, it's a whole separate API: Mantle. It's about whether or not the card supports GCN (every AMD card since the 7xxx series), and those cards are pretty much what you need for any newer game anyway (full HD at medium settings or above). AMD has also said that NVIDIA or others could make it work for their cards if they want to, but I don't know how that would work.

Mantle pretty much eliminates the overhead of serial batching on the CPU's first core; it can use truly non-serialized, multithreaded batches to send data to the GPU and theoretically make use of all cores if needed. In the presentation they say even the latest available GPU becomes the bottleneck, even when they underclock the FX-8350 to half its clock. Not to mention that draw calls took almost a third of the time they take with DX.

Another person missing my point. Atm the only supported cards are some R9 2xx and some HD 7xxx... aka a fraction of the market.

In short, when it covers the whole range flawlessly, and ideally more products (e.g. HD 5xxx and up) and NVIDIA cards, then it will see faster adoption.

Another person missing my point. Atm the only supported cards are some R9 2xx and some HD 7xxx... aka a fraction of the market.

In short, when it covers the whole range flawlessly, and ideally more products (e.g. HD 5xxx and up) and NVIDIA cards, then it will see faster adoption.

Not some: all 7xxx upwards, and every card from now on. If you buy a new AMD card today it will support it, and AMD has a big market share, something like 40%; it's a matter of time. Who develops new technology and software and simply refuses to innovate with future hardware in mind, staying stuck with horrible bottlenecks just because old, soon-to-be-obsolete cards aren't compatible? Arma 3 runs badly (less than 20 fps) on old hardware anyway (hell, it runs at 20 fps even on high-end gear in some scenarios); your only leg to stand on in this argument is NVIDIA support, since NVIDIA does hold a bigger part of the market.

Also, there is word that in fact any card from both ATI and NVIDIA is Mantle-compatible, but I haven't seen the actual statement for it: http://www.dsogaming.com/news/amds-mantle-does-not-require-gpus-with-gcn-architecture/

We are not missing your point; you are simply trying to create an excuse for it not to be worth it. Even worse, on a game that would clearly benefit tremendously from anything that gave significant gains and lessened the bottleneck issues, in short, that could finally make the game perform acceptably. Anyway, EA and others already disagree with you; they think it will be worth it and have already announced titles with it, and the BF4 Mantle patch is about to be released.

I've said all I wanted; best of luck.

