k3lt

Low CPU utilization & Low FPS


I have one core at around 90% and the other three at around 50% almost all the time. GPU is around 40-70%. So I think it's not going to get better; the only thing we can do is wait for Broadwell processors.

So I think the devs should change the Arma 3 system requirements to:

- Only an Intel i7 4770K OC'd to 4.5 GHz; otherwise stay away.

- You have an AMD processor... WRONG, stay away.

Is Arma going to be 64-bit like DayZ?

We don't know at this point. Hopefully it will, because they share the same engine, just different revisions, so it would benefit both; it's not like DayZ is some memory monger compared to ArmA, for instance.

Well, I don't think a standard benchmark is in BIS's interest. Just think about it.

In the long run it is in BIS's interest, of course. But on the other hand, the lack of one speaks for itself...

Quote (Myke): I'm sorry, I'm only responsible for what I write, not for what you understand. Read through the bug feedback Maverick linked and see how many useless posts are in there. And please, please, tell me where I said "all" or "everyone". I just said that those "fix it nao" people most likely get ignored, simply because their "feedback" doesn't contain any valuable info.

And since you're accusing me of stereotyping, could you please answer my question from here: http://forums.bistudio.com/showthread.php?147533-Low-CPU-utilization-amp-Low-FPS&p=2680317&viewfull=1#post2680317

Thank you. Please stop picking arguments just to prove your point. Either answer all or nothing.


NOTHING... erm, sorry... nothing. CPU load represents the usual downside of multithreading: threads waiting for data from other threads, and therefore idling. The GPU waits for data from the CPU to render the scene. If the CPU is under heavy load (which isn't represented by the % number), the GPU load goes down. You might want to invest the idling power of the GPU into raising GPU-related graphics settings.
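A minimal sketch of the stall described above, in plain standard C++ (nothing Arma-specific; the stage timings are invented): the "render" thread may only start a frame once the "sim" thread has published that frame's data, so it idles whenever the simulation is the slow side.

    #include <chrono>
    #include <condition_variable>
    #include <cstdio>
    #include <mutex>
    #include <thread>

    int main() {
        std::mutex m;
        std::condition_variable cv;
        int readyFrame = -1;  // last frame the sim has published

        std::thread sim([&] {
            for (int f = 0; f < 5; ++f) {
                std::this_thread::sleep_for(std::chrono::milliseconds(30)); // heavy sim work
                { std::lock_guard<std::mutex> lk(m); readyFrame = f; }      // publish frame f
                cv.notify_one();
            }
        });

        std::thread render([&] {
            for (int f = 0; f < 5; ++f) {
                auto t0 = std::chrono::steady_clock::now();
                std::unique_lock<std::mutex> lk(m);
                cv.wait(lk, [&] { return readyFrame >= f; }); // idle: waiting on sim data
                lk.unlock();
                auto waited = std::chrono::duration_cast<std::chrono::milliseconds>(
                                  std::chrono::steady_clock::now() - t0).count();
                std::this_thread::sleep_for(std::chrono::milliseconds(5));  // cheap render work
                std::printf("frame %d: render idled %lld ms waiting for sim\n",
                            f, (long long)waited);
            }
        });

        sim.join();
        render.join();
    }

The render thread spends most of each frame blocked in cv.wait, so its core shows low utilization even though the frame rate is capped by the sim thread - the same pattern as a GPU starved by the CPU.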

So what were your thoughts on the benefits of moving the AI thread to different cores, as per my earlier posts? I told you it would be a straightforward change to the client - straightforward relative to rewriting the simulation thread to be more multithreaded - as the architecture already exists (client/server AI locality). You questioned my credentials; I provided some detail but reiterated that it wasn't really important, as the architecture already exists and the benefits are proven by anyone who offloads the AI using a dedicated server. Do you, or do you not, think there would be a benefit to BIS specifying quad core and moving the AI processing to extra threads?

Edited by jiltedjock

Quote (Myke): So you're ignoring those players who don't post on these forums because they don't have any problems (with the sold units in mind, those are the majority), and you call my view on it biased? Go figure. Most of the time it is the very same players nagging about problems.

Does the game need improvements? Yes.

Does it have fundamental flaws? No. Not in my opinion. Am I allowed to have an opinion?

It is simple math:

250,000 units sold (fictive number) and 250,000 customers report problems = problem with the product.

250,000 units sold and 2,000 (fictive number again) report problems (and they are mostly the same customers who were already unhappy with the previous products) = ???

You may figure it out yourself. As for me: if it works for most customers but not for a few (and I know the product is exactly the same), I wouldn't suspect a fault in the product itself in the first place.

People say they get 20 FPS no matter what settings. First thing I do is launch the game and try it out. I get 40 FPS on high settings (VD ~3200); I lower the settings and the FPS goes through the roof. What should I think there? I don't have a high-end rig; my GPU is already a few years old (HD5870). Now tell me, what would you conclude?

People keep saying that the game is poorly programmed and/or poorly optimized. How can they know, without having the source code (and, more often than not, without any programming knowledge either)?

I've seen users asking why they can't play on ultra settings, and after some time I find out that such a user has a 5-year-old mediocre budget laptop and really tried to play on ultra... what do you say to that?

Face it, ArmA 3 will never reach 150 FPS, because it is not Call of Battlefield 34. It is ArmA 3; there is more going on under the hood than just fancy graphics and small areas. But there will always be some customers who will never be satisfied and will always blame the game. Even if ArmA 3 suddenly ran at 150 FPS on ultra, there would still be unsatisfied customers.

Where exactly did I ignore anyone? It seems that if anyone is ignoring, it is you, friend. You act as though the only people who haven't posted in here are ones who *don't* have problems... to say nothing of the fact that there is an unknown number of people who had problems and boxed the game up without ever bothering to come onto this forum (I have two friends, for example, who have never posted on this forum and who have long since abandoned this game due to the same issues that I have). You can pretend all you want that most people who don't post are those without problems, but you have no data to back up that *assumption*, because you clearly have no way of knowing what experience those who haven't posted here have had. So please don't try to pass it off as fact.

You act as though, because people don't have the exact same experience as you, they *must* be wrong or must have some problem that couldn't possibly be the game. This is intellectually dishonest, at best. Because you get 40 FPS, it has to be user error, or a problem with the user's system, huh? To say nothing of the wide range of hardware and software combinations among users playing this game. I.e., "Hello, I launched the game and it crashes every time on launch." Response: "Well, it doesn't crash for me, so it must be your problem." Yeah... that's the only possible explanation... laughable.

You also have next to no data as to how far-reaching this problem is. I'd suggest that, given the years' worth of postings (both here and elsewhere) spanning multiple iterations of this series, this issue is impacting a large enough number of customers to be a problem, whether they are a minority or not. It's not as if it is just one or two people.

I conclude (as I have more than once in this thread) that some people run this game fine (and/or fine for their taste). Others quite clearly do not. Others have unrealistic expectations, etc., etc. The fact that it runs fine for you is no evidence whatsoever that it runs fine for others, and it doesn't have to mean that everyone with problems is some simpleton incapable of basic troubleshooting, or someone with unrealistic expectations. If you think it has to be one or the other, you need to take off the blinders.

Bitching will get you ignored, you say? I find that kind of funny, considering the number of posts where people have attempted (blindly, and without input from the devs) to provide meaningful feedback on their experience with this issue, only to be ignored by the devs. Look through all of the dev posts in this thread. The overwhelming majority of the times that Dwarden has decided to respond in this thread, it has been with sarcasm, condescension, and cherry-picking of particular posts that he wants to slam, etc.

And, just like you, he attempts to lump everyone complaining of issues into a group of people expecting 150 FPS, or dismisses their problems as user error. Very selective, and it does not fairly represent the full body of posts made by those who have reported issues. If it makes you feel better to dismiss those with issues and to lump us all into a group with the unrealistic expectation of 150 FPS, more power to you. It is intellectually dishonest, and certainly not helpful to this issue either.

You can lump me into that group if it makes you feel better, but I have stated on more than one occasion that I would be happy if this game just ran at 30 FPS *most* of the time, outside of an empty map or rural, small-scale (see: limited number of infantry-only) encounters. But because it doesn't, it must be a problem on my end... Dismiss away if it makes you feel better. It is rather insulting that you insinuate that others must just be mistaken because you don't suffer the same issues. Though it is not surprising in the least, considering this is par for the course from the devs as well (on this issue). I wouldn't expect anything less from one of their minions.

And how does making up a number of units sold, and comparing it to a made-up number of people with problems, serve to prove your point at all? It doesn't. It is more intellectual dishonesty. When you make up numbers in order to support your thesis, it doesn't strengthen it; it thoroughly discredits it. And then you state that it's "mostly the same customers who were previously unhappy"... as though this were some sort of verified fact, or as though (even if true) it would somehow make their claims less worthy of consideration. Yeah, you can have an opinion, but do us all a favor and state it as opinion, rather than trying to pass it off as fact.

Edited by Mobile_Medic

... moving the AI thread to different cores ...

... the architecture already exists (client/server AI locality) ...

I think you are a bit mistaken about what HC, or AI being local to any client, actually is: it is still the same code; there is no new magical super AI thread separated from everything else.

Anyway, there might be benefits from what you are suggesting, but I'd leave that to the programmers who actually know (contrary to us).

However, I think I can safely say that untangling the AI routines from whatever else they are entangled with would be a huge undertaking for sure - my point being: the ability to run a Headless Client, or AI local to any other client, is totally irrelevant to this task.

I think you are a bit mistaken about what HC, or AI being local to any client, actually is: it is still the same code; there is no new magical super AI thread separated from everything else.

Anyway, there might be benefits from what you are suggesting, but I'd leave that to the programmers who actually know (contrary to us).

However, I think I can safely say that untangling the AI routines from whatever else they are entangled with would be a huge undertaking for sure - my point being: the ability to run a Headless Client, or AI local to any other client, is totally irrelevant to this task.

As with most things in life, the best things generally require the most work. No one said it wouldn't be a big undertaking; no one is saying it isn't a big undertaking. The question is why it has not been undertaken yet. The HC and MP just show that the AI can be processed on entirely different systems and synced across a link or medium. If it can be done across a network, it can be done across any connection.

Also, no one said it's not the same code. What we are saying is: if the code we have now can support splitting the AI, based on locality, across multiple machines over a network, why is it somehow so complicated to do the SAME THING internally across multiple cores? Sure, it will require rewriting code and making big changes - but does that mean it shouldn't be done?
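A toy sketch of that locality argument (hypothetical types, not Arma's actual code): an AI update that in MP gets serialized onto the wire toward the server or HC could, in principle, cross a thread boundary instead, with a locked in-process queue playing the role of the network link.

    #include <condition_variable>
    #include <cstdio>
    #include <mutex>
    #include <queue>
    #include <thread>

    struct AiUpdate { int unitId; float x, y; };  // hypothetical sync message

    std::queue<AiUpdate> channel;  // stands in for the socket between AI host and client
    std::mutex m;
    std::condition_variable cv;
    bool done = false;

    void aiThread() {  // would run on its own core, like a local HC
        for (int i = 0; i < 3; ++i) {
            AiUpdate u{i, i * 1.0f, i * 2.0f};  // "think", then publish the result
            { std::lock_guard<std::mutex> lk(m); channel.push(u); }
            cv.notify_one();
        }
        { std::lock_guard<std::mutex> lk(m); done = true; }
        cv.notify_one();
    }

    void simThread() {  // the main simulation just consumes finished updates
        for (;;) {
            std::unique_lock<std::mutex> lk(m);
            cv.wait(lk, [] { return !channel.empty() || done; });
            if (channel.empty()) break;
            AiUpdate u = channel.front(); channel.pop();
            lk.unlock();
            std::printf("unit %d moved to (%.1f, %.1f)\n", u.unitId, u.x, u.y);
        }
    }

    int main() {
        std::thread ai(aiThread), sim(simThread);
        ai.join(); sim.join();
    }

The untangling cost the previous poster describes is the real catch, though: this only works if the AI's "think" step doesn't reach back into simulation state mid-frame, which is exactly the entanglement that would make the rewrite hard.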


I have a Q6600, 4 GB, Win7 32-bit, and an SSD. The maps start out fine, but at some point during some maps my FPS drops to 1 per second. I don't recall my FPS prior to this drop, but it's playable, then suddenly it's not. With the latest patch, as of 2 May, it is still happening; it's been happening since the first time I installed the game, back when it was released.

Is there any fix or potential fix on the horizon?

croc5

Quote (Myke): Personally, I would get a lot more motivation out of a "nice work, but there are a few rather serious issues to look into" than out of an "it runs like s**t, fix it nao".

I remember people were very helpful and acted like that back in...2009?, when people raised the issue on the previous bug tracker. No one has infinite patience. Maybe people just feel like they've been ignored all this time?

I feel that nothing less than a full decoupling of the simulation from the rendering code will be sufficient to fix the problem.

Edit: I'll purchase a new CPU when the game can utilise more than 2 or 3 cores and more than 50% of each of them.

It is so ridiculous that the newest CPU technology (my second PC, with a 3770K @ 4.5 GHz) with all its CPU features should not be able to work correctly with Arma 3 - with, like Myke said, "AI, Physics, rendering, collision and probably a lot of other tasks".

Your CPU works, or the game wouldn't run. Not working as fast as you would like, with as many units as you would like, in the way that you would like, is a different matter.

I think the Arma engine maybe has those things deactivated; that's why even the fastest CPU is running at the low workload every user sees.

I think that you grossly misinterpret where CPU features come into play.

Do they need to look at memory management and general code parallelization and efficiency? Absolutely. Does that have much to do with supporting CPU-integrated instruction sets? Uh... nope. That is mostly up to the libraries and compilers they are using, together with the flags they have set in the compiler, and it is still (mostly) irrelevant.

It's not like they can just turn on the -UseIntelNiftyFeature compilation flag and that will fix all of the architectural problems they have with the lack of parallelization - which is the actual problem here.
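For what it's worth, a minimal illustration of that distinction (generic GCC/Clang flags, nothing Arma-specific): an instruction-set flag like -mavx2 only changes the code the compiler generates for one core, while spreading work across cores has to be written into the program itself.

    // build e.g.: g++ -O3 -mavx2 demo.cpp -pthread
    // (the arch flag affects codegen only; the program still uses one core
    //  unless it explicitly creates threads)
    #include <cstdio>
    #include <numeric>
    #include <thread>
    #include <vector>

    int main() {
        std::vector<float> data(1 << 20, 1.0f);

        // One core. -O3/-mavx2 may turn this loop into AVX2 instructions,
        // but it is still a single thread.
        float serial = std::accumulate(data.begin(), data.end(), 0.0f);

        // Using more cores has to be done by hand (or via a library):
        float partial[2] = {0.0f, 0.0f};
        std::thread t0([&] { partial[0] = std::accumulate(
            data.begin(), data.begin() + data.size() / 2, 0.0f); });
        std::thread t1([&] { partial[1] = std::accumulate(
            data.begin() + data.size() / 2, data.end(), 0.0f); });
        t0.join(); t1.join();

        std::printf("serial=%.0f threaded=%.0f\n", serial, partial[0] + partial[1]);
    }

In other words: the compiler flag buys vectorization for free, but parallelization across cores is an architectural decision in the source, which is the hard part being discussed here.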


For me it doesn't matter what they do; if they try things and tweak and whatever, I'm cool with that.

Maybe my posts are wrong and falsely interpreted; that's okay with me, I'm only a user. It seems to be an issue with the game engine or something deeper in its structures, I don't know. I'm only searching for a solution.


Essentially, the solution for us, as the end users, is... nothing. There is nothing we can do short of waiting for the developers to give us a benchmark, ask us for specific, reproducible data, or something of that nature, so that they can then work on fixing the problem, whatever it may be.


I know. But in Germany there is a saying: "it's never too late for wonders, they just don't happen very often" ^^

I'd say, too: when nothing significant has happened with FPS from Arma 2 through the Arma 3 alpha, beta and final, then it will never happen.


Well, I have an old PC and will upgrade in 3 months to a Z97 board and one of the new Haswell-based i7 CPUs (Devil's Canyon, or something with better TIM), just for ArmA 3.

I have a Q6600 @ 3.0 and a 7870 OC. In SP I get 30 FPS on standard-to-high settings. In MP I get 10-15 FPS.

I don't want 60 FPS. I want a solid 30. I will risk it and hope I don't regret it...

I think BIS is waiting for the DayZ team to fix our MP issue. :rolleyes:

Edited by Geraldus


Yeah, I think people are just getting frustrated with the lack of optimization being pushed out. I know I'm kind of sick of not having my resources utilized, as are others, and of the fact that this has been a long-term issue since Arma 2 but seems to be getting ignored in favour of things like Zeus. I know it's hard to optimize the engine, but it's something that's greatly needed, and just because something's hard doesn't mean it shouldn't be done. And I know it can be done (many other games have figured it out, and games like Watch Dogs actually recommend 8 cores). Just my two cents.

Yeah, I think people are just getting frustrated with the lack of optimization being pushed out. I know I'm kind of sick of not having my resources utilized, as are others, and of the fact that this has been a long-term issue since Arma 2 but seems to be getting ignored in favour of things like Zeus. I know it's hard to optimize the engine, but it's something that's greatly needed, and just because something's hard doesn't mean it shouldn't be done. And I know it can be done (many other games have figured it out, and games like Watch Dogs actually recommend 8 cores). Just my two cents.

Be fair - the engine for Watch Dogs was written in 2013: http://en.wikipedia.org/wiki/Disrupt_%28game_engine%29

There comes a point when you push a technology as far as it can go. It is unreasonable to expect an engine rewrite for a game at this stage in its development.

The most helpful thing we can do is find out when these engine hiccups are occurring and report them.

I do agree that it is frustrating not to really see an improvement in 1.18, and to actually get something that feels less polished than 1.16.

Be fair - the engine for Watch Dogs was written in 2013: http://en.wikipedia.org/wiki/Disrupt_%28game_engine%29

There comes a point when you push a technology as far as it can go. It is unreasonable to expect an engine rewrite for a game at this stage in its development.

The most helpful thing we can do is find out when these engine hiccups are occurring and report them.

I do agree that it is frustrating not to really see an improvement in 1.18, and to actually get something that feels less polished than 1.16.

Yeah, for sure, I agree, but I thought RV4 was written in 2013 as well (http://en.wikipedia.org/wiki/Real_Virtuality_(game_engine)#Real_Virtuality_4)? My main point in bringing up Watch Dogs is that multiple cores can be utilized effectively and efficiently. Also, there were plenty of 6-8 core CPUs even before 2013, and this being such a CPU-intensive game, you'd think they would include better CPU core support/optimization.

I think you are a bit mistaken about what HC, or AI being local to any client, actually is: it is still the same code; there is no new magical super AI thread separated from everything else.

Anyway, there might be benefits from what you are suggesting, but I'd leave that to the programmers who actually know (contrary to us).

However, I think I can safely say that untangling the AI routines from whatever else they are entangled with would be a huge undertaking for sure - my point being: the ability to run a Headless Client, or AI local to any other client, is totally irrelevant to this task.

No, I'm not mistaken. I already do this in single player by running a dedicated server instance on my PC, to which all the AI is local (except the AI in my group), and running my missions as multiplayer (with me as the only player). My client runs on the same PC. So now, instead of the AI processing taking place in any of the client threads, it is completely decoupled and running on an otherwise unused core. If I had a hexacore I would probably run an HC too and set some of the AI locality to that.

The point is that Arma already supports decoupling the AI processing from the client threads. So what I am suggesting is that they build this into the client exe, so that in single player the AI is processed on a separate core from those used by the client simulation threads.

The result of this is that when there is a lot of AI processing going on, there is little impact on client frame rates. It is not the golden fix that everyone is looking for, but it is better than the current client architecture.
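For anyone wanting to try the setup described above, a rough sketch of the launch commands might look like the following (Windows command lines; the paths, port and profile name are examples - check the official Arma 3 server documentation for the exact parameters and server.cfg syntax):

    :: 1) start a dedicated server on the same PC (it hosts the mission and the AI)
    start "" "C:\Games\Arma3\arma3server.exe" -config=server.cfg -port=2302 -profiles=sp-offload

    :: 2) start the normal client and join the local server as the only player
    start "" "C:\Games\Arma3\arma3.exe" -connect=127.0.0.1 -port=2302

server.cfg would, at minimum, need a mission entry pointing at your scenario; the AI hosted by the server then runs in the server process, on cores the client process isn't saturating.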

Yeah, for sure, I agree, but I thought RV4 was written in 2013 as well (http://en.wikipedia.org/wiki/Real_Virtuality_(game_engine)#Real_Virtuality_4)? My main point in bringing up Watch Dogs is that multiple cores can be utilized effectively and efficiently. Also, there were plenty of 6-8 core CPUs even before 2013, and this being such a CPU-intensive game, you'd think they would include better CPU core support/optimization.

See, but they are building off previous versions of RV. We have no clue how old a lot of that code is (likely old as dirt).

My only point here is that BI may be in a position where they won't be able to squeeze optimal performance out of the code because of some core parts that weren't written with today's tech in mind.

Quote (Myke): No, you don't have a problem with your system. Look at it this way: there are 4 workers, but three of them regularly have to wait for data from the fourth to continue, so they wait, idling, and you see 25% workload.

There are many things that rely on each other: AI, physics, rendering, collision and probably a lot of other tasks. The fact that ArmA 3 isn't a player-centric game, like almost every other shooter currently available, doesn't help make things easier.

So a faster CPU can help, since it gets more done due to more cycles per second, but in Task Manager you'll probably see the same workload (25%, taking the example above) as with a slower CPU.

And that's also why it isn't that easy to improve performance. If you change one thing, you have to make sure the others don't get broken because of the change you've made (see also my signature). This is probably the case when people report lower FPS after a patch: they (probably) optimized one part, which gives 3 FPS more, but another part gets negatively influenced and then drops by 5 FPS.

Optimizing without breaking the whole game is therefore extremely difficult; that's why it takes so long.

Hey Myke,

Don't get me wrong, but I still haven't understood this completely.

If the problem lies in one core having to process information before it is sent to the other cores to work on, shouldn't I have at least one core working under heavy load?

I have no core that ever goes over 60% utilization. The others usually jump up and down between 15-50%. And the core that is carrying the most load at any one time is never the same: one moment it is core 1, a second later it is the second, then the fourth, then the first again. And that changes really fast.

Could there be something else bottlenecking the information even before it gets to the CPU for processing?

I am insisting on this because I really believe there must be something wrong with my system. Maybe a configuration issue from my OCs, maybe a real hardware incompatibility. And maybe someone can spot it and help me out.

Cheers

Hey Myke,

Don't get me wrong, but I still haven't understood this completely.

If the problem lies in one core having to process information before it is sent to the other cores to work on, shouldn't I have at least one core working under heavy load?

I have no core that ever goes over 60% utilization. The others usually jump up and down between 15-50%. And the core that is carrying the most load at any one time is never the same: one moment it is core 1, a second later it is the second, then the fourth, then the first again. And that changes really fast.

Could there be something else bottlenecking the information even before it gets to the CPU for processing?

I am insisting on this because I really believe there must be something wrong with my system. Maybe a configuration issue from my OCs, maybe a real hardware incompatibility. And maybe someone can spot it and help me out.

Cheers

First, there's nothing wrong with your system.

Second, don't mistake "core" for "thread". Even a single core can work with multiple threads, just not at the same time (where "time" in this context means nanoseconds). Regarding cores: Windows manages the workload across all cores, and migrates threads between them (which is also why the busiest core keeps changing), so even a program that isn't designed to make use of multicore CPUs can be spread across all available cores. Multithreading, on the other hand, means multiple threads running at the very same time. If every thread in such an application can run by itself, without needing info from the other running threads, then 100% scaling with each added CPU core is possible.

As an example: to draw a bullet in the scene, the physics thread has to provide information about its current trajectory (which can change after material penetration, like going through a fence), its position, its orientation, and probably a lot more information as well. The render thread has to gather all the information from all relevant objects in the current frame before it can render the frame. Apply that principle to everything that happens on screen, and behind the scenes, and you get an idea of how much work there is to do.
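A tiny sketch of that gather step (standard C++20; the subsystem names and timings are invented, nothing Arma-specific): the render thread cannot start a frame until every producer thread has published its state for that frame, so a per-frame sync point caps everyone at the speed of the slowest subsystem.

    // build: g++ -std=c++20 -pthread gather.cpp
    #include <barrier>
    #include <chrono>
    #include <cstdio>
    #include <thread>

    int main() {
        constexpr int kFrames = 3;
        // 3 participants: physics, AI, render. The barrier is the per-frame
        // "all state published" sync point.
        std::barrier sync(3);

        auto producer = [&](const char* name, int ms) {
            for (int f = 0; f < kFrames; ++f) {
                std::this_thread::sleep_for(std::chrono::milliseconds(ms)); // produce state
                std::printf("%s published frame %d\n", name, f);
                sync.arrive_and_wait();
            }
        };

        std::thread physics(producer, "physics", 20);
        std::thread ai(producer, "ai", 35);  // slowest subsystem sets the pace
        std::thread render([&] {
            for (int f = 0; f < kFrames; ++f) {
                sync.arrive_and_wait();      // idle until all state for frame f exists
                std::printf("render drew frame %d\n", f);
            }
        });

        physics.join(); ai.join(); render.join();
    }

Even with perfect per-subsystem threading, no frame can finish before the 35 ms AI step does, while physics and render idle at the barrier - and that idling is exactly what shows up as cores sitting well below 100%.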

Having an 8-core CPU, I would also like to see all cores running at ~90% (all at 100% is unrealistic, IMHO) and FPS beyond 100 on ultra settings. But IMHO that is an unrealistic expectation if you don't want the game to be dumbed down (less simulation).

So again, don't question your system, although being a little critical won't hurt and ensures you really have tried everything to get it into the best state possible. Just don't get paranoid over it; understand the difficulties of multithreaded programming, and hope that BI finds ways to squeeze out a few more frames by fine-tuning the connections between concurrent threads.

Well, I have an old PC and will upgrade in 3 months to a Z97 board and one of the new Haswell-based i7 CPUs (Devil's Canyon, or something with better TIM), just for ArmA 3.

I have a Q6600 @ 3.0 and a 7870 OC. In SP I get 30 FPS on standard-to-high settings. In MP I get 10-15 FPS.

I don't want 60 FPS. I want a solid 30. I will risk it and hope I don't regret it...

I think BIS is waiting for the DayZ team to fix our MP issue. :rolleyes:

Buy an i5, never an i7.

See, but they are building off previous versions of RV. We have no clue how old a lot of that code is (likely old as dirt).

My only point here is that BI may be in a position where they won't be able to squeeze optimal performance out of the code because of some core parts that weren't written with today's tech in mind.

That is a serious issue, then, because, as has been stated, these issues have been known since ArmA 2, and at that time the developers said it was essentially "too difficult/time consuming" to fix for ArmA 2 with ArmA 3 on the horizon.

Jump to ArmA 3: we have the exact same issues...

That is a serious issue, then, because, as has been stated, these issues have been known since ArmA 2, and at that time the developers said it was essentially "too difficult/time consuming" to fix for ArmA 2 with ArmA 3 on the horizon.

Jump to ArmA 3: we have the exact same issues...

Exactly, well said, sir. It shouldn't be "time consuming", because the world needs frames; "time consuming" is not acceptable ^^

That is a serious issue, then, because, as has been stated, these issues have been known since ArmA 2, and at that time the developers said it was essentially "too difficult/time consuming" to fix for ArmA 2 with ArmA 3 on the horizon.

Jump to ArmA 3: we have the exact same issues...

Keep in mind, it is _my_ opinion that it would be too difficult and time consuming - I am not speaking on their behalf, of course :)

No, I'm not mistaken. I already do this in single player by running a dedicated server instance on my PC, to which all the AI is local (except the AI in my group), and running my missions as multiplayer (with me as the only player). My client runs on the same PC. So now, instead of the AI processing taking place in any of the client threads, it is completely decoupled and running on an otherwise unused core. If I had a hexacore I would probably run an HC too and set some of the AI locality to that.

The point is that Arma already supports decoupling the AI processing from the client threads. So what I am suggesting is that they build this into the client exe, so that in single player the AI is processed on a separate core from those used by the client simulation threads.

The result of this is that when there is a lot of AI processing going on, there is little impact on client frame rates. It is not the golden fix that everyone is looking for, but it is better than the current client architecture.

Hi

I've seen this mentioned before, but I haven't found any instructions on how to achieve it, or on whether the improvement is significant. (I.e., do you simply run the dedicated server exe locally?)

cj

