technoxwalrus

AMD CPU Bottleneck?


Well, shadows are the FPS killer on my end.

I've tried with everything maxed (Ultra/8xAA) and with shadows disabled, and basically there are no performance issues.

With shadows on Low I have the same performance drop that I have with settings on High/4xAA.

https://www.youtube.com/watch?v=aLn4htOnWGU

I'm starting to think it's related to AMD GPUs only, and also to the known LOD-transition issues these GPUs have under DX11.

 

My current GPU is an AMD 290 and my previous one was an AMD 5850. I always run shadows on the highest setting and haven't had a problem with either card in A2 or A3. I think it may be something else, unless it's your card specifically (an individual problem).


No mate.

This issue is in fact related to AMD GPUs.

And it's because multi-GPU setups are not supported on OpenCL 2.0, which is the version provided by the current Catalyst drivers, and since shadows are fairly heavy in ARMA 3, that causes a huge performance loss.

Here is the evidence, with CrossFireX disabled.

No issues at all.

 

https://www.youtube.com/watch?v=RfPPlLwtIRw

 

Without a doubt, Nvidia performs better with ARMA 3.



 

So you think it's an issue with CrossFire. Perhaps someone with AMD running CrossFire could run the same test to confirm that; it would be interesting to see before judging.

It would also be interesting to see someone with Nvidia run the same test/location to compare (unlikely we'll see that, though).

 

I've never really seen a reason to play this series with more than one card; it seems to work well with a single card. Also, I don't find any difference in performance between High and Ultra. Recording can cost FPS of course, so perhaps only drop settings when recording (you may have done that already, though).


Yep, it's an AMD issue.

https://youtu.be/y7v-P1XhYnY

And DX9 sucks

https://youtu.be/UwGIg-5pMxI

Although we would have to wait for a comparison by someone with the same problem in A3 when using CrossFire, preferably using your test, i.e. the same test setup in the same location.

Then we would know for sure.

 

One test doesn't make it an AMD problem... yet.



 

Bumping my own post here (one post back). Still waiting to see if this alleged problem is AMD-related. I didn't expect many, if any, players would be having this problem. It's looking more like an individual problem now. :unsure:

I am going to look like an insane guy, but anyway:

There is no real CPU bottleneck in this game; the only bottleneck we have is caused by the way memory (in terms of cache) is managed.

And what seems like unquestionable truth to an AMD user can look like a joke to an Nvidia user.

It is a known fact that AMD drivers do not support multithreading under DX11 (while Nvidia did an awesome job with that), and this, mainly because of ARMA 3's architecture, really affects the game's performance.

So, in fact, how many Nvidia users do we see complaining? Basically none.

The thing is, there is no CPU bottleneck in this game; it uses the maximum that DX11 allows in terms of multithreading. The results just vary depending on whether you are running AMD or Nvidia hardware and software.


Light scenario: 1200m view distance and the default object distance, object detail on Low, terrain on Standard. The rest on High, without MSAA, only CMAA.

Heavy scenario: 12000m view distance + 12000m object distance + Very High terrain, object detail on Ultra; the rest as above.

 

i5 2500K @ 3.3GHz and 4.5GHz, 16GB RAM @ 1600MHz (1T, 9-9-9-24), standard HDD.

LE: Forgot about the video card: R9 290 @ 1100/1380MHz, latest AMD drivers.

 

Light scenario 4.5GHz - http://imgur.com/WympNr3 70fps

Light scenario 3.3GHz - http://imgur.com/9PYQDg3 53fps

Heavy scenario 4.5GHz - http://imgur.com/1g2I29h 12fps

Heavy scenario 3.3GHz - http://imgur.com/5GeLeSN 9fps

 

LE: here is the album, since for some reason imgur decided to move/rearrange things: http://imgur.com/a/I6PUj

 

The OC and downclock are done strictly via the multiplier, so nothing else in the system changes (FSB speed, memory speed, etc.). It's strictly CPU performance gained or lost.

 

Please explain how the CPU is not the limiting factor here. Even if AMD had significant driver overhead, that would mean hammering the 1st core hard, but that doesn't happen. Only when the settings are low enough, and the bottleneck doesn't come from who knows what other part, do we see significant use of core 1. You'd think the game would at least use the 1st core to its maximum, but that doesn't happen as settings go up.

 

As a funny thing, the CPU usage increases as the frequency increases (in both the light and heavy scenarios). "Only in ArmA". :D

 

In terms of CPU performance:

 

~33% FPS increase from 3.3 to 4.5GHz in the heavy scenario and ~32% in the light one. As you can see, the CPU is very much the limiting factor; remember, nothing else in the system changed except the multiplier.
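As a sanity check, the scaling figures above can be reproduced with a quick calculation, using the FPS numbers from the linked screenshots:

```python
# Compare the CPU clock increase against the observed FPS increase.
# FPS values are the ones posted above for the light and heavy scenarios.

def scaling(base, oc):
    """Percentage gain going from the base value to the overclocked value."""
    return (oc / base - 1.0) * 100.0

clock_gain = scaling(3.3, 4.5)   # ~36% more CPU frequency
light_gain = scaling(53, 70)     # light scenario: 53 -> 70 fps
heavy_gain = scaling(9, 12)      # heavy scenario: 9 -> 12 fps

print(f"clock: +{clock_gain:.0f}%, light: +{light_gain:.0f}%, heavy: +{heavy_gain:.0f}%")
```

The FPS gains track the clock gain closely, which is what a CPU-bound workload looks like when the GPU and everything else stay unchanged.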

 

Bottom line: the engine needs to be changed/heavily upgraded.

 

PS: 9 or 12fps as you look at... nothing. That's what I call "efficiency!" (irony).

 

PS: resolution is 5280x1050.


The performance vs CPU speed looks perfectly normal to me, in this or any other game that has "true AI".

Btw, which graphics card are you using? I see you're running 5280x1050; is that triple screen or something? What's the GPU VRAM usage?


I've updated the post; I forgot about the video card: R9 290 @ 1100/1380MHz. VRAM isn't an issue in a single-card configuration; the card runs out of power well before that. Besides, Tiled Resources allows a developer to overcome that, if they really want highly detailed textures.

 

Anyway, there is a CPU bottleneck issue, and of course AMD will suffer the most from it, since they run many slow cores instead of a few with higher IPC.

 

It's Eyefinity with mixed resolutions (1 x 27" 1920x1080 + 2 x 22" 1680x1050). Works quite well.


It's obvious that in AI-heavy situations and/or with high view distances the CPU is the bottleneck, AMD CPUs far more than Intel CPUs. If you overclock the CPU in a scenario with lots of AI, you get a performance gain (as demonstrated by colin_blanc). If you overclock the GPU in the same scenario, you get no performance gain, just lower GPU usage. The rest is logic... we have a big thread for this topic with tons of bratwurst logic :p

I am sorry for sounding like a lunatic, but:

I don't see where the CPU bottleneck is.

I see a CPU and GPU bottleneck after some memory cache is loaded (and it's more noticeable at higher graphics settings), but a CPU bottleneck?

I am sorry, I can't see it.


It's simple:

- you gain more performance from a faster CPU or from overclocking it -> CPU bottleneck.

- you gain more performance from a faster GPU or from overclocking it -> GPU bottleneck.
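That rule of thumb can be written out as a tiny helper. This is just a sketch; the function name, the 5% noise threshold, and the FPS numbers in the example are all invented for illustration:

```python
def bottleneck(fps_base, fps_cpu_oc, fps_gpu_oc, threshold=0.05):
    """Guess the limiting component from two overclocking experiments.

    fps_base:   FPS at stock clocks
    fps_cpu_oc: FPS with only the CPU overclocked
    fps_gpu_oc: FPS with only the GPU overclocked
    threshold:  relative gain below which a change counts as run-to-run noise
    """
    cpu_gain = fps_cpu_oc / fps_base - 1.0
    gpu_gain = fps_gpu_oc / fps_base - 1.0
    if cpu_gain > threshold and cpu_gain >= gpu_gain:
        return "CPU bottleneck"
    if gpu_gain > threshold:
        return "GPU bottleneck"
    return "inconclusive (neither overclock helped much)"

# Made-up example: the CPU overclock helps a lot, the GPU overclock does nothing.
print(bottleneck(fps_base=40, fps_cpu_oc=52, fps_gpu_oc=41))
```

The threshold matters in practice: ARMA benchmark runs vary between attempts, so a 1-2 fps change proves nothing either way.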


My new Skylake system at 3.5GHz more than doubles my FPS over my OC'd AMD quad core at 4.4GHz with the same video card (R9 390). I'm even able to stay around 30 FPS at Ultra in Eyefinity at 5760x1200.

Want more FPS out of ARMA 3? If you are running an AMD CPU, your best upgrade path is to sell it and go with an Intel-based system.



Yep.



 

Sounds like you are comparing a low-end (and possibly outdated) AMD CPU against a new high-end Intel CPU. A pointless comparison, really...


AMD FX quad core, water cooled and OC'd at 4.41GHz on an Asus Sabertooth 990FX Rev 2.0. That's socket AM3+ and the 990FX chipset; you can't get anything more bleeding edge from AMD than that, even today. 16GB (2x8) DDR3-1866.

vs.

Intel i5 6600K (no Hyper-Threading) at 3.5GHz on water, no overclock, on a Gigabyte GA-Z170XP-SLI motherboard. That's socket 1151 and the Z170 chipset. 16GB (2x8) DDR4-3000.

Same XFX R9 390 DD card on both systems, stable drivers (no beta).

I built my Skylake system after building one for my son for Christmas. His is identical to mine except it uses the i5 6500 at 3.2GHz and reuses his R9 280. Even with the 6500 and a 280, his machine ran circles around my AMD + 390 in Arma 3*. That was all the convincing I needed.

*Specifically in ARMA 3. In the other games we play in common, my AMD + 390 system was at least as fast and often much faster.


My previous AMD was a 640 with a 2GB 5850 card. Moving over to an i7 didn't double my frame rate, although it did obviously improve it. However, it let me use many more AI without caching, which was nice. That said, I still have to use a cache, but a good-quality one is not much of a problem.

I would add that I didn't have any problems with the AMD, other than a restriction on AI numbers: without a cache, 250-300 on a terrain; with a cache, well, many more obviously.


I'm not much for benchmarking... not that I don't believe in it, but rather that I don't do it obsessively myself. My comparisons were mostly done on all three machines using "Play without mods" -> the "Combined Arms" showcase and the Steam overlay for FPS, on a single monitor at 1920x1080@60Hz. While there may be differences in how the AI behave and approach their objectives between runs, there is a consistent number of AI with similar starting parameters. The heli ride into the AO and the turn-and-run toward the village (before the player character comes under fire) is what I paid the most attention to.



Nor could it, with that graphics card.

I still have my old 5870 in the basement (a beast compared to the 5850), and there is basically no change in performance whether we run an i7 at 4.5GHz or 3.2GHz.

