zwa233

FX6300 / HD7850 and low FPS


I originally posted this in the Low CPU utilization & Low FPS thread, but apparently that's more of a debate thread.

I've followed the optimization guide posted but still can't get good performance. I'm just looking for 25+ FPS in most situations. I've gotten it to around 30 FPS in most areas, but once combat actually starts, or I come to a city with a lot of OPFOR, it tanks.

Specs are:

FX-6300 @ 3.5 GHz

HD 7850 2 GB @ 875 MHz / 1200 MHz

8GB DDR3 160

I'm not trying to run on all Ultra settings. A few are on Very High (the more GPU-intensive ones). Most are High/Standard, and I put the sliders around the 'Standard' values outlined in the guide. The game still doesn't utilize much of my GPU, usually <75%, so I'm guessing it's a CPU bottleneck, although I need to monitor its usage more closely.

Any tips for this type of PC setup?

Here are some screens of CPU/GPU usage.

oUbhwzjl.jpg

aYn56wdl.jpg

These screenshots show the settings I'm using, plus an in-game shot at 19 FPS and about 30% GPU usage.

http://i.imgur.com/8mz9kdW.jpg (421 kB)

http://i.imgur.com/B5vClCi.jpg (408 kB)

http://i.imgur.com/8cNabmU.jpg (410 kB)

http://i.imgur.com/6Agsg25.jpg (637 kB)


Hello Zwa233.

There are not many options with your CPU. Arma runs much better on Intel CPUs because of their better IPC.

What you can do:

Check your BIOS and look for the memory mode setting (Ganged Mode). This can increase your FPS by 3 to 5.

What about your memory? DDR3-1600? Which timings?

Maybe you can increase your clock speed to 4.0 GHz. This will raise per-core throughput and give you a few extra FPS.

But in the end this will not work wonders; it may just give you the extra FPS you are looking for.

regards from Germany

Michael


Since you have quite a good GPU, keep shadow detail above Normal. The reason: on Normal or lower the shadows are processed on the CPU, while on High or above they are processed on the GPU. Besides that, I agree with Maxon: Intel CPUs perform far better than AMD here (I have an AMD FX-8350 myself, so call me an AMD fanboy). With AMD, either raise the clock speed as Maxon suggested or upgrade the CPU. In case of the latter, don't expect wonders, but it will surely help quite a bit and is still cheaper than having to buy a new motherboard as well. ;)


I honestly don't remember the timings on the RAM; it's been a long time since I fooled with them and did any overclocking. I believe I was able to get this CPU stable at 4.1 GHz, but I didn't feel the temperature increase was really worth it. I may try it again to see how it improves performance.

I was afraid there was not much more to do, other than a different CPU. I've not had problems in any other games that I've noticed, though I suppose this is a rather CPU-intensive engine. I still don't quite understand why it doesn't utilize 100% of CPU/GPU though. :confused:

Ah well, by messing with the settings more I have it getting ~30 FPS at *most* times. Playable, at least.

I still don't quite understand why it doesn't utilize 100% of CPU/GPU though.

This is because of concurrent multithreading (I don't know if that's the correct term, but that's what I call it). Technically A3 does multithread and therefore does use multicore CPUs. But it's nearly impossible to avoid idle time in the threads because they rely on each other: thread A has to wait for data from thread B, which takes a little while until B is done and can send the data, so thread A sits idle. As a rather rough example: before the rendering thread can draw a bullet in the scene, it needs correct positional and directional data from the physics thread. Now sum this up for everything that happens on-screen and off-screen and you get an idea that there is a lot of idling, and therefore an unexpectedly low CPU utilization. Other games can do this "better" since they are player-centric (nothing happens in areas the player can't see) and a lot of what happens on-screen is not relevant to gameplay (fancy explosion effects and such).
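This isn't Arma's actual code, just a minimal Python sketch of the idea: a made-up "render" thread blocks until a made-up "simulation" thread publishes the frame data it depends on, so it spends most of each frame idle even though a CPU core is free.

```python
import queue
import threading
import time

frame_data = queue.Queue(maxsize=1)

def simulation_thread():
    # Stand-in for AI/physics work: each frame takes ~20 ms to produce results.
    for frame in range(5):
        time.sleep(0.02)
        frame_data.put(frame)

def render_thread():
    # The renderer cannot draw a frame until the simulation publishes it,
    # so it blocks (idles) on the queue instead of using its core.
    for _ in range(5):
        t0 = time.perf_counter()
        frame = frame_data.get()
        waited_ms = (time.perf_counter() - t0) * 1000
        print(f"frame {frame}: renderer idled {waited_ms:.1f} ms waiting for simulation")

sim = threading.Thread(target=simulation_thread)
ren = threading.Thread(target=render_thread)
sim.start()
ren.start()
sim.join()
ren.join()
```

Scale that dependency chain up to dozens of interdependent jobs and you get cores that are busy but never saturated, which is what the low overall CPU utilization reflects.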

But back to your topic: I have had good experience with the FX-8350; maybe your motherboard supports this CPU. Worth a look IMHO. If money isn't a problem, switching to an Intel CPU (it hurts so much to write this) might do the trick.


What are you using to monitor your in-game frame-rate?

This is because of concurrent multithreading [...] But back to your topic: I have had good experience with the FX-8350; maybe your motherboard supports this CPU. Worth a look IMHO. If money isn't a problem, switching to an Intel CPU (it hurts so much to write this) might do the trick.

That makes sense, I think. Thanks. My motherboard would support an 8350, although I think I'd rather wait and just make the switch to Intel (which hurts me to say too; I've been using AMD only for at least 10 years).

What are you using to monitor your in-game frame-rate?

MSI Afterburner

That makes sense, I think. Thanks. My motherboard would support an 8350, although I think I'd rather wait and just make the switch to Intel (which hurts me to say too; I've been using AMD only for at least 10 years).

That video card would be great if you were using it mainly for workstation rendering and the like. But this AMD card:

http://www.hwcompare.com/14638/geforce-gtx-770-vs-radeon-hd-7850/

It doesn't have the real-time power you need.

Simply put, HyperTransport is a bus communication method that does not use the FSB or share bandwidth with it to communicate with memory; instead it is a direct point-to-point serial link to the bridge, where the southbridge and northbridge sit. Its recent revision supports 2.6 GHz, which potentially might make a difference. Because of the smaller L1 cache, HyperTransport does 16-bit high-speed serial communication in full duplex (both directions), so removing the load of transferring large amounts of data to the L1 cache through the FSB may or may not relieve some stress.

This constant comparison of CPU clock speed as an explanation for system performance is excessively simple; comparing an Intel to an AMD that way is silly, as they use different internal architectures and different-sized caches. Noticeable differences between Intel and AMD CPUs are in the L1 caches: the OP's CPU above has 6 x 16 KB L1 data caches and 3 x 64 KB shared L1 instruction caches, and as a general rule a bigger L1 relative to L2 or L3 is better.

When you compare Intel CPUs to AMD's, Intel breaks an instruction down into approximately 14 pipeline stages while AMD uses about 12, so in that sense an AMD instruction passes through roughly 15% fewer stages. "Technically". Intel CPUs also use AVX (Advanced Vector Extensions) and SSE; for these you want a CPU that supports the newest versions. There are many details that factor into system performance, and "testing FPS on an i7-4770K system with a high-end video card" is itself a type of synthetic benchmark, since the factors involved are both the system environment (component parts) and the software itself. Simply put, overclocking is not as clear-cut a performance improvement as you may think; you need to analyse your system properly and upgrade where you see bottlenecks occurring.

There are certain details that need to be known to analyse your performance issues:

Motherboard type and brand:

View Distance (Overall)

Preferred View distance: (Object render)

Which option causes the largest performance hit:

Textures?:

Anti-aliasing?:

HDR (High Dynamic Range)?:

Geometry and vertex count?:

Shadows?:

Ambient Occlusion? (Basically a mathematical method to approximate how light is occluded and scattered; it will destroy your frames):

Dynamic Lights?:

Particle Effects?:

PIP? (Picture-in-Picture: extra cameras rendered onto vehicle mirrors and screens):

Cloud Quality?:

Terrain Quality?:

Change each of these values independently, monitor CPU/GPU usage over a fixed period of time, and note the FPS hit of each option; a rough sketch of how you might record the results follows.
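Not part of the original checklist, just a small Python sketch of that bookkeeping: the setting names and FPS numbers below are placeholders you would replace with your own readings (e.g. from the MSI Afterburner overlay).

```python
# Placeholder numbers for illustration only; substitute your own measurements.
baseline_fps = 30.0  # FPS measured with your reference settings

runs = {
    "Shadows: High -> Very High": 27.5,
    "Ambient Occlusion: Off -> HDAO": 22.0,
    "PIP: Off -> High": 26.0,
}

# Sort from biggest FPS hit to smallest so the most expensive option is obvious.
for setting, fps in sorted(runs.items(), key=lambda kv: kv[1]):
    delta = fps - baseline_fps
    print(f"{setting:35s} {fps:5.1f} FPS  ({delta:+.1f} vs baseline)")
```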


That video-card would be great if you were using it as mainly a work-station rendering side of things etc. But AMD:

http://www.hwcompare.com/14638/geforce-gtx-770-vs-radeon-hd-7850/

Is that really a fair comparison? There's a fairly significant price gap between a 770 and a 7850 (more than fairly significant IMO, it's about a $200 difference). And as that article points out, the 770 has better specs in nearly every single category, so I'm hardly surprised.

I do know Intel has better CPUs for gaming, although I don't think it's as clear-cut for GPUs; if I'm not mistaken, AMD currently has the highest-performing GPU? I'm not totally sure, that changes every few months.

Anyway, my system does perfectly fine for me in every other game. I don't plan to go into such a detailed analysis of performance in one game and then buy entirely new PC components just for that. I just find it kind of a shame it has to be so complicated to get this game running smoothly. But I do understand it's an intensive game hardware-wise.

Is that really a fair comparison? There's a fairly significant price gap between a 770 and a 7850 (more than fairly significant IMO, it's about a $200 difference). And as that article points out, the 770 has better specs in nearly every single category, so I'm hardly surprised.

I do know Intel has better CPUs for gaming, although I don't think it's as clear-cut for GPUs; if I'm not mistaken, AMD currently has the highest-performing GPU? I'm not totally sure, that changes every few months.

Anyway, my system does perfectly fine for me in every other game. I don't plan to go into such a detailed analysis of performance in one game and then buy entirely new PC components just for that. I just find it kind of a shame it has to be so complicated to get this game running smoothly. But I do understand it's an intensive game hardware-wise.

http://products.amd.com/en-us/GraphicCardDetail.aspx?id=309&f1=&f2=&f3=&f4=&f5=&f6=&f7=&f8=&f9=&f10=&f11=&f12=&f13=&f14=&f15=&f16=&f17=&f18=&f19=&f20=&f21=&

If you look on AMD's website, they state the video card supports 128 GB/s of memory bandwidth. Simply put, this is how fast the video card can move data between the memory chips and the processing die, and that number is best to be larger. For example, my current GTX 650 OC is running its memory at 6008 MHz while your video card runs at 4008 MHz; your card compensates for the lower memory clock with a wider memory interface. There is also another key thing about your video card: it has fewer TMUs (texture mapping units), which, simply put, help the card process textures faster. A faster memory clock also means it can deal with anti-aliasing better.
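As a rough sanity check of those figures (my assumption, not from AMD's page: bandwidth = effective memory clock x bus width / 8), the numbers quoted in this thread do line up:

```python
def bandwidth_gb_s(effective_clock_mhz, bus_width_bits):
    # MHz * bits = megabits/s; /8 -> megabytes/s; /1000 -> GB/s
    return effective_clock_mhz * bus_width_bits / 8 / 1000

# HD 7850: 4008 MHz effective on a 256-bit bus -> ~128 GB/s, matching the figure above.
print(bandwidth_gb_s(4008, 256))
# GTX 650 OC'd to 6008 MHz on its 128-bit bus -> ~96 GB/s despite the higher clock.
print(bandwidth_gb_s(6008, 128))
```

This is also why the wider bus compensates for the lower memory clock, as described above.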

Now, you say the GTX 770 is more expensive: it has both a higher memory clock and a higher core clock (nearly 3000 MHz more on the memory than the AMD), more TMUs, and roughly 400 more shader cores than the AMD. And from a technical standpoint there is nothing wrong with your AMD CPU; its 12 pipeline stages are fewer than Intel's 14, as discussed above.

http://videocardz.com/ati/radeon-7000/radeon-7850

http://videocardz.com/nvidia/geforce-700/geforce-gtx-770

You are likely to get a benefit in Arma 3 by explicitly enabling your threads and cores: in Steam's launch options add -exThreads=6 and -cpuCount=6 and see if that improves anything. I know Arma 3 has auto-detection for quad-cores etc., but I don't think hexa-cores are covered.

On a side note, people expect huge battles in Arma 3, but that said, a large number of missions are bogged down with unnecessary scripts, which cause big performance issues. Find a mission where the mission maker manages things correctly, such as not revealing AI until the player is close enough; a simple idea is to conceal them as they spawn, hiding the fact that they just popped in, and when certain groups of AI die, reveal the next lot, and so on, so that neither the player's screen nor the server is overwhelmed.

It's kind of sad that Bohemia doesn't let you invoke the simulation manager in a more dynamic way than through the module.



Those numbers look normal with your CPU, sadly.

You'd have to go with an Intel CPU to get those numbers up; upgrading to another AMD CPU alone isn't worth it, as FPS is still quite low on the faster AMD CPUs compared to Intel :/

Those numbers look normal with your CPU, sadly.

You'd have to go with an Intel CPU to get those numbers up; upgrading to another AMD CPU alone isn't worth it, as FPS is still quite low on the faster AMD CPUs compared to Intel :/

I'm sorry, but my FX-8350 says otherwise. Sure, Intel has the overall fastest CPUs, but the FX-8350 is surely capable of running Arma 3 at good FPS. After all, it is cheaper to just buy a CPU instead of a CPU AND a motherboard (and maybe even RAM).


You are likely to get a benefit in Arma 3 by explicitly enabling your threads and cores: in Steam's launch options add -exThreads=6 and -cpuCount=6 and see if that improves anything. I know Arma 3 has auto-detection for quad-cores etc., but I don't think hexa-cores are covered.

This won't do anything; cpuCount=4 and exThreads=7 are the maximums on all CPUs. If you enter 6 and 6, it will revert to auto-detect, which will probably work perfectly and has done since at least Arma 2 1.59.

exThreads can have the values 0, 1, 3, 5 or 7. These are not the number of extra threads; they are values that select which row of the extra-threads table is used (see the sketch after the list):

0 = zero extra threads

1 = 1 file ops extra thread

3 = 1 file ops and 1 texture loading

5 = 1 geometry loading and 1 file ops

7 = 1 geometry, 1 texture, 1 file ops
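Reading the table that way, the value looks like a bit field (my inference from the table, not official documentation): bit 0 = file operations, bit 1 = texture loading, bit 2 = geometry loading. A quick Python sketch that decodes it the same way:

```python
# Assumed mapping inferred from the table above, not from Bohemia's docs.
EXTRA_THREADS = {1: "file ops", 2: "texture loading", 4: "geometry loading"}

def decode_exthreads(value):
    enabled = [name for bit, name in EXTRA_THREADS.items() if value & bit]
    return enabled or ["no extra threads"]

for v in (0, 1, 3, 5, 7):
    print(f"-exThreads={v}: " + ", ".join(decode_exthreads(v)))
```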



@zwa223

You could turn object detail down to Normal, but that's the only tweak in the options that can really help you. It brings LOD flickering but eases the CPU load. After that, your only hope is probably overclocking. Every time you see under 99% GPU usage while you're below the FPS you'd like, it's the CPU that's bottlenecking. If you don't see full GPU usage, you can raise some graphics settings that don't add much CPU load and the FPS will stay about the same ;)
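That rule of thumb, written out as a tiny Python sketch (the usage and FPS readings are whatever your overlay, e.g. MSI Afterburner, reports; the function name is mine, not from any tool):

```python
def likely_bottleneck(gpu_usage_pct, fps, target_fps):
    # Below the FPS target while the GPU still has headroom -> the CPU is the limit.
    if fps < target_fps and gpu_usage_pct < 99:
        return "CPU-bound (GPU has headroom but FPS is below target)"
    if gpu_usage_pct >= 99:
        return "GPU-bound (GPU fully loaded)"
    return "neither (FPS target met)"

# The OP's screenshot numbers: ~30% GPU usage at 19 FPS with a 25 FPS goal.
print(likely_bottleneck(30, 19, 25))
```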

When AI comes in, you just can't do anything about the FPS anymore.

I'm sorry, but my FX-8350 says otherwise. Sure, Intel has the overall fastest CPUs, but the FX-8350 is surely capable of running Arma 3 at good FPS. After all, it is cheaper to just buy a CPU instead of a CPU AND a motherboard (and maybe even RAM).

Shame there's no real way of benchmarking this in multiplayer; at least for SP there's Arma3Mark.

I'm not saying the 8350 will run the game badly. Back when I had one I got a 35 FPS average on Stratis Wasteland with it OC'd to 4.8 GHz, which will probably be slightly higher now with a more mature game and mission files. If you're fine with numbers like that, then go ahead and grab an 8350 or better, as it's a great system in general; it's just not super-awesome for Arma or other games that are very heavy on the CPU :)

What I am saying is that Intel will do it a lot better. For example, my old i5 2500K @ 4.8 GHz averaged 45 FPS on Altis Wasteland, and a 4770K @ 4.5 GHz is closer to a 50 FPS average.

If he were to upgrade, dumping another 50-70 euros on a motherboard for Intel might be a good idea, but then again it might not :D

