SLI benchmarks - My results (by v8_laudi)


> You can rant all you want, CPU limited is the right term.

I don't care much what the "right term" is; I care about the fact that the engine is (currently) incapable of using the CPU efficiently, and after years of development this is disappointing... to put it mildly... unless they are going to set the recommended specs to "10 GHz CPU suggested". lol

> I don't care much what the "right term" is; I care about the fact that the engine is (currently) incapable of using the CPU efficiently [...]

Of course you have a better, more CPU-efficient engine that deals with large-scale infantry, armour, and air/sea engagements, offers unlimited sandbox play and has an incredible editor, none of which have been even close to matched in ~12 years, right?

Didn't think so :rolleyes:

It may not be perfect, but the OFP/ArmA series of games is far more ambitious and complex to design than other shooters.

With that ambition and complexity come many technological hurdles; thankfully, BIS is unique in that they vigorously support their products for years after release.

That's in contrast to releasing a new version every 6-12 months at a premium price point and promptly dropping support for it as soon as the last DLC is out the door, in preparation for the next Call of Battlefield: Gears of HALO.

Edited by BangTail


So I was looking forward to upgrading soon, but now I'm not so sure. Every ArmA release has dictated an upgrade for me, so what would you do in my shoes?

Current rig:

i7 950 @ 4.2 GHz

GTX 580 SLI

6 GB RAM

27-inch Dell

Intel SSD

So I was going to upgrade to Haswell and an NVIDIA 780 or equivalent. I was even thinking of getting Titans when Haswell comes out, but I'm not spending that much money if A3 won't utilise them properly.

I guess the question is: do you think I would see a noticeable increase in performance with an overclocked Haswell CPU and 780s in SLI, compared to my current setup?


The CPU will make more difference than the GPUs, imo.

The game can't really push two 580s very hard unless you're using three-way Surround resolutions or something.

A newer-gen CPU, faster and with more cache on it, would probably be more worth the upgrade.

Unless those are the 1.5 GB 580s - then the GPU upgrade might be worth it too, as I've seen this game use a lot of VRAM.

> So I was looking forward to upgrading soon, but now I'm not so sure. [...] Do you think I would see a noticeable increase in performance with an overclocked Haswell CPU and 780s in SLI, compared to my current setup?

Titan really shines in the minimum-FPS department (as important as max FPS), but I play a lot more than just A2/A3.

We won't really know how well A3 scales across multiple GPUs until it is released (and possibly beyond).

I think it will be more of a priority this time, as BIS seem to be trying to attract a wider audience.

I know Nvidia have talked about A3-specific tweaks, but they have not been implemented via drivers at this point.

On CPUs, I'd really like to see A3 scale across as many as 8 cores as well as Frostbite 2 does, but the complexity of the AI may hamper that.
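
To put rough numbers on that worry, here's a back-of-envelope Amdahl's law sketch (the parallel fractions and core count are purely illustrative, not measured engine data):

```cpp
#include <cstdio>

int main() {
    // Amdahl's law: if a fraction p of each frame's work parallelises and
    // the rest (say, a serial AI/simulation step) does not, then n cores
    // give at most S(n) = 1 / ((1 - p) + p / n).
    const int n = 8;                     // assumed core count
    for (double p : {0.5, 0.8, 0.95}) {  // illustrative parallel fractions
        double s = 1.0 / ((1.0 - p) + p / n);
        std::printf("%2.0f%% parallel -> at most %.2fx speedup on %d cores\n",
                    p * 100, s, n);
    }
}
```

If the AI keeps half of each frame serial, eight cores buy less than a 2x speedup, which is why the threading question matters more than raw core counts.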

One other area where I'd like to see some work is built-up areas - these seem to cause massive FPS hits, and for cities where most of the buildings are just static objects that you can't enter, that doesn't make a lot of sense.

On the 780 - no one really knows when it will appear, and I doubt very much it will beat Titan. My guess is that it will be around the same performance but cheaper, with gimped double-precision compute (like the 680) - time will tell :D

Unfortunately, as long as Nvidia remains uncontested, the speed bumps are going to be small and pricey.

AMD has tech in the pipe but the rumors are pointing toward next year.

> Of course you have a better, more CPU-efficient engine that deals with large-scale infantry, armour, and air/sea engagements [...] Didn't think so :rolleyes:

I don't think it's unreasonable to request an engine that more effectively uses the CPU power we currently have. Citing examples of RV's strengths is not a valid reason why they shouldn't try to improve the engine's CPU efficiency.

> Of course you have a better, more CPU-efficient engine that deals with large-scale infantry, armour, and air/sea engagements [...] Didn't think so :rolleyes:

It isn't that hard to figure out: Arma 3 uses my CPU and GPU at 40% of their total "power", and it runs at 25 fps on average. So it's not my PC that's short on "power"; it's the engine that sucks. I'm not sure why we're still discussing it.. it's so obvious! :)

> It isn't that hard to figure out: Arma 3 uses my CPU and GPU at 40% of their total "power", and it runs at 25 fps on average. [...]

amen

And to tie this back on topic: SLI performance is bad because the engine is bad, because it does not use all of the available power! BOOM, back on topic!

> amen [...] SLI performance is bad because the engine is bad, because it does not use all of the available power!

We are on topic.. I'm on SLI as well, and my SLI is used "fine".. and by "fine" I mean: my graphics cards are capable of processing the graphics, BUT if the frame rate is capped by something else (a bad/unoptimised engine) and the fps are so low (for a reason that does not depend on my graphics cards), there's NOTHING for my two super-duper graphics cards to process, because there are only 25 frames per second to "calculate". This is why with a single card you get the same fps as with two (or three) of them.. because they aren't being used, because the engine: s-u-c-k-s (atm).

> We are on topic.. I'm on SLI as well, and my SLI is used "fine".. [...] This is why with a single card you get the same fps as with two (or three) of them.

Not really - your understanding of how these things work is seriously flawed, so there isn't much point in continuing this discussion.

And as Leon said, this is a topic about SLI.

FYI, (for the 3rd or 4th time) SLI works fine.

Have a good one :)


May I remind everyone that the topic is SLI performance, not engine optimisation. There are surely other threads for that. If someone has a hard time dropping the discussion right here and now, I'm willing to assist.


If I artificially increase the GPU load by doing something like 200% rendering quality, then you see the load spread evenly across my 3 GPUs. SLI is indeed functional.

> It isn't that hard to figure out: Arma 3 uses my CPU and GPU at 40% of their total "power", and it runs at 25 fps on average. [...]

Yes, the engine does not utilise all of the available CPU cores fully; this is because most of the important stuff gets done in a single thread, so once that single core is maxed out you have a performance bottleneck, never mind what the other cores are doing and never mind what the GPUs are doing. Am I excusing this, or saying it is trivial or acceptable? No, I'm just saying that this is where the performance issue lies at the moment, and until BIS have a solution (if ever) we're stuck with it.

However, you can mitigate the problem by setting a much shorter object draw distance and turning object detail down a notch - a simple quick fix to plaster over the cracks whilst BIS work on it.
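
To see why a saturated main thread can hide behind a low overall CPU figure, here's a minimal toy sketch (not BIS code; the 4-core/8-thread CPU is an assumption):

```cpp
#include <chrono>
#include <thread>

// One thread doing all the "work", the way a mostly single-threaded
// engine does. On a 4-core/8-thread CPU, Task Manager reports this as
// only ~100/8 = 12.5% total usage even though the thread that matters
// is completely maxed out -- so "only 40% CPU usage" and "CPU-bound"
// are not contradictory at all.
int main() {
    std::thread main_loop([] {
        volatile unsigned long long x = 0;
        auto end = std::chrono::steady_clock::now() + std::chrono::seconds(30);
        while (std::chrono::steady_clock::now() < end)
            ++x;  // busy-spin: one core at 100%, the other seven idle
    });
    main_loop.join();
}
```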

> Yes, the engine does not utilise all of the available CPU cores fully; this is because most of the important stuff gets done in a single thread

Exactly.. and this is the problem we had.. how long ago? Five years? And they haven't found a solution in all these years... so why didn't they put all their heads together (including families, dogs, even the man who cleans their offices) and concentrate on SOLVING this issue and updating the game engine to the YEAR 2013, instead of giving us "nice gfx" that you cannot even see with a couple of Titans? They should have solved this problem before anything else; it should have been the VERY TOP priority.

> Yes, the engine does not utilise all of the available CPU cores fully [...] a simple quick fix to plaster over the cracks whilst BIS work on it.

Did you read what the mod said in the post two above yours, by any chance?

*crickets*

Didn't think so.


With a pair of 680s, if I don't overclock my 3930K at all, it's hard to get even one card fully used, let alone two. Without overclocking, it is extremely difficult to get to 60 fps in a real game, let alone the 120 Hz I would like to have.

As others have said, it seems almost entirely limited by CPU performance, and that seems to be dominated by the object view distance setting. Decreasing view distance dramatically improves performance.

One thing I did note, however, was that setting the game into AFR mode 2 improved frame rates by about 20% compared to the default; it seemed to increase the utilisation of the second card. It only has an impact when the machine is overclocked, though - it seems to make no difference with the CPU at stock clocks.
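
That observation fits a simple pipeline model. With alternate frame rendering the GPUs take turns on whole frames, so the GPU stage effectively costs t_gpu / n per frame, but the CPU still has to prepare every frame serially. An idealised sketch (the frame times are made up for illustration, not measurements):

```cpp
#include <algorithm>
#include <cstdio>

// Idealised AFR throughput: capped by the slower of the two stages.
double afr_fps(double t_cpu_ms, double t_gpu_ms, int n_gpus) {
    return 1000.0 / std::max(t_cpu_ms, t_gpu_ms / n_gpus);
}

int main() {
    // CPU-bound: 25 ms of CPU work per frame -> same fps with one GPU or two.
    std::printf("stock CPU: 1 GPU %.0f fps, 2 GPUs %.0f fps\n",
                afr_fps(25, 20, 1), afr_fps(25, 20, 2));
    // Overclock the CPU (12 ms/frame) and the second card finally helps.
    std::printf("OC'd CPU:  1 GPU %.0f fps, 2 GPUs %.0f fps\n",
                afr_fps(12, 20, 1), afr_fps(12, 20, 2));
}
```

That matches the behaviour above: the second GPU only pays off once the CPU stops being the slower stage.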


Just a quick post with links to some info on multi-GPU/SLI testing: http://www.guru3d.com/articles_pages/fcat_benchmarking_review.html

Some of the "group of hate" should really study what they hate about...

Capturing at 1920x1080 @ 60 Hz in real-time shows IO writes of roughly 200~250 MB/s.

Capturing at 2560x1440 @ 60 Hz in real-time shows IO writes of roughly 400~475 MB/s.
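
Those figures pass a quick sanity check, assuming the capture pipeline stores roughly 2 bytes per pixel (e.g. YUV 4:2:2 - my assumption, not something the article states):

```cpp
#include <cstdio>

// Capture bandwidth at ~2 bytes/pixel and 60 Hz: W * H * 2 * 60 bytes/s.
int main() {
    auto mb_per_s = [](long w, long h) { return w * h * 2 * 60 / 1e6; };
    std::printf("1920x1080 @ 60 Hz: %.0f MB/s\n", mb_per_s(1920, 1080)); // ~249
    std::printf("2560x1440 @ 60 Hz: %.0f MB/s\n", mb_per_s(2560, 1440)); // ~442
}
```

Both land inside the quoted 200~250 and 400~475 MB/s ranges.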

SLI works, and works better if you take the time to understand what is happening.

As usual BangTail, you are correct.


Just a question that popped into my head today: why is no one doing some type of SLI but for CPUs? I have two motherboards, both with good CPUs/memory etc. Why aren't people learning how to combine them somehow, since we all know that CPU speeds have hit a brick wall and will probably become even more stagnant in the future? My 4-year-old Q9550 @ 3.4 GHz is not much slower than my new i5-3570K @ 4.4 GHz. Four years have passed and I would have thought I'd be running an 8.0 GHz system by now, but it's not happening.

So, as stupid as the question seems, does anyone think this is possible if people start looking into running two systems linked together? Or maybe dual-CPU motherboards? :j:


It would require the same thing that's required to make proper use of a single CPU - multithreading. If the game can't spread its work across all available threads and use them efficiently, then adding extra CPUs is useless - consider the fact that a "simple" quad core is not being used properly at the moment as it is.
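
For a rough picture of what "spreading across all available threads" even means, here's a minimal sketch that splits one embarrassingly parallel per-frame job across hardware threads (toy code with a made-up workload - real engine work is far harder to split, because AI agents interact):

```cpp
#include <algorithm>
#include <cstddef>
#include <thread>
#include <vector>

int main() {
    std::vector<float> agents(100000, 1.0f);  // dummy per-agent state
    unsigned n = std::max(1u, std::thread::hardware_concurrency());

    // One chunk per hardware thread; each thread updates its own slice.
    std::vector<std::thread> pool;
    std::size_t chunk = agents.size() / n;
    for (unsigned t = 0; t < n; ++t) {
        std::size_t begin = t * chunk;
        std::size_t end = (t == n - 1) ? agents.size() : begin + chunk;
        pool.emplace_back([&agents, begin, end] {
            for (std::size_t i = begin; i < end; ++i)
                agents[i] *= 1.001f;  // independent update: trivially parallel
        });
    }
    for (auto& th : pool) th.join();
}
```

The catch, as noted above, is that game AI is rarely independent like this; once agents read and write shared state, you need synchronisation, and the easy speedup evaporates.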


Well, that just sucks. I was just researching quad cores, and I found an article that says 99% of game developers just don't know how to code for multiple processors and that it would require additional training on their part. BIS, get back to the books!!! :rolleyes:

> Just a quick post with links to some info on multi-GPU/SLI testing [...] SLI works, and works better if you take the time to understand what is happening. As usual BangTail, you are correct.

Well cheers for that mate - I appreciate the vote of confidence :)

Edited by BangTail

