McMick

CPU/GPU Balance?

Recommended Posts

I was trying to figure out why nobody uses ArmA as a benchmark, and I found this article:

http://jonpeddie.com/reviews/comments/arma-2-review-a-tale-of-wasted-flops

I was wondering if any of the developers have read it, and whether they'd care to comment on it. Will the GPU be utilized more effectively in ArmA 3? If the article's assessment is accurate, I think addressing it would make the game more popular as a benchmark utility, and thus more popular overall, not to mention the performance gains it would bring.


If the GPU is being used to its best potential already, for this engine, what would be the result of throwing more at the GPU for it to do?

I don't know that, when the CPU is the bottleneck, the answer is to throw more work at the GPU, presumably to bring its bottleneck status up to the CPU's level.


Actually, the thread isn't too far off, but I think the author is getting ahead of himself.

If you want to do anything to fix this game's popularity, it's to fix the netcode, which gives performance drops in MP without any AI running on your client! My hunch from all the desync issues and FPS drops is that the game tries to build an ordered, reliable system on top of UDP. The problem with UDP is that it's raw packet broadcast and by itself makes no guarantees about ordering or packet loss. Instead, the netcode has to work extra hard to maintain those guarantees across the entire MP session, and that makes machines struggle to keep up.
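
To give an idea of what that extra work looks like, here's a rough sketch of the sort of reliability layer an engine has to bolt on top of UDP: sequence numbers, acknowledgements, and retransmission of anything that goes unanswered. This is just my own illustration of the general idea, not BIS's actual netcode; the names (ReliableChannel, wrap, and so on) are made up.

```
#include <chrono>
#include <cstdint>
#include <map>
#include <utility>
#include <vector>

// One datagram's worth of game traffic, with bookkeeping added on top of raw UDP.
struct Packet {
    uint32_t seq;                 // sender-assigned sequence number
    uint32_t ack;                 // highest sequence number seen from the other side
    std::vector<uint8_t> payload; // game state update
};

// Tracks what we've sent, what the peer has confirmed, and what needs resending.
class ReliableChannel {
public:
    Packet wrap(std::vector<uint8_t> payload) {
        Packet p{nextSeq_++, remoteSeq_, std::move(payload)};
        unacked_.emplace(p.seq, Pending{p, Clock::now()});
        return p; // hand the packet to the UDP socket as-is
    }

    void onReceive(const Packet& p) {
        if (p.seq > remoteSeq_) remoteSeq_ = p.seq;  // remember newest data from the peer
        // Treat acks as cumulative: everything up to p.ack has arrived, stop tracking it.
        unacked_.erase(unacked_.begin(), unacked_.upper_bound(p.ack));
    }

    // Anything unacknowledged for longer than `timeout` gets sent again.
    template <class SendFn>
    void resendStale(std::chrono::milliseconds timeout, SendFn send) {
        const auto now = Clock::now();
        for (auto& [seq, pending] : unacked_) {
            if (now - pending.sentAt > timeout) {
                send(pending.packet);
                pending.sentAt = now;
            }
        }
    }

private:
    using Clock = std::chrono::steady_clock;
    struct Pending { Packet packet; Clock::time_point sentAt; };

    uint32_t nextSeq_   = 0;
    uint32_t remoteSeq_ = 0;
    std::map<uint32_t, Pending> unacked_; // sent but not yet acknowledged
};
```

All of that bookkeeping, retransmission, and waiting is work the machines have to do every frame on top of the simulation itself.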


I think the article's real point is that if the CPU is floored and the GPU is being underutilized, then it would make sense to leverage that unused GPU power to help the game. I don't think it's nonsensical by any means. Certainly the fact that ArmA games aren't used for benchmarking must have some relation to this point.

Really, I guess what I'd like to know is: have the developers looked into using the GPU to offload work from the CPU, and are there any parts of the code that *could* be reworked to take advantage of the GPU when possible, instead of using it only for rendering? Like AI, perhaps, or physics, or ballistics?

Oh yeah, and the article uses the term FLOPS, but I'm pretty sure they're referring to floating-point operations per second and are just teasing with a bit of tongue-in-cheek wordplay. The article itself, if you read it, speaks rather highly of ArmA 2.

Edited by McMick

Really, I guess what I'd like to know is: have the developers looked into using the GPU to offload work from the CPU, and are there any parts of the code that *could* be reworked to take advantage of the GPU when possible, instead of using it only for rendering? Like AI, perhaps, or physics, or ballistics?

If BIS are able to separate out processes to different threads then it would probably be more beneficial to use unused cores on the CPU rather than unused cycles on the GPU. IMO natch :)
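
For what it's worth, fanning independent work out across spare cores is conceptually simple, something along the lines of the sketch below (AIGroup and updateGroup are placeholders I invented, not actual engine code). The hard part is making the simulation state safe to touch from several threads at once.

```
#include <algorithm>
#include <cstddef>
#include <thread>
#include <vector>

struct AIGroup { /* units, waypoints, known targets, ... */ };

// Placeholder for per-group simulation work (pathfinding, target selection, ...).
void updateGroup(AIGroup& group, float dt) { (void)group; (void)dt; }

// Fan independent group updates out across however many cores the machine has.
void updateAllGroups(std::vector<AIGroup>& groups, float dt)
{
    const unsigned    workers = std::max(1u, std::thread::hardware_concurrency());
    const std::size_t chunk   = (groups.size() + workers - 1) / workers;

    std::vector<std::thread> pool;
    for (unsigned w = 0; w < workers; ++w) {
        const std::size_t begin = w * chunk;
        const std::size_t end   = std::min(groups.size(), begin + chunk);
        if (begin >= end) break;

        // Each worker thread handles one contiguous slice of the group list.
        pool.emplace_back([&groups, begin, end, dt] {
            for (std::size_t i = begin; i < end; ++i)
                updateGroup(groups[i], dt);
        });
    }

    for (auto& t : pool) t.join(); // the frame waits for all AI work to finish
}
```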


I don't care who Jon Peddie is, but I don't think somebody writing such an article would buy an SSD just to get the best out of ArmA 2, let alone check out the latest patches to see how the game performs now compared to its release.


I do, however, really hope they make ArmA 3 a 64-bit application this time. A game like this could really use the extra performance, both from the CPU side and from access to larger amounts of RAM. Also, of course, improved usage of multiple cores and threads, especially now that new generations of CPUs are due to be released about six months before A3 hits the market.
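
To be clear about the RAM part: a 32-bit process is capped at roughly 2-4 GB of address space no matter how much memory is installed, while a 64-bit build removes that ceiling (and on x86-64 the compiler also gets extra registers to work with, which can help CPU-bound code). A trivial check, nothing engine-specific:

```
#include <cstdio>

int main()
{
    // 32-bit build: pointers are 32 bits, so the process sees at most ~4 GB
    // (often only 2 GB usable on Windows). 64-bit build: that ceiling is gone.
    std::printf("pointer size: %zu bits\n", sizeof(void*) * 8);
    return 0;
}
```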


I'll never say never, but I think the 64-bit thing is a no-go because BIS' HDD streaming tech is currently working out for them and there are other problems with using vast amounts of RAM. Dwarden mentioned something like this.

How does a 64-bit executable improve CPU performance?

If the GPU is being used to its best potential already, for this engine, what would be the result of throwing more at the GPU for it to do?

Better graphics with a negligible decrease in performance, of course. :p

Edited by NeMeSiS
I cant spell

If BIS are able to separate out processes to different threads then it would probably be more beneficial to use unused cores on the CPU rather than unused cycles on the GPU. IMO natch :)

The big wave in computing is using GPUs to do work ordinarily done by CPUs, in a fraction of the time a CPU (even a multi-core one) takes. First CUDA and now OpenCL are being used on Nvidia and AMD cards to deliver big speedups across a wide range of applications. I think this is what the article was driving at: that it's time for simulators to take advantage of the GPU's ability to outperform a CPU in certain tasks, which would allow better performance on the same hardware people are currently using. If the CPU is already floored and the GPU has a lot of headroom, there's an opening there for a performance increase.
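
As a concrete (and entirely hypothetical) example of the kind of task that maps well onto a GPU: advancing thousands of projectiles by one timestep is the same small calculation repeated over independent data, which is exactly what CUDA and OpenCL are built for. A toy CUDA sketch, not anything from the actual engine:

```
#include <cstdio>
#include <vector>
#include <cuda_runtime.h>

// Invented data layout: one simple projectile per GPU thread.
struct Projectile {
    float x, y, z;    // position (m)
    float vx, vy, vz; // velocity (m/s)
};

// Each thread advances one projectile by dt, with gravity and a crude linear drag.
__global__ void stepProjectiles(Projectile* p, int n, float dt, float drag)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;

    p[i].vz -= 9.81f * dt;
    p[i].vx *= 1.0f - drag * dt;
    p[i].vy *= 1.0f - drag * dt;
    p[i].vz *= 1.0f - drag * dt;

    p[i].x += p[i].vx * dt;
    p[i].y += p[i].vy * dt;
    p[i].z += p[i].vz * dt;
}

int main()
{
    const int n = 100000; // e.g. every round, tracer and fragment in a big firefight
    std::vector<Projectile> rounds(n, Projectile{0.f, 0.f, 100.f, 800.f, 0.f, 0.f});

    Projectile* dev = nullptr;
    cudaMalloc(&dev, n * sizeof(Projectile));
    cudaMemcpy(dev, rounds.data(), n * sizeof(Projectile), cudaMemcpyHostToDevice);

    // One 1/60 s simulation step across all projectiles in parallel.
    const int threads = 256;
    const int blocks  = (n + threads - 1) / threads;
    stepProjectiles<<<blocks, threads>>>(dev, n, 1.0f / 60.0f, 0.01f);

    cudaMemcpy(rounds.data(), dev, n * sizeof(Projectile), cudaMemcpyDeviceToHost);
    cudaFree(dev);

    std::printf("first round is now at x = %.2f m\n", rounds[0].x);
    return 0;
}
```

The catch, as I understand it, is getting the data to and from the card cheaply enough and keeping the results in sync with everything else the CPU is doing each frame, which is presumably where the real engineering effort lies.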

Like the author of the article, I'm not a programmer, so I don't know what sort of complexities and problems this entails. But originally I was trying to find out why ArmA is not popular as a benchmark (with hardware manufacturers, reviewers, etc.) while other games are favored instead, and the article implies, though it doesn't actually state, that it's because those games *do* take better advantage of modern GPU architecture and therefore show the advantage of one card over another more accurately. The graphs in the article point out that even moderate graphics cards aren't really the bottleneck in ArmA 2; the CPU is. So the high-end cards could be doing some of that work, and the game could be getting better framerates as a result.

I was trying to figure out why nobody uses ArmA as a benchmark, and...

Unpredictability. There's no demo recording. And perhaps more importantly, there's FPS-based level-of-detail switching, or scaling, for graphics and AI; i.e., lesser hardware might seemingly get the same test results because it actually does less work.
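
In other words, something along these lines (the numbers and field names are invented for illustration, not the engine's actual logic) is why two very different machines can both report similar FPS while rendering very different amounts of scenery:

```
#include <algorithm>

// Invented settings struct: stands in for whatever the engine actually scales.
struct DetailSettings {
    float objectDrawDistance = 2000.0f; // metres
    int   aiThinksPerFrame   = 32;      // how many AI units get a "think" slice
};

// If the frame took too long, quietly shed work; if there's headroom, add it back.
// Machines tuned this way converge on similar FPS while doing different amounts of work.
void adaptDetail(DetailSettings& d, float frameMs, float targetMs = 33.3f)
{
    if (frameMs > targetMs) {
        d.objectDrawDistance = std::max(500.0f,  d.objectDrawDistance * 0.95f);
        d.aiThinksPerFrame   = std::max(8,       d.aiThinksPerFrame - 2);
    } else {
        d.objectDrawDistance = std::min(4000.0f, d.objectDrawDistance * 1.02f);
        d.aiThinksPerFrame   = std::min(64,      d.aiThinksPerFrame + 1);
    }
}
```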
