ArmA2 dual GPU (X2 cards) support?

I can see where you are coming from, but I'm not entirely convinced.

I'd imagine that the algorithms that figure out which card does what are going to take up quite a chunk of processing power. I read that the chip it uses is a specialized RISC design (and RISC designs are known for their efficiency even at general-purpose tasks, let alone when they're specialized to do only one thing). What only requires a small specialized chip may correspond to quite a serious amount of power when implemented on a jack-of-all-trades x86 CPU, so I don't think it's going to be as simple as you claim. Besides, I don't think the idea behind integrating things into the CPU is to make them cheaper - the thing has to exist somewhere, so you're just moving cost from one point to another. Rather, it's to do with performance. That's why things such as network controllers and sound cards aren't built into CPUs. Of course, you could always build the actual chip into the CPU itself, but that may not be entirely feasible just yet - the chip was about the size of a CPU die.

On the other hand - if you mass-produce the special chip, it's going to be quite cheap. Considering that you can buy ARM-based computers (ARM being a popular RISC CPU) for well under $100, with nowhere near the sort of mass production that you'd see with Intel motherboards, it's quite obvious that Intel could probably churn out these chips for about $10 or $20, which is around the price that manufacturers have to pay per board for nVidia's SLI license. You have to remember that the people who want things like dual graphics cards are going to be prepared to spend a bit extra anyway, and the X58 board is high-end, not for the average Joe's office email machine. As for the MicroOS - it's pretty irrelevant. Most pieces of hardware have some sort of firmware or software running on them.

Then again, that's all just speculation - I could be completely wrong!

Took a look at the Anandtech review of the GTX 295. Some games were scaling as high as 60-70%, which shows things have obviously improved somewhat since the last generation of cards. Still, is spending 100% more to get 60-70% more really a good idea? And you're still ignoring the fact that most people don't have €800 to spend on two high-end graphics cards. If nVidia was going to go mainstream with SLI, I think they would have done it long before now. The technology has been kicking around for almost five years.

Fine argument, but it ignores the economic rule of diminishing returns, i.e. the gain vs. cost on almost anything is almost always unbalanced. I think that 60-80% extra for twice the price is actually quite good, and if you're smart you hold off buying a second card until the next generation comes out and the price drops.
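A quick back-of-the-envelope illustration (the price and frame rates below are made-up assumptions, not benchmarks) of how the second card still buys each frame at a worse rate, even with 70% scaling:

// Rough cost-per-frame comparison: one card vs. two of them in SLI.
// All numbers are illustrative guesses, not measurements.
#include <cstdio>

int main() {
    const double cardPrice  = 400.0;  // assumed price of one high-end card, in euros
    const double singleFps  = 60.0;   // assumed frame rate on one card
    const double sliScaling = 0.7;    // ~70% extra from the second card

    const double sliFps       = singleFps * (1.0 + sliScaling);  // 102 fps
    const double eurPerFpsOne = cardPrice / singleFps;            // ~6.7 EUR per fps
    const double eurPerFpsTwo = (2.0 * cardPrice) / sliFps;       // ~7.8 EUR per fps

    std::printf("single card: %.0f fps at %.2f EUR/fps\n", singleFps, eurPerFpsOne);
    std::printf("SLI pair:    %.0f fps at %.2f EUR/fps\n", sliFps, eurPerFpsTwo);
    return 0;
}

Twice the money, but each frame per second costs noticeably more - that's the diminishing return in a nutshell.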

Of course, it's horses for courses. I have two 8800s but don't use SLI, because I've always used two monitors and didn't want a performance loss. Now, if ArmA had been SLI compatible, it wouldn't have stung so badly when my No. 2 monitor blew up last week.

I'm not convinced by any part of the entire story, not even my own, which seems logical to me - but they say all roads lead to Rome, and some might be better than others.

The whole setup of the Lucid Hydra chip seems very expensive.

They could either integrate it on the CPU die or combine it with the newer IOH and ICH on a single SoC hub/bridge kind of IC.

I can't imagine they (Intel) would tell motherboard makers to add yet another IC to their PCBs.

Running it on an x86 CPU seems very strange, though a lot is possible with ISAs these days. Fusing it with a hub or bridge would be a more logical solution to me.

As for sound cards and network controllers, Intel could easily have integrated those into the ICH, but that would be an abuse of their market position.

nVidia actually acquired a Russian audio software company which was working on a shader-powered DSP-like emulator, so that OpenAL calls could be done on GPUs. This hasn't really worked out, but ATI has something similar to provide audio over HDMI.

I don't think it will take long before Intel makes a move to do something similar with their Flexible Display Interface.

There are so many factors, many of which get overlooked, which creates a whole lot of possibilities. I doubt any of us will be anywhere near the eventual result, but it's always fun to discuss.

Just like with automotive technology, you never know what's next.

I'm wondering if someone, preferably a BIS developer, could tell me whether ArmA2 will work well with quad-core CPUs and dual-GPU cards?

If not, please make ArmA2 take advantage of quad core CPUs and dual-GPU cards (GeForce GX2, Radeon X2, etc.).

A dual-GPU card is one video card with two GPUs, which shouldn't be an issue; two video cards in CrossFireX/SLI are not a dual-GPU card, and the game needs to support that mode to fully utilize both cards. It rather defeats the point of installing two graphics cards if the game doesn't support such a thing, never mind that it will probably need extra coding. I'm not too sure whether ATI/Nvidia, or even the motherboard makers, have made the interface easy and handy for game developers to use, or have left it entirely to the developers to fiddle with!
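For what it's worth, here's a minimal sketch (my own illustration, nothing official from BIS or the vendors) of what a Direct3D 9 game can actually query about the installed cards. As far as I know, with SLI/CrossFire enabled the driver usually exposes the linked pair as a single adapter and alternates frames between the GPUs itself; anything finer-grained needs vendor-specific extensions.

// Sketch: enumerate the adapters a D3D9 application can see.
// With SLI/CrossFire enabled, the linked GPUs typically appear here as
// ONE adapter; the driver decides how frames are split between them.
// Build against d3d9.lib on Windows.
#include <windows.h>
#include <d3d9.h>
#include <cstdio>

int main() {
    IDirect3D9* d3d = Direct3DCreate9(D3D_SDK_VERSION);
    if (!d3d) return 1;

    const UINT count = d3d->GetAdapterCount();
    std::printf("Adapters visible to the application: %u\n", count);

    for (UINT i = 0; i < count; ++i) {
        D3DADAPTER_IDENTIFIER9 id = {};
        if (SUCCEEDED(d3d->GetAdapterIdentifier(i, 0, &id)))
            std::printf("  %u: %s\n", i, id.Description);
    }

    d3d->Release();
    return 0;
}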


Simple question, but isn't it true that a game really has nothing to do with what system it's on and what GPU you are using?

ArmA 2 is written for DirectX 9 (correct me if I'm wrong, I don't know much about all this).

So, if you have a Core i7 running with a GTX 295 without problems, the game should also run without problems, right?

Or am I missing something?

I plan on buying a new PC soon, but don't really understand whether I should buy a Core i7 and a GTX 295 running a 64-bit OS or not...

Or should I just stick to a quad-core and a single-GPU card, because it doesn't make a lot of difference anyway?

Simple question, but isn't it true that a game really has nothing to do with what system it's on and what GPU you are using?

I don't know what you mean. The game has to be optimized for certain hardware. A lot of games aren't properly optimized for dual-GPU cards or dual-graphics-card systems, so in a certain sense you are better off getting a single-GPU card, because you aren't guaranteed any of the speed increases from a dual-GPU setup.
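Roughly what "properly optimized" means here, as I understand it: the usual dual-GPU mode is alternate frame rendering, where one GPU draws the even frames and the other the odd ones, so the speed-up only appears if a frame doesn't have to wait for something the previous frame rendered. A toy model with made-up timings:

// Toy model of alternate-frame rendering (AFR) throughput on two GPUs.
// If every frame reads a resource produced by the previous frame (for
// example, last frame's render target), the GPUs serialize and the
// second card is mostly wasted. Timings are illustrative only.
#include <cstdio>

double afrFps(double gpuMsPerFrame, double syncMs, bool dependsOnPreviousFrame) {
    if (dependsOnPreviousFrame) {
        // Frame N+1 waits for frame N to finish and be copied across.
        return 1000.0 / (gpuMsPerFrame + syncMs);
    }
    // Independent frames: the GPUs overlap, so a frame completes every half interval.
    return 1000.0 / (gpuMsPerFrame / 2.0);
}

int main() {
    const double gpuMs = 20.0;  // assumed GPU time per frame (50 fps on one card)
    std::printf("single GPU:              %.0f fps\n", 1000.0 / gpuMs);
    std::printf("AFR, independent frames: %.0f fps\n", afrFps(gpuMs, 0.0, false));
    std::printf("AFR, inter-frame reads:  %.0f fps\n", afrFps(gpuMs, 2.0, true));
    return 0;
}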

Or should I just stick to a quad-core and a single-GPU card, because it doesn't make a lot of difference anyway?

You can't go wrong with a powerful single-GPU card.


It depends on what resolution you run.

I run at 3086x1024.

So if you were a mad Quake/Crysis/Unreal monster wanting mega-fast frame rates for competitive play, you might consider jumping to SLI.

But at standard resolutions with a decent new card, your frame rates are already going to be up around the hundreds anyway.

When I upgrade to 5000x1050 I will consider a 295 card to replace my 280, as long as they make one with 1+ GB of RAM. But I'm hoping there will be a next-gen GPU on the market by then.

ArmA didn't support SLI. I wish it had. If any game ever needed a frame rate boost, OpF and ArmA are the ones.

So I hope ArmA2 does. That way, if I ever get over-excited, I'll have room to upgrade (although I don't expect to).

I've gone back to single cards from SLI. I tried it: there was the odd conflict, most games didn't support it (or need it, for that matter), it only gained a few FPS, and I personally found the price far in excess of what I was getting for the money.

These days I can't be bothered to turn SLI on, even when I play SLI-capable games.

I am more likely to wait for the next generation of GPUs to get my extra 50% frame rate than I am to buy a dual-GPU card or SLI.


If the performance bottleneck in ArmA2 stays on the CPU, even with a boosted quad-core, there will be no need for multi-GPU.

(Leaving aside triple-screen setups...)

The main reason dual-GPU is useless with ArmA is that your CPU can't sustain such a high frame rate anyway, so why would you use it?
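To put a number on that (the per-frame timings below are just examples, not measurements): the frame rate you get is set by whichever of the CPU or GPU takes longer per frame, so doubling GPU power changes nothing while the CPU is the slow side.

// If the CPU needs more time per frame than the GPU, extra GPU power
// doesn't raise the frame rate. Illustrative numbers only.
#include <algorithm>
#include <cstdio>

double fps(double cpuMsPerFrame, double gpuMsPerFrame) {
    return 1000.0 / std::max(cpuMsPerFrame, gpuMsPerFrame);
}

int main() {
    const double cpuMs = 33.0;  // assumed CPU cost per frame (about 30 fps worth)
    std::printf("one GPU  (25.0 ms/frame): %.0f fps\n", fps(cpuMs, 25.0));
    std::printf("two GPUs (12.5 ms/frame): %.0f fps\n", fps(cpuMs, 12.5));
    return 0;
}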

You could crank FSAA and anisotropic filtering to the max, but that's geekery.


Now, since ArmA2 is optimized for multi-core CPUs, there is a chance dual-GPU will take full advantage of that.

I, for one, must admit my M4A1 recoil doesn't feel the same at 28 fps as it does at 60 fps. That's the only reason I would grab a few more FPS.

