dna_uk

DirectX 12 for ArmA 3?


To me, this DX12 hype looks like an elaborate marketing campaign.

After the Windows 8 flop, Microsoft had to do something to push consumers toward Windows 10; otherwise it would have met the same fate as Windows 8. No one is going to drop Windows 7 over games, even less when Windows 10 demands a huge loss of privacy.

DX12 is the way to convince gamers to upgrade, and it is also good for hardware manufacturers: lots of people will bury their hardware just to get the newest awesome DX12-ready parts.

And when AMD is leading the hype for DX12, it makes me even more suspicious. I can't forget the troubles AMD cards had when DX11 came out; to make the cards usable they had to add an option to disable tessellation.

Let's see.

Mantle has already proved that a low-level API can do wonders in an environment where the CPU is the limit; the same goes for DX12. BTW, if you have a decent gaming card (7xxx GCN and upwards from AMD, GTX 4xx and up from NVIDIA), you can benefit from DX12. Moreover, older CPUs will get new life breathed into them thanks to this, so how exactly are the hardware providers profiting from this, other than making a HUGE jump forward for the industry and the players? Sure, it does make PC gaming more sexy, but they don't force you to buy anything.

LE: Different sources, but I suppose the numbers should be close to one another.

[images: 3DMark API Overhead feature test results]

From the AMD blog, the test system should be an AMD FX-8350 with an AMD Radeon™ R9 290X.

http://community.amd.com/community/amd-blogs/amd-gaming/blog/2015/03/26/amd-enables-incredible-directx-12-performance-in-new-3dmark-api-overhead-feature-test

http://www.pcworld.com/article/2900814/tested-directx-12s-potential-performance-leap-is-insane.html



When I see synthetic benchmarks from AMD touting the beauty of 8 cores, I start to have doubts.

It's a fact that 8 cores are of no use to any current game engine. Of the engines I know (and I know a few), the one that scales best across cores is UE3, and it uses 4 cores.

When I see performance proofs based on 3DMark, I get even more suspicious; game engines are not benchmark tools optimized to squeeze hardware in order to achieve high scores.

Also, from my own personal experience: when DX11 came out I had three 5870s in triple CrossFire, and for a long time the highest world score in 3DMark11 (for my hardware configuration) was mine, yet in actual DX11 games performance was a disgrace with all of them.

I am not saying that DX12 sucks; I am saying that I need to see this DX12 thing applied to real things (like games) to see how it behaves. Maybe it will be just as awesome as they claim.

And if Microsoft has nothing to hide, it will provide DX12 support for Windows 7.


[images: CPU/API scaling benchmark charts for ArmA 3 and Crysis 3]

The pictures above show what a low-level API can do. Keep in mind this was still early days, without games designed specifically to take advantage of it. The extra performance is there, for sure.

As you can see in ArmA 3, at least in the version they tested back then, much of the action happens on the main core. If they manage to properly thread the rendering part, it should do well, at least in low-AI action scenes or perhaps even in MP games. Hopefully more stuff will then be added around the game, like furniture, more cars, trash and other such things, to make the world feel real.

The last set is from Crysis 3. The 1st one is from a level that uses physics-simulated grass extensively; when coded right, the results are good. In the 2nd there isn't much going on, so of course AMD falls behind a bit.

All in all, the fact that at least on the render side AMD can keep up with Intel in a good way, with both offering great improvements, is a nice step ahead. It means more complexity. Of course, a game engine has other systems that require proper multithreading and that will determine the fps of a game. We need to wait and see, but the future's bright. :)

LE: DX12 – Multi-Threading Shadow Map Rendering Performance Boost, Can Be Applied to Existing Engines

http://www.dsogaming.com/news/dx12-multi-threading-shadow-map-rendering-performance-boost-can-be-applied-to-existing-engines/


Right now Arma 3 is issuing about 2000 draw calls, even with a high rendering distance. As you increase the render distance you'll see the profiler's rendr section grow, yet when you look at the GPUView output you won't see a corresponding increase in DirectX API overhead. Thus I came to the conclusion some months ago that the game is rendr- (and sim-) limited, but it's not DirectX's fault; BI are nowhere near the limits of the API. In fact, the rendr section is the only part of the game that shows any amount of multithreading at all, but most of the time is spent in BI code, not in DX11.

I have done everything I can think of to determine the root cause of the problem (as a developer with years of experience profiling and fixing problem programs, incidentally), and my conclusion is that the whole problem lies in BI's code. The primary factors for poor client performance are the game's simulation, which includes running the scripts but not the AI (which is mostly server-side), and the rendering process, which again is mostly BI code rather than DX11. I have all but categorically proved it with profiling output, Microsoft debugger tools and draw call estimations, all of which tell us a lot about what Arma is doing when it runs slow.

The next step is for BI to fix the game, and DX12 isn't going to do a lot here. It will improve the frame rate because it will reduce an overhead on the main thread, and it might give BI some more avenues for parallelism, but fundamentally the problem is 100% in what BI has done, not in the API they are using.
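For anyone who wants to reproduce the draw call estimation, here is a minimal, hypothetical C++ sketch of the idea only; the helper names are invented, and in practice you would hook the runtime (e.g. via a proxy DLL) or read the counts out of a GPUView/ETW capture rather than edit the game:

```cpp
// Hypothetical sketch: counting draw calls per frame through a wrapper.
#include <d3d11.h>
#include <atomic>
#include <cstdio>

static std::atomic<unsigned> g_drawCallsThisFrame{0};

void CountedDrawIndexed(ID3D11DeviceContext* ctx,
                        UINT indexCount, UINT startIndex, INT baseVertex)
{
    ++g_drawCallsThisFrame;                            // tally each call
    ctx->DrawIndexed(indexCount, startIndex, baseVertex);
}

void CountedPresent(IDXGISwapChain* swapChain)
{
    unsigned calls = g_drawCallsThisFrame.exchange(0); // per-frame total
    std::printf("draw calls this frame: %u\n", calls); // ~2000 as noted above
    swapChain->Present(1, 0);                          // sync interval 1, no flags
}
```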

The best way for you to check how DX11 affects the game's rendering, and consequently its performance, is to grab a dedicated server; running tests on our local machine will give inaccurate results when AI is involved.

On the dedicated server use an empty terrain, spawn 500 vanilla AI units there (infantry and vehicles), then join the server and jump into the middle of them. Run your rendering tests (use average graphics settings in game).

Now, on the same server and scenario, place 500 less detailed units (from the RHS mod, for instance) and repeat the process.

You will see that the rendering effort is much less intensive with the less detailed units, obviously.

Now, if you repeat the process in a scenario like an Altis town, you will see that the difference between the two situations is much less noticeable.

All this seems obvious and would be normal if the game ran within acceptable performance, but it does not, and that is mainly due to DX11.

Tessellation (even if dynamic), geometry, level of detail, reflections, shading and mapping are really great in DX11, but they are hardware-intensive.

In a game like Arma 3, with large scenarios, countless objects and everything highly detailed, there is no way for it to work smoothly.

Take a look at the rendering breakdowns when you approach (in a vehicle) a place with buildings or constructions (like an Altis town, where every single piece is rendered individually); that's the level-of-detail transition and tessellation at work, and those breakdowns are normal for such a highly detailed, rich and complex environment.

As I said somewhere, Arma 3 is too big for the currently available technology, in terms of both software and hardware, and that's not the engine's fault. Honestly, I don't see how such complexity could be afforded with any engine architecture.

However, DX9 could solve much of the issue. Is it the miraculous cure? Obviously not, but it would take Arma 3's performance to another level. Does a game like Arma 3 need DX11? I don't think it does; everything we see in Arma 3 can be achieved with DX9.

Now people want DX12; let's see.


You don't seem to understand how DX 9, 10, 11 and 12 work. Have another look, especially at this:

DirectX 11: Your CPU communicates to the GPU 1 core to 1 core at a time. It is still a big boost over DirectX 9 where only 1 dedicated thread was allowed to talk to the GPU but it’s still only scratching the surface.

DirectX 12: Every core can talk to the GPU at the same time and, depending on the driver, I could theoretically start taking control and talking to all those cores.

That’s basically the difference. Oversimplified to be sure but it’s why everyone is so excited about this.

The GPU wars will really take off as each vendor will now be able to come up with some amazing tools to offload work onto GPUs.

http://www.littletinyfrogs.com/article/460524/DirectX_11_vs_DirectX_12_oversimplified
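To picture the DX11 model described above, here is a minimal, hedged C++ sketch (illustrative only, not ArmA/BI code; error handling omitted): several threads can record work on deferred contexts, but everything still has to be submitted through the single immediate context, one command list at a time.

```cpp
// Sketch: parallel recording, serialized submission in Direct3D 11.
#include <d3d11.h>
#include <thread>

void RecordScenePart(ID3D11Device* device, ID3D11CommandList** outList)
{
    ID3D11DeviceContext* deferred = nullptr;
    device->CreateDeferredContext(0, &deferred);
    // ... set state and issue Draw calls on `deferred` here ...
    deferred->FinishCommandList(FALSE, outList); // bake into a command list
    deferred->Release();
}

void SubmitFrame(ID3D11Device* device, ID3D11DeviceContext* immediate)
{
    ID3D11CommandList* lists[4] = {};
    std::thread workers[4];

    // Recording can happen in parallel on worker threads...
    for (int i = 0; i < 4; ++i)
        workers[i] = std::thread(RecordScenePart, device, &lists[i]);
    for (auto& w : workers) w.join();

    // ...but submission funnels through the one immediate context.
    for (ID3D11CommandList* list : lists) {
        immediate->ExecuteCommandList(list, FALSE);
        list->Release();
    }
}
```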

In terms of rendering, the above should allow far more than what ArmA is doing at the moment.

Of course, fixing the render part doesn't magically fix the issues with other poorly threaded systems, but it should be a great start!



The boost from DX9 to DX11 is only a matter of architecture, of the way it operates.

In real situations there is no boost; it's quite the opposite, and you know why?

Because DX11 is much heavier: the boost gained from its mode of operation is outweighed by its demands. In real situations, there is no boost.

I also forgot to mention reflections: with DX11 they are a pain for graphics cards. If Arma 3 had an option to disable reflections, there would be a big boost in performance.

Look at PIP, such a pain performance-wise. DX11 again; if it were DX9 there would be no issue, and surely you know why.


Are there real-time reflections in ArmA 3? :D

Using the same amount of detail, DX11 is faster than DX9. DX11 titles only end up slower because developers add extra detail to their games.

Let me make an analogy:

DX9: 4 guys are in a position to give orders to a huge group of people. Only one of them can talk to the group, so if you have a large group that can work extremely fast and efficiently, that single guy won't manage to extract as much performance from it as possible;

DX11: 4 guys are in a position to give orders to a huge group of people. More than one can talk to the group, but not at once. It's a step in the right direction, but that group of workers is still not used properly;

DX12: 4 guys are in a position to give orders to a huge group of people. Each one of them can talk to the group at the same time. It's also possible to speak with each individual in the work group, so efficiency is increased.

None of the tests so far show a situation in which DX11 or another older API is faster than Mantle or DX12, and I would say the internetz would trip over their own fingers to show/prove otherwise if it were possible (poor implementations are not an example).

You believe DX9 would work better because the latest RV engine is still poorly threaded, but let's have a look at it. Imagine a situation where the game needs 33 ms for each frame (which comes to around 30 FPS). The render does its job, and up to some point adding detail, more objects and more land mass will scale just fine. However, after a while the CPU usage, and consequently the GPU usage, drops due to a bottleneck somewhere. Because the render is not independent from the simulation, it cannot work in parallel (if I understood this right, though it's not a big deal in this example anyway), so that 33 ms covers the simulation plus the other systems plus the render. In a situation where the render can do its job in parallel with the game's other systems and is not tied to the simulation, it can have that whole 33 ms to do the work required before becoming the limiting factor (the weakest-link-in-the-chain principle). Moving on: the render in DX9 has only one dedicated core that speaks with the GPU; when that one is full, that's it, so when it comes to 3D you are limited by the performance of one core in a multi-core CPU. Other systems may be serial or multithreaded; either way, the end result is limited either by the single-core render or by the other systems, just as above, depending on settings.

Moving to DX11, you get some boost in performance because while one core speaks to the GPU, the other(s) can prepare new work and jump into action after the first one finishes, and then they swap. This is faster than DX9 because, in that 33 ms example from above, DX11 lets you have more complex scenes that would otherwise require more than 33 ms.

In DX12 the boost is significant, because the GPU no longer has to wait at all: each core can talk to it and feed it instructions continuously, allowing far greater complexity within the same time budget (33 ms) than would otherwise be possible in real time.

To finish with another analogy: think of 4 teachers giving 80 students some work to do. In DX9, only one of them can give the entire assignment to the group. In DX11, the 1st one writes some of it on the board, then the 2nd one steps in while the 1st teacher reviews the material and prepares to continue the writing, and then they switch. In both of these examples, the 80 students finish the work faster than the teachers can write it out. In DX12, all 4 teachers are writing at the same time, giving the information to all 80 students, with no time lost (or minimal at least).
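In API terms, the DX12 part of the analogy looks roughly like the following hedged C++ sketch (illustrative only, not ArmA or BI code; device/queue/allocator setup and error handling omitted): each worker thread records its own command list on its own allocator, and the queue accepts them all in one submission.

```cpp
// Minimal sketch of multi-threaded command recording in Direct3D 12.
#include <d3d12.h>
#include <thread>

void RecordScenePart(ID3D12Device* device, ID3D12CommandAllocator* alloc,
                     ID3D12GraphicsCommandList** outList)
{
    device->CreateCommandList(0, D3D12_COMMAND_LIST_TYPE_DIRECT,
                              alloc, nullptr, IID_PPV_ARGS(outList));
    // ... record state changes and draw calls for this slice of the scene ...
    (*outList)->Close();
}

void SubmitFrame(ID3D12Device* device, ID3D12CommandQueue* queue,
                 ID3D12CommandAllocator* allocs[4])
{
    ID3D12GraphicsCommandList* lists[4] = {};
    std::thread workers[4];

    // All four "teachers" write at the same time.
    for (int i = 0; i < 4; ++i)
        workers[i] = std::thread(RecordScenePart, device, allocs[i], &lists[i]);
    for (auto& w : workers) w.join();

    // One submission hands everything to the GPU.
    queue->ExecuteCommandLists(
        4, reinterpret_cast<ID3D12CommandList* const*>(lists));
}
```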

------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Now, if the other systems are not efficient or ideal (not properly multithreaded, or using the CPU where the GPU would do better) and go beyond 33 ms (or 16 ms if it's 60 fps), even DX12 won't give extra performance, because the render will finish its job but will have to wait for the rest. However, an efficient render will still allow greater view distances and more detail, objects and shadow-casting light sources on screen at the same time. That may not seem like much, but it could greatly improve immersion and perhaps alleviate some existing problems in that department. And if the render is not independent and runs serially with the simulation inside the 16/33 ms budget, then finishing its work faster gives more CPU time to the simulation -> some extra performance nevertheless.
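A quick back-of-the-envelope illustration of the budget arithmetic above (the numbers are invented for the example, not measured from ArmA 3):

```cpp
// Illustrative arithmetic only: the "weakest link" frame budget,
// serial vs. parallel render and simulation.
#include <algorithm>
#include <cstdio>

int main()
{
    const double simMs    = 20.0; // simulation + scripts (assumed)
    const double renderMs = 13.0; // render work per frame (assumed)

    double serial   = simMs + renderMs;          // render waits on sim
    double parallel = std::max(simMs, renderMs); // render overlaps sim

    std::printf("serial:   %.0f ms -> %.0f fps\n", serial,   1000.0 / serial);
    std::printf("parallel: %.0f ms -> %.0f fps\n", parallel, 1000.0 / parallel);
    return 0; // serial: 33 ms -> 30 fps, parallel: 20 ms -> 50 fps
}
```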

PS: I believe you think DX9 would help because, at times, threading too much brings zero extra performance or even decreases it. This is not the case here.


With all respect, your analogy does not make much sense.

Developers did not add extra detail; DX11 itself requires more workload. Also, DX11 was intended to enhance visual detail along with some improvements in how the hardware operates. What's the point of DX11 if it means fewer visuals than DX9?

But the fact remains: DX11 has huge threading inefficiencies causing a huge CPU bottleneck. The inefficiencies of DX11 have a nasty impact on the Arma 3 engine (due to its concept), and that is not the Arma 3 engine's fault.

About DX12: some (independent) benchmarks are showing that it scales pretty well with 2 and 4 cores, and this is indeed an improvement since DX11 can't do the same, BUT it does not go beyond 4 cores.

Even so, letting 4-core CPUs scale better seems like good news; now we just need to see the API's workload applied in real situations. We will have to wait.

Putting 6 or 8 cores and DX12 in the same sentence, though, looks a bit unrealistic to me.


If the DX11 version of a game runs worse than its DX9 version with no visual difference, it's the devs' fault. The World of Warcraft DX11 vs DX9 performance difference is massive. DX12 actually scales up to 6 cores.


The evidence is to the contrary: DX11 is not the limiting factor on Arma 3's frame rate. The frame rate is limited more by the game's own simulation and the code it has surrounding DX11. This is a fact based on profiling information; it's not a guess, it's not a suspicion, it's supported by cold hard numbers captured in development tools. People seem to ignore this as if it were a matter of opinion. It isn't; I have shown you the data and told you how to collect and verify it yourselves.

The evidence is to the contrary: DX11 is not the limiting factor on Arma 3's frame rate. The frame rate is limited more by the game's own simulation and the code it has surrounding DX11. This is a fact based on profiling information; it's not a guess, it's not a suspicion, it's supported by cold hard numbers captured in development tools. People seem to ignore this as if it were a matter of opinion. It isn't; I have shown you the data and told you how to collect and verify it yourselves.

People do ignore it; hell, the developers ignore it. No use fighting it, just let it happen. One day it will catch up.

People hear buzzwords like "CPU limited", see that DX12 eliminates "CPU limited" problems in games, and assume it will solve all the problems, without really understanding why or what's going on. That's all the industry has been for a while anyway: hype and buzzwords.

Imagine what would have to happen for BI to actually "fix" RV in the sense that it would both run better and make better use of current and future-gen hardware. It has reached a point where I don't think it's feasible to actually fix, TBH. Writing a new engine from scratch might be less time-consuming than trying to debug and re-code the pre-existing engine. Who knows, though.

The evidence is to the contrary: DX11 is not the limiting factor on Arma 3's frame rate. The frame rate is limited more by the game's own simulation and the code it has surrounding DX11. This is a fact based on profiling information; it's not a guess, it's not a suspicion, it's supported by cold hard numbers captured in development tools. People seem to ignore this as if it were a matter of opinion. It isn't; I have shown you the data and told you how to collect and verify it yourselves.

I had asked you last time what settings you were using, and to try to do the same at higher ones (ultra for objects and 7-12 km view distance and detail distance), but you provided no data for that :p

I had asked you last time what settings you were using, and to try to do the same at higher ones (ultra for objects and 7-12 km view distance and detail distance), but you provided no data for that :p

If you think it somehow changes the nature of the research, then all the tools and capabilities are available for you to do this yourself. You don't get to demand that I run an experiment because you think it somehow changes the validity of the results; what needs to happen is that you do the research and show the difference, and then we discuss its implications. It's a common tactic I see in every forum: someone turns up with real data and a whole bundle of people just say "uhuh, it's not valid because you didn't do X", all the while having zero evidence that it makes any difference and being unwilling to take the 10 minutes it would take to test their theory. So, you are the one who wants that scenario: do it yourself and report back. We are all awaiting your amazing results showing that somehow, in this one scenario, DX12 would solve everything. It's not like I have any magic tools that others can't get; it's all available to all of us.

I won't dance because you told me to. So take your tongue face and stick it; this is a classic trolling tactic and I won't have any of it. Do your own research.


Take it easy, buddy. I could not find that software at the time, or else I would have done it myself, so I was asking if you wanted to redo the test under different circumstances, since you were more familiar with it; that was it. Of course the number of draw calls will increase or decrease depending on settings. It's all in the interest of this community/game, nothing else.



DirectX 11 is not the limiting factor; DirectX 11 is the reason for the limiting factor in the game engine. We can see it clearly from CPU and GPU usage under different circumstances.

Also, these engine limitations on hardware usage were introduced later; in the alpha stage hardware usage was more balanced, more intensive perhaps, but better in my opinion. One example is VRAM usage: in the alpha we were able to load and use all available VRAM with the game settings on very high and a 3000 view distance.

Right now VRAM usage is very limited and very low (for the game's needs), and also very unstable. Even with all settings on Ultra and a view distance of 8000, VRAM usage doesn't go above 1.3 GB, and it keeps dropping to lower values. It is clear that it is being forced within limits which are not enough to achieve smooth transitions.

The quality of the graphics in terms of LOD transition and rendering distance was also greatly reduced after the alpha stage.

From what I can see, these limitations were introduced into the engine to allow wide and more equal use of the game, no matter the hardware configuration.

In my opinion, all this was done precisely because of the heavy DirectX 11 workload and threading inefficiencies, and because of the performance impact of LOD transition (tessellation), which is a nasty situation for a game like Arma and the main issue with Arma 3.

If this were not limited on the CPU side, only those who have super machines would be able to load the game.

Another example of how DX11 was a step back is the foliage. In Arma 3, because of DX11, the foliage was drastically reduced due to the massive workload it takes to render foliage with DX11, even more so if anti-aliased. We can see this clearly in game by choosing between the different types of foliage available.

Is Arma 3's (DX11) foliage better, or does it look better? Not even close; it looks really bad compared with the foliage that could have been available with DX9.

Btw, want to see your GPU working at 100% without limitations? Load Bornholm and jump into the trees. There is no engine limitation in those circumstances, and you know why? Because foliage is not "tessellated" (CPU-dependent); it depends only on the GPU.

Will DX12 solve this? Will DX12 be able to transfer the workload from the CPU to the GPU? I need to see it to believe it.

Most likely the same or similar limitations will have to be introduced in order to allow wide use of the game; otherwise only those who have super machines will be able to run it decently.


[...] precisely because of the heavy DirectX 11 workload and threading inefficiencies, and because of the performance impact of LOD transition (tessellation), which is a nasty situation for a game like Arma and the main issue with Arma 3.

[...]

Is Arma 3's (DX11) foliage better, or does it look better? Not even close; it looks really bad compared with the foliage that could have been available with DX9.

Btw, want to see your GPU working at 100% without limitations? Load Bornholm and jump into the trees. There is no engine limitation in those circumstances, and you know why? Because foliage is not "tessellated" (CPU-dependent); it depends only on the GPU.

What tessellation, dude? What are you on about?

Will DX12 solve this? Will DX12 be able to transfer the workload from the CPU to the GPU? I need to see it to believe it.

Most likely the same or similar limitations will have to be introduced in order to allow wide use of the game; otherwise only those who have super machines will be able to run it decently.

DX12 is not about transferring things from the CPU to the GPU; it is about freeing the GPU from the constraints on direct communication with the different CPU cores.

DirectX 11 is not the limiting factor; DirectX 11 is the reason for the limiting factor in the game engine. We can see it clearly from CPU and GPU usage under different circumstances.

Also, these engine limitations on hardware usage were introduced later; in the alpha stage hardware usage was more balanced, more intensive perhaps, but better in my opinion. One example is VRAM usage: in the alpha we were able to load and use all available VRAM with the game settings on very high and a 3000 view distance.

Right now VRAM usage is very limited and very low (for the game's needs), and also very unstable. Even with all settings on Ultra and a view distance of 8000, VRAM usage doesn't go above 1.3 GB, and it keeps dropping to lower values. It is clear that it is being forced within limits which are not enough to achieve smooth transitions.

I've always wondered about people who say "Arma doesn't use more than 2 GB of VRAM". I easily get 3 GB, but should it use more without the AA stuff? With AA I see it pushed pretty high toward that 3 GB.

One example is VRAM usage: in the alpha we were able to load and use all available VRAM with the game settings on very high and a 3000 view distance.

Right now VRAM usage is very limited and very low (for the game's needs), and also very unstable. Even with all settings on Ultra and a view distance of 8000, VRAM usage doesn't go above 1.3 GB, and it keeps dropping to lower values. It is clear that it is being forced within limits which are not enough to achieve smooth transitions.

----------------------------------------------------------------------------------------

Another example of how DX11 was a step back is the foliage. In Arma 3, because of DX11, the foliage was drastically reduced due to the massive workload it takes to render foliage with DX11, even more so if anti-aliased. We can see this clearly in game by choosing between the different types of foliage available.

Is Arma 3's (DX11) foliage better, or does it look better? Not even close; it looks really bad compared with the foliage that could have been available with DX9.


The issue is on your end; mine works just fine, with VRAM going as high as 3.2-3.3 GB.

Also, I'd say foliage can look quite good in DX11, and it looks OK in ArmA (of course, not as good as it does in CryEngine or the Crysis games):


Anti-aliasing renders the image at a higher resolution and then scales it down to the resolution you have set, so yes, AA increases VRAM usage a lot.

You guys need to state your resolution and other settings when talking about VRAM usage; it will vary a lot from one PC to another.
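As a rough, hedged illustration of why resolution and AA dominate the numbers (this follows the supersampling description above; a real engine allocates many more buffers, so the figures are only indicative):

```cpp
// Back-of-the-envelope framebuffer sizing, illustrative numbers only.
#include <cstdio>

int main()
{
    const double width = 1920.0, height = 1080.0;
    const double bytesPerPixel = 4.0 + 4.0; // RGBA8 color + 32-bit depth/stencil
    const double scale = 2.0;               // render at 2x2 the output resolution

    double nativeMiB = width * height * bytesPerPixel / (1024.0 * 1024.0);
    double ssaaMiB   = nativeMiB * scale * scale;

    std::printf("native:  %.1f MiB\n", nativeMiB); // ~15.8 MiB
    std::printf("2x2 SS:  %.1f MiB\n", ssaaMiB);   // ~63.3 MiB
    return 0;
}
```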

Anti-aliasing renders the image at a higher resolution and then scales it down to the resolution you have set, so yes, AA increases VRAM usage a lot.

You guys need to state your resolution and other settings when talking about VRAM usage; it will vary a lot from one PC to another.

Yes, this is why I was trying to ask what we should expect without the AA stuff.


My VRAM usage can easily go past 2 GB. Also, DX11.2/12 has tiled resources.
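For context, tiled resources let an engine commit video memory only for the texture tiles it actually needs. Here is a minimal, hedged D3D11.2 sketch of what opting in looks like (illustrative only; as the replies below note, ArmA does not use this):

```cpp
// Sketch: creating a tiled (sparse) texture in Direct3D 11.2.
// Only tiles later mapped via UpdateTileMappings consume VRAM.
#include <d3d11_2.h>

HRESULT CreateTiledTexture(ID3D11Device2* device, ID3D11Texture2D** outTex)
{
    D3D11_TEXTURE2D_DESC desc = {};
    desc.Width            = 16384;               // huge virtual texture
    desc.Height           = 16384;
    desc.MipLevels        = 1;
    desc.ArraySize        = 1;
    desc.Format           = DXGI_FORMAT_BC1_UNORM;
    desc.SampleDesc.Count = 1;
    desc.Usage            = D3D11_USAGE_DEFAULT;
    desc.BindFlags        = D3D11_BIND_SHADER_RESOURCE;
    desc.MiscFlags        = D3D11_RESOURCE_MISC_TILED; // the opt-in flag
    return device->CreateTexture2D(&desc, nullptr, outTex);
}
```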

Anti-aliasing renders the image at a higher resolution and then scales it down to the resolution you have set, so yes, AA increases VRAM usage a lot.

You guys need to state your resolution and other settings when talking about VRAM usage; it will vary a lot from one PC to another.

It goes close to 3 GB at 1080p all maxed (8x MSAA, CSAA/SMAA) and about the same or just above at 5280x1050 without MSAA. The game most likely needs around 2 GB; the rest is just cache.

My VRAM usage can easily go past 2 GB. Also, DX11.2/12 has tiled resources.

Pretty sure Arma does not use tiled resources. It's the same thing as when that other guy brought up tessellation earlier in the thread: Arma does not use it. You cannot assume these DX11 features are used by every game that supports DX11. Most of these features are optional, and a lot of games do not use them at all.


Correct, it doesn't use tiled resources. One of the devs said a while back that the install base would be too small (Win 8 wasn't that popular at the time) and it wasn't worth it for them. And so the circle goes around: devs don't implement it because the install base is small, and players don't upgrade because they have nothing to upgrade for.


I'm just stating DX11's texture-tech advantage over DX9, since that guy praises DX9 so much. Hopefully Arma will use it, because it's also part of DX12.

