dna_uk

DirectX 12 for ArmA 3?

Recommended Posts

Don't confuse the multithreading of the render part with the overall MT optimizations of a game/engine.


This comes from another thread I've read through some pages of.

 

"Although having more than 4 physical cores did improve draw-call performance. Below is a graph which shows that more cores = better. Take this with a grain of salt though, those are still early performance tests."

 

[Graph: dx12-980.png - draw-call performance by CPU core count]


That is the renderer. Besides that you have AI, physics, sound, etc., which have all been multithreaded for quite some time now. For maximum efficiency you need all aspects of the engine to be able to properly use multiple cores.
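Purely as an illustration of that last sentence (the UpdateAI/UpdatePhysics/UpdateSound functions below are hypothetical stand-ins, nothing from the actual RV engine): each subsystem can tick on its own core, regardless of how the renderer itself is threaded.

```cpp
// Purely illustrative: hypothetical subsystem functions, nothing from the
// actual RV engine. The point is that AI, physics and sound can each tick on
// their own core, independent of whether the *renderer* is multithreaded.
#include <thread>

void UpdateAI()      { /* pathfinding, target selection, ... */ }
void UpdatePhysics() { /* collisions, ballistics, ... */ }
void UpdateSound()   { /* mixing, occlusion, ... */ }

void GameFrame()
{
    std::thread ai(UpdateAI);            // each subsystem on its own thread
    std::thread physics(UpdatePhysics);
    std::thread sound(UpdateSound);

    // ... the render thread does its own work here in the meantime ...

    ai.join();                           // sync everything before next frame
    physics.join();
    sound.join();
}

int main() { GameFrame(); }
```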


So Ashes of the Singularity is a good example. It's probably still an early implementation of DX12 in AotS, I'm not too sure, but I've read something along those lines.

The benchmarks on their forums are really interesting, from the old i7-9xx series up to the 5960X, and you'd be surprised by the performance gap between DX12 and DX11, even though I think it's still an early DX12 build, unless they've been updating it more frequently, I don't know.

Also, this comes from Oxide itself, and these guys have some of the best developers in the world on their team.

"Our game will scale with more and faster cores. Of course, if your GPUs aren't fast enough, the CPU will be waiting on them."

"As you can see, both DX11 and DX12 can utilize more than 12 threads in this game. DX12 shows a significantly improved resource allocation between threads, while DX11 shows the majority of the workload left to the primary thread."

"wow, thanks for sharing. DX12 CPU usage so steady and equal across all those cores, very impressive"

Really interesting benchmarks to be seen on their forum as well.
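To make the "more than 12 threads" quote concrete: under D3D12 every worker thread records into its own command list, and the main thread submits them all in one cheap call. A minimal sketch using real D3D12 calls; device, queue and pipeline-state creation are omitted, so treat it as the shape of the technique rather than a working renderer:

```cpp
// Sketch only: how "using more than 12 threads" looks under D3D12. Every
// worker thread records draw calls into its OWN command list (with its own
// allocator); the main thread then submits everything at once.
// 'device' and 'queue' are assumed to already exist. Link against d3d12.lib.
#include <d3d12.h>
#include <thread>
#include <vector>

void RecordWorker(ID3D12Device* device,
                  ID3D12CommandAllocator** alloc,
                  ID3D12GraphicsCommandList** list)
{
    // Each thread owns its allocator and list, so no locking is needed.
    device->CreateCommandAllocator(D3D12_COMMAND_LIST_TYPE_DIRECT,
                                   IID_PPV_ARGS(alloc));
    device->CreateCommandList(0, D3D12_COMMAND_LIST_TYPE_DIRECT,
                              *alloc, nullptr, IID_PPV_ARGS(list));
    // ... record draw calls here, in parallel with all the other workers ...
    (*list)->Close();
}

void SubmitFrame(ID3D12Device* device, ID3D12CommandQueue* queue, int workers)
{
    std::vector<ID3D12CommandAllocator*>    allocs(workers);
    std::vector<ID3D12GraphicsCommandList*> lists(workers);
    std::vector<std::thread>                threads;

    for (int i = 0; i < workers; ++i)   // e.g. one recording thread per core
        threads.emplace_back(RecordWorker, device, &allocs[i], &lists[i]);
    for (auto& t : threads)
        t.join();

    // Under DX11 all of the above would have funneled through one immediate
    // context on one thread; here it is a single bulk submission.
    std::vector<ID3D12CommandList*> submit(lists.begin(), lists.end());
    queue->ExecuteCommandLists(static_cast<UINT>(submit.size()), submit.data());
}
```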
 
But you are right about this, pretty much:
"For maximum efficiency you need all aspects of the engine to be able to properly use multiple cores."

Clueless.


As for well multi-threaded engines, you also have Frostbite and CryEngine, engines that are already out there and working. Frostbite was the first into the low-level API world with Mantle, and it shows.

Anyway, coming back to ArmA: although it is indeed multi-threaded, it's not that efficient at what it does. The way Dwarden phrases it would make you think everything is just fine and that there are simply some crazy people who complain for the sake of complaining. But look at ArmA and say they add DX12: is the API the limiting factor, as in a MAJOR one? Well, it depends on the scenario, but it isn't a magic wand you wave around to fix everything.

 

Let's remember that Bohemia said they didn't see major improvements when experimenting with the API, but then let's remember Ark and how it should have received DX12 a while back. After the AotS story broke, that was postponed "ASAP", and almost half a year later, nothing... Is it because nVIDIA is having problems and they have a partnership with them? The latest Tomb Raider used an Ambient Occlusion solution developed by nVIDIA, and Asynchronous Shaders on the Xbox One, but came without those on PC, though it had some GameWorks love. Again, an nVIDIA-sponsored game. Going back to AotS: under DX12, nVIDIA uses more of the CPU in its drivers, due to the way it handles A(synchronous) S(haders) and works around the lack of a hardware scheduler in their cards. Add that to an already CPU-demanding game and you get a situation where it's actually losing performance. Considering how RV4 (the engine under ArmA 3) works, that's a problem Bohemia has most likely already encountered. If they can't get around it, DX12 could be scrapped, or put there in name alone: no AS, nothing. AMD's install base alone won't matter to them, just as DX11.2 and tiled resources didn't matter, although those could bring great benefits to the game.

 

More on AMD: Mahigan (the guy who was quite vocal about this in the past and who worked for AMD a few years back, so he knows hardware) says that under DX11 AMD can't do much to multi-thread their drivers due to their hardware, but it performs great under DX12 - the opposite of nVIDIA, which can use its "magic" to do wonders under DX11. The green team may or may not have a solution for AS with Pascal; we'll see soon enough. I hope it does, because that would motivate developers to actually use what they put in their console ports. :D Of course, that's bad luck for every nVIDIA GPU sold until now.

 

More on Ashes of the Singularity: nVIDIA's "fixing" solution apparently involved removing some effects: http://forums.anandtech.com/showpost.php?p=38023537&postcount=317 http://forums.anandtech.com/showpost.php?p=38024684&postcount=343

 

 

Anyway, no question the R9 290 produced more afterglow. The GTX 980 Ti did too, but it looked "staged" or planned and was NOT nearly as effective or broad-based as the results from the R9 290 (Sapphire Tri-X at 1000/1300). For instance, in a firefight scene with the 290 each hit on an object appears to produce an afterglow, while only certain hits on the 980 Ti produce the same result.

Interesting phenomenon.

P.S. It's fun having both an R9 290 and a GTX 980 Ti to see the differences in DX12 as they evolve.

 

 

So in this benchmark AMD is working harder and it's still competitive with the green team.

 

Zlatan is a game developer:

 

 

Because their kernel drivers hurt parallelism. Even if I use a lot of rendering threads with D3D11, 70-80 percent of the CPU time is idle and unusable. No matter how optimized the engine is, the kernel driver threads will steal the available resources and use them in a very inefficient way.

Every engine programmer does the same job: write the code and test it. I can see a lot of stalls, but I'm unable to fix them, because the IHVs don't allow me or the other devs to debug the kernel driver. We don't have the source, we don't have the tools, nothing. If I can find a fix for a stall in the engine, then I'm the luckiest man in the world; if not, I have to contact the IHVs. At this point the whole development gets very nasty. They may provide an updated driver which fixes the bug, but most of the time another problem comes up, which requires another driver fix, which raises another problem, and so on. If I'm lucky enough I may get to meet some IHV engineers to talk about the problem directly, and I have done this in the past. Every time I sit down with the IHVs, we just agree that the API is the problem. One time I sat down with Microsoft and they told me the drivers are the problem. Sometimes I think that if the IHVs and Microsoft talked about this, they would probably agree that the devs are the problem. But in the end, these conversations don't make my code run faster, and that is really sad.

I don't know how to explain it to you without a lot of technical sentences. I think the easiest explanation is that the abstraction of D3D11 is at the wrong level. It is too high-level to be fast and too low-level to be easy to use. The new low-level APIs aren't really low-level; they just put the abstraction at the "right" level. Nothing more, nothing less.

The devs can do a much better job if they can manage the memory explicitly. The only thing the IHVs must do is open up their tools and architecture documents. That's all we want.

 

So basically, under DX11 you depend a lot on Microsoft plus AMD and nVIDIA, while DX12 gives you the freedom to fix your own problems.
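That last point from Zlatan, managing memory explicitly, is something DX12 literally hands you. A hedged sketch of what that looks like, assuming an already-created `device`; the sizes and flags below are arbitrary illustration values and error handling is omitted:

```cpp
// Hedged sketch of "managing memory explicitly" in D3D12: the application,
// not the driver, creates a heap and places resources into it at offsets it
// chooses. 'device' is assumed to already exist; sizes/flags are arbitrary
// illustration values and error handling is omitted.
#include <d3d12.h>

void AllocateBufferExplicitly(ID3D12Device* device)
{
    D3D12_HEAP_DESC heapDesc = {};
    heapDesc.SizeInBytes     = 64 * 1024 * 1024;   // one 64 MB arena we own
    heapDesc.Properties.Type = D3D12_HEAP_TYPE_DEFAULT;
    heapDesc.Alignment       = D3D12_DEFAULT_RESOURCE_PLACEMENT_ALIGNMENT;

    ID3D12Heap* heap = nullptr;
    device->CreateHeap(&heapDesc, IID_PPV_ARGS(&heap));

    // Sub-allocate a 1 MB buffer at offset 0 inside our heap.
    D3D12_RESOURCE_DESC bufDesc = {};
    bufDesc.Dimension        = D3D12_RESOURCE_DIMENSION_BUFFER;
    bufDesc.Width            = 1024 * 1024;
    bufDesc.Height           = 1;
    bufDesc.DepthOrArraySize = 1;
    bufDesc.MipLevels        = 1;
    bufDesc.SampleDesc.Count = 1;
    bufDesc.Format           = DXGI_FORMAT_UNKNOWN;
    bufDesc.Layout           = D3D12_TEXTURE_LAYOUT_ROW_MAJOR; // required for buffers

    ID3D12Resource* buffer = nullptr;
    device->CreatePlacedResource(heap, /*HeapOffset=*/0, &bufDesc,
                                 D3D12_RESOURCE_STATE_COMMON, nullptr,
                                 IID_PPV_ARGS(&buffer));
    // The equivalent D3D11 CreateBuffer() hides all of this placement inside
    // the driver -- which is exactly the complaint quoted above.
}
```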

 

Getting back to ArmA 3 and DX12: even if they manage to extract some performance there, there is still the matter of the 32-bit client and AI that isn't that well multi-threaded. Those require "fixes" as well, but at least the 64-bit client is being worked on, and with some luck DX12 will arrive too. I'd guess the major bonus will be a higher draw distance and object detail, while in situations with high numbers of AIs we'll be limited just as we are now.


^^ This is one of the reasons Vulkan is so attractive. AMD and Nvidia have MAJOR support for it, and Nvidia runs VERY well under Vulkan, as does AMD (it should, considering Vulkan contains Mantle code). Vulkan is one API for multiple platforms: no code changes in the API to go from mobile to desktop to console. Pretty slick. Eagerly awaiting the Vulkan SDK webinar on Feb 18th, anything and everything you wanted to know about Vulkan in one hour.
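The "one API for multiple platforms" part is meant literally: the same bootstrap code compiles unchanged on Windows, Linux or Android, with only the window-system extensions (not shown) differing per platform. A minimal sketch against the Vulkan 1.0 headers; the application name is made up and error handling is reduced to a printf:

```cpp
// Minimal Vulkan bootstrap: this exact code compiles unchanged on Windows,
// Linux or Android against the Vulkan 1.0 headers; only the window-system
// extension list (not shown) differs per platform. App name is made up.
#include <vulkan/vulkan.h>
#include <cstdio>

int main()
{
    VkApplicationInfo app = {};
    app.sType            = VK_STRUCTURE_TYPE_APPLICATION_INFO;
    app.pApplicationName = "vulkan-hello";
    app.apiVersion       = VK_API_VERSION_1_0;

    VkInstanceCreateInfo info = {};
    info.sType            = VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO;
    info.pApplicationInfo = &app;

    VkInstance instance = VK_NULL_HANDLE;
    if (vkCreateInstance(&info, nullptr, &instance) != VK_SUCCESS) {
        std::printf("no Vulkan driver found\n");
        return 1;
    }
    std::printf("Vulkan instance created\n");
    vkDestroyInstance(instance, nullptr);
    return 0;
}
```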


@bratwurste

 
Clueless.

 

Not exactly.
This is pure Nvidia propaganda.
In the first place, not everyone will be using a 980 Ti.
In the second place, every Nvidia GPU performs considerably worse with DX12 than with DX11, and worse still as you go down to low-end Nvidia GPUs.
In the third place, the only GPUs performing (a little) better with DX12 are the AMD ones.
In the fourth place, who needs DX12 for an AMD GPU when we have Vulkan?

So a DirectX 12 API benchmark is NVIDIA propaganda? lol.

Are you serious?

Did you even check the graph, or look at what this is about? Because you probably have no idea what you are talking about.

"In the fourth place, who needs DX12 for an AMD GPU when we have Vulkan?"

I think I'm done here.

Are you serious?

Anyway, I will tell you the truth.

DirectX 12 is nothing more than the Microsoft Windows 10 train telling everyone to jump aboard because it's the road to glory. This is the truth.

And the way this subject is being approached all over the web is, to say the least, hilarious.

Do you know that basically everything being claimed for DX12 can be achieved with DX11?

Do you know that in its raw state (3D/geometry) DX11 barely uses the CPU?

Heck, I can run DX11 at different stages with my 3 GPUs working at 99% while my CPU basically remains idle.

The "issues" or "constraints" with DX11 arise when you start adding third-party effects (let's say) such as particles, occlusion, lighting, shadows, etc., and the way these are applied (as a matter of game architecture) depends on the game engine (or its limitations) and/or the developers' preferences.

Why don't you ask Frostbite about their plans for DX12? They are not interested, and do you know why? Because they are already doing with DX11 what DX12 claims to do.

TUTUTU, the Microsoft train is moving, everyone aboard.
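In fairness to that argument, DX11 does expose a multithreading path of its own, deferred contexts, which is presumably the kind of thing "already doing with DX11 what DX12 claims to do" refers to. A minimal sketch, assuming an already-created `device` and `immediate` context; the well-known catch is that drivers were free to serialise this internally, which is what DX12 set out to fix:

```cpp
// Sketch of DX11's own multithreading path: a worker thread records into a
// *deferred* context, the main thread replays the baked command list on the
// immediate context. 'device' and 'immediate' are assumed to already exist.
// The well-known catch: the driver may serialise this internally anyway.
#include <d3d11.h>

void RecordAndReplay(ID3D11Device* device, ID3D11DeviceContext* immediate)
{
    ID3D11DeviceContext* deferred = nullptr;
    device->CreateDeferredContext(0, &deferred);   // one per worker thread

    // ... issue draw calls on 'deferred' from the worker thread ...

    ID3D11CommandList* cmdList = nullptr;
    deferred->FinishCommandList(FALSE, &cmdList);  // bake the recording

    immediate->ExecuteCommandList(cmdList, TRUE);  // replay on the main thread
    cmdList->Release();
    deferred->Release();
}
```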


The same can be said about anything else with that mindset...

That's like saying "Nvidia or AMD are capable of making a GPU that lasts forever (frame-rate-wise); obviously they can do this, but this is how marketing works..."

That's how it came across to me, at least.

Sadly, that is not how it works.

"Why don't you ask Frostbite about their plans for DX12?"

They already support DX12, and most big engines will support Vulkan as well, which they should anyway.

Also, I was talking about CPU utilization under DX12, not the GPU... CPU utilization is horrible in DX11.

Because it's funny how games are still not utilizing my 5820K properly, which I still find amusing regardless of the performance it gives me. B)

People said I'm too "hyped" for Vulkan; besides, Vulkan/DX12 will take time as well, and the graph was old.

Also, it seems you're talking as if I'm an Nvidia fanboy for some reason... note that I own a Fury X and I support AMD to the fullest. Since Mantle was already GREAT, I cannot wait for Vulkan.

Hint

[Image: TH4QnK2.png]

 

This speaks for itself: is 3 FPS such a big gain (DX12 over DX11)? As you (bratwurste) said, it all depends on the game engine design; DX12 will gain only 3 fps in certain situations.

For me, Arma 3 has run smoothly for the last 4 or 6 updates; they have surely done something in their optimization passes, but that's all they can do. Probably nothing more will be done, because the low-level engine parts need a lot of tweaks. The only help for the Arma series is building a new engine from scratch, with a low-level "update" allowing the game to run on new machines, AND THE ENGINE ITSELF AS A 64-BIT APPLICATION.

@dwaight, do you know how "public relations" (aka propaganda) and marketing work? Do you still believe Microsoft?


At the same time there is this:

[Image: CPU_06.png]

Btw, if I don't have faith in DX12 then I don't believe Vulkan will be any different (and the other way around, though Vulkan hasn't really shown much yet).

As for Arma 3: well, it's said the devs need to do something with their engine if they want to see any benefit from DX12. And there was that comment from a BI dev that so far there haven't been any good results, so I have to wonder whether BIS can or will make any changes...


 

If you don't have faith in DX12, you should still have faith in Vulkan: DX12 is a one-pony shop and Vulkan is a whole herd. Vulkan approaches things differently from DX12. And Vulkan has already shown, via early driver-stack demos, how powerful it can be.

An example being the Gnomes demo:


"Why don't you ask Frostbite about their plans for DX12? They are not interested, and do you know why? Because they are already doing with DX11 what DX12 claims to do."

 

Funny you should say that - https://twitter.com/repi/status/585556554667163648

 

 

Would like to require Win10 & DX12/WDDM2.0 as a minspec for our holiday 2016 games on Frostbite, likely a bit aggressive but major benefits

 

Here's a presentation of DX12, and this part is a good indication of what could be good in ArmA, aka more objects and more variety with less CPU usage. However, the devs have already said their engine is multi-threaded enough. So that's that.

 

 

How DX11 works vs. DX12 - https://youtu.be/H1L4iLIU9xU?t=15m12s

 

You can watch the whole thing; it's quite good and informative.

 

Vulkan won't fix the engine either if they don't want to go into deep optimizations.

 

PS: I can also show that ArmA is rather GPU-bound and not server-side or CPU-side. Please post figures that are relevant to the subject at hand.

 

LE: Here is a more recent test of AotS: http://www.computerbase.de/2016-02/directx-12-benchmarks-ashes-of-the-singularity-beta/#diagramm-ashes-of-the-singularity-1920-1080 A 24% increase moving to DX12 on the R9 390, and it does that while doing more work than nVIDIA. And that's in an engine that is already properly multi-threaded.


"thats another 100 bucks you paid for silicon thats not working"

 

 

Damn right.


Arguing about a new graphics API like DX12/Vulkan on the basis of:

Unfinished drivers.

Unfinished games.

Unfinished graphics engines.

Unfinished optimizations.

Hardware with early or incomplete support.

...is futile, to me.

True data: BF4 works better with Mantle on mid-tier graphics cards and also improves the high-end cards.

Meanwhile, the usual battles between "haters" and "fanboys" have no solid data to prove who is right.

Letting my mind fly and making an "oracular" prediction: I see the future improvement landing on mid-tier graphics cards (60-70% of gaming cards) with a less severe impact on high-end graphics. A high-end graphics card, to me, has enough "horsepower" to sink any software benchmark, unless you create a game designed to squeeze it to its maximum level (and lose 90% of your sales because nobody can run the game).

The tech demos are very beautiful, but they are all tech demos without a full game engine behind them, I mean physics, player interaction, full AI, and they're designed to show raw data.

A real hint:

Back on topic:

The Arma 3 engine has a severe rework job going on inside right now. The new functions added to the engine (occlusion, decorate, sound) restore my faith in its developers after the "sway thing".

If you combine the new functions with other factors, most probably 64-bit together with a DX12 version of the Arma 3 engine, it could bring a lot of FPS power to the game.


Actually, you're going to see more improvement on high-end cards than on slower cards, and that's logical. Look at the test I posted above: the R9 390 gains more compared to the R9 280X. Also, AoS is in development; you can download it and I think it's playable. It's not just a demo.


That's the whole thing: Frostbite (aka BF4) barely uses the CPU. You get the same output whether you have a low-end or high-end CPU; it's all about the GPU, and that's DX11.
And you know why? Because those developers knew, and know, how to adapt the game engine to the DX11 architecture, instead of adding, forcing or wanting "upper" effects such as doubtful shadows, terrains, particles, or inventions based on nothing that only work in some illuminated minds.
Also, they had already done great work in BFBC2 with DX10; that's probably where they got the knowledge, instead of "flying" from DX9 to DX11 with an engine that barely supports DX8.
Will they move to DX12? Sure they will, they can't afford to miss the train. Do they need DX12? No.
Also, it was a good decision by the BIS devs not to move to DX12 (for now); with the weight of the effects we have in game, I can't see a current GPU that could run it with acceptable performance.

[Image: s3Lg65c.png]


You're quoting all the wrong scenarios, and it looks like you know more about what they need than Johan himself (one of the main guys behind Mantle and low-level APIs). Of course a linear single-player section won't pose much of a challenge; multiplayer is where it's at.


Mantle, yes.

I like it, since I'm using AMD now.

AMD always had trouble with DX11, that's a known fact. That's why they had to add tessellation controls to their drivers (Nvidia has no such thing). AMD's DX11 performance has always been complete crap, while Nvidia's "blunderbuss" has always ruled there. Why? I don't know. Maybe the owner of Nvidia is a cousin of the owner of Microsoft.

Not a surprise that Mantle gives AMD a boost.

And since I own 3 AMD cards, just bring Vulkan to ARMA 3.


There are more and more new DX12 features coming to our PCs.

 

Here's another one:

 

http://wccftech.com/amd-improves-dx12-performance-45-gpu-asynchronous-compute-engines/

 

And another:

 

http://wccftech.com/dx12-nvidia-amd-asynchronous-multigpu/

 

All these improvements will be critical for better FPS; this is a really important matter. A rough sketch of what the asynchronous-compute part looks like at the API level follows below.
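D3D12 exposes asynchronous compute as a separate queue type next to the graphics queue, so compute work can overlap rendering on GPUs with multiple ACEs (the AMD case in the first link). A hedged sketch, assuming an already-created `device` and with the cross-queue fence synchronisation omitted:

```cpp
// Hedged sketch of async compute in D3D12: a COMPUTE-type queue created next
// to the graphics (DIRECT) queue, so compute work can overlap rendering on
// GPUs with multiple ACEs (the AMD case in the article). 'device' is assumed
// to already exist; the cross-queue fence synchronisation is omitted.
#include <d3d12.h>

void CreateQueues(ID3D12Device* device,
                  ID3D12CommandQueue** graphics,
                  ID3D12CommandQueue** compute)
{
    D3D12_COMMAND_QUEUE_DESC desc = {};

    desc.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;    // graphics + compute + copy
    device->CreateCommandQueue(&desc, IID_PPV_ARGS(graphics));

    desc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE;   // compute-only queue
    device->CreateCommandQueue(&desc, IID_PPV_ARGS(compute));

    // Command lists submitted on *compute* can now execute concurrently with
    // work on *graphics* -- this overlap is where the quoted gains come from.
}
```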



An excellent presentation for seeing things more clearly.

When the whole hardware/software set is ready, we'll see serious improvements and better graphics quality in games.

To avoid thread spam, I'll add this:

It isn't directly related, but I think the Enfusion guys are doing a great job.
