


Community Reputation: 11 Good


About windies

  • Rank
    Master Sergeant


  1. windies

    Tanoa - Performance optimizations? yes/no

    I honestly doubt DX12 alone is going to do much for ArmA. The problems really aren't with the API being used but with the engine. I would be more interested in hearing whether any of the work on Enfusion is going to make its way into the expansion and therefore into ArmA 3.
  2. windies

    New terrain reveal - Tanoa

    From some of the frame time analysis I've seen, the actual process itself tends to stall out a lot on basically nothing. I'm just guessing, but it could be that during these "pauses" it's waiting for data to be streamed into the process working set. AFAIK what's stored via the file mapping API, even if it's stored in RAM, resides outside the process working set, therefore it has to be paged in in some way. Yeah, you can see this as hitches and stutters in game, but even if you don't notice a stutter or hitch, it could literally stall the process out enough on a constant basis to create a significant frame time overhead. Again, I'm just guessing, but it seems plausible.

    Pretty sure even with the HC, AI pathfinding is calculated on both the server and the client and is synced over the network. The slowdown you get is the FSMs initializing, and again AFAIK they're still purely in script; due to the serial limitation of SQF they cannot be processed in parallel, which is sorely needed. The HC is another one of those band-aid type fixes that I would rather not see. Not only would a new scripting language capable of parallel processing need to be introduced, be it Java or whatever, but all the FSMs and scripts currently in the game would then need to be ported over, and depending on how they're coded it would most likely cause a huge headache. Although realistically all that would need to be ported over are the AI algorithms and most of the simulation and physics. Still a headache, but it's not like it's not doable.

    I think as far as multiplayer is concerned, a lot of the disparity in performance between MP and SP stems from data streaming and needless simulation processing on both client and server due to flawed design and implementation. MP figuratively kicks the commit charge in the nuts. You can easily get a 12gb commit charge, if not more, playing with 10-12 people and moderate amounts of assets, from a starting commit charge of 1-2gb. You only have a 3.5gb working set, probably closer to 2gb realistically, so you're basically streaming about 2-3x your process working set on the fly. In a sense, what's really needed is a new engine foundation, something they can build on and port as much of the current engine into, while revising and updating what sorely needs to be fixed. Hopefully that's what they're doing with Enfusion, but we'll wait and see.
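The stall-on-streaming guess above can be put into a toy arithmetic model. These numbers are purely illustrative assumptions, not measurements from the engine:

```python
# Toy model (not engine code): per-frame cost when streamed data must be
# paged into the process working set before work can continue.
def frame_time_ms(base_ms, faults_per_frame, fault_cost_ms):
    """Total frame time = baseline work plus stall time from page-ins."""
    return base_ms + faults_per_frame * fault_cost_ms

smooth = frame_time_ms(16.0, 0, 0.05)      # everything already resident
stalled = frame_time_ms(16.0, 200, 0.05)   # 200 soft faults at ~0.05 ms each
print(smooth, stalled)                     # 16.0 vs 26.0 ms per frame
```

Even without a single visible hitch, a steady trickle of page-ins like this would drag ~62 fps down to ~38 fps, which is the "constant stall" effect described above.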
  3. windies

    DirectX 12 for ArmA 3?

    How is DX12 going to be a kick in performance when we're mostly limited by the engine and not by the API? Didn't they say the same thing about DX11 in ArmA 3? I highly doubt a new API is the savior to A3's woes; I'd be happy if I'm wrong, I just seriously doubt it.
  4. windies

    Development Blog & Reveals

    Tanoa definitely looks multitudes better than Altis IMHO.
  5. windies

    New terrain reveal - Tanoa

    The problem with MP isn't really to do with object count but more with increased simulation, which adds to frame time, therefore making rendering slower. Actually, I don't think any of the performance issues currently present in ArmA have much to do with the rendering aspect, unless you increase resolution or AA or settings until that DOES become a factor; rather, it has much more to do with how much simulation is going on, and therefore frame time. I actually tend to think that the issues with Altis and its size stem from the data streaming BI uses to constantly stream data into and out of the process working set, not so much object count or view distance. If you think about it, if the process is waiting for data to be streamed into the working set, that's cause for a stalled thread right there. It's probably why draw calls aren't much of an issue, but the actual size of the island is. Also why older content seems to run better: smaller data sizes being streamed. How that plays into Tanoa and performance will, I think, depend more on its detail and data size than on raw object count or anything render related.

    It's one of the reasons I've always been a proponent of 64-bit binaries. I think it would alleviate one problem, but only if they truly do away with the streaming and map completely to RAM, something which would probably require engine work I think they're reluctant to do. It would probably raise the minimum RAM requirement to 8-12gb+, but if that's what's needed for a stable platform then so be it. RAM is honestly pretty cheap these days.
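The "map completely to RAM" idea can be sketched with Python's `mmap` module. The file name and sizes here are made-up stand-ins, purely to contrast explicit chunked streaming with letting the OS page a full mapping in lazily:

```python
import mmap, os, tempfile

# Hypothetical stand-in for terrain/asset data.
path = os.path.join(tempfile.mkdtemp(), "terrain.bin")
with open(path, "wb") as f:
    f.write(b"tile" * 1024)              # 4096 bytes of fake data

# Streaming style: explicit reads into a small buffer, repeated on demand.
with open(path, "rb") as f:
    chunk = f.read(4096)                 # only what you asked for is resident

# Mapped style: one mapping into the address space; random access anywhere,
# and the OS faults pages in as they're touched.
with open(path, "rb") as f:
    with mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ) as m:
        assert m[:4] == b"tile" and m[-4:] == b"tile"
```

A 32-bit process can't map large assets this way because the address space runs out, which is why the full-mapping approach more or less presupposes 64-bit binaries.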
  6. As an example, from a functionality POV, how do bipods in ArmA differ from those in BF4? They're actually quite similar in function for the most part, with roughly the same imparted limitations. For that matter, how does FFV differ from BF4's equivalent? They both share very basic commonalities in implementation.
  7. @ Bad Benson I think you're pretty much correct about it being like a house of cards no one wants to touch. I also think the engine was built to have so much functionality within scripting that it's also limited by this. From my understanding, SQF is very serial. I also think there are a lot of engine design choices made for the sake of scripting that compound the performance issues: rendering being synchronous with simulation, and inefficient data streaming instead of effective memory management, which is honestly a pretty "hackish" way to do it. I don't mean data streaming in general, simply the way BI did it to overcome 32-bit limitations they ran into back in ArmA, not even ArmA 2.

     I think because the AI is so tied to scripting in its execution, effectively parallelizing it without another scripting language, or serious work on SQF, is practically impossible. It's just A LOT of things that compound the issue, and frankly it's because they've let it reach this point while probably using the excuse "some things are too hard," even to themselves. It's not that I don't get it; I just sometimes honestly wonder if BI would be happy reaching a point where their engine basically won't run anymore, because they keep adding and adding and adding without ever fixing or maintaining anything. If anything, hardware is just going to keep getting more and more parallel in its operation as far as logic units are concerned.
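The serial-scripting point can be illustrated with toy numbers (assumed, not profiled): if every FSM update must run one after another on a single script thread, their combined cost lands entirely inside one frame's budget:

```python
# Assumed costs for illustration: 40 AI FSM updates at ~300 microseconds each.
n_fsms, cost_us, cores = 40, 300, 4

serial_us = n_fsms * cost_us           # strictly one after another (SQF-style)
ideal_parallel_us = serial_us // cores # perfect split across 4 cores

print(serial_us / 1000, ideal_parallel_us / 1000)   # 12.0 vs 3.0 ms of frame budget
```

With these made-up numbers, serial execution eats 12ms of frame time where an ideal 4-way split would eat 3ms, which is the gap a parallel-capable scripting layer would be chasing.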
  8. I agree with it not really being worth it for ArmA. There are gains to be had even in ArmA by overclocking, under certain circumstances where the engine isn't thread locked to hell; you just generally won't notice it 95% of the time. GPU usage doesn't equal anything but GPU usage. It's not a performance metric; it's not even really a metric of how hard the GPU is working. It's simply how many GPU cores or shaders, IIRC, are active. It's not even like CPU usage, where you have one unit doing processing and it's a metric of how busy that unit is during a polling interval. If rendering requires 600 shaders and you have 1000 shaders, then you're only going to use 60% of your GPU no matter the speed. How fast that work is done, how fast that frame is rendered as far as the GPU is concerned, is based on how fast those shaders can calculate, which is what clock speed is. Usage doesn't change when overclocking; you're not suddenly creating more shaders on your GPU. Usage doesn't lower when overclocking either. This whole thing about overclocking at 99% usage being a waste, or usage having anything to do with performance, is more or less seriously BS. There's no correlation, none.

     This engine outgrew the best hardware the second parallel processing became the norm and CPUs started having more than one core. That's just a fact. Having the best hardware means very little to ArmA's performance anymore. A G3258 will run the game as well as an i7-5960X within a very comparable margin of error. Considering how "CPU intensive" ArmA is, that's just plain sad.

     ---------- Post added at 03:15 ---------- Previous post was at 03:11 ----------

     Probably because you can't comprehend it. It's OK though, keep laughing. Ignorance is bliss, they say.
  9. The workload is still the same; it's still rendering. The only difference is the simulation, sound, AI etc. in ArmA being mitigating factors to performance. Case in point: as far as overclocking the GPU is concerned, it doesn't matter. Whether you get better performance in ArmA is irrelevant to whether GPU clock speed has any correlation to GPU workload or usage, which was the argument. It's still beneficial in ArmA anyway; you just don't see much of an improvement because there's very little rendering overhead versus the massive simulation and core thread overhead. To the thread topic: yeah, it will produce more frames. Will it produce a lot more? Probably not. That has nothing to do with GPU workload, but with the entirety of the thread being stalled by things like simulation, scripting and AI taking up a major bulk of frame time. You're still increasing the speed at which you can calculate that rendering workload, but if it's only 0.5-1ms out of a 16-24ms frame time, then it's not going to make a big difference.

     ---------- Post added at 17:17 ---------- Previous post was at 17:15 ----------

     At this point you have your own logic and it's extremely flawed. Really, ArmA isn't CPU limited or GPU limited; it's strictly engine limited at this point.
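The 0.5-1ms-out-of-16-24ms argument is essentially Amdahl's law applied to frame time. A sketch with illustrative numbers (assumed, not measured):

```python
# Amdahl-style bound: if rendering is only ~1 ms of a 20 ms frame,
# even doubling GPU throughput barely moves the frame rate.
def fps(frame_ms):
    return 1000.0 / frame_ms

sim_ms, render_ms = 19.0, 1.0
before = fps(sim_ms + render_ms)        # 20 ms frame -> 50.0 fps
after = fps(sim_ms + render_ms / 2.0)   # render part twice as fast
print(before, after)                    # 50.0 -> ~51.3 fps
```

Doubling the speed of the rendering slice buys barely one frame per second here, because the frame is dominated by the serial simulation work.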
  10. I can easily prove it using a utility such as FurMark or 3DMark: no matter the GPU speed, either will run at 99-100% GPU utilization. Besides, I'm not the only one who said that "explanation" is wrong. Also, your thinly veiled insults generally summarize you better than me.
  11. Again, still wrong. If I have one render task per frame that takes 5ms at, say, 900mhz to compute but only uses 10% of the GPU, and I increase the clock speed to 1000mhz so it only takes 4.5ms, then I have increased FPS by roughly 11%. The point being, usage and performance are irrelevant to each other regardless of the math. Usage simply means that X amount of the GPU's cores are in use; it has nothing to do with speed or performance. It doesn't matter if I'm at my target FPS or not, overclocking makes a difference as far as GPU tasks are concerned, usage being irrelevant. The reason it doesn't with ArmA is other threads operating within the same frame time, stalling and causing that frame to take longer to render. In fact, overclocking your GPU does have an effect on actual rendering thread performance; you're just completely limited by the engine, not even some "CPU limit point" but literally the engine itself. Anyways, there's no correlation between GPU usage and some CPU limit. Increasing GPU speed simply increases how fast the GPU can calculate; it doesn't decrease GPU usage either.
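The arithmetic in that post checks out and can be written down directly. Task time scales with the inverse of the clock, while "usage" (the fraction of shaders busy) stays put:

```python
# Sketch of the numbers in the post: 5 ms render task, 900 -> 1000 MHz.
def scaled_time_ms(time_ms, old_mhz, new_mhz):
    """Task time scales inversely with clock speed."""
    return time_ms * old_mhz / new_mhz

t_old = 5.0
t_new = scaled_time_ms(t_old, 900, 1000)   # 5.0 -> 4.5 ms
fps_gain = t_old / t_new - 1.0             # ~11% more frames per second
usage = 0.10                               # fraction of shaders busy: unchanged
print(t_new, round(fps_gain, 3))           # 4.5, 0.111
```

The overclock cut the task from 5ms to 4.5ms and raised the frame rate ~11%, yet utilization sat at 10% the whole time, which is the "usage and performance are irrelevant to each other" claim in numbers.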
  12. Because they keep adding to it and adding to it. If they stop, they piss off all the people who don't care if the game runs like a potato. 95% of the problem is simply that the game is heavily scripted, and most of its functionality is written to be modified by script and is also limited by that. The fact that it's so monolithic in nature is because of how heavily scripted it is.
  13. The only thing I can think is that they're working on Enfusion as a solution to the inherent issues in RV. If so, I can understand why they're silent about performance improvements in ArmA 3: there's probably little they can do, and even if they could, it would be a waste if Enfusion is the end goal. It's still a big IF, but honestly I don't think they can bury their heads in the sand about the issue much longer. I highly doubt we'll see it for ArmA 3; maybe ArmA 4, however?
  14. Utilization is a workload metric, not a measurement of the speed that workload is done at. When you increase clock speed, you're increasing the speed at which the work can be done, not how much work is being done. A GPU at 1mhz can have 99% utilization and 1 fps, while the same GPU at 1000mhz can have 99% utilization and 1000 fps. Clock speed and utilization are not necessarily indicative of each other.
  15. windies

    Low CPU utilization & Low FPS

    FYI, you can turn off downclocking by setting the minimum processor state in Power Options to 100%. If you overclock with constant voltage, rather than turbo multipliers and turbo voltages, then SpeedStep or power saving is pointless anyway.