
fred41

Member
  • Content Count

    536
  • Joined

  • Last visited

  • Medals

Community Reputation

42 Excellent

6 Followers

About fred41

  • Rank
    Gunnery Sergeant


  1. fred41

    the ARMAIII needed fix DLC

    @dlegion, I think I would actually buy such a DLC. The idea sounds a bit crazy at first (and maybe gently provocative too), but somehow it is inspiring. Thanks for your effort in bringing some fresh ideas and spirit back here ;)
  2. I am assuming you use the 64-bit version of arma (are you?). Just look here: How to check pagefile size. With 8 GB RAM, a pagefile size of ~12 GB should be sufficient.
  3. This registry entry is defined as a 32-bit DWORD and this is where your OS is looking, but still a funny idea :)
  4. fred41

    the ARMAIII needed fix DLC

    No, like this very thread.
  5. @abudabi, simply put, both advantages are based on large page mapping: for 1. code and 2. data.

    For 1. (code): use regedit to create/set the DWORD value 'UseLargePages' = 1 under the key [HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Image File Execution Options\arma3_x64.exe].

    For 2. (data): either use BI's default allocator and simply add -hugePages to your arma start line (I think the launcher already provides a checkbox for this), or use blub's 'xtbbmalloc.dll' for some interesting extras like diagnostics, etc.; the performance advantage should be the same.

    In any case, your (Windows) user account must have the 'Lock pages in memory' privilege enabled (probably already done).
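For convenience, the registry part of 1. (code) can also be saved as a .reg file and imported; the key path and value name are exactly those quoted above:

```reg
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Image File Execution Options\arma3_x64.exe]
"UseLargePages"=dword:00000001
```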
  6. fred41

    the ARMAIII needed fix DLC

    I think the OP has requested legal things and there is nothing wrong with doing so. There are a lot of outstanding fixes/improvements that can't be done, or at least not efficiently, without access to arma's source code. Since some of these things have been requested for many years, it probably makes sense to bring them up again from time to time.
  7. I think monitoring a server is not only useful, it is essential for tuning a server and keeping things running properly. Hence it is hard to understand why BI still doesn't provide a serious (and easy to use) interface to support this requirement. Using callExtension for this purpose was a very limited, tricky and resource-consuming approach. I nevertheless did it in ASM, to show how monitoring could basically look, and there was no other way to do it. But now, years after arma's release? I guess you guys have to bug them a bit harder to get such an interface ;)
  8. @abudabi, actually this is very unlikely, since BE doesn't allow any modification/investigation at the binary level and arma is still closed source. Anyway, there is meanwhile a '-hugePages' launch param for client and server, and my tests show that it works well. Binary large page mapping (this very registry tweak) works flawlessly for the 64-bit server and client too. Just use both of these advantages; it is still totally free and easy to use.
  9. ... yes. arma_x64 is now able to access the data cache directly, instead of via the file mapping API. The large file mapping used by the 32-bit process doesn't contribute to process virtual memory usage, hence the difference.
  10. @nikiforos, if I feel I have something serious to contribute again, why not, but currently this is not the case. Regarding your Home edition's lack of 'secpol.msc': I remember there was a guy at guru3d hijacking this idea and offering a tool to set up binary LP mapping for many different games, including arma. I didn't try it, but perhaps this is a solution for you.
  11. Hello @archibald tutter, I am still on my good old 2500K@4GHz (bought used, btw), so I am not a victim of any cartels :) But yes, looking forward to AMD's ZEN incarnation too. As for arma's performance problems, I think its memory bandwidth usage is still way too high. Cache locality is the first thing you have to care about on today's CPUs for optimal performance, even with DDR4. There is still a lot of potential lurking around, so buying more and more powerful hardware just to get 33 instead of 30 FPS is not the way, imho.
  12. I just benchmarked with helos 0.60 and the improvement is like from 75.0 to 77.5; compared to the 32-bit version, a very small difference. With YAAB I don't see any consistent difference. In the end, not enough to make a tool for that again. However, this probably means BI did their homework better now.
  13. fred41

    64-bit Executables Feedback

    @NoPOW ... lurking, of course, especially if woken up by 64-bit announcements ;) While modern CPUs like 'Skylake' are equipped with larger TLBs (translation lookaside buffers), and hence reduce the number of costly TLB misses, the TLBs are still a bottleneck, even more so for 64-bit applications, because they usually have to access more memory pages. So yes, large page usage for code and data (in this order) will still help arma to run better. BTW: If you remember the good old 'GMF tweak', a totally free and very efficient way to improve memory performance, this thing seems to work perfectly with the 64-bit arma binaries again, without any executable patches. Just the 'Lock Pages in Memory' privilege and the registry entry adapted to 'arma3_x64.exe' works flawlessly for me.
  14. fred41

    64-bit Executables Feedback

    First, congratulations on this important step; there is already a nice difference compared to the limited 32-bit version. This should open the door for further improvements that require large amounts of process virtual address space. A question/hint though, related to the file/data cache implementation: a closer look reveals that the Windows file mapping API is still used, just with a larger section object (~4GB) created initially. There are still frequent MapViewOfFile/UnmapViewOfFile calls to map 64kb windows into process virtual address space. While this approach made sense in the 32-bit implementation, I would say it is unnecessarily complicated (and a waste of resources), considering the huge 64-bit address space. If the caching system is not reworked in general, at least a simple adaptation could be done as follows:

    - replace the initial CreateFileMappingA(...) by a VirtualAlloc(...) with identical size
    - replace all related subsequent MapViewOfFile/UnmapViewOfFile(..., FileOffset) calls by simple pointer arithmetic

    Perhaps, to prevent Windows from paging to disk too early, increase the min/max working set size, respecting the system's physical memory availability. I wish you all nice winter holidays :)