Everything posted by DBGB

  1. Any way to improve this? http://gifmaker.me/PlayGIFAnimation.php?folder=2013102208zvEsfpnNmtgwncSpII27Kb&file=output_Mx9kPW.gif Specs: 1 GB VRAM (ATI Radeon 5870), 24 GB DDR3 @ 1333 MHz, Xeon W3690 @ 3.47 GHz.
  2. Thanks for replying, ramius86. I guess I might need a GFX card with more VRAM to check whether texture loading from regular RAM is 'the' bottleneck, and not some Arma engine issue. The white tiles, IDK... The blurry tiles are probably down to having only 1 GB VRAM. At the same time (I haven't been clear about this, sorry) I posted first in the DEV discussion because I wanted to 'hint' the issue to some BI developers, in the hope of getting a reply 'hinting' that they are already working on a solution :) Expect a fix in one week :-D I wish this were something easily fixable, like oversized terrain texture LODs with an easy fix such as using smaller, less detailed terrain tiles when flying high... It looks like the engine is trying to stream full-size, fully textured terrain tiles and then downscaling them. Anyway, I'll let it rest. Thx for replying - will save up for a new GFX card so testing can continue :-)
  3. OK, back again... A RAM disk does nothing for me. Using the licensed DATARAM disk and mounting the Arma 3 folder into an empty Arma 3 Steam folder. FPS is at 40 to 60 because object distance is less than 1200. The reason I posted in the DEV branch (initially, before the post was moved) was this thread: http://forums.bistudio.com/showthread.php?163640-Arma3-and-the-LARGEADDRESSAWARE-flag-(memory-allocation-gt-2GB)/page7 and the discussion about midrange blurry textures in the DEV branch discussion. I uploaded a new GIF where a lot of the terrain tiles are initially white: http://gifmaker.me/PlayGIFAnimation.php?folder=2013102209qkaD5UyCoR5BhZde0TaXdF&file=output_ULbg5S.gif Specs: 1 GB VRAM (ATI Radeon 5870), 24 GB DDR3 @ 1333 MHz, Xeon W3690 @ 3.47 GHz, Arma DEV version 1.05.0.111.433. Launch options = -nosplash -nologs -maxMem=2047 -skipIntro -malloc=tbb4malloc_bi
  4. Hmm... I can try that - I am just assuming that the streamed terrain would eventually end up in the OS file cache anyway... BRB
  5. DBGB

    Arma 3 && Multithreading!!

    How about implementing something akin to an 'AI' arbiter thread that decides the result of all AI calculations done in separate threads (for instance on a GPU)? When I wanted to enjoy Arma 2 with a little more oomph, I started one instance of the Arma 2 server executable on my 16-core (quad-socket) machine and locked it to half the CPU cores, then started the game client on the other 8 cores. Then I played a network game with myself, or together with others, with my server hosting the game... The server executable decided who hit, who lived, who died - not the client, I guess? Thus I had some additional AI headroom, since the client was only doing its own AI calculations for my squad - that was my understanding, anyway. For lack of better words: move all AI (and PhysX) calculations to separate/unused cores, either on a CPU or on a graphics card. I considered something similar for Arma 2 here: http://forums.bistudio.com/showthread.php?100519-exThreads&p=1653998&viewfull=1#post1653998
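To make the arbiter idea a bit more concrete, here's a minimal sketch - nothing Arma-specific, the AiDecision type and the 4 worker groups are made up, and it needs a C++11 compiler (newer than the VS2010 I've been using). Worker threads grind through AI decisions in parallel, and one arbiter thread is the only thing allowed to apply results, so 'who hit / lived / died' is decided in exactly one place, like the server in my setup above:

#include <thread>
#include <mutex>
#include <queue>
#include <vector>
#include <condition_variable>

struct AiDecision { int groupId; int action; };   // made-up result type

std::queue<AiDecision> pending;                   // worker -> arbiter hand-off
std::mutex qMutex;
std::condition_variable qCv;
bool done = false;

void aiWorker(int groupId) {                      // one thread per AI cluster
    AiDecision d{groupId, 0};                     // heavy AI math would run here, off the main thread
    { std::lock_guard<std::mutex> lk(qMutex); pending.push(d); }
    qCv.notify_one();
}

void arbiter() {                                  // the single 'deciding' thread
    std::unique_lock<std::mutex> lk(qMutex);
    while (!done || !pending.empty()) {
        qCv.wait(lk, []{ return done || !pending.empty(); });
        while (!pending.empty()) {
            AiDecision d = pending.front(); pending.pop();
            // only the arbiter would mutate shared game state with d here
        }
    }
}

int main() {
    std::thread arb(arbiter);
    std::vector<std::thread> workers;
    for (int g = 0; g < 4; ++g) workers.emplace_back(aiWorker, g);
    for (auto& w : workers) w.join();
    { std::lock_guard<std::mutex> lk(qMutex); done = true; }
    qCv.notify_one();
    arb.join();
}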
  6. I was wondering whether the way users at dev-heaven can vote a bug/feature/etc. up in the Community Issue Tracker works the same way a Product Owner (as viewed within a Scrum development framework) decides priorities for product backlog items. So do the BI devs use the CIT tickets to monitor potential product backlog items and pick 'public CIT' items for the next sprints? Just curious about the similarities - don't know if this is the right forum for this post... feel free to move it to the sprint backlog ;-)
  7. I also tried to compile the latest tcmalloc from Google perftools - the differences between the source code provided by BI and v1.9.1 can be used to infer what needs to be updated in 1.9.1. I worked a bit on adapting 1.9.1 and creating the interface, but it's a WIP because there are some substantial differences that give me trouble. You have to specify some compile options that enable C++ exceptions. Changes are needed especially in tcmalloc.h + tcmalloc.cc that I haven't figured out yet. But maybe it's easier to adapt TCMalloc_bi to conform to the functions and argument counts in tcmalloc.h (+ tcmalloc.cc). Further inferences can be drawn by comparing BI's implementation of each malloc against the source code it is based on - e.g. take TBB4 from Intel and compare it to BI's implementation, which includes the necessary interface.
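For reference, here's roughly what I think the 1.9.1 shim should end up looking like - completely untested, and the header paths and exact MallocExtension calls may differ between perftools versions, so treat it as a sketch of the mapping rather than working code:

#include <google/malloc_extension.h>   // MallocExtension (perftools)
#include <google/tcmalloc.h>           // tc_malloc / tc_free

#define DLL_EXPORT __declspec(dllexport)

static size_t HeapSize() {
    size_t n = 0;
    // total bytes tcmalloc holds from the OS, roughly scalable_footprint()'s role
    MallocExtension::instance()->GetNumericProperty("generic.heap_size", &n);
    return n;
}

extern "C" {
DLL_EXPORT size_t __stdcall MemTotalCommitted() { return HeapSize(); }
DLL_EXPORT size_t __stdcall MemTotalReserved()  { return HeapSize(); }
DLL_EXPORT size_t __stdcall MemFlushCache(size_t size) {
    size_t before = HeapSize();
    MallocExtension::instance()->ReleaseToSystem(size);   // tcmalloc's 'trim'
    return before - HeapSize();
}
DLL_EXPORT void   __stdcall MemFlushCacheAll() { MallocExtension::instance()->ReleaseFreeMemory(); }
DLL_EXPORT size_t __stdcall MemSize(void *mem) { return MallocExtension::instance()->GetAllocatedSize(mem); }
DLL_EXPORT void * __stdcall MemAlloc(size_t size) { return tc_malloc(size); }
DLL_EXPORT void   __stdcall MemFree(void *mem)   { tc_free(mem); }
}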
  8. Is it just me, or is the allocator not reported in the .rpt file anymore? It seems only the arguments passed when launching the game are written in the latest official patch, 87580.
  9. For those interested: I was googling through sourceforge.net when I stumbled across this: http://mpc.sourceforge.net/ - the MPC (MultiProcessor Computing) framework, version 2.2.0 (stable). I haven't figured out whether Intel uses that as the basis for their TBB... Anyway, just posting FYI. Added: TBB wiki FAQ link - http://threadingbuildingblocks.org/wiki/index.php?title=Using_TBB
  10. I have used VS2010 to build a 'malloc' DLL implementation based on NedMalloc, from the sources here: http://prdownloads.sourceforge.net/nedmalloc/nedmalloc_v1.10beta1.zip As an absolute noob I might have done something wrong - but after modifying the code I had to add a section from BI's NedMalloc source -> namespace nedmalloc { int VirtualReserved = 0; ..... The VirtualReserved symbol was referenced from NedMalloc_bi.cpp but didn't exist in the updated source I grabbed from SourceForge, so I got a linker error. I also made some other modifications - but whatever - the allocator is not used in the game; it reverts to the Windows allocator. My custom DLL is renamed NedMalloc_bi_upd.dll - it was originally NedMalloc_bi.dll, as I used BI's VS project files. It's the only one in the dll folder, and it's given as NedMalloc_bi_upd in the startup parameter... Hmmm... next step? BTW: I notice that BI's DLLs are signed - so the ones I built a week ago from the sources (to learn VS2010 compiling) are a bit smaller. Is this a problem? (Mine are obviously not signed.)

Update: On with the thinking hat... I got it running - it ended in a small disaster:

=====================================================================
== E:\Games\Bohemia Interactive\Expansion\beta\arma2oa.exe
== "E:\Games\Bohemia Interactive\Expansion\beta\arma2oa.exe" -mod=Expansion\beta;Expansion\beta\Expansion -nosplash -malloc=NedMalloc_bi_upd
=====================================================================
Exe timestamp: 2011/11/09 16:09:53
Current time: 2011/11/09 16:34:55
Version 1.59.86218
Allocator: E:\Games\Bohemia Interactive\Expansion\beta\dll\NedMalloc_bi_upd.dll
Item str_disp_server_control listed twice
Warning: looped for animation: ca\wheeled\data\anim\uaz_cargo01_v0.rtm differs (looped now 0)! MoveName: kia_uaz_cargo02
Warning: looped for animation: ca\wheeled\data\anim\uaz_cargo01_v0.rtm differs (looped now 1)! MoveName: uaz_cargo02
=======================================================
-------------------------------------------------------
Exception code: C0000005 ACCESS_VIOLATION at 008F5111
Allocator: E:\Games\Bohemia Interactive\Expansion\beta\dll\NedMalloc_bi_upd.dll
...
...
Distribution: 1486
Version 1.59.86218
Fault address: 008F5111 01:004F4111 E:\Games\Bohemia Interactive\Expansion\beta\arma2oa.exe
file: intro
world: Desert_E
Prev. code bytes: 44 24 44 0F 59 D0 83 C4 04 83 45 08 10 0F 58 CA
Fault code bytes: 0F 29 08 0F 28 C4 83 C1 04 3B 4D 10 0F 58 C6 0F
Registers:
EAX:1157EFD8 EBX:00000020 ECX:00000010 EDX:00000010
ESI:00000020 EDI:00000010
CS:EIP:0023:008F5111 SS:ESP:002B:0182F410 EBP:0182F4B0
DS:002B ES:002B FS:0053 GS:002B
Flags:00210206
=======================================================

...will play a bit with the working implementations instead, for now.
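For anyone hitting the same linker error: the missing piece is roughly this (a sketch - in BI's source the counter is presumably maintained by the allocator's own VirtualAlloc path; ReserveFromOS is just my made-up name to show the idea):

#include <windows.h>

namespace nedmalloc {
    int VirtualReserved = 0;   // bytes currently reserved via VirtualAlloc (int, as in BI's source)
}

// hypothetical wrapper: route the allocator's system allocations through
// this so MemTotalReserved() has something real to report
static void *ReserveFromOS(size_t size) {
    void *p = VirtualAlloc(NULL, size, MEM_RESERVE | MEM_COMMIT, PAGE_READWRITE);
    if (p) nedmalloc::VirtualReserved += (int)size;
    return p;
}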
  11. Academic discussion: NUMA_aware_heap_memory_manager_article_final.pdf Source code based on google-perftools-0.97 is provided on the PDF's second-to-last page, incl. a diff. Source code link here as well: http://developer.amd.com/Assets/NUMA-aware%20TCMalloc.zip Update: I posted a link to a comparison between TBB4 and QuickThread in a previous post here: http://forums.bistudio.com/showpost.php?p=2049014&postcount=66 - QuickThread apparently builds on what's described in "New NUMA Support with Windows Server 2008 R2 and Windows 7". Some MSDN example code is given in Win7NumaSamples.zip.
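The gist of that new NUMA support is APIs like GetNumaHighestNodeNumber and VirtualAllocExNuma - a tiny standalone sketch (my own, not taken from the MSDN samples):

#include <windows.h>
#include <stdio.h>

int main() {
    ULONG highestNode = 0;
    GetNumaHighestNodeNumber(&highestNode);          // how many NUMA nodes do we have?
    printf("highest NUMA node: %lu\n", highestNode);

    // reserve + commit 1 MB, preferring physical pages on node 0
    void *p = VirtualAllocExNuma(GetCurrentProcess(), NULL, 1 << 20,
                                 MEM_RESERVE | MEM_COMMIT, PAGE_READWRITE,
                                 0 /* preferred node */);
    if (p) VirtualFree(p, 0, MEM_RELEASE);
    return 0;
}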
  12. Suma, you're right - noob error. I hadn't downloaded the source files for TBB3 (tbb30_20110427oss_src.tgz), only tbb30_20110427oss_win, so I had only searched the src dir of BI's TBB3_source and missed that this dir was absent from tbb30_20110427oss_win... Now I can see the modifications... thx
  13. I won't have time to look into messing around with any 'new' malloc implementations over the weekend - I'm traveling from tomorrow. But one hint regarding the game engine's interface: TBB3 was mentioned as the engine's default memory allocator, and the interface specification on the BI wiki is taken directly from tbbmalloc.cpp, lines 216 to 223:

#ifdef _WIN32
#define DLL_EXPORT __declspec(dllexport)
extern "C" {
DLL_EXPORT size_t __stdcall MemTotalCommitted() {return scalable_footprint();}
DLL_EXPORT size_t __stdcall MemTotalReserved() {return scalable_footprint();}
DLL_EXPORT size_t __stdcall MemFlushCache(size_t size) {return scalable_trim(size);}
DLL_EXPORT void __stdcall MemFlushCacheAll() {scalable_trim((size_t)-1);}
DLL_EXPORT size_t __stdcall MemSize(void *mem) {return scalable_msize(mem);}
DLL_EXPORT void * __stdcall MemAlloc(size_t size) {return scalable_malloc(size);}
DLL_EXPORT void __stdcall MemFree(void *mem) {scalable_free(mem);}
}

So basically, from my perspective, it's necessary to figure out for MemTotalCommitted() what type is returned and what arguments (pointer/object/struct ref) the function accepts...

DLL_EXPORT size_t __stdcall MemTotalCommitted() {return scalable_footprint();}

points to scalable_footprint() - which in turn looks 'templated' and is really the overloaded(?) function internal_footprint, which is really MappedMemory. So it's a bit tricky for me to figure out at the moment how to convert other mallocs' function calls to the TBB3 interface. It seems I need to read up on TBB3, then figure out whether my 'custom malloc' implementation has a single function (or several combined) that does what TBB3 does, work out how to call that/those functions and what kind of data they return, and maybe typecast the results into something the interface accepts. But nevertheless it's fun to look into - some colleagues at work have given me some directions, although they didn't really understand my motivation for creating a 'custom' DLL. Hope the community will start to look into this as well... I definitely need to read up on TBB3 (is that pthreads?) tonight...
  14. JEMalloc_bi DLL size reduced to 58 KB from 484 KB, TCMalloc_bi DLL down to only 36 KB from 184 KB, NedMalloc_bi down from 383 KB to 80 KB. Thx ;-) (I have no clue if the above is super optimized... or if there are other options I don't know about yet.) I wonder if the debug versions contained all kinds of debug symbols and other unused stuff that impacted the performance I saw when testing the different 'debug' builds. BTW, software-license-wise: is it legal to distribute the above DLLs without the source (with or without VS project files)? Say somebody can't figure out how to build in VS or anywhere else - can I compile the DLL and send/post it somewhere without risking violating the software license for the given source code (GPL vs. Boost license vs. etc.)?

---------- Post added at 10:57 PM ---------- Previous post was at 10:37 PM ----------

I haven't tested the release builds from the previous post yet... I got curious when I saw that BI also provided the source code for TBB4 - that code is obviously different from what's available here: http://threadingbuildingblocks.org/ver.php?fid=174 but it could provide some insight on how to modify/export the functions from other malloc implementations when adapting them to the interface described in BI's malloc wiki. I wonder if BI's implementation comes from the latest code commit at threadingbuildingblocks.org, since there are differences. The Intel HTTP download link could be old - maybe there is a newer repository somewhere (Subversion/GitHub link please) :-) Well, I'm going to try to build from the TBB site. I recommend using something like BeyondCompare to adapt/modify source when doing this on Windows - especially when you know almost nothing about programming... Found this post, "How does TBB load balance between muti-cores", on a TBB forum: http://software.intel.com/en-us/forums/showthread.php?t=86049&o=a&s=lr Reads to me like there's better NUMA awareness using the QuickThread paradigm - comparison between TBB and QT: http://www.quickthreadprogramming.com/Comparative%20analysis%20between%20QuickThread%20and%20Intel%20Threading%20Building%20Blocks%20009.htm
  15. I built tcmalloc_bi and got an out-of-memory error in Arma. Anyway, it was quite easy creating the DLL using Visual Studio 2010 - the project built a DLL of 184 KB, larger than the TBB versions. Maybe I can set some optimization options in VS - though I'd have to look into that. Anyway, here's an excerpt of arma2oa.RPT:

=====================================================================
== E:\Games\Bohemia Interactive\Expansion\beta\arma2oa.exe
== "E:\Games\Bohemia Interactive\Expansion\beta\arma2oa.exe" -nosplash -skipintro -cpucount=12 "-mod=expansion\beta;expansion\beta\expansion -malloc=TCMalloc_bi
=====================================================================
Exe timestamp: 2011/10/31 16:31:28
Current time: 2011/11/01 17:21:03
Version 1.59.85889
Allocator: E:\Games\Bohemia Interactive\Expansion\beta\dll\tcmalloc_bi.dll
Item str_disp_server_control listed twice
Cannot register unknown string STR_VERY_LARGE
...
...
Virtual memory total 4095 MB (4294836224 B)
Virtual memory free 2951 MB (3095097344 B)
Physical memory free 16251 MB (17041092608 B)
Page file free 15880 MB (16652148736 B)
Process working set 719 MB (754245632 B)
Process page file used 755 MB (792195072 B)
Longest free VM region: 2146865152 B
VM busy 1217036288 B (reserved 342503424 B, committed 874532864 B, mapped 47562752 B), free 3077799936 B
Small mapped regions: 8, size 36864 B
ErrorMessage: Out of memory (requested 3 KB). footprint 408420352 KB. pages 16384 KB.
...

A lot of these errors are listed as well:

Link to 99c702d4 (Obj-224,206:724) not released
Link to 9966f292 (Obj-222,203:658) not released

I read in another post that tcmalloc had been used previously. My experience so far: it looks neck and neck between TBB v3 and v4... Will try to build some of the other allocators from the given sources and test. (JEMalloc from VS2010 generates a DLL of around 454 KB... located in the Debug folder... maybe I have to set some VS options to strip it down... anyway, this is fun.)

---------- Post added at 06:52 PM ---------- Previous post was at 05:54 PM ----------

I got this output from VS2010:

1>cl : Command line error D8016: '/ZI' and '/GL' command-line options are incompatible

Looks like the compiler (CL.EXE) gets options supplied by some of my default VS settings?!
/c /ZI /nologo /W3 /WX- /Od /Oy- /GL /D WIN32 /D _DEBUG /D _WINDOWS /D _USRDLL /D NEDMALLOC_BI_EXPORTS /D _WINDLL /D _UNICODE /D UNICODE /Gm /RTC1 /MTd /GS /arch:SSE /fp:fast /Zc:wchar_t /Zc:forScope /Fo"Debug\\" /Fd"Debug\vc100.pdb" /Gd /TP /analyze- /errorReport:prompt

/GL enables whole program optimization; /ZI includes debug information in a program database compatible with Edit and Continue - hence the conflict. I found the /ZI option and changed it to /Zi (generates complete debugging information), and now the project build completes - DLL size is 383 KB.

BTW: I was looking at the export section that BI already made (it can be found in all the sources given at http://community.bistudio.com/wiki/ArmA_2:_Custom_Memory_Allocator):

extern "C" {
DLL_EXPORT size_t __stdcall MemTotalReserved() {return nedmalloc::VirtualReserved;}
DLL_EXPORT size_t __stdcall MemTotalCommitted() {return nedmalloc::VirtualReserved;}
DLL_EXPORT size_t __stdcall MemFlushCache(size_t size) {size_t before = nedmalloc::VirtualReserved;nedalloc::nedmalloc_trim(0);return before-nedmalloc::VirtualReserved;}
DLL_EXPORT void __stdcall MemFlushCacheAll() {nedalloc::nedmalloc_trim(0);}
DLL_EXPORT size_t __stdcall MemSize(void *mem) {int isforeign;return nedalloc::nedblksize(&isforeign,mem);}
DLL_EXPORT void *__stdcall MemAlloc(size_t size) {return nedalloc::nedmalloc(size);}
DLL_EXPORT void __stdcall MemFree(void *mem) {nedalloc::nedfree(mem);}
// DLL_EXPORT __stdcall void *MemResize(void *mem, size_t size) {return moz_expand(mem,size);} // TODO: consider implementing expand?
}

This is a nice hint for those who want to roll their own implementation. Using BI's modified project sources, I'm pretty sure that, given some time, it would be possible to work out what to look for and what to modify in a 'third-party' malloc implementation. So maybe the Hoard is coming in over the horizon...
  16. TBB4: http://threadingbuildingblocks.org/whatsnew.php I guess BI may or may not have a license for the commercial version (Intel resource link: http://software.intel.com/en-us/articles/intel-tbb/), but apparently BI can distribute a version of TBB 3/4 under GPLv2 with the runtime exception - or maybe they have a license for the 'commercial' TBB. Anyway - open source or not - I'm quite interested in (alternative) implementations focused on optimizing code paths for multicore NUMA systems. Dreaming again: it would be snazzy to have the core engine compiled specifically for the code path giving the optimal (parallel) execution flow - at the press of a button ;-) with a fallback default code path. Update: maybe nedmalloc should be my focus point instead of Hoard: http://www.nedprod.com/programs/portable/nedmalloc/ It looks like BI will provide the above list, except for the last one, of course... :-D So maybe I should just be patient...
  17. OK - I downloaded the Windows source for winhoard -> hoard-38.zip and looked at the BI wiki page: http://community.bistudio.com/wiki/ArmA_2:_Custom_Memory_Allocator Take 'MemTotalReserved()' - total memory reserved by the allocator (should correspond to VirtualAlloc with MEM_RESERVE). I find this call in the Hoard source in two header files, mmapheap.h and mmapwrapper.h, and in a C file, sbrk.c. Since I'm such a novice at programming (anything), I'd like the community to help modify the hoard-38.zip source to conform to the DLL interface required by the game engine. I have Visual Studio, so I should be able to figure out how to make/build/compile from the sources. But at the moment it's too much work to figure this out on my own... I think ;-)
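To make it concrete, the skeleton I imagine looks roughly like this - the wiki exports on the outside, Hoard on the inside. malloc/free below are only placeholders for whatever Hoard entry points actually fit, and feeding the reserved-bytes counter from the VirtualAlloc calls in mmapwrapper.h / sbrk.c is exactly the part I need help with:

#include <cstdlib>
#define DLL_EXPORT __declspec(dllexport)

static size_t g_reserved = 0;  // would need to be updated from Hoard's VirtualAlloc path

extern "C" {
DLL_EXPORT size_t __stdcall MemTotalReserved()  { return g_reserved; }
DLL_EXPORT size_t __stdcall MemTotalCommitted() { return g_reserved; }
DLL_EXPORT size_t __stdcall MemFlushCache(size_t) { return 0; }  // no Hoard trim call that I know of
DLL_EXPORT void   __stdcall MemFlushCacheAll()  {}
DLL_EXPORT size_t __stdcall MemSize(void *mem)  { (void)mem; return 0; }  // needs Hoard's usable-size query
DLL_EXPORT void * __stdcall MemAlloc(size_t size) { return malloc(size); }  // placeholder -> Hoard's malloc
DLL_EXPORT void   __stdcall MemFree(void *mem)    { free(mem); }            // placeholder -> Hoard's free
}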
  18. Just tested with the winhoard.dll x64 downloaded from here: http://plasma.cs.umass.edu/emery/download-hoard I made a backup of the dll folder (with tbb3malloc_bi and tbb4malloc_bi) and kept only the winhoard.dll file there. Is this the correct way to test, or do I still need to specify the malloc option on the command line? Please confirm. Anyway - I tested Benchmark 2 (on Chernarus) and got a "too many virtual blocks allocated" error. It should be noted that I had this in my .ArmA2OAProfile GFX options:

version=2;
blood=1;
singleVoice=0;
shadingQuality=100;
shadowQuality=4;
maxSamplesPlayed=80;
anisoFilter=4;
TexQuality=3;
TexMemory=4;
...
sceneComplexity=1000000;
viewDistance=10000.001;
terrainGrid=6.25;

And this as my command line:

Bohemia Interactive\Expansion\beta\arma2oa.exe" -nosplash -skipintro -cpucount=12 "-mod=expansion\beta;expansion\beta\expansion;@CBA;@ACE;@ACEX;@ACEX_USNavy;@ACEX_SM;@ACEX_RU

My experience: LOOOOOL - everything ran at less than one frame per second in the beginning, but later the frames and the sounds of shots began to come into sync - it sounded like a drummer on a slave galley gradually increasing his BPM as the number of objects shown/calculated in the scene got smaller and smaller... I think I watched the benchmark for a few minutes... thinking daaaamn... this is slow-mo... but getting better... and better... and... WHAM, CTD with this wonderful new error message. I'm going to test with the other mallocs and a 'regular' no-ACE command line. And maybe a slightly less ambitious view distance ;-)

2nd update: A funny thing happened when testing tbb4malloc_bi.dll. Note: I didn't change any GFX/cmd options. The benchmark initially ran just as slowly as with winhoard.dll, but gradually the gunshot/shell sounds got into sync with the framerate and sped up until it actually reached something that felt like a few frames per second... Now the funny part: this benchmark run didn't crash - it also never ended... I alt-tabbed out to write this. The camera just stops sometime after flying over the control tower, and the/some AI Shilkas go crazy on the flying targets, which sometimes circle into view (or stay out of the scene, who's to tell). This is probably related to other beta (trigger) changes... but definitely a difference between the two malloc DLLs so far.

3rd update: Ahh, the benchmark is about to end... I was just impatient... the screen is fading to black... I'm waiting for the FPS score... will alt-tab back in a minute... 10 maybe... to tell the result ;-) Never mind... it must be less than 1 FPS. Will now test the default malloc (empty dll folder)... 1... 2... 3...

4th update: Default malloc (empty dll folder) crashes to desktop with the same "too many virtual blocks allocated" error. It only seems to get about halfway into the benchmark.

5th update: Reset GFX to default in options (VD=2400) and used winhoard.dll - no crash, but the benchmark still never shows FPS / ends. Will now try without the ACE command line. I haven't paid attention to any CPU core affinity issues, but let the engine use 12 out of 16 cores on my rig in every benchmark.

6th update: Wooohooo, got 8 FPS with VD=2400 and winhoard.dll - will make a table - give me 20 min.

7th update: Ran both of Intel/BI's 'beta' memory allocators, winhoard, and an empty dll folder through Benchmark 2 - two runs each -> all hovered at 8 or 9 FPS... default GFX options, 1600x900 + VD=2400. (Radeon 5800+, latest drivers - Server 2008 R2 x64.) So maybe I'm doing it wrong? Or the benchmark/malloc/engine options combo just won't show any big miracles.
  19. Browsed through the links presented above (interesting Hoard results). Seems some testing is imminent with this beta. I wonder if the Intel implementation will invoke some kind of artificial throttling - or 'miss' some optimizations - depending on CPU architecture (I have a quad-socket quad-core AMD Opteron setup in NUMA mode). See the Intel compiler controversy: http://www.agner.org/optimize/blog/read.php?i=49 That's why it would be NiceToHave some kind of malloc plugin benchmark interface that would help the casual user decide on the best malloc option to use (and maybe even provide a compile option for 'custom' implementations).
  20. Nice... I guess... This would really be a first-mover thing, AFAIBelieve: the end user supplying a command-line argument that enables the use of 'external' GPL/commercial malloc implementations optimized for his/her specific CPU/NUMA/RAM environment/topology. So basically I'm dreaming BI could let the core engine hook into whatever 'malloc' implementation the end user wants to use... Is this the idea? [It would be nice if the community or BI could supply a script that runs through a benchmark and helps the end user choose the malloc giving the best performance.]
  21. It's Shanghai 8384s in a Tyan 4985-E board (4x). I might be able to optimize memory 'bandwidth' / game FPS by locking arma2.exe to a single socket (affinity mask) - but I'll wait until the official patch comes along to create that launch parameter and experiment.
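What I have in mind is essentially this (a sketch; PinToFirstSocket is my made-up helper name): pin the process to the 4 cores of one socket so all its memory traffic stays on one NUMA node. I believe "start /affinity 0F arma2oa.exe" would do the same from a command prompt on Server 2008 R2, but don't quote me on that.

#include <windows.h>

BOOL PinToFirstSocket(DWORD pid) {
    // SetProcessAffinityMask needs PROCESS_SET_INFORMATION access
    HANDLE h = OpenProcess(PROCESS_SET_INFORMATION, FALSE, pid);
    if (!h) return FALSE;
    BOOL ok = SetProcessAffinityMask(h, 0x0F);  // mask 0x0F = cores 0-3 only
    CloseHandle(h);
    return ok;
}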
  22. Not really necessary - the Arma 2 install runs off a pair of SSDs in RAID0, and stuff tends to get cached after the first run. Anyway, here are some more test results: the two Chernarus / Arma 2 benchmark missions, each run through 5 times, on a computer with 24 GB RAM and 16 cores @ 2.7 GHz. Test results were produced with these settings: 1280x720, 3D res 75%, VD=555, eye candy turned to very low (except texture detail = normal; vid-mem = high).

First run (B01 / B02)
malloc 0 = 54 / 16
malloc 1 = 58 / 17
malloc 2 = 59 / 16
malloc 3 = 57 / 17
malloc 4 = 55 / 17

Second run (B01 / B02)
malloc 0 = 53 / 16
malloc 1 = 59 / 16
malloc 2 = 58 / 18
malloc 3 = 59 / 17
malloc 4 = 57 / 17

Third run (B01 / B02)
malloc 0 = 57 / 17
malloc 1 = 56 / 15
malloc 2 = 58 / 17
malloc 3 = 57 / 17
malloc 4 = 58 / 15

Fourth run (B01 / B02)
malloc 0 = 55 / 17
malloc 1 = 59 / 18
malloc 2 = 59 / 16
malloc 3 = 55 / 19
malloc 4 = 54 / 17

Fifth run (B01 / B02)
malloc 0 = 54 / 18
malloc 1 = 57 / 16
malloc 2 = 58 / 17
malloc 3 = 57 / 17
malloc 4 = 57 / 17

Averages (B01 / B02)
malloc 0 = 54.6 / 16.8
malloc 1 = 57.8 / 17.0
malloc 2 = 58.4 / 16.8
malloc 3 = 57.0 / 17.4
malloc 4 = 56.2 / 16.6

So the best for my configuration seems to be malloc 2 or 3 - though it also seems my B01 results are capped at the 60 FPS vsync limit. I'm really looking forward to learning what is behind those numbers - and which one would be optimal for a specific computer configuration.
  23. I have tested on a 24 GB machine and had no crashes with any malloc 'setting' - though the CPUs are only 2.7 GHz. I forgot to turn down the eye candy, so everything ran at 1280x720, max eye candy, and VD at 3600 - with this, all mallocs hit between 21 and 23 FPS. Actually malloc=4 hit the highest: 24 FPS. Will try turning down the eye candy to see if I can get some bigger differences later. Specs: CPU Opteron 2.7 GHz, GPU Radeon 5870 1 GB. Edit again: this is with 82448.
  24. I want to do the above too!!!! Just wondering - I have a dual-CPU machine (2 quad Opteron 8384s) and tried to run the dedicated server with affinity set to CPU 1 and the client with affinity set to CPU 2. The dedicated server always 'halts' while "reading mission" (memory use for the server tops out at 130 MB, the client at approx. 400 to 600 MB at this point), and the client shows a 'waiting for host' message when setting up a mission connecting to localhost. Both the beta and the normal Combined OA show this. Should I change the server port from the default 2302? In the LAN lobby I can see two 'servers' with my server name, one with ping 0, another with ping 1 to 4 ms... this doesn't happen when I connect 'remotely' to localhost... EDIT: It works now... the server config was somehow the culprit... tried running dedi + client without a server config.
  25. DBGB

    exThreads

    Hi. Could it be possible to thread the AI even more, such that each faction (incl. civilians/wildlife) gets its own 'local' thread that only interacts with the other threads when some interaction between the agents is required? I imagine that clustering AI like this (down to a single 'agent' - soldier/animal/vehicle etc.) could be distributed onto real physical CPU cores that are idle or underutilized. For example, when playing the Warfare MP mission in SP mode, I have no interaction with the enemy faction until we clash, which can take a long while if we start far apart. Instead, both factions are fighting the 'local' village patriots. If I had 4 unused physical cores, those could be used to 'battle it out' without interfering with other game threads until the AI agents involved get so 'close' as to require 'computation' of actions - see the sketch below. Maybe this clustering of AI is already 'happening'. I have a dual Opteron setup, and most of the time I total a maximum of 33% total CPU time = maxing 2 out of 8 physical cores.
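A miniature of the clustering idea (factionThink is a made-up stand-in for a faction's AI loop, nothing from the actual engine): one thread per faction, each pinned to one of the spare physical cores with SetThreadAffinityMask, only exchanging results with the main simulation at sync points.

#include <windows.h>
#include <process.h>

unsigned __stdcall factionThink(void *arg) {
    int faction = (int)(INT_PTR)arg;   // which faction this thread simulates
    (void)faction;
    // run this faction's AI in isolation here; exchange results with the
    // main simulation only when agents come within interaction range
    return 0;
}

int main() {
    HANDLE threads[4];
    for (int f = 0; f < 4; ++f) {
        threads[f] = (HANDLE)_beginthreadex(NULL, 0, factionThink,
                                            (void *)(INT_PTR)f, 0, NULL);
        // pin faction f to core 4+f, leaving cores 0-3 to the game engine
        SetThreadAffinityMask(threads[f], (DWORD_PTR)1 << (4 + f));
    }
    WaitForMultipleObjects(4, threads, TRUE, INFINITE);
    for (int f = 0; f < 4; ++f) CloseHandle(threads[f]);
    return 0;
}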