Posts posted by darkpeace


  1. and as for the 32 FPS smile_o.gif you need to read Suma's post on this subject - it's stated clearly why this is caused and the fact that 32 fps is the same thing as 50 fps simply because of a formula error

    the CPU still outputs the same no matter what you see for top end FPS

    Sorry to double post (it didn't raise my post count) but I thought this was worth an extra entry in the forum.

    Where was it said this is a "formula error"?

    Last I heard it was timeslice length related, and they wouldn't recode it, so 32fps means 32fps and 50fps means 50fps.

    It appears to me to be 'wanting' 20ms timeslices, and getting 2 x 15.625ms ones instead (thus 20ms / (2 x 15.625ms) = 64%, where 64% of the 50fps peak is 32fps).

    Or 1000ms / 20ms length timeslices = 50fps peak

    vs 1000ms / (2 x 15.625ms) length timeslices = 32fps peak

    Where the maximum server fps is limited by the number of timeslices in 1 sec, but OFPR_Server.exe seems to want 20ms, and this gets rounded to 2 x 15.625ms on most current Windows operating systems.
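    That rounding hypothesis can be sketched as follows (an illustration only, not BIS's actual code; peak_fps is a made-up helper):

```python
# Peak server fps when a requested sleep is rounded up to whole OS timer ticks.
# Models the rounding hypothesis from the post above; not BIS's actual code.
import math

def peak_fps(requested_ms, tick_ms):
    """Peak fps = 1000ms divided by the requested slice rounded up to whole ticks."""
    actual_ms = math.ceil(requested_ms / tick_ms) * tick_ms
    return 1000.0 / actual_ms

# Windows' default 15.625ms timer tick: a 20ms request costs 2 ticks (31.25ms).
print(peak_fps(20, 15.625))  # 32.0 fps instead of the intended 50
print(peak_fps(20, 20))      # 50.0 fps with an exact 20ms tick
```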

    I guess that is why 107 of 276 OFPR servers are Linux based; the other admins just shy away from it, or don't use it because they are unaware it can raise performance and save dollars.

    Since work wants me to learn SuSE 9.1 Pro, I may as well migrate the GarageLAN, ACT server to it, and reap the benefits smile_o.gif


  2. I enjoy our different views and solutions to the same problem smile_o.gif

    I personally use Geil DDR550, running on 2.8v at 500MHz with faster timings (CAS 2.5) on a 1000MHz FSB, on an Intel i865PE (no point having PAT at 1000 FSB, it doesn't work). Mainly due to price/performance ratio: it's AU$150 cheaper than other DDR550, has similar timings, and performs 99% as well; it handles higher voltages though, and is guaranteed at 3.1v smile_o.gif

    We both have a similar mind when it comes to hardware I think, but I find an overclocked Athlon 64 FX still outperforms a highly overclocked Pentium 4, with fewer cooling issues (2U rack for online servers / towers for LAN servers).

    OFPR might be a few years old, but Half-Life (based on the Quake engine) is even older, and look what it has become. Would it be a bad thing for BIS to strive for a community that size, with that much effort put into server performance?

    Why go to a LAN to play Counter-Strike, when you can do that fine on Dial-Up ?

    Might as well encourage Flashpoint, as "The LAN Battle simulator to play"

    When was the last time you had a dial-up player on an MFCTI server ?

    We get them constantly here sad_o.gif

    You describe a different problem, and equally different solutions

    Also coding is not very hard, doing the above is far from impossible (it would take a dedicated programmer under 1 week to do and test)

    ===========

    RN Malboeuf: "and you totally miss the fact our server is the best place to play large scale CTI maps on the net with Kaos right behind us"

    Yep, there is seriously nothing like playing at:

    Roughnecks Whore House (www.roughnecks.org) - Fastest CTI server on the net!

    -With 279ms+ ping to 64.27.26.116:2302 (using far smaller than 1024 byte packets)

    -From Australia

    -In vastly different TimeZone

    -When the server is normally empty (as it is now, 8:07PM GMT+10)

    There are more servers outside America than within it. I come from a Flashpoint minority myself, just like America is a Flashpoint minority compared to Europe, Europe being where most of the battles happen.

    Thankfully due to ties with British players (semi realistic SAS squads, with Aussies and Brits) we get to hear some of the news smile_o.gif

    Server Tally:

    ==========

    56 in North America

    3 in South America

    12 in Asia (Japan, China, South Korea)

    228 in Europe

    5 in Australia (to support New Zealand and Oceania Region)

    RN Malboeuf: "this statement is not due to arrogance it's due to fact that you are trying to discredit a proper server set up with utter nonsense"

    I never discredited the RoughNeck server, I just came to share a finding that you are still in denial about.

    ===========

    I still agree a fast server eliminates many problems:

    Examples below:

    ----------------

    P4 Gallatin (2MB L3 cache, 512KB L2 cache, 800 FSB, HyperThreaded w/ OFPR_Server.exe affinity on CPU0): I still think it is an ideal solution, esp when OFPR_Server.exe runs around 7 threads (performance reduction over multiple CPUs due to cache coherency and load limits over multiple CPUs); of course it would be overclocked via the FSB with higher performance memory.

    When the P4 Prescott gets 2MB L2 cache and a smaller die, it will overclock more, and likely outperform the above.

    On the AMD front are the Athlon 64/FX, which are around 25% - 60% faster at equal clock speeds at certain tasks, so when they go 90nm and reach 2.8GHz (possibly with a cache or Dual DDR2 memory controller upgrade) they will start to dominate the (overclocked) OFPR server field.

    However combining the above settings I suggested (this thread is titled "Server Optimization Request" after all) with a fast server would be the more ideal solution.

    Thankfully in a LAN environment with vast resources (some of) the above is possible today. However, hosting such a server on the Internet is of less and less interest to me: we have a few servers here already, and the continued costs of a 10mbps dedicated line are not financially justifiable (in Australia), so local LANs with insane (one-off) costs are a better option for many, esp since 85% of my mates have dial-up, and a good chunk of them can't even get a v90/v92/k56flex connection (33.6kbps or less).

    Now, I highly doubt that BIS will ever go back and improve the server with the above suggestions, esp considering how sidetracked this thread has become. Hopefully they are working on the *OFPR2 server* now, getting multiple simulation cycles into one thread timeslice, and spanning the (OFPR2 server) load over multiple CPUs, so that when Dual Core stuff comes out before OFP2 is released they will be prepared for it, and we won't have this "single cpu = faster" situation we have now.

    Some brief history:

    ===================

    Australia mate, Australia, I don't need or want to hear about the Internet bandwidth in America. It does not help me at all. It would be considered an unrealistic scenario to any scientist or engineer planning in Australia.

    I've been playing OFP since the demo mate, and analysing it just as long as you have, but on LANs, for LANs, with Gigabit backbones; in Spring 2001 Australia did not have jack in the way of internet connections faster than 128kbps, so the only real option was LANs.

    I started in multiplayer gaming LANs back in the days of one really old flight-sim for 286/386/486 systems, good old Retaliator F-22/F-29, which did not support LANs; it was pure manual serial/modem handshake and 2 players max.

    Moving onto Doom using IPX networking in DOS later on, before games had dedicated servers or used TCP/IP, back when BBSs were a better way to share information than the Internet, and FidoNet was as good as it got.

    Obviously a slow progression to TCP/IP, 4+ players and dedicated gaming servers occurred; then in Spring 2001 we all got Flashpoint, and eventually patched to 1.46. The rest is recent history.

    As I have an interest in http://sourceforge.net/ I found CTI by 'accident' one day, at http://mfcti.sourceforge.net - from there the real LANs began, Counter-Strike was left for dead, even Battlefield 1942 could not compete.

    ===================

    Try a 10 hour Everon CTI (on 1.1b) then a 20 hour Nogova CTI only 8 hours later. 4-5 hours of anything isn't really a test [:P]

    Heck I run MemTest86 on workstations for 24 hours before installing an OS to ensure there are no 'pre burn in memory errors', one of which would render the OS useless (not straight away, but over time that 1 corrupt bit would be noticed, and cause problems)

    Bear in mind 1.1b was not that server friendly, and the server was only an Athlon XP PR1600@PR2000 with approx 320 FSB; there was minimal desync, and side finances were in scientific notation at the end of the game (no joke, they were).

    This was back before the building limits were put in, and barracks could be placed inside other barracks and other buildings - you could hide a smaller building inside a church for example, and build anywhere you desired.

    We played realtime (no time acceleration) and actually battled well into the game night (in Australia we are hardasses).

    Best way to win a LAN is to be relentless; many opponents fold after 4 - 7 hours, so as long as you can fight harder, longer, even if you're losing the battle, just hold out for hours - if the other side all falls asleep then you win. All their offensive and most defensive capability stops. Usually a ref would declare the winner, but sometimes you just want to eliminate them all B)

    ===================

    Q) What if the players link does not recover ?

    A) You are stuck with recurring desync until the end of the game, or until the player is dropped.

    I did say "What if the player's link *does not* recover", not what happens when it recovers, which is what you correctly noted above: when a player's link recovers fully in a timely manner the server does recover, usually almost straight away. However, as I stated:

    "Q) What if the players link does not recover ?"

    Well obviously since it never recovered, and may be choking on 6kbps or nothing at all for periods during line retrains (our phone lines in Australia are not the best mate)

    Then this happens:

    "A) You are stuck with recurring desync until the end of the game, or until the player is dropped."

    It can be recreated in lab conditions faster and probably cheaper than it can be recreated online; either method yields factual, consistent and reliable results.

    ===================

    RN Malboeuf: "Power is every thing in CTI, if you cant see that with your 2.4 then you have some personal issues you need to sort out"

    When did I say the server CPU was 2.4GHz?

    I recommended it as a *minimum* for the server CPU for small CTIs.

    How the CPU relates to personal issues I will never know.

    ***Bear in mind that MFCTI 0.98 was around well before 2.4GHz CPUs.***

    ===================

    RN Malboeuf: "-1500-2000 kbps minimum downstream / -1500-2000kbps minimum upstream"

    As I stated above, the *player's* recommended minimum is 384/96. I would not try to host a MFCTI server for multiple players on such a link; it would be at an ISP on 10/100 Ethernet, if not on a LAN with a Gigabit backbone.

    ===================

    RN Malboeuf: "...if you actually think a Dual CPU helps the OFP servers you need to go back and re-research the server loads and what they can handle - We've spent years on this where you have not"

    I have probably spent longer doing this stuff than most, I just don't post it all over the forums, since making money from computer knowledge in my spare time is better than giving away free advice [:P]

    To quote myself from above: "Hopefully *Flashpoint 2* will have CTI built in, from the ground up, and by the time it comes out *Dual Core* CPUs will be available for DIY OFP2 servers, so it best utilise them and perform well."

    I was referring to OFP2, not OFPR, which everyone is well aware does not improve in performance over 2 or more CPUs, or over HyperThreaded virtual CPUs, or over VMware virtual CPUs. (cache coherency, and the fact it limits itself to 50% over 2 CPUs, or 25% over 4 CPUs)

    Dual Core CPUs have only just gone into development (Intel & AMD wise, excluding UltraSPARC's grand plans that never were), and OFP2 had better bloody well use them, or http://www.es.com may be providing the next "grand scale war simulation for civilians"

    ===================

    Don't get me wrong: with the CTIs of today a 3GHz or faster CPU would be better than 2.4GHz.

    As for the 1.5 - 2.0 mbps for the server, it's not an issue with Intel CSA 1gbps integrated into the Northbridge (thus using no PCI / Southbridge bandwidth, which helps with other things).

    The 384/96 kbps per player minimum sustained would be accurate, we both agree on that (LAN or Internet wise), however Dual ISDN (128 kbps) does not meet this requirement in the downstream direction, so when push comes to shove and they don't have the bandwidth to download everything realtime, they desync.
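    The 384/96 minimum above can be expressed as a trivial check (the constants and helper are my own illustration, not an actual server setting):

```python
# Check a client link against the 384/96 kbps per-player minimum from the post.
MIN_DOWN_KBPS = 384  # minimum sustained downstream
MIN_UP_KBPS = 96     # minimum sustained upstream

def meets_minimum(down_kbps, up_kbps):
    """True if a link meets both the downstream and upstream minimums."""
    return down_kbps >= MIN_DOWN_KBPS and up_kbps >= MIN_UP_KBPS

print(meets_minimum(1536, 256))  # typical ADSL: True
print(meets_minimum(128, 128))   # Dual ISDN: False (downstream too slow)
```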

    ===================

    RN Malboeuf: "and to clue you into gaming client speeds 95% are on fast lines capable of handling the speed i posted 10-20 times over"

    Well, to clue you in, *in Australia*, as I already mentioned, only 10% of the 20 million of us actually have 'access' to broadband. (That's access to it, usually at work; the home installation figure is far, far lower, most people are still on dial-up)

    Approx 1 million of us have ADSL at home (and the statistics count partners twice, so the real figure will be lower than that still)

    Funny thing is WOLF hosts the VBS1 video file, for the world to see, on the OGN server.

    WOLF being purely Australian / New Zealand based.

    And as we cover 2 countries plus the surrounding Oceanic area our player base is not so concentrated (esp considering not many people in our corner of the world play OFPR), unlike America where 100 players might live within 500km of each other, and the telco infrastructure is everywhere you look.

    We also don't like our Telco very much: too much advertising, too little R&D and improvement plans, and jack all infrastructure out in the bush. So unless you live in a capital city (and in the ACT, the nation's capital, the population is only 300,000 or so) you're pretty much stuck on 33.6 kbps (if you're lucky, more like 24kbps)

    ===================

    The above is just the tip of the iceberg of the stuff I deal with every day.

    They call us the lucky country, but when it comes to IT, we got shit fast PCs (we are as innovative in IT as we are in combat, just do a lookup on "Australia" AND "Soldier" for an idea).

    However we have bugger all bandwidth among us, mostly in frame relays and racks of ISDN (which is too expensive for most people and does not aggregate bandwidth in a manner that allows for gaming in typical 'home' setups)

    ===================

    So I await the day of OFPR2 CTI, and of ADSL2+ being released in Australia so we can all get 6mbps upload, but our lame telco holds us back. (We might be a Westernised country, but our infrastructure has not changed much; many old wooden phone poles still cover the width of our country)

    The fastest link you can get here for home is 1536/256 ADSL (some do 2048/384 at the DSLAM now though), but that is assuming you are one of the 5-8% of people with ADSL at home.

    Some have to put up with 256/64 ADSL, and the rest can't even get ADSL (despite manipulated statistics that say 95% of us can get ADSL, even though there are only enough ports for 10% of us)


  3. Yeah, the way I see it for CTI currently is:

    =============================

    -384 kbps minimum downstream

    -96kbps minimum upstream*

    (certain misconfigured server settings would require the upload figure to be much higher, like 192kbps+)

    -85ms maximum ping or less

    -Server: 2.4GHz Pentium 4 (Northwood, 512KB L2, 800FSB, Dual DDR) minimum (pref Linux, as it may allow for thread length configuration, to get a higher peak and sustainable server fps)

    As for the 50fps limit: because of the way it was coded it is limited by thread length (so 20ms threads would peak at 1000ms / 20ms = 50fps). The people over at www.udpsoft.com found a way to reconfigure Half-Life to work with 1 or 2ms ticks, to get 1000 or 500fps sustainable peaks.

    However Half-Life netcode is far different from Flashpoint.

    Hopefully Flashpoint 2 will have CTI built in, from the ground up, and by the time it comes out Dual Core CPUs will be available for DIY OFP2 servers, so it best utilise them and perform well.

    CTI itself could use a few optimisations as well. Although I suspect there are limits.

    eg:

    -Corpses in water should be cleaned up within 60sec

    -Pilot corpses should be cleaned up within 60sec

    -Stray Ammo should be cleaned up within 60sec

    -Disabled Vehicles should be cleaned up within 3 minutes

    -A new Repair Truck with new scripting should be made that just 'replaces' the vehicle, this would fix a few underlying issues

    Does Mike Melvin still work on MFCTI, or is it all in CleanRock's hands now?


  4. Here in Australia its a totally different story mate.

    A) Most of the players lack the bandwidth to play MFCTI

    B) Our servers are far faster than you think

    C) Server sided bandwidth is ample

    The finding has little to do with the server side; it is client related (lack of bandwidth / poor ping).

    Server testing was performed on a system with a 1000MHz FSB and Dual Geil DDR550 (running at Dual 500 with faster timings to better perform on the 1000MHz FSB), as memory bandwidth starts to become very important in MFCTI, as I am sure you are aware.

    Assuming everyone else's server is slow is rather arrogant. Esp considering your own server peaks at 32fps, while other servers can actually *sustain over 32fps in MFCTI* once the 'client' end is sorted, alleviating the need for a 4GHz CPU.

    I am sure your solution causes your server to reach 100% CPU load rather frequently, if not hit 100% and stay there :P

    Far from ideal

    There are far better solutions than throwing money into overclocking a server, and there is more than one solution to this problem.

    Enforcing that all players have low pings, fast download/upload speeds, are only a few hops away and have firmware that fixes line retraining / resyncing issues is a far cheaper alternative.

    And 'client filtering' can be configured at the server end to stop the problem from even occurring smile_o.gif

    Although CPU power is important when hosting MFCTI, eliminating the actual 'cause' of desync is a far better solution than allowing it to occur in the first place and having the server bear the load.

    Q) What if the players link does not recover ?

    A) You are stuck with recurring desync until the end of the game, or until the player is dropped.

    The above settings would help hundreds of admins, and boasting about your (peak 32fps, sustained FPS = unknown) server is not actually helping too many admins. Sure, it attracts some USA players to your server, but most of the players in Europe, Asia, Australia & surrounding areas are not really that interested in playing on an overseas server.

    I am doubtful, as you are, that any of the above settings would be added to the server, so LANs and wireless WANs are really the best options in Australia until everyone can get real broadband.

    No amount of CPU power will help someone on Dual ISDN (128/128kbps) play MFCTI. The infrastructure we have to deal with down here is not great by any means, yet we still accomplish great things regardless. (It forces us to be highly efficient)

    I am sure with a few dial up players, the odd ISDN, and slow 256/64 ADSL your server would lag / desync.

    ================================================

    Worlds Fastest Lag Free CTI server (on the Internet perhaps :P)

    Roughneck Whore House - rn1.roughnecks.org

    P4 4.0 Ghz HT 961 FSB - 1 gig dual DDR PC4500 - Scuzzy HDs

    ================================================

    So the RAM runs at Dual 562.5MHz async to the 961 FSB?

    Or does it run at 480.5MHz sync to the 961 FSB?

    I am also unaware of a 3329/800 Pentium 4 that would overclock to 4000/961, so I assume your server CPU runs at 4084.25MHz.

    A P4 Gallatin (Northwood + 2MB L3 cache, 800 FSB) or Athlon 64 (939 with Dual DDR) at around 2600MHz would likely outperform your server before they are overclocked.


  5. Note: the below tends to go on a bit, I may 'optimize' it later :P

    The server does not send a message indicating which player is the 'cause' of desync.

    FACT: If one player's link suffers or drops, the server starts hitting 100% CPU load very fast, until their link settles or they drop after 90sec of "Losing connection" (which initially takes too long to appear IMHO)

    (I have screenshots to prove this, where a player with 6kb/sec joins, starts desyncing, and the server CPU load rises, thus pushing server fps performance way down, to the point where it affects other players.... very bad)

    These settings could eliminate desync

    (or at least the main cause of it)

    ============================

    It would be nice if the below was configurable by admins:

    -MinimumClientFPS to play (weighted average sustained over 10sec)

    -Minimum Bandwidth to play,

    -Minimum Bandwidth to join and chat when game in progress.

    -Minimum/Maximum Server Bandwidth given per player (not just in total as is the current)

    -Maximum Ping to play

    -Maximum Ping to join and chat when game in progress.

    -Delay before "Losing connection" (it is too long by far; it should appear or flash yellow the instant it is noticed, and if it lasts over 3sec alert other players of the player with a potentially bad link)

    -Time (in ms) a player's ping / bandwidth can be 'out of the limits set above' before being kicked (setting to 0 just drops the player with a "fix link" message straight away)

    Obviously the ability to detect a resyncing ADSL/Cable link would be useful, and it is not as difficult as it sounds. A player who has sustained over 256kbps (or whatever minimum bandwidth is set to) for 10sec could trigger a flag on the server, and thus be excluded from the 'if link drops then kick this player' rule, since they are obviously not on dial-up (using 256kbps+ as an example) and within 5-120sec their link should resync.

    The option to override this would thus be useful

    As in: "Time (in ms) a Players ping / bandwidth can be 'out of limits set above' before being kicked (setting to 0 just drops the player with a "fix link" message straight away)" above.

    I am sure there would be other useful settings to add to the above list.
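    A rough sketch of what such admin-side filtering could look like (all names and thresholds here are hypothetical; nothing like this exists in OFPR_Server.exe):

```python
# Hypothetical server-side client filter in the spirit of the settings list
# above. LinkLimits and should_kick are illustrations, not real OFPR settings.
from dataclasses import dataclass

@dataclass
class LinkLimits:
    max_ping_ms: int = 85          # maximum ping to play
    min_bandwidth_kbps: int = 384  # minimum bandwidth to play
    grace_ms: int = 5000           # time a link may be out of limits before a kick

def should_kick(ping_ms, bandwidth_kbps, ms_out_of_limits, limits=LinkLimits()):
    """Kick a player whose link has been out of limits longer than the grace period."""
    out_of_limits = (ping_ms > limits.max_ping_ms
                     or bandwidth_kbps < limits.min_bandwidth_kbps)
    return out_of_limits and ms_out_of_limits >= limits.grace_ms

print(should_kick(60, 512, 0))     # healthy link: False
print(should_kick(300, 56, 6000))  # slow, laggy link past the grace period: True
```

    Setting grace_ms to 0 would model the "drop the player straight away" option described above.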

    Note that many of the above are similar to Punk Buster in RTCW.

    This would also stop bandwidth cheating using NetLimiter (sounds dumb, but it can cause a server to desync out and wreck a game).

    (Some players cap their upload/download speed to 'cheat', and yes it does work to some degree, the above could combat this)

    The above might sound ruthless to some, but (used correctly) they would reduce server CPU load (and thus boost server performance), so everyone wins, except the poor bastard causing desync, who should not be playing on your server anyway.

    It's 'funny' watching CPU load on a 2nd monitor when a Dial Up or ISDN (single channel 64kbps) player joins a big battle server: within 5 min (usually much sooner) the server CPU hits 100% and stays there, there is initially 'some' desync on that player, and server fps drops like a stone.

    (This can be monitored with NetLimiter and Task Manager on a 2nd monitor)

    Same is true during an ADSL or Cable 'line resync' sometimes lasting over 90sec (which is the bane of most ADSL owners)

    However a simple firmware upgrade (or downgrade) can fix this issue straight away (eg: Recent D-Link DSL-504 firmwares sometimes drop the link to retrain, and sometimes this takes 90sec as it occurs twice for an unknown reason)

    Server performance dives, and in big battles it goes below 8fps, to the "point of no return"..... All because *one player* thought they wouldn't do any harm by joining, or lied about their connection type. Once at 8fps the server is taking 125ms per simulation cycle, and this starts to 'carry over' the desync to all other players; at this point a #reassign or #shutdown is usually done by the server admin (who is surely sick of burning money on a server that some idiot desyncs)

    Surely admins need more control over their assets; some pay AU$3000 a year just to host a box, and we don't want people wrecking that for us.

    The upside: servers can be made more (or less) tolerant of low bandwidth / high ping players to suit the desires of the admin (depending what size battles they wish to host, what drain on CPU they can handle, etc)

    Q1) Why does a desyncing player cause the OFPR_Server.exe process to start hammering the CPU (100% load) and thus lead to server fps reduction? (sometimes to the point where it can not recover, usually around or under 8fps)

    Likely because the server is keeping track of where everyone is in time 'relative' to everyone else, so the load increases exponentially sad_o.gif

    Using NetLimiter to graph bandwidth to/from players, it can be noted that some players who lack bandwidth to play larger battles do not drop; they just keep getting "Losing Connection" over and over until the game ends (and the game is just one big lagfest because of it)

    A desyncing player (with the cause being a slow link, not an ADSL/cable line resync) should just get a simple message in game: "You lack the required bandwidth to play here, please upgrade your link to a higher speed before returning". Ideally the ID would then be banned for 5+ minutes to deter them from joining again.

    Thankfully I have noticed the low (almost 0 at times) upload requirement of players, and I commend the effort put into this.

    This would be fantastic at LANs and online alike (I have seen LAN players with dodgy NICs, chipset drivers, NIC drivers, cable, etc, have their connection drop in/out sometimes, ideally they should be warned beforehand, and if they fail to comply then punished by disconnection from the server)

    Same online: admins want the above features, badly, to put an end to the main cause of desync (ignorant players on slow links, or from other countries with stupidly high pings).

    The only other causes of desync are non-optimized missions, low performance servers being pushed too hard by some missions, and players on max detail getting under 15fps.*

    * - Join a client with only 3-8fps to a high load server and see if your server performance suffers.


  6. What is the smallest thread length that OFPR_Server.exe can use ?

    < 1ms / 5ms / 10ms / 20ms / etc ?

    I run Windows XP Pro (SP1 on an Athlon XP at 2083MHz = PR2800+, & SP2 on a Pentium 4 3000); both servers exhibit the 32fps limitation.

    I am thinking of moving to SuSE Linux 9.1 Professional (I am getting it for training/work purposes anyway); would this 'fix' the problem?

    Also does using 31.25ms length threads affect CPU load negatively ? (eg: does it rise and yield no extra performance gain)

    Would using 18ms length threads boost the peak server fps to 55.55fps and using 10ms boost the peak server fps to 100fps ?

    (I am sure that it would, assuming stability could be reached)

    It would be very nice to get the most out of the Flashpoint server hardware, esp since over 32fps (and soon over 50fps) would be possible in MFCTI with the hardware that is becoming affordable to more people now smile_o.gif

    biggrin_o.gif


  7. Yes, I run NetLimiter on my own server, and the CPU usage spikes if just one player becomes a bottleneck; it also is not reported on the 'P' screen until after they start coming back into sync.

    I am sure not every admin is completely clueless.

    Using NetLimiter you can also test 'what if' scenarios and see what happens in MFCTI when someone on single channel (64kbps) ISDN (or less) joins a MFCTI game, and other such bandwidth controlled tests.

    Anyone can simply draw a direct line from lack of bandwidth to server CPU load (and thus low fps), it really is very simple to do.

    However on a LAN server capping at 32fps (or 50fps as they claim) is totally pointless.

    The same is true for very high upload internet servers (assuming all players have 512/128 connection that never drops out).

    Thus my own server runs at 32fps constantly in MFCTI, and does not use 100% of the CPU (even though they claim it should run at 'up to' 50fps)

    I paid for the game, and a very nice LAN server, however I get 0 support, so I am considering just chucking in the towel.

    Sad to say it, but in some ways Counter-Strike (yuk) is superior to Operation Flashpoint and VBS1 (in that the author does not limit what the server side software can do even if your hardware is not at 95%+ load; the cap is total rubbish)


  8. Because most servers cap it at 32fps :P

    That's why.

    Anyway, the default would be 50fps (which may end up as 32fps due to the bug on most servers).

    This way a server admin could set 80fps and actually end up with a limit of 51.2fps (if they are affected by the bug, and most servers currently are).
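    A sketch of that arithmetic (my own figures, not from the server code): on affected servers the rounding bug scales any configured limit by 20ms / 31.25ms = 0.64.

```python
# On a server hit by the rounding bug, any configured fps limit is effectively
# scaled by 20ms / 31.25ms = 0.64 (per the arithmetic in the posts above).
BUG_FACTOR = 20.0 / 31.25  # 0.64

def effective_limit(configured_fps):
    """Fps limit actually reached on a server affected by the rounding bug."""
    return configured_fps * BUG_FACTOR

print(round(effective_limit(50), 2))  # 32.0, the familiar 32fps cap
print(round(effective_limit(80), 2))  # 51.2, so setting 80fps would restore a 50fps+ peak
```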


  9. Server peak fps should be configurable by admins

    (via flashpoint.cfg variable on the server)

    eg: LimitServerFPS=255 or LimitServerFPS=60, etc

    Instead of having a limiter at 32fps or 50fps

    Note: This would help raise server performance, esp on current (Sept 2004) hardware

    The default setting would of course be 32fps or 50fps, so voting No is rather pointless


  10. I did a text search on OFPR_Server.exe and also noticed this setting: "ThrottleGuaranteed", alongside MinErrorToSend (it's all plain text in the EXE).

    Now I know what MinErrorToSend does (setting MinErrorToSend=0.005 makes units viewed through sniper scopes and binocs move twice as smoothly; 0.001 is ten times as smooth)

    But what does ThrottleGuaranteed=1 vs ThrottleGuaranteed=10 (and other settings) do ?

    And when will the server's peak fps be configurable via a flashpoint.cfg variable on the server?

    UPDATE: Does anyone know, anyone at all ?


  11. Better yet, instead of using an integer fps in #MONITOR output, use the time it takes the server to complete a full simulation cycle in ms.

    This way a far more accurate figure is given, and server admins will have a more meaningful figure (I find some server admins compare server fps to in-game fps, which is a rather pointless comparison to make)


  12. Bear in mind if the server is running at 10fps (or simulation cycles per second) then the time between each cycle is:

    1000ms / 10fps = 100ms

    So that's 100ms between simulation cycles; it would not matter if everyone was on a LAN, that is still a full 100ms of 'lag' (or rather delay) between server simulation cycles.
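    The delay figures in the table below all come from that same division; a quick sketch (the helper name is my own):

```python
# Delay per full server simulation cycle is just 1000ms divided by the fps
# figure, as in the table that follows.
def cycle_delay_ms(fps):
    """Time in ms for one full server simulation cycle at a given fps."""
    return 1000.0 / fps

for fps in (8, 10, 20, 32, 50):
    print(f"{fps:>3} fps -> {cycle_delay_ms(fps):7.2f} ms per simulation cycle")
```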

    --------------------------------------------------

    Server delay table: (made in Excel)

    fps   delay in ms for a full server simulation cycle
    ----------------------------------------------------
      1   1000.00
      2    500.00
      3    333.33
      4    250.00
      5    200.00
      6    166.67
      7    142.86
      8    125.00
      9    111.11
     10    100.00  <- server is the cause of a lot of desync
     11     90.91
     12     83.33
     13     76.92
     14     71.43
     15     66.67  <- server becomes the cause of some desync
     16     62.50
     17     58.82
     18     55.56
     19     52.63
     20     50.00  <- with server physics taking 50ms+ players notice
     25     40.00
     30     33.33
     32     31.25  <- a bug causes many servers to peak at 32fps
     35     28.57
     40     25.00
     45     22.22
     50     20.00  <- servers should ideally remain at 50fps*
    100     10.00  **
    200      5.00  **

    * - This would require that the server never reach 100% CPU load

    ** - Would require that server admins be allowed to decide where the server limits its fps; a fixed 32fps or 50fps cap is no longer an ideal system.

    Note: The time taken for a complete server simulation cycle is not added to the player's ping figure (as far as I am aware), so the ping time is just that; realistically a 2nd row of figures would sit below the pings, adding the server simulation cycle time.

    Thus it becomes quite clear where the desync is coming from.

    If a server is running under 8fps then that obviously creates a lot of desync. However, no amount of netcode changes will help a server that takes over 125ms for each simulation cycle; it simply is not possible.

    The solution would be to optimize the map, and upgrade the server so it can cope (20fps+, so simulation cycles take only 50ms)

    Of course, if a player has a comms problem then the server needs to do additional work to process 'where in time' all the players are, thus the requirement for desync (otherwise players would remain out of sync, much like 'Magic Carpet' on LAN with vastly different speed PCs, which was a common occurrence)

    Hopefully this table will help some admins understand what is going on.

    Many of us CTI players have no issue getting a high end server (Currently: P4 with 2MB L3 cache or AMD64 1MB L2 cache with Dual DDR, as Single DDR servers don't cut it for CTI over about 5-8 players), so limiting the fps of the server is hurting the community in a very bad way.
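    The arithmetic behind the table is just the reciprocal of the frame rate; a quick Python sketch (mine, not anything from BIS) reproduces the figures:

```python
# Delay in ms for one full server simulation cycle at a given fps:
# delay_ms = 1000 / fps

def cycle_delay_ms(fps):
    """Time between simulation cycles, in milliseconds."""
    return 1000.0 / fps

for fps in (10, 15, 20, 32, 50):
    print(f"{fps:3d} fps -> {cycle_delay_ms(fps):6.2f} ms per cycle")
```

    At 32fps that is 31.25ms per cycle, which is why a capped server still feels noticeably less smooth than one holding 50fps (20ms per cycle).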


  13. I am not talking about the lobby, as most of the decent servers today could do 50fps IN GAME without hitting 100% load, however they are artificially capped at 32fps for some reason.

    All they need to do is raise the bar 56.25% to get 50fps again.

    Better yet if someone knows the hex offset in the OFPR_Server.exe of the FPS limiter, it could be 'changed' to 255fps peak.

    My point is;

    A heavy map on a low end server will run under 32fps anyway.

    However, a heavy map on a high end server COULD run over 32fps (up to 50fps as it should).

    eg: Athlon 64 FX-53 - CPU is at 50% load, it gives 32fps.

    Obviously if it was not limited to 32fps, it would give 50fps while the CPU was at 78% load.

    So why limit it to 32fps, why not release a patch that lets the server admin decide the maximum fps ?

    This would be esp useful on larger scale maps, like RTS3 and MFCTI as a high end server could do 40+fps on those missions if the software let it.

    UPDATE:

    I ALSO JUST NOTICED THAT 50 IN DECIMAL IS 32 IN HEXADECIMAL. COULD THIS JUST BE A BUG ?
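    Both observations above are easy to check: 50 rendered in hexadecimal is the digits "32", and going from a 32fps cap back to 50fps is a 56.25% raise. A quick sanity check:

```python
# 50 decimal prints as 0x32 in hexadecimal -- the same digits as the 32fps cap
assert hex(50) == "0x32"
assert int("32", 16) == 50

# Raising a 32fps cap back to 50fps is a 56.25% increase
raise_pct = (50 / 32 - 1) * 100
print(raise_pct)  # -> 56.25
```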


  14. Yes, the other day I experienced the same hosting a server on a LAN, using Windows XP Pro after changing the same (and more) settings in the .cfg file.

    It was 50 fps for one game, then 32 fps again

    It was MUCH smoother at 50 fps

    Can the server please be edited (recompiled) to limit at 50 fps (or better yet at a user definable fps set in the .cfg file) ?

    Surely this is not a hard change to make ?

    1.96b anyone ?

    Maybe a recompile to support newer CPUs better (not a total rewrite for HyperThreading and SSE2 support, etc)

    Those 2 basic changes will keep this game alive another 12+ months.

    Also, how 'hard' is Linux hosting ? I am not clueless (hell, I was thinking SuSe 9.1) but I am no Linux guru either.

    Just looking for quick ways to get max performance from a 2.4GHz Opteron (single CPU Opteron 150), or a high end P4


  15. Tool to rip required addons from mission files ?

    Does one exist ?

    Then I could just index which missions need which addons in a large database.

    From there I could say which addons come in which addon packs (build a list)

    Then there would be no more (or less) issues with new players and finding those addons.

    Since BAS died, it's been hard times

    Any ideas ? Is one out there ? One that can dump to text output ? Can someone skilled enough with reading the mission files code one ?

    Also the ability to find missions made for older versions would also be a damn useful tool.

    If anyone from development is reading, please at least include this in OFP2. (improve on it if possible)


  16. This may lead to higher quality game

    Video and Info sites of Interest

    Biggerhammer.net - Miscellaneous Firearms Technical and Training Manuals: (Mostly American, with some Soviet)

    http://www.biggerhammer.net/manuals/

    Australian RAAF - Videos:

    http://www.defence.gov.au/raaf/interactive/video.htm

    Australian ARMY - Videos:

    http://www.army.gov.au/video/videos.htm

    Australian NAVY - Videos:

    http://www.navy.gov.au/gallery/video/default.htm

    EDIT: ADF: Online Media Room:

    http://www.defence.gov.au/media/index.cfm

    Hopefully this post might lead to making the game higher quality in some manner.

    (Not that it would be low quality in any way)


  17. You have installed the Via 4in1 drivers then yeah ?

    (Also known as VIA Hyperion drivers)

    http://www.viaarena.com/?PageID=300

    The only issues that *should* be encountered with Creative SoundBlaster Live! cards is the old Via PCI latency issue, which the above drivers should fix.

    Note: Be sure you have a Via chipset based mainboard before installing.

    eg: Asus A7V-133 uses the Via KT133A chipset

    To find out what chipset your mainboard uses refer to your mainboard documentation.

    Personally, when all hardware is configured correctly, with the correct drivers that is, I've never had a problem with a Creative SoundBlaster Live! card.

    Heck, kudos to them for ensuring everyone actually bothers to install their mobo drivers, and not rely on the outdated Via drivers that Windows ships with (with many IDE and PCI related 'bugs')

    Regards,

    Tabris.DarkPeace

    GarageLAN, ACT


  18. Dual Opteron would be good, so long as each (or at least one) of the CPUs is running at 2.4GHz or more; the current lower end Opterons have dismal performance.

    And as we've all mentioned 100 times before, the server does not thread over 2 CPUs, so 1 single CPU with the fastest memory interface (linpack, sandra memory benchmark, etc) with great x87/SSE FPU power would perform most excellent indeed.

    Still, with the price of Opterons (and speed too) it may be a better idea to go for a Socket 939 AMD Athlon 64 at 2.6GHz+ or a 64FX-55 at 2.6GHz, as by the time the funds are up these CPUs will be out on a new 90nm process, which should (if desired) overclock well into the 3.6GHz range (which for an Athlon 64 would kick sweet ass)

    The Athlon 64FX-55 is expected to have more L2 cache and Dual Channel Memory, which would perform 10% (or more) faster than the Intel i875p chipset, which I suspect (overclocked) holds the record for MFCTI performance. (unless someone has a high end SiS mobo out there and is keeping quiet; see http://www.sis.com/products/index.htm#chipset for details on their cost effective chipsets, which deliver 10.6GB/sec peak memory performance and are available 'today', while remaining cost effective depending on the application)

    Cheapest, fastest solution today:

    ~~~~~~~~~~~~~~~~~~~~

    (A P4 running an FSB synced to the dual DDR2, thus overclocked; even with the Prescott's long pipeline, running at 3.8GHz with 10.6GB/sec to memory would likely break a few records)

    Cost Effective, Fast solution in a few months:

    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~

    For an Opteron you would want to check if one CPU can read/write memory from multiple HyperTransports (which would enable up to 22.4GB/sec of memory bandwidth, more than enough for MFCTI on a grand scale)

    That's just my 2c, but it will be 6 months before most of this tech is available, or drops to 'consumer' (cough) prices.

    Bearing in mind that it may take 6 months 'to get the funds up' for such a server, the above would be worth at least looking into at that time (Q3 / Q4 2004, maybe Q1 2005)

    Intel's roadmap only shows Xeons with a 1066 FSB and the expected ramping up of the Prescott core, as well as the Pentium-M CPU being migrated to desktop because it's so efficient at lower clock speeds, just like the Athlon XP/64 is.

    Also, shorter pipelines mean that higher memory bandwidth is better utilised (fewer cache misses mean more available memory bandwidth), which would be a nice change from the 30+ stage Prescott core. (esp with systems that have over 12.8GB/sec of peak memory performance; that should really ramp up the MFCTI scalability, esp for 'home' servers)

    If a single Opteron CPU really can read & write from multiple HyperTransport memory buses at once, then I would consider getting one and just running 1 single CPU in it (so long as this is an available configuration option)

    The technology to do this is similar to the memory on the Radeon 9800, switched between 4 x 64 bit channels for well over 16GB/sec; we all fork out for video cards, so we may as well use the same concepts for system memory, to scale performance far beyond the 6.4GB/sec of a non overclocked P4 server.

    The only 2 platforms that come close to doing this are from Sun Microsystems (not x86) and the AMD Opteron, Intel are 'working on' a scalable architecture, but still no news on that.

    (Post still undergoing edit, may truncate)


  19. The way I see it, getting MFCTI as smooth as possible is a balancing act;

    Between CPU load

    - To get as many fps, or 'simulation cycles' as possible

    And Bandwidth load

    - so ping and datarate remain good to all players

    Keeping the above settings around the 1mbps to 6mbps range, with only slightly tweaked packet sizes and max packets per sec, as well as experimentation with MinErrorToSend, yields excellent server performance: players will (after testing is complete) get 0 desync unless they have an actual connection problem (eg: ADSL retrains), and server fps (simulation cycles) should remain above 20fps (if not above 25 or even 35fps) at all times, as the server CPU is not being hammered.

    This way the AI are more responsive (as server fps are higher from doing less work), players get a good gaming experience as server fps are good, and their desync should remain 0 unless they have an isolated issue.

    When a server is under such load that 'random' players get desync even over a LAN connection, you know you are pushing the server CPU too hard. Under 15fps (simulation cycles) on the server can break MFCTI scripting, cause the AI to respond very slowly (they seem dimwitted or stop moving), and players will start to desync.

    Since MFCTI uses 'workers' you don't want under 20fps ever.

    I wouldn't mind seeing a server configured to use between 1mbps and 15mbps, to see where the server decides to sit to give the best performance.


  20. I thought true 'hex/bin' numbers were better ?

    Eg:

    net with 15 mbit line

    MaxMsgSend=512; //256;

    MinBandwidth=12582912;  // (12000000)

    MaxBandwidth=15728640; // (15000000)

    on a lan you could try these lol

    MaxMsgSend=512; //256;

    MinBandwidth=31457280; // (76800000) **

    MaxBandwidth=41943040; // (100000000) **

    As anything near 100mbps will saturate a LAN link, switch or no switch.

    I tried the above settings (the 75mbps-100mbps) using a 3GHz Pentium 4 (Northwood 2.4) on a 1GHz FSB and it was lag city, and I own a D-Link DGS-1008 Gigabit Switch; the server also had a Cat6 link at 1Gbps full duplex using the Intel CSA (northbridge) Gigabit Ethernet, with TCP checksums for both transmit and receive offloaded to the Intel CSA.

    (Btw: I can help get this gear cheap if anyone wants Gigabit backbones for LANs, etc, ACT, Australia area only !)

    I would never recommend anyone on a 100mbps LAN (even with a high end backbone in place) try those settings; anything over 40mbps is folly. (note my numbers round into hex/bin perfectly as well)

    Even my above editing example, with a minimum of 30mbps and maximum of 40mbps, is simply overkill; the CPU (a 3GHz P4) would be working overtime, and the clients don't use anywhere near that bandwidth for 0 desync and <5ms pings.

    Heck, if you go over 40mbps pings start climbing (if it uses it), and at 100mbps everyone's ping (on a standard LAN) would be over 300ms, which for a LAN setup looks plain unprofessional, esp when your backbone is 1Gbps and people want to share files at a LAN *while* gaming.

    Heck a max bandwidth of 20mbps is more than enough.

    I doubt even the *single* fastest CPU on the planet (prob a 5Ghz P4 liquid nitrogen cooled) would be able to process 100mbps of gaming packets in MFCTI.

    Correct me if I am wrong, but if you are limiting the server to 512 messages per simulation cycle, then even at 50 simulation cycles per second that is 25,600 messages per second.

    And above the default MaxPacketSize is used (as the variable was omitted)
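    The message-rate figure works out as claimed, assuming MaxMsgSend really is applied once per simulation cycle (my reading of the docs, not something confirmed by BIS):

```python
# Upper bound on messages the server can send per second
max_msg_send = 512     # messages allowed per simulation cycle
cycles_per_sec = 50    # server holding 50fps

print(max_msg_send * cycles_per_sec)  # -> 25600 messages/sec
```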

    And for anyone not using the full feature list available in DS-ADMIN.RTF, here it is.

    These commands also go in FLASHPOINT.CFG, not SERVER.CFG; even though many server admins may try to convince you otherwise, I have been assured it is FLASHPOINT.CFG for the FOLLOWING COMMANDS ONLY:

    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

    MaxMsgSend=<limit>; Maximum number of messages that can be sent in one simulation cycle. Increasing this value can decrease lag on high upload bandwidth servers.

    Default: 128

    MaxSizeGuaranteed=<limit>; Maximum size of guaranteed packet in bytes (without headers). Small messages are packed to larger frames.

    Guaranteed messages are used for non-repetitive events like shooting.

    Default: 512

    MaxSizeNonguaranteed=<limit>; Maximum size of non-guaranteed packet in bytes (without headers). Non-guaranteed messages are used for repetitive updates like soldier or vehicle position.

    Increasing this value may improve bandwidth requirement, but it may increase lag.

    Default: 256

    MinBandwidth=<bottom_limit>; Bandwidth the server is guaranteed to have (in bps). This value helps the server to estimate bandwidth available. Increasing it to too optimistic values can increase lag and CPU load, as too many messages will be sent but discarded.

    Default: 131072

    MaxBandwidth=<top_limit> Bandwidth the server is guaranteed to never have. This value helps server to estimate bandwidth available.

    MinErrorToSend=<limit> Minimal error to send updates across network. Default value is 0.01. Using smaller value can make units observed by binoculars or sniper rifle to move smoother.

    MaxCustomFileSize=<size_in_bytes> Users with custom face or custom sound larger than this size are kicked when trying to connect.

    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

    MinErrorToSend was mentioned in the change history of the older (pre 1.91) versions and appears to have been forgotten about long ago; used correctly, it can also have the most effect on server CPU performance.
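    Pulling the settings above together, a FLASHPOINT.CFG fragment might look like the following. The bandwidth figures are purely illustrative (a ~10mbps uplink expressed in bps, rounded to binary values); tune them for your own link:

```
// FLASHPOINT.CFG -- example values only, not recommendations
MaxMsgSend=256;
MaxSizeGuaranteed=512;
MaxSizeNonguaranteed=256;
MinBandwidth=8388608;      // ~8 mbit floor (conservative estimate)
MaxBandwidth=10485760;     // ~10 mbit ceiling
MinErrorToSend=0.01;
MaxCustomFileSize=40960;   // kick custom faces/sounds over ~40KB
```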


  21. Damn, there you go.

    We have the same problem with call logging software at work (Quantum by Software Support Solutions) using "\" as a control character (think ANSI ESC codes and you are half way there)

    You can imagine working in IT and being unable to use "\" in a log without a 25% chance of the software just crashing (it does no checking on text input, obviously due to lack of knowledge by the programmers)

    Never would have thought it would affect GameSpy in such a manner (learn something new every day... sometimes)

    As for the quanta length issue: after Windows 2K Beta 1 they removed a whole section from the "Foreground Application Acceleration" part of the system options; it used to have quanta length, Variable / Fixed, Short, Med, Long, etc.

    As the Flashpoint server uses the quanta values (it's forced to) when running the server physics engine, it gets 'capped' at 32fps (in layman's terms, this is why it's 32fps)

    The fix: try a different operating system, or find a very rare tool that can change the quanta lengths and other settings (Microsoft don't want people to be able to pick'n'choose to this extent)

    The reason I recommended SuSe before (although Linux has other issues, and many people dislike SuSe for whatever reason) is that SuSe Linux holds the record for the fastest server physics engine framerate, at 55fps.

    Bear in mind that it was not during a CTI game obviously.

    Although I wouldn't mind a server system that only hits 80% on CPU0 during a CTI game, and runs at 50fps constant.... Then again who wouldn't mind such a beast ?
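    The quanta theory above can be put into numbers. The 15.625ms tick (a 64Hz timer) and the 20ms target cycle are assumptions from this thread, not anything official:

```python
# If the server wants a 20ms cycle but sleeps get rounded up to whole
# 15.625ms timer ticks, each cycle actually takes two ticks: 31.25ms.
tick_ms = 15.625            # default Windows timer resolution (64Hz)
requested_ms = 20.0         # cycle length the server appears to want

actual_ms = 2 * tick_ms     # next whole number of ticks >= 20ms
print(1000 / actual_ms)     # -> 32.0 fps peak (the observed cap)
print(1000 / requested_ms)  # -> 50.0 fps, if the quantum matched
```

    The same arithmetic also explains why a different OS (with a different timer granularity) can lift the cap without touching the server binary.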

×