darkpeace
Everything posted by darkpeace
-
Here is a question for you, Mr RN Malboeuf: does a Linux server peaking at 50fps (at 99% or less CPU load) scale the same way as a Windows server peaking at 32fps? eg:
-At 50% load both run at their peak fps (50 vs 32)
-At 99% load both still run at their peak fps (50 vs 32)
-At 100%+ load both start to slow down
*For simplicity's sake, let load above 100% simply mean "below peak fps", where 100% is peak fps and anything more means the server runs proportionally slower.
-At 200%* load, would the Linux server run at 25fps while the Windows server ran at 16fps?
-At 400%* load, would the Linux server run at 12.5fps while the Windows server ran at 8fps?
If not (as in they scale the same past 100% load, i.e. both do 16fps at 200%* normal load), then please explain why, since you obviously understand this far better than anyone else. If so (as in Windows and Linux scale differently past 100% normal load), then please explain your answer as well. I would be most interested in your reply. Please don't fall back on 'you can't have more than 100% load': I defined it above, where under 100% CPU load = peak fps (obviously), but once you hit 100% the server slows its outputs per second to compensate. So past 100% CPU load, do Windows and Linux scale the same, or differently? Only you have the knowledge to answer this question, as you say you have extensively tested it. I look forward to your reply / explanation.
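To make the question concrete, here is the load model I am describing, as a sketch in Python (the linear slowdown past 100% is my assumption as defined above, not measured behaviour):

```python
def effective_fps(peak_fps: float, load_percent: float) -> float:
    """Server fps under the load model defined above: at or below 100%
    load the server holds its peak fps; past 100%, output scales down
    in proportion to the overload (200% load -> half the peak fps)."""
    if load_percent <= 100:
        return peak_fps
    return peak_fps * 100.0 / load_percent

# Linux peaking at 50fps vs Windows peaking at 32fps:
for load in (50, 99, 200, 400):
    print(f"{load:>3}% load: Linux {effective_fps(50, load):>5.1f}fps, "
          f"Windows {effective_fps(32, load):>5.1f}fps")
```

Under this model the two servers scale identically in relative terms (both halve at 200%), but the Linux server stays ahead in absolute fps at every load level, which is the crux of the question.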
-
This is going nowhere fast. Cage, like me, is I suspect one of the few with 1536/256 ADSL. I would call him my equal in any local OFPR game; he is an exemplary soldier. How he performs on an overseas server, however, remains to be seen. Frankly I am surprised he gets 100ms to an American server (considering the speed of light, electricity and other laws of physics, he should be getting around 250 - 300ms).

The servers several time zones away look empty to me too, and fewer are in use. Wow, that makes perfect sense: you are around 6 or so hours behind them, while we are 10 hours ahead of them. I doubt you even play on any non-American servers, let alone servers in another time zone. Eg: the US servers look empty to me right now, mate (different time zones being the obvious reason), but I am sure if I stayed up for 24 hours and recorded the results every 60 seconds, the servers (in any country) would be far, far from idle. Using the same logic you applied above, only 6 USA servers out of 57 are active and the other 51 should be decommissioned. You cannot seriously think the European servers (Europe being the homeland of Flashpoint) are idle so often. Do you honestly think people would put resources into hosting idle / empty servers? The ISPs would deactivate them within 2 months of almost no use, for sure.

I assure you I have little (if any) catching up to do, and I am yet to see even one shred of proof that Windows servers are faster (especially those stuck peaking at 32fps). If one server peaks at 50fps (20ms timeslices) and another peaks at 32fps (2 x 15.625ms = 31.25ms timeslices), it is very obvious which one will be faster. It isn't really a question of operating system, but statistically speaking almost 100% of Windows servers peak at 32fps and almost 100% of Linux servers peak at 50fps. Claiming 32fps gives the same performance as 50fps amounts to claiming 31.25ms is shorter than 20ms; that logic alone is flawed, and the same basic fundamentals of maths apply.
For the same reason, reducing floating point accuracy over distance may improve performance (or decrease CPU load, and thus provide a smoother MFCTI experience). Now, forget the idea that your server can do anything: it is only 25% faster than a 3GHz server, and only has 1MB cache.

MinErrorToSend=0.02 (double the default) might help in CTI/RTS missions, whereas MinErrorToSend=0.005 (half the default) might help in smaller close-quarters combat on fast servers. Thus having multiple server/flashpoint cfg files is the way to go, especially at LANs, where bandwidth is rarely an issue. Now, you can say this does not matter on your server, but that is 1 of 276 servers; the other 275 server admins no doubt want settings they can also experiment with and record results for, and their servers may only be 2.4GHz, give or take.

I am currently testing this (deliberately) on an Athlon XP PR2800+ (2083MHz, Barton, 512KB L2, 333FSB). Two hours into "MFCTI 1.16A Nogova, Heavy Resistance, High Income, Weather" (out since the 17th, from Mike Melvin) the server CPU sometimes spikes to 100%, but sits at a rather nice 97%, giving 30-32fps on a Windows server. Now, the same test on a Linux server, with the CPU at 97% load, would yield 30 - 50fps (30fps during the spikes and 50fps when under 100% load). So keeping the server around 97% lets the peak fps occur more often (obviously), thus raising the average and the noticeable (player-perceived) performance. Consider that this is on an older-spec server, 2 hours into a heavy game.

When I am shown a Win2K server peaking at 50fps I will want to know how it was done. Ideally I think the trick is running around a 97% average, so on a decent server (far, far faster than the above) an almost constant 40-50fps can be achieved in MFCTI. Also, I still have yet to see the proof / formula you speak of that indicates 32fps gives the same performance as 50fps.
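As a sketch of the multiple-cfg idea: one flashpoint.cfg variant per mission type, using the MinErrorToSend values suggested above as starting points to experiment with (the comment lines and the split into two files are my illustration, not tested defaults; strip the comments if your parser objects):

```
// flashpoint-cti.cfg  (large CTI/RTS missions: coarser updates, less CPU)
MinErrorToSend=0.02

// flashpoint-cqb.cfg  (small close-quarters missions on a fast server)
MinErrorToSend=0.005
```

At a LAN you would simply point the server at whichever cfg suits the mission about to be hosted, and record #MONITOR results for each to compare.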
Considering your 4GHz server runs under 32fps (you say it hits 100% load often, so it can't be holding 32fps in MFCTI), and almost all your players are on high-speed American cable with low ping and high bandwidth, I would conclude, based on the facts above, that Linux can be faster when properly configured. Just like a Windows server, it takes ages to set up, tweak and record results (#MONITOR 1, 60, 120, 300 and 1800 used, over several games) to balance the settings and CPU to that 'magic' 97%. So, as is obvious to anyone, there is usually more than one solution to a problem: some are efficient and scale well over a range of CPU grades; others throw raw CPU power, bandwidth, mass advertising (it helps convince the masses, I'll give it that much), resources, etc., at it. Both are perfectly acceptable responses to a problem (neither party is ignoring it, which would be the worst thing to do).
-
Sorry to double post (it didn't raise my post count), but I thought this was worth an extra entry in the forum.

Where was it said this is a "formula error"? Last I heard it was timeslice-length related, and they wouldn't recode it, so 32fps means 32fps and 50fps means 50fps. It appears to me the server 'wants' 20ms timeslices and gets 2 x 15.625ms ones instead, thus 20ms / (2 x 15.625ms) = 64%, and 64% of the 50fps peak is 32fps. Put another way: 1000ms / 20ms timeslices = 50fps peak, versus 1000ms / (2 x 15.625ms) timeslices = 32fps peak. The maximum server fps is limited by the number of timeslices in 1 second, but OFPR_Server.exe seems to want 20ms, and this gets rounded to 2 x 15.625ms on most current Windows operating systems.

I guess that is why 107 of 276 OFPR servers are Linux based; the other admins just shy away from it, or don't use it because they are unaware it can raise performance and save dollars. Since work wants me to learn SuSe 9.1 Pro, I may as well migrate the GarageLAN ACT server to it and reap the benefits.
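The timeslice arithmetic is simple enough to sanity-check in a couple of lines (a sketch; the 15.625ms figure is the standard Windows 64-ticks-per-second timer granularity assumed above):

```python
def peak_fps(timeslice_ms: float) -> float:
    """Peak server fps = how many timeslices fit into one second."""
    return 1000.0 / timeslice_ms

print(peak_fps(20.0))         # wanted: 20ms slices -> 50fps peak
print(peak_fps(2 * 15.625))   # got: rounded to 31.25ms -> 32fps peak
print(peak_fps(2 * 15.625) / peak_fps(20.0))  # the 64% ratio
```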
-
I enjoy our different views and solutions to the same problem. I personally use Geil DDR550, running at 2.8V at 500MHz with the faster timings (CAS 2.5) on a 1000MHz FSB, on an Intel i865PE (no point having PAT at 1000 FSB, it doesn't work). Mainly due to the price/performance ratio: it's AU$150 cheaper than other DDR550, has similar timings and performs 99% as well, yet it handles higher voltages and is guaranteed at 3.1V.

We both have a similar mind when it comes to hardware, I think, but I find an overclocked Athlon 64 FX still outperforms a highly overclocked Pentium 4, with fewer cooling issues (2U rack for online servers / towers for LAN servers).

OFPR might be a few years old, but Half-Life (based on the Quake engine) is even older, and look what it has become. Would it be a bad thing for BIS to strive for a community that size, with that much effort put into server performance? Why go to a LAN to play Counter-Strike when you can do that fine on dial-up? Might as well promote Flashpoint as "The LAN battle simulator to play". When was the last time you had a dial-up player on an MFCTI server? We get them constantly here. You describe a different problem, and equally different solutions. Also, the coding is not very hard; doing the above is far from impossible (it would take a dedicated programmer under 1 week to do and test).

===========
RN Malboeuf: "and you tottaly miss the fact our server is the best place to play large scale CTI maps on the net with Kaos right behind us"

Yep, there is seriously nothing like playing at:
Roughnecks Whore House (www.roughnecks.org) - Fastest CTI server on the net!
-With a 279ms+ ping to 64.27.26.116:2302 (using far smaller than 1024 byte packets)
-From Australia
-In a vastly different time zone
-When the server is normally empty (as it is now, 8:07PM GMT+10)

There are more servers outside America than within it. I come from a Flashpoint minority myself, just as America is a Flashpoint minority compared to Europe.
Europe being where most of the battles happen. Thankfully, due to ties with British players (semi-realistic SAS squads, with Aussies and Brits), we get to hear some of the news.

Server tally:
==========
56 in North America
3 in South America
12 in Asia (Japan, China, South Korea)
228 in Europe
5 in Australia (supporting New Zealand and the Oceania region)

RN Malboeuf: "this statement is not due to arogence it's due to fact that you are trying to discredit a proper server set up with utter nonsense"

I never discredited the RoughNeck server; I just came to share a finding that you are still in denial about.

===========
I still agree a fast server eliminates many problems. Examples below:
----------------
P4 Gallatin (2MB L3 cache, 512KB L2 cache, 800 FSB, HyperThreaded with OFPR_Server.exe affinity on CPU0). I still think it an ideal solution, especially when OFPR_Server.exe runs on around 7 threads (there is a performance reduction over multiple CPUs due to cache coherency and load limits), and of course it would be overclocked via the FSB with higher-performance memory. When the P4 Prescott gets 2MB L2 cache and a smaller die, it will overclock further and likely outperform the above. On the AMD front are the Athlon 64/FX, which are around 25% - 60% faster at equal clock speeds at certain tasks, so when they go 90nm and reach 2.8GHz (possibly with a cache or dual DDR2 memory controller upgrade) they will start to dominate the (overclocked) OFPR server field.

However, combining the settings I suggested above (this thread is titled "Server Optimization Request", after all) with a fast server would be the more ideal solution.
Thankfully, in a LAN environment with vast resources, (some of) the above is possible today. Hosting such a server on the Internet, however, is of less and less interest to me: we have a few servers here already, and the continued cost of a 10mbps dedicated line is not financially justifiable (in Australia), so local LANs with insane (one-off) costs are a better option for many, especially since 85% of my mates are on dial-up and a good chunk of them can't even get a v90/v92/k56flex connection (33.6kbps or less).

Now, I highly doubt that BIS will ever go back and improve the server with the above suggestions, especially considering how sidetracked this thread has become. Hopefully they are working on the *OFPR2 server* now: getting multiple simulation cycles into 1 thread timeslice, and spanning the (OFPR2 server) load over multiple CPUs, so that when dual core parts arrive before OFP2 is released they will have prepared for it, and we won't have this "single CPU = faster" situation we have now.

Some brief history:
===================
Australia, mate, Australia. I don't need or want to hear about the Internet bandwidth in America. It does not help me at all; it would be considered an unrealistic scenario by any scientist or engineer planning in Australia. I've been playing OFP since the demo, mate, and analysing it just as long as you have, but on LANs, for LANs, with gigabit backbones, since in spring 2001 Australia did not have jack in the way of Internet connections faster than 128kbps, so the only real option was LANs.

I started in multiplayer gaming LANs back in the days of one really old flight sim for 286/386/486 systems, good old Retaliator F-22/F-29, which did not support LANs: it was pure manual serial/modem handshake, 2 players max. I moved on to Doom using IPX networking in DOS later on, before games had dedicated servers or used TCP/IP, back when BBSs were a better way to share information than the Internet and FidoNet was as good as it got.
Obviously a slow progression to TCP/IP, 4+ players and dedicated gaming servers occurred. Then in spring 2001 we all got Flashpoint, and eventually patched to 1.46; the rest is recent history. As I have an interest in http://sourceforge.net/ I found CTI by 'accident' one day, at http://mfcti.sourceforge.net - from there the real LANs began. Counter-Strike was left for dead; even Battlefield 1942 could not compete.

===================
Try a 10 hour Everon CTI (on 1.1b), then a 20 hour Nogova CTI only 8 hours later. 4-5 hours of anything isn't really a test [:P] Heck, I run MemTest86 on workstations for 24 hours before installing an OS to ensure there are no 'pre burn-in memory errors', one of which would render the OS useless (not straight away, but over time that 1 corrupt bit would be noticed and cause problems).

Bear in mind 1.1b was not that server friendly, and the server was only an Athlon XP PR1600@PR2000 with approx 320 FSB; there was minimal desync, and side finances were in scientific notation at the end of the game [b)] (no joke, they were). This was back before the building limits were put in, when barracks could be placed inside other barracks and other buildings (you could hide a smaller building inside a church, for example) and you could build anywhere you desired. We played realtime (no time acceleration) and actually battled well into the game night (in Australia we are hardasses). The best way to win a LAN is to be relentless: many opponents fold after 4 - 7 hours, so as long as you can fight harder and longer, even if you're losing the battle, just hold out for hours; if the other side all falls asleep then you win, as all their offensive and most of their defensive capability stops. Usually a ref would declare the winner, but sometimes you just want to eliminate them all B)

===================
Q) What if the player's link does not recover?
A) You are stuck with recurring desync until the end of the game, or until the player is dropped.
I did say "what if the player's link *does not* recover", not what happens when it recovers, which is what you correctly noted above: when a player's link recovers fully and in a timely manner, the server does recover, usually almost straight away. However, as I stated: "Q) What if the player's link does not recover?" Well, since it never recovered, and may be choking on 6kbps or nothing at all during line retrains (our phone lines in Australia are not the best, mate), then this happens: "A) You are stuck with recurring desync until the end of the game, or until the player is dropped." It can be recreated in lab conditions faster and probably cheaper than it can be recreated online; either method yields factual, consistent and reliable results.

===================
RN Malboeuf: "Power is every thing in CTI, if you cant see that with your 2.4 then you have some personal issues you need to sort out"

When did I say the server CPU was 2.4GHz? I recommended it as a *minimum* server CPU for small CTIs. How the CPU relates to personal issues I will never know. ***Bear in mind that MFCTI 0.98 was around well before 2.4GHz CPUs.***

===================
RN Malboeuf: "-1500-2000 kbps minimum downstream / -1500-2000kbps minimum upstream"

As I stated above, the *player* minimum recommended is 384/96. I would not try to host an MFCTI server for multiple players on such a link; it would be at an ISP on 10/100 Ethernet, if not on a LAN with a gigabit backbone.
===================
RN Malboeuf: "...if you actually think a Dual CPU helps the OFP servers you need to go back and re reseach the server loads and what they can handle - We've spent years on this where you have not"

I have probably spent longer doing this stuff than most; I just don't post it all over the forums, since making money from computer knowledge in my spare time beats giving away free advice [:P] To quote myself from above: "Hopefully *Flashpoint 2* will have CTI built in, from the ground up, and by the time it comes out *Dual Core* CPUs will be available for DIY OFP2 servers, so it best utilise them and perform well."

I was referring to OFP2, not OFPR, which everyone is well aware does not improve in performance over 2 or more CPUs, over HyperThreaded virtual CPUs, or over VMware virtual CPUs (cache coherency, plus the fact it limits itself to 50% over 2 CPUs, or 25% over 4 CPUs). Dual core CPUs have only just gone into development (Intel and AMD wise, excluding the UltraSPARC grand plans that never were), and OFP2 had better bloody well use them, or http://www.es.com may be providing the next "grand scale war simulation for civilians".

===================
Don't get me wrong: with the CTIs of today a 3GHz or faster CPU would be better than 2.4GHz. As for the 1.5 - 2.0mbps for the server, it's not an issue with Intel CSA 1gbps integrated into the Northbridge (thus using no PCI / Southbridge bandwidth, which helps with other things). The 384/96kbps per-player minimum sustained would be accurate; we both agree on that (LAN or Internet wise). However, dual ISDN (128kbps) does not meet this requirement in the downstream direction, so when push comes to shove and players don't have the bandwidth to download everything in realtime, they desync.
===================
RN Malboeuf: "and to clue you into gaming client speeds 95% are on fast lines cable of handing the speed i posted 10-20 times over"

Well, to clue you in: *in Australia*, as I already mentioned, only 10% of the 20 million of us actually have 'access' to broadband. (That's access to it, usually at work; the home installation figure is far, far lower, and most people are still on dial-up.) Approximately 1 million of us have ADSL at home (and the statistics count partners twice, so the real figure will be lower still). Funny thing is, WOLF host the VBS1 video file for the world to see on the OGN server, WOLF being purely Australian / New Zealand based.

And as we cover 2 countries plus the surrounding Oceanic area, our player base is not so concentrated (especially considering not many people in our corner of the world play OFPR), unlike America, where 100 players might live within 500km of each other and the telco infrastructure is everywhere you look. We also don't like our telco very much: too much advertising, too little R&D and improvement planning, and jack-all infrastructure out in the bush. So unless you live in a capital city (and in the ACT, the nation's capital, the population is only 300,000 or so) you're pretty much stuck on 33.6kbps (if you're lucky, more like 24kbps).

===================
The above is just the tip of the iceberg of the stuff I deal with every day. They call us the lucky country, but when it comes to IT we got the raw end: fast PCs (we are as innovative in IT as we are in combat; just do a lookup on "Australia" AND "Soldier" for an idea), but bugger-all bandwidth among us, mostly in frame relays and racks of ISDN (which is too expensive for most people and does not aggregate bandwidth in a manner that allows for gaming in typical 'home' setups).

===================
So I await the day of OFPR2 CTI, and the day ADSL2+ is released in Australia so we can all get 6mbps, but our lame telco holds us back.
(We might be a Westernised country, but our infrastructure has not changed much; many old wooden phone poles still span the width of the country.) The fastest link you can get here for home is 1536/256 ADSL (some DSLAMs do 2048/384 now, though), and that is assuming you are one of the 5-8% of people with ADSL at home. Some have to put up with 256/64 ADSL, and the rest can't get ADSL at all (despite manipulated statistics that say 95% of us can get ADSL, even though there are only enough ports for 10% of us).
-
Yeah. The way I see it, for CTI currently:
=============================
-384kbps minimum downstream
-96kbps minimum upstream* (certain misconfigured server settings would require the upload figure to be much higher, like 192kbps+)
-85ms maximum ping or less
-Server: 2.4GHz Pentium 4 (Northwood, 512KB L2, 800FSB, dual DDR) minimum (preferably Linux, as it may allow for timeslice-length configuration, to get a higher peak and sustainable server fps)

As for the 50fps limit: because of the way it was coded, it is limited by thread timeslice length (so 20ms timeslices peak at 1000ms / 20ms = 50fps). The people over at www.udpsoft.com found a way to reconfigure Half-Life to work with 1 or 2ms tics, to get 1000 or 500fps sustainable peaks; however, Half-Life netcode is far different from Flashpoint's.

Hopefully Flashpoint 2 will have CTI built in, from the ground up, and by the time it comes out dual core CPUs will be available for DIY OFP2 servers, so it had best utilise them and perform well.

CTI itself could use a few optimisations as well, although I suspect there are limits. eg:
-Corpses in water should be cleaned up within 60sec
-Pilot corpses should be cleaned up within 60sec
-Stray ammo should be cleaned up within 60sec
-Disabled vehicles should be cleaned up within 3 minutes
-A new repair truck with new scripting should be made that just 'replaces' the vehicle; this would fix a few underlying issues

Does Mike Melvin still work on MFCTI, or is it all in CleanRock's hands now?
-
Here in Australia it's a totally different story, mate.
A) Most of the players lack the bandwidth to play MFCTI
B) Our servers are far faster than you think
C) Server-side bandwidth is ample

The finding has little to do with the server side; it is client related (lack of bandwidth / poor ping). Server testing was performed on a system with a 1000MHz FSB and dual Geil DDR550 (running at dual 500 with faster timings, to better perform on the 1000MHz FSB), as memory bandwidth starts to become very important in MFCTI, as I am sure you are aware. Assuming everyone else's server is slow is rather arrogant, especially considering your own server peaks at 32fps, while other servers can actually *sustain over 32fps in MFCTI* once the 'client' end is sorted, alleviating the need for a 4GHz CPU. I am sure your solution causes your server to reach 100% CPU load rather frequently, if not hit 100% and stay there :P Far from ideal.

There are far better solutions than throwing money into overclocking a server, and there is more than one solution to this problem. Requiring that all players have low pings, fast download/upload speeds, are only a few hops away and have firmware that fixes line retraining / resyncing issues is a far cheaper alternative, and 'client filtering' can be configured at the server end to stop the problem from even occurring. Although CPU power is important for hosting MFCTI, eliminating the actual 'cause' of desync is a far better solution than allowing it to occur in the first place and having the server bear the load.

Q) What if the player's link does not recover?
A) You are stuck with recurring desync until the end of the game, or until the player is dropped.
The above settings would help hundreds of admins, whereas boasting about your (peak 32fps, sustained fps unknown) server is not actually helping many. Sure, it attracts some USA players to your server, but most players in Europe, Asia, Australia and the surrounding areas are not really that interested in playing on an overseas server. I am doubtful, as you are, that any of the above settings will be added to the server, so LANs and wireless WANs are really the best options in Australia until everyone can get real broadband. No amount of CPU power will help someone on dual ISDN (128/128kbps) play MFCTI. The infrastructure we have to deal with down here is not great by any means, yet we still accomplish great things regardless (it forces us to be highly efficient). I am sure that with a few dial-up players, the odd ISDN user and slow 256/64 ADSL, your server would lag / desync.

================================================
Worlds Fastest Lag Free CTI server (on the Internet, perhaps :P)
Roughneck Whore House - rn1.roughnecks.org
P4 4.0 Ghz HT 961 FSB - 1 gig dual DDR PC4500 - Scuzzy HDs
================================================

So does the RAM run at dual 562.5MHz async to the 961 FSB, or at dual 480.5MHz sync to it? I am also unaware of a 3329/800 Pentium 4 that would overclock to 4000/961, so I assume your server CPU runs at 4084.25MHz. A P4 Gallatin (Northwood + 2MB L3 cache, 800 FSB) or an Athlon 64 (939 with dual DDR) at around 2600MHz would likely outperform your server before they are even overclocked.
-
What is the smallest timeslice length that OFPR_Server.exe can use? <1ms / 5ms / 10ms / 20ms / etc.?

I run Windows XP Pro (SP1 on an Athlon XP 2083MHz = PR2800+, and SP2 on a Pentium 4 3000); both servers exhibit the 32fps limitation. I am thinking of moving to SuSe Linux 9.1 Professional (I am getting it for training/work purposes anyway); would this 'fix' the problem?

Also, does using 31.25ms timeslices affect CPU load negatively (eg: does it rise and yield no extra performance gain)? Would using 18ms timeslices boost the peak server fps to 55.55fps, and 10ms boost it to 100fps? (I am sure it would, assuming stability could be reached.) It would be very nice to get the most out of Flashpoint server hardware, especially since over 32fps (and soon over 50fps) would be possible in MFCTI with the hardware that is becoming affordable to more people now.
-
The server peak fps should be configurable by admins (via a flashpoint.cfg variable on the server), eg: LimitServerFPS=255 or LimitServerFPS=60, etc., instead of having a hard limiter at 32fps or 50fps.

Note: This would help raise server performance, especially on current (Sept 2004) hardware. The default setting would of course remain 32fps or 50fps, so voting No is rather pointless.
-
Yes, I run NetLimiter on my own server, and the CPU usage spikes if just one player becomes a bottleneck; it also is not reported on the 'P' screen until after they start coming back into sync. I am sure not every admin is completely clueless. Using NetLimiter you can also test 'what if' scenarios and see what happens in MFCTI when someone on single-channel (64kbps) ISDN (or less) joins a game, plus other such bandwidth-controlled tests. Anyone can simply draw a direct line from lack of bandwidth to server CPU load (and thus low fps); it really is very simple to do.

However, on a LAN server, capping at 32fps (or 50fps, as they claim) is totally pointless. The same is true for very high upload Internet servers (assuming all players have 512/128 connections that never drop out). Thus my own server runs at 32fps constantly in MFCTI and does not use 100% of the CPU (even though they claim it should run at 'up to' 50fps). I paid for the game, and a very nice LAN server, yet I get zero support, so I am considering just chucking in the towel. Sad to say, but in some ways Counter-Strike (yuck) is superior to Operation Flashpoint and VBS1, in that the author does not limit what the server-side software can do even when your hardware is nowhere near 95%+ load, which is total rubbish.
-
Server peak FPS should be configurable by admins
darkpeace replied to darkpeace's topic in MULTIPLAYER
So in 9 months' time a 4.5GHz CPU with 2-4MB of cache will still choke? I find that very hard to believe. Anyway, I am thinking of ditching OFP now; I've been ignored long enough, and the community collapsed long ago, so there is no point trying to revive it now
-
Server peak FPS should be configurable by admins
darkpeace replied to darkpeace's topic in MULTIPLAYER
Because most servers cap it at 32fps :P That's why. Anyway, the default would still be 50fps (which may end up as 32fps due to the bug on most servers anyway). This way a server admin could set 80fps and actually end up with a limit of 51.2fps (if they are affected by the bug, and most servers currently are).
-
I did a text search on OFPR_Server.exe and also noticed this setting: "ThrottleGuaranteed", alongside MinErrorToSend (it's all plain text in the EXE). Now, I know what MinErrorToSend does (setting MinErrorToSend=0.005 makes units viewed through a sniper scope or binoculars move twice as smoothly; 0.001 is ten times as smooth), but what does ThrottleGuaranteed=1 vs ThrottleGuaranteed=10 (and other settings) do? And when will the server's peak fps be configurable via a flashpoint.cfg variable? UPDATE: Does anyone know? Anyone at all?
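For anyone wanting to repeat the text search without extra tools, here is a minimal strings-style scan in Python (a sketch; the filename in the commented usage and the 4-character minimum are my choices, adjust to taste):

```python
import re

def ascii_strings(data: bytes, min_len: int = 4) -> list:
    """Return runs of printable ASCII at least min_len bytes long,
    much like the Unix `strings` utility."""
    pattern = rb"[\x20-\x7e]{%d,}" % min_len
    return [m.decode("ascii") for m in re.findall(pattern, data)]

# Hypothetical usage against the server binary:
# with open("OFPR_Server.exe", "rb") as f:
#     for s in ascii_strings(f.read()):
#         if "Throttle" in s or "MinError" in s:
#             print(s)
```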
-
Better yet, instead of using an integer fps in the #MONITOR output, use the time it takes the server to complete a full simulation cycle, in ms. This way a far more accurate figure is given and server admins have a more meaningful number (I find some server admins compare server fps to in-game fps, which is a rather pointless comparison to make).
-
Bear in mind that if the server is running at 10fps (i.e. 10 simulation cycles per second), then the time between cycles is 1000ms / 10fps = 100ms. That is 100ms between simulation cycles; it would not matter if everyone was on a LAN, that is still a full 100ms of 'lag' (or rather delay) between server simulation cycles.

--------------------------------------------------
Server delay table: (made in Excel)
fps | delay in ms for a full server simulation cycle
--------------------------------------------------
  1   1000.00
  2    500.00
  3    333.33
  4    250.00
  5    200.00
  6    166.67
  7    142.86
  8    125.00
  9    111.11
 10    100.00  <- server is the cause of a lot of desync
 11     90.91
 12     83.33
 13     76.92
 14     71.43
 15     66.67  <- server becomes the cause of some desync
 16     62.50
 17     58.82
 18     55.56
 19     52.63
 20     50.00  <- with server physics taking 50ms+, players notice
 25     40.00
 30     33.33
 32     31.25  <- a bug causes many servers to peak at 32fps
 35     28.57
 40     25.00
 45     22.22
 50     20.00  <- servers should ideally remain at 50fps*
100     10.00  **
200      5.00  **

* - This would require that the server never reach 100% CPU load
** - Would require letting server admins decide where the server should limit its fps; hardcoding 32fps or 50fps is no longer an ideal system.

Note: The time taken for a complete server simulation cycle is not added to the player's ping figure (as far as I am aware), so the ping time is just that. Realistically, a second row of figures would sit below the pings, adding the server simulation cycle time; then it becomes quite clear where the desync is coming from. If a server is running under 8fps, that will obviously create a lot of desync; no amount of netcode changes will help a server that is taking over 125ms for each simulation cycle. It simply is not possible.
The solution is to optimise the map and upgrade the server so it can cope (20fps+, so simulation cycles only take 50ms). Of course, if a player has a comms problem then the server needs to do additional work to process 'where in time' all the players are, hence the need for desync handling (otherwise players would simply remain out of sync, much like 'Magic Carpet' on a LAN with vastly different speed PCs, which was a common occurrence). Hopefully this table will help some admins understand what is going on. Many of us CTI players have no issue getting a high-end server (currently: a P4 with 2MB L3 cache, or an AMD64 with 1MB L2 cache and dual DDR, as single DDR servers don't cut it for CTI over about 5-8 players), so limiting the fps of the server is hurting the community in a very bad way.
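For anyone who wants the delay table above without Excel, it is nothing more than 1000 divided by fps; this short sketch reproduces it, so admins can extend it to any fps value they care about:

```python
def cycle_ms(fps: float) -> float:
    """Milliseconds per full server simulation cycle at a given fps."""
    return 1000.0 / fps

for fps in (1, 8, 10, 15, 20, 32, 50, 100, 200):
    print(f"{fps:>4} fps -> {cycle_ms(fps):7.2f} ms per cycle")
```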
-
I am not talking about the lobby, as most of the decent servers today could do 50fps IN GAME without hitting 100% load; however they are artificially capped at 32fps for some reason. All they need to do is raise the bar 56.25% to get 50fps again. Better yet, if someone knows the hex offset of the fps limiter in OFPR_Server.exe, it could be 'changed' to a 255fps peak. My point is: a heavy map on a low-end server will run under 32fps anyway. However, a heavy map on a high-end server COULD run over 32fps (up to 50fps, as it should). eg: Athlon 64 FX-53 - the CPU is at 50% load and it gives 32fps. Obviously if it were not limited to 32fps, it would give 50fps with the CPU at 78% load. So why limit it to 32fps? Why not release a patch that lets the server admin decide the maximum fps? This would be esp useful on larger scale maps, like RTS3 and MFCTI, as a high-end server could do 40+fps on those missions if the software let it. UPDATE: I ALSO JUST NOTICED THAT 50 IN DECIMAL IS 32 IN HEXADECIMAL. COULD THIS JUST BE A BUG?
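The hex coincidence in the UPDATE is easy to check. Note this only demonstrates the numeric overlap; it proves nothing about the server binary itself:

```python
# 0x32 hexadecimal is 50 decimal, so a cap intended as "50" could plausibly
# show up as 32 if the value were ever read in the wrong base. Speculation only.
assert int("32", 16) == 50
assert hex(50) == "0x32"
print("0x32 =", int("32", 16))  # -> 0x32 = 50
```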
-
Here is a question: does ANYBODY using a Linux OFPR 1.96 server have the cap at under 50fps? (eg: 32fps, like some bugged Windows servers)
-
Yes, the other day I experienced the same thing hosting a server on a LAN, using Windows XP Pro, after changing the same (and more) settings in the .cfg file. It was 50fps for one game, then 32fps again. It was MUCH smoother at 50fps. Can the server please be edited (recompiled) to limit at 50fps, or better yet at a user-definable fps set in the .cfg file? Surely this is not a hard change to make? 1.96b anyone? Maybe a recompile to support newer CPUs better (not a total rewrite for HyperThreading and SSE2 support, etc). Those 2 basic changes would keep this game alive another 12+ months. Also, how 'hard' is Linux hosting? I am not clueless (hell, I was thinking SuSE 9.1) but I am no Linux guru either. Just looking for quick ways to get max performance from a 2.4GHz Opteron (single-CPU Opteron 150), or a high-end P4.
-
This may lead to a higher quality game. Video and info sites of interest:

Biggerhammer.net - Miscellaneous Firearms Technical and Training Manuals: (mostly American, with some Soviet)
http://www.biggerhammer.net/manuals/

Australian RAAF - Videos:
http://www.defence.gov.au/raaf/interactive/video.htm

Australian ARMY - Videos:
http://www.army.gov.au/video/videos.htm

Australian NAVY - Videos:
http://www.navy.gov.au/gallery/video/default.htm

EDIT: ADF Online Media Room:
http://www.defence.gov.au/media/index.cfm

Hopefully this post might help make the game higher quality in some manner. (Not that it is low quality in any way)
-
If it is text, why not use NTFS compression on the log file? It should reduce the size by quite a margin. Also, maybe RAR it using a batch file between runs.
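On a Windows box that would be NTFS compression plus a RAR batch file; for a Linux-hosted server the same idea can be sketched with a gzip rotation between runs. This is a hedged sketch only, and "server_console.log" is an assumed log name, not the real OFP default:

```shell
#!/bin/sh
# Rotate and compress the previous run's log, then truncate it so the
# next server run starts with an empty file.
LOG="server_console.log"
echo "demo log line" > "$LOG"        # stand-in for a real log from the last run
STAMP=$(date +%Y%m%d_%H%M%S)
gzip -c "$LOG" > "$LOG.$STAMP.gz"    # keep a compressed, timestamped copy
: > "$LOG"                           # truncate the live log
echo "archived to $LOG.$STAMP.gz"
```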
-
You have installed the Via 4in1 drivers then, yeah? (Also known as the VIA Hyperion drivers) http://www.viaarena.com/?PageID=300 The only issue that *should* be encountered with Creative SoundBlaster Live! cards is the old Via PCI latency issue, which the above drivers should fix. Note: Be sure you have a Via chipset based mainboard before installing. eg: the Asus A7V-133 uses the Via KT133A chipset. To find out what chipset your mainboard uses, refer to your mainboard documentation. Personally, when all hardware is configured correctly (with the correct drivers, that is) I've never had a problem with a Creative SoundBlaster Live! card. Heck, kudos to them for making sure everyone actually bothers to install their mobo drivers, rather than relying on the outdated Via drivers that Windows ships with (which have many IDE and PCI related 'bugs'). Regards, Tabris.DarkPeace GarageLAN, ACT
-
High pingers as Commander, Driver, Pilot, Gunner - bug or MP syncing feature?

Eg: If a driver/pilot, gunner or commander has a high ping, it appears to 'roll over' to anyone else trying to get in or out of the vehicle. If you are a high pinger and go driver, gunner, pilot or commander, then anyone with a lower ping than you, or a higher datarate than you, appears to get stuck in the cargo, esp if your connection starts screwing around speed-wise and the server sends your low-speed link more data than it can handle. Is this a bug that can be fixed, or an MP syncing feature?

From my bugs-in-Flashpoint list:
~~~~~~~~~~~~~~~~~~~~
If a player gets 'Losing Connection...' and then drops out due to bad ping and desync, no one in their vehicle can do anything (eg: the high pinger can't eject, and crashes the chopper due to lag), as all the players are meant to be in sync; and due to the 90sec timeout, it takes 90sec for them to drop and the chopper to crash, killing everyone, team-kill style. This normally results in a ban, and with good reason.

Is there a way to override the 90sec timeout, so they still get their 90sec, but if it occurs everyone else can just eject or disembark (damage free, as taking damage from a lag-induced event is pure folly)?

Also, with the above feature change / bug fix, could you increase the timeout timer, or let the server admin set it to as low as 15sec and as high as 10 minutes? (hey, it could be helpful to some, for testing at least) It would be nice to be able to set a 5-minute timeout value, as sometimes ADSL can take a long time (over 90sec) to retrain the connection, or may experience intermittent drop-outs lasting 5+ minutes while the telcos are dicking around. Normally you reconnect with the same IP address anyway, so I don't see it being an issue.
Also, on that note, would it be possible, while changing / fixing the above, to let a different IP address 'reconnect' (or re-establish a timing-out connection) to a player, so long as the playername and player-id match the one the server is expecting to time out due to little or no dataflow? It would be very good to see some extra dial-up players online; you know how they like to complain and all? (Also a 15sec timeout could be useful too, for the same reasons; personally I would use a 5-minute timeout.) It would be nice to have the timeout and related settings configurable by the admin. Surely this would only need a few basic changes to the OFPR_Server source code, and maybe, if any, slight changes to the client? (What's another version anyway; 1.97 beta with new improved netcode, everyone wants it anyway)
-
New Australian server up, by BuggFix and Terr^n:

Athlon XP PR2600+ (Barton core)
MSI KM4M-L mainboard (recent BIOS)
100mbps link to iPrimus Game Network (iPGN)
Windows 2003 Server

Undergoing some testing for Flashpoint.cfg and Server.cfg setting changes. (Anyone with recommendations?)

Notes:
-May need to hack SETVIEWDISTANCE down
-Any other tips for the .CFG files? (besides the usual)
-Any other tips on top of killing the SVCHOST.EXE process?
-Any tips on registry settings to change for timeslices / quanta so it gets 50fps in the lobby while idle or not at 100% load, vs only 32fps or 33fps while idle in the lobby***
-(RN Malboeuf, feedback if you read this please)

*** - The bug only seems to affect Flashpoint; asking for a registry fix to change the timeslice or quanta to a recommended value to get maximum performance.
- Anyone know the offset in hex of the 50fps (or is it 32fps) limiter in the server exe? (Might need to change it to a 255fps maximum manually, I think.)
- Bugg isn't known to kick high pingers, even in MFCTI 1.16 (see http://mfcti.sourceforge.net for the 1mb addon and the original MFCTI 1.16)
-Play nicely children

Tabris.DarkPeace
GarageLAN, ACT Australia
-
Dual Opteron would be good, so long as each (or at least one) of the CPUs is running at 2.4GHz or more; the current lower-end Opterons have dismal performance. And as we've all mentioned 100 times before, the server does not thread over 2 CPUs, so 1 single CPU with the fastest memory interface (Linpack, Sandra memory benchmark, etc) and strong x87/SSE FPU power would perform most excellently indeed. Still, with the price of Opterons (and speed too) it may be a better idea to go for a Socket 939 AMD Athlon 64 at 2.6GHz+ or a 64FX-55 at 2.6GHz, as by the time the funds are up these CPUs will be out on a new 90nm process, which should (if desired) overclock well into the 3.6GHz range (which for an Athlon 64 would kick sweet ass). The Athlon 64FX-55 is expected to have more L2 cache and dual-channel memory, which would perform 10% (or more) faster than the Intel i875P chipset, which I suspect (overclocked) holds the record for MFCTI performance. (Unless someone has a high-end SiS mobo out there and is keeping quiet; see http://www.sis.com/products/index.htm#chipset for details on their cost-effective chipsets, which deliver 10.6GB/sec peak memory performance and are available 'today', while remaining cost effective depending on the application.)

Cheapest, fastest solution today:
~~~~~~~~~~~~~~~~~~~~
(A P4 running an FSB synced to the dual DDR2, thus overclocked; even with the Prescott's long pipeline, running at 3.8GHz with 10.6GB/sec to memory would likely break a few records)

Cost-effective, fast solution in a few months:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
For an Opteron you would want to check if one CPU can read/write memory over multiple HyperTransport links (which would enable up to 22.4GB/sec memory bandwidth, more than enough for MFCTI on a grand scale).

That's just my 2c, but it will be 6 months before most of this tech is available, or drops to 'consumer' (cough) prices.
Bearing in mind that it may take 6 months 'to get the funds up' for such a server, the above would be worth at least looking into at that time (Q3 / Q4 2004, maybe Q1 2005). Intel's roadmap only shows Xeons with a 1066 FSB and the expected ramping up of the Prescott core, as well as the Pentium-M CPU being migrated to the desktop because it's so efficient at lower clock speeds, just like the Athlon XP/64 is. Also, shorter pipelines mean that higher memory bandwidth is better utilised (fewer cache misses mean more available memory bandwidth), which would be a nice change from the 30+ stage Prescott core. (Esp with systems that have over 12.8GB/sec of peak memory performance; that should really ramp up the MFCTI scalability, esp for 'home' servers.) If a single Opteron CPU really can read & write from multiple HyperTransport memory buses at once, then I would consider getting one and just running 1 single CPU in it (so long as this is an available configuration option). The technology to do this is similar to the memory on the Radeon 9800, switched between 4 x 64-bit channels, for well over 16GB/sec; and since we all fork out for video cards, we may as well use the same concepts for system memory, to scale performance far beyond the 6.4GB/sec of a non-overclocked P4 server. The only 2 platforms that come close to doing this are from Sun Microsystems (not x86) and the AMD Opteron; Intel are 'working on' a scalable architecture, but still no news on that. (Post still undergoing edit, may truncate)
-
The way I see it, getting MFCTI as smooth as possible is a balancing act between CPU load (to get as many fps, or 'simulation cycles', as possible) and bandwidth load (so ping and datarate remain good for all players). Keeping the above settings around 1mbps to 6mbps, with only slightly tweaked packet sizes and max packets per second, plus some experimentation with MinErrorToSend, yields excellent server performance: players will (after testing is complete) get 0 desync unless they have an actual connection problem (eg: ADSL retrains), and server fps (simulation cycles) should remain above 20fps (if not above 25 or even 35fps) at all times, as the server CPU is not being hammered. This way the AI are more responsive (as server fps are higher from doing less work), players get a good gaming experience as server fps are good, and their desync should remain 0 unless they have an isolated issue. When a server is under such load that 'random' players get desync even over a LAN connection, you know you are pushing the server CPU too hard; under 15fps (simulation cycles) on the server can break MFCTI scripting, cause the AI to respond very slowly (they seem dimwitted or stop moving), and players will start to desync. Since MFCTI uses 'workers' you never want it under 20fps. I wouldn't mind seeing a server configured to use between 1mbps and 15mbps, to see where the server decides to sit to give the best performance.
-
I thought true 'hex/bin' numbers were better? Eg:

net with 15 mbit line
MaxMsgSend=512; //256;
MinBandwidth=12582912; // (12000000)
MaxBandwidth=15728640; // (15000000)

on a lan you could try these lol
MaxMsgSend=512; //256;
MinBandwidth=31457280; // (76800000) **
MaxBandwidth=41943040; // (100000000) **

** As anything near 100mbps will saturate a LAN link, switch or no switch. I tried the above settings (the 75-100mbps ones) using a 3GHz Pentium 4 (Northwood 2.4) on a 1GHz FSB and it was lag city, and I own a D-Link DGS-1008 Gigabit switch; the server also had a Cat6 link at 1Gbps full duplex using the Intel CSA (northbridge) Gigabit Ethernet, with TCP checksums for both transmit and receive offloaded to the Intel CSA. (Btw: I can help get this gear cheap if anyone wants Gigabit backbones for LANs, etc. ACT, Australia area only!)

I would never recommend anyone on a 100mbps LAN (even with a high-end backbone in place) try those settings; anything over 40mbps is folly. (Note my numbers round into hex/bin perfectly as well.) Even my above editing example, with a minimum of 30mbps and a maximum of 40mbps, is simply overkill: the CPU (a 3GHz P4) would be working overtime, and the clients don't use anywhere near that bandwidth for 0 desync and <5ms pings. Heck, if you go over 40mbps pings start climbing (if it uses it), and at 100mbps everyone's ping (on a standard LAN) would be over 300ms, which for a LAN setup looks plain unprofessional, esp when your backbone is 1Gbps and people want to share files at a LAN *while* gaming. Heck, a max bandwidth of 20mbps is more than enough. I doubt even the *single* fastest CPU on the planet (prob a 5GHz P4, liquid nitrogen cooled) would be able to process 100mbps of gaming packets in MFCTI. Correct me if I am wrong, but if you are limiting the server to 512 packets per simulation cycle, then even at 50 simulation cycles per second that is 25600 packets per second.
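A quick sanity check of the arithmetic above (illustrative only): the 'hex/bin-round' bandwidth values are exact multiples of 1024*1024 bits, and 512 packets per cycle at 50 cycles/sec works out as stated:

```python
# Verify the binary-round bandwidth values and the packet-rate figure above.
MI = 1024 * 1024  # one 'binary mega' (mebi) multiplier, in bits here

assert 12 * MI == 12582912   # MinBandwidth for the "15 mbit line" example
assert 15 * MI == 15728640   # MaxBandwidth for the same example
assert 30 * MI == 31457280   # the LAN MinBandwidth value
assert 40 * MI == 41943040   # the LAN MaxBandwidth value

# 512 packets per simulation cycle at 50 cycles per second:
print(512 * 50, "packets/sec")  # -> 25600 packets/sec
```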
And above, the default MaxPacketSize is used (as the variable was omitted). For anyone not using the full feature list available in DS-ADMIN.RTF, here it is. These commands also go in FLASHPOINT.CFG, not SERVER.CFG; many server admins may try to convince you otherwise, but I have been assured it is FLASHPOINT.CFG for the FOLLOWING COMMANDS ONLY:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
MaxMsgSend=<limit>;
Maximum number of messages that can be sent in one simulation cycle. Increasing this value can decrease lag on high upload bandwidth servers. Default: 128

MaxSizeGuaranteed=<limit>;
Maximum size of a guaranteed packet in bytes (without headers). Small messages are packed into larger frames. Guaranteed messages are used for non-repetitive events like shooting. Default: 512

MaxSizeNonguaranteed=<limit>;
Maximum size of a non-guaranteed packet in bytes (without headers). Non-guaranteed messages are used for repetitive updates like soldier or vehicle position. Increasing this value may improve bandwidth requirement, but it may increase lag. Default: 256

MinBandwidth=<bottom_limit>;
Bandwidth the server is guaranteed to have (in bps). This value helps the server estimate available bandwidth. Increasing it to too-optimistic values can increase lag and CPU load, as too many messages will be sent but discarded. Default: 131072

MaxBandwidth=<top_limit>;
Bandwidth the server is guaranteed to never have. This value helps the server estimate available bandwidth.

MinErrorToSend=<limit>;
Minimal error to send updates across the network. Default value is 0.01. Using a smaller value can make units observed through binoculars or a sniper rifle move more smoothly.

MaxCustomFileSize=<size_in_bytes>;
Users with a custom face or custom sound larger than this size are kicked when trying to connect.
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
MinErrorToSend was mentioned in the change history of the older (pre-1.91) versions and appears to have been forgotten about long ago; used correctly, it can also have the biggest effect on server CPU performance.
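To pull the settings documented above together, a minimal Flashpoint.cfg fragment might look like the following. The values are illustrative starting points only (matching the 1-6mbps range I suggested for MFCTI), not tested recommendations:

```
// Flashpoint.cfg - illustrative values, tune per server and link
MaxMsgSend=256;
MaxSizeGuaranteed=512;       // default
MaxSizeNonguaranteed=256;    // default
MinBandwidth=1048576;        // ~1 mbit guaranteed (binary-round)
MaxBandwidth=6291456;        // ~6 mbit ceiling (binary-round)
MinErrorToSend=0.01;         // default; experiment carefully
```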