RN Malboeuf

Server now at 50 FPS compared to 32


As you all know, a lot of servers run at a maximum of 32 FPS; the Windows OS servers usually have this problem.

Take a look at this:

[screenshot: fps.jpg]

By changing our settings and recording the results over the last year, I finally found some interesting numbers:

MaxMsgSend=512;
MaxSizeGuaranteed=1024;
MaxSizeNonguaranteed=512;
MinBandwidth=10000000;
MaxBandwidth=13000000;

compared to our old setup:

MaxMsgSend=768;
MinBandwidth=10000000;
MaxBandwidth=13000000;

What is odd is the MaxSize commands, which we usually left out; as soon as I doubled their values we got 50 FPS on our Win2K Advanced Server machine.

Some really weird changes took place as well: the AI player skipping (when the AI seems to lag) totally disappeared, but the output per player went from 30-80 kb/s to 170 kb/s. I watched the output on the server and compared it to the input on my machine and it was 170 kb/s, so in a major 8v8 CTI game we should in theory break 4000-5000 kb/s, but OFP has always choked itself.

We'll be running a few 9v9 CTIs on our server all day today to test, and will post the outputs from the server.


All day today we have been able to play 3-4 hours of 9v9 CTI with 0% lag or player skipping.

So these new settings should help some of the other servers.

Just adjust the bandwidth values to suit your server.

As for the 50 FPS that showed up on our server today, I'm still at a loss to explain it.

The results, though, are too sexy, and we have never seen our server work better.


Yes, the other day I experienced the same thing hosting a server on a LAN using Windows XP Pro, after changing the same (and more) settings in the .cfg file.

It was 50 fps for one game, then 32 fps again.

It was MUCH smoother at 50 fps.

Can the server please be edited (recompiled) to limit at 50 fps (or, better yet, at a user-definable fps set in the .cfg file)?

Surely this is not a hard change to make?

1.96b, anyone?

Maybe a recompile to support newer CPUs better (not a total rewrite, just HyperThreading and SSE2 support, etc.).

Those two basic changes will keep this game alive another 12+ months.

Also, how 'hard' is Linux hosting? I am not clueless (hell, I was thinking SuSE 9.1), but I am no Linux guru either.

Just looking for quick ways to get max performance from a 2.4 GHz Opteron (single-CPU Opteron 150) or a high-end P4.


Here is a question:

Does ANYBODY using a Linux OFPR 1.96 server have the cap at under 50 fps? (e.g. 32 fps, like some bugged Windows servers)


I may be wrong, but I don't think there is much point in reaching 50 FPS in the lobby or with 3 or 4 people playing. My Windows XP laptop gives 50 FPS when used as a server - not the dedicated server, but with the -server parameter. Our FX-53 game server normally never goes over 32. My laptop is crap; the FX is not.

For old discussions on this parameter, see:

http://www.flashpoint1985.com/cgi-bin....8;st=15

"Much smooter" and no placebo effect I guess rock.gif


I am not talking about the lobby; most decent servers today could do 50 fps IN GAME without hitting 100% load, yet they are artificially capped at 32 fps for some reason.

All they need to do is raise the cap by 56.25% (50/32 = 1.5625) to get 50 fps again.

Better yet, if someone knows the hex offset of the FPS limiter in OFPR_Server.exe, it could be 'changed' to a 255 fps peak.

My point is:

A heavy map on a low-end server will run under 32 fps anyway.

However, a heavy map on a high-end server COULD run over 32 fps (up to 50 fps, as it should).

e.g. an Athlon 64 FX-53 at 50% CPU load gives 32 fps.

Obviously, if it was not limited to 32 fps, it would give 50 fps with the CPU at around 78% load.

So why limit it to 32 fps? Why not release a patch that lets the server admin decide the maximum fps?

This would be especially useful on larger-scale maps like RTS3 and MFCTI, as a high-end server could do 40+ fps on those missions if the software let it.

UPDATE:

I ALSO JUST NOTICED THAT 50 IN DECIMAL IS 32 IN HEXADECIMAL (0x32 = 50). COULD THIS JUST BE A BUG?


Yes, CS servers run at 600-800 FPS; I was really thinking about asking someone to hack the OFP server exe too. This would give CQB maps good performance.

But is there a difference between 32 and 50 fps? What I notice is that OFP servers can go down to 8-10 FPS before the lag starts coming, but when the server hits 6 fps it will create desync. This has to be due to the OFP netcode. The DK server ran 38 players with no lag at 7-9 FPS on BF1985. We tried RN's settings and tried playing with 44 on BF; FPS went down to 5-7 with an average desync of 200 on each guy. Sometimes the server ran it perfectly. I believe OFP servers should use different configs for different maps.

Two years ago we pulled off a 20v20 game on the Swec server, which had a 2.4 GHz CPU running at 2.8 GHz with 512 MB RAM. The server fps was around 10 and it handled the game really well. We also pulled off a 50-player game on a CTF on Everon with nearly no desync. This proves that OFP can support big battles with no lag; it's when you start adding scripts to the maps that you get the lag.

Too bad there's no one who wants to play 20v20 Battlefield-style games. I guess a limit to reach would be a 30-slot CTI version; too bad all OFP islands are too small to support that many players in CTI. Maybe in OFP we can get enough players and maps to run big battles. At the moment there's no point in having a powerful server other than to brag.


Bear in mind that if the server is running at 10 fps (i.e. 10 simulation cycles per second), then the time between cycles is:

1000 ms / 10 fps = 100 ms

So that's 100 ms between simulation cycles. Even if everyone was on a LAN, that is still a full 100 ms of 'lag' (or rather delay) between server simulation cycles.

--------------------------------------------------
Server delay table (made in Excel)

fps    delay in ms for a full server simulation cycle
--------------------------------------------------
  1    1000.00
  2     500.00
  3     333.33
  4     250.00
  5     200.00
  6     166.67
  7     142.86
  8     125.00
  9     111.11
 10     100.00  <- the server is the cause of a lot of desync
 11      90.91
 12      83.33
 13      76.92
 14      71.43
 15      66.67  <- the server becomes the cause of some desync
 16      62.50
 17      58.82
 18      55.56
 19      52.63
 20      50.00  <- with server physics taking 50 ms+, players notice
 25      40.00
 30      33.33
 32      31.25  <- a bug causes many servers to peak at 32 fps
 35      28.57
 40      25.00
 45      22.22
 50      20.00  <- servers should ideally remain at 50 fps *
100      10.00  **
200       5.00  **
--------------------------------------------------

* - This would require that the server never reaches 100% CPU load.

** - Would require that server admins be allowed to decide where the server limits its fps; using 32 fps or 50 fps is no longer an ideal system.

Note: The time taken for a complete server simulation cycle is not added to the player's ping figure (as far as I am aware); the ping time is just that. Realistically, a second row of figures should sit below the pings, adding the server simulation cycle time.

Thus it becomes quite clear where the desync is coming from.

If a server is running under 8 fps then that will create a lot of desync (obviously). However, no amount of netcode changes will help a server that is taking over 125 ms per simulation cycle; it simply is not possible.

The solution would be to optimize the map and upgrade the server so it can cope (20 fps+, so simulation cycles only take 50 ms).

Of course, if a player has a comms problem then the server needs to do additional work to process 'where in time' all the players are, hence the need for desync handling (otherwise players would remain out of sync, much like 'Magic Carpet' on a LAN with vastly different-speed PCs, which was a common occurrence).

Hopefully this table will help some admins understand what is going on.
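For anyone who wants to regenerate these figures for other fps values, here is a minimal sketch in C (just an illustration of the arithmetic, not anything taken from the OFP server):

/* Prints the same fps-to-cycle-delay figures as the table above: delay = 1000 / fps. */
#include <stdio.h>

int main(void)
{
    const int fps_values[] = {1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15,
                              16, 17, 18, 19, 20, 25, 30, 32, 35, 40, 45, 50, 100, 200};
    const int count = sizeof(fps_values) / sizeof(fps_values[0]);

    printf("fps   delay per simulation cycle (ms)\n");
    for (int i = 0; i < count; i++)
        printf("%3d   %7.2f\n", fps_values[i], 1000.0 / fps_values[i]);

    return 0;
}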

Many of us CTI players have no issue getting a high-end server (currently a P4 with 2 MB L3 cache, or an AMD64 with 1 MB L2 cache and dual-channel DDR, as single-channel DDR servers don't cut it for CTI over about 5-8 players), so limiting the fps of the server is hurting the community in a very bad way.


Better yet, instead of using an integer fps in the #MONITOR output, use the time it takes the server to complete a full simulation cycle in ms.

This way a far more accurate and meaningful figure is given to server admins (I find some admins compare server fps to in-game fps, which is a rather pointless comparison to make).
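If a future patch did report cycle time, it could be measured directly around the simulation loop. A standalone sketch (my own illustration, not OFP code) of what such a per-cycle measurement looks like on Win32:

/* Measures the real length of each "cycle" in ms, which is the figure suggested
   above for #MONITOR instead of a rounded integer fps.
   Sleep(31) here is just a stand-in for one cycle of simulation work. */
#include <windows.h>
#include <stdio.h>

int main(void)
{
    LARGE_INTEGER freq, prev, now;
    QueryPerformanceFrequency(&freq);
    QueryPerformanceCounter(&prev);

    for (int cycle = 0; cycle < 10; cycle++) {
        Sleep(31);
        QueryPerformanceCounter(&now);
        double ms = (double)(now.QuadPart - prev.QuadPart) * 1000.0 / (double)freq.QuadPart;
        printf("cycle %2d: %6.2f ms (%.1f fps equivalent)\n", cycle, ms, 1000.0 / ms);
        prev = now;
    }
    return 0;
}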


I run NetStat Live to watch the output of the server; it shows a more accurate picture of what is going on, and it keeps track of incoming and outgoing monthly totals.

I also watch the CPU usage and RAM - the FPS figure does help by giving you a good idea of the limit of your server.

It's been three years, and very few really understand what actually goes through the mind of an OFP server exe.


Yes, I run NetLimiter on my own server. The CPU usage spikes if just one player becomes a bottleneck, and it is not reported on the 'P' screen until after they start coming back into sync.

I am sure not every admin is completely clueless.

Using NetLimiter you can also test 'what if' scenarios and see what happens when someone on single-channel (64 kbps) ISDN (or less) joins an MFCTI game, and run other such bandwidth-controlled tests.

Anyone can draw a direct line from lack of bandwidth to server CPU load (and thus low fps); it really is very simple to do.

However, on a LAN server, capping at 32 fps (or 50 fps, as they claim) is totally pointless.

The same is true for internet servers with very high upload bandwidth (assuming all players have a 512/128 connection that never drops out).

Thus my own server runs at 32 fps constantly in MFCTI and does not use 100% of the CPU (even though they claim it should run at 'up to' 50 fps).

I paid for the game and a very nice LAN server, yet I get zero support, so I am considering just throwing in the towel.

Sad to say, but in some ways Counter-Strike (yuck) is superior to Operation Flashpoint and VBS1 (in that its author does not limit what the server-side software can do even if your hardware is not at 95%+ load, which is total rubbish).


The server code uses OS thread scheduling services to cap the fps. The fps you can reach depends on the granularity of these services. If the granularity is 20 ms, you should be able to run 50 fps.

You can find some more Win32 information about this at:

http://www.sysinternals.com/ntw2k/info/nt5.shtml

It seems the quantum on W2K Server defaults to 36 ms, while it defaults to below 20 ms on W2K Pro. When you set the quantum to Variable/Short, you may be able to get a higher server fps.

Edit: Also interesting reading is http://www.wideman-one.com/gw/tech/dataacq/skedgran/skedgran.htm - this shows how timeBeginPeriod (which is a system-wide call) can be used to improve the precision of the Sleep API.
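To see the effect for yourself, a small standalone Win32 test (just a sketch, not anything from the OFP server; build with winmm.lib) can compare how long Sleep(1) really takes before and after timeBeginPeriod(1):

/* Shows how timeBeginPeriod() changes the effective granularity of Sleep(). */
#include <windows.h>
#include <stdio.h>

static double average_sleep_ms(int iterations)
{
    LARGE_INTEGER freq, start, end;
    QueryPerformanceFrequency(&freq);
    QueryPerformanceCounter(&start);
    for (int i = 0; i < iterations; i++)
        Sleep(1);                       /* ask for 1 ms; the real wait depends on the timer period */
    QueryPerformanceCounter(&end);
    return (double)(end.QuadPart - start.QuadPart) * 1000.0 / (double)freq.QuadPart / iterations;
}

int main(void)
{
    printf("default timer period: Sleep(1) averages %.2f ms\n", average_sleep_ms(100));

    timeBeginPeriod(1);                 /* request 1 ms system timer resolution (system-wide) */
    printf("timeBeginPeriod(1):   Sleep(1) averages %.2f ms\n", average_sleep_ms(100));
    timeEndPeriod(1);                   /* always pair with timeEndPeriod */

    return 0;
}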


What is the smallest timeslice length that OFPR_Server.exe can use?

< 1 ms / 5 ms / 10 ms / 20 ms / etc.?

I run Windows XP Pro (SP1 on an Athlon XP at 2083 MHz = PR 2800+, and SP2 on a Pentium 4 3000); both servers exhibit the 32 fps limitation.

I am thinking of moving to SuSE Linux 9.1 Professional (I am getting it for training/work purposes anyway); would this 'fix' the problem?

Also, does using 31.25 ms timeslices affect CPU load negatively? (e.g. does it rise and yield no extra performance gain?)

Would using 18 ms timeslices boost the peak server fps to 55.55, and 10 ms boost it to 100 fps?

(I am sure that it would, assuming stability could be reached.)

It would be very nice to get the most out of the Flashpoint server hardware, especially since over 32 fps (and soon over 50 fps) would be possible in MFCTI with the hardware that is becoming affordable to more people now.


Quote: The server code uses OS thread scheduling services to cap the fps. The fps you can reach depends on the granularity of these services. If the granularity is 20 ms, you should be able to run 50 fps.

You can find some more Win32 information about this at:

http://www.sysinternals.com/ntw2k/info/nt5.shtml

It seems the quantum on W2K Server defaults to 36 ms, while it defaults to below 20 ms on W2K Pro. When you set the quantum to Variable/Short, you may be able to get a higher server fps.

I finally made some time to test this. This is what we are looking for, but the problem is that the tool is outdated and works for NT4 systems.

Same problem with Win2K. I'm still wondering about the one time we had 50 fps; not running it as a service may shed some light on it.


You can try my second suggestion as well:

Quote: "... this shows how timeBeginPeriod (which is a system-wide call) can be used to improve the precision of the Sleep API."


As far as I can tell, that program just shows the timings. I will start playing with Win32PrioritySeparation.
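For reference, Win32PrioritySeparation lives in the registry under HKLM\SYSTEM\CurrentControlSet\Control\PriorityControl. A small standalone sketch (not an OFP tool; link with advapi32.lib) that reads the current value:

/* Reads the Win32PrioritySeparation DWORD, which controls quantum length and foreground boost. */
#include <windows.h>
#include <stdio.h>

int main(void)
{
    HKEY key;
    DWORD value = 0, type = 0, size = sizeof(value);

    if (RegOpenKeyExA(HKEY_LOCAL_MACHINE,
                      "SYSTEM\\CurrentControlSet\\Control\\PriorityControl",
                      0, KEY_READ, &key) != ERROR_SUCCESS) {
        fprintf(stderr, "could not open the PriorityControl key\n");
        return 1;
    }
    if (RegQueryValueExA(key, "Win32PrioritySeparation", NULL, &type,
                         (LPBYTE)&value, &size) == ERROR_SUCCESS && type == REG_DWORD)
        printf("Win32PrioritySeparation = 0x%lX (%lu decimal)\n",
               (unsigned long)value, (unsigned long)value);

    RegCloseKey(key);
    return 0;
}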


None of the changes I tested got the server past 32 fps.

And that raised another question: do you run it as a normal console or as a FireDaemon service? Either way, the test failed.

I'm tempted to raise the 38 value and see if the world ends.

Wish us luck, lol.


Yep, nothing; FPS remains the same at 32.

I tried the WinNT defaults and even raised it to 50 (32 hex) and still nothing; I went higher and still nothing.

wooooooooooooooooooo

Learned a lot today.

I left it on 50 in hopes the gods do give us more power, lol.


That old speed cheat for Half-Life wouldn't help in this respect, would it?

I would not know, as I do not have it, nor have I ever tried it.

I do recall it modifying such, or similar, 'variables' (cough) in regard to the way the Windows OS handles sleep time.

Just an idea, that's all.

PS: The quantum in WinXP Pro 'appears' to be 15.625 ms, which permits 64 such timeslices to occur each second.

However, it seems to get one 'and a bit', which appears to round to 2 x 15.625 ms (or 31.25 ms, which permits only 32 such 'combined timeslices' to occur each second).

Also, in Windows XP I have seen it do 50 fps (on 1.96), yet only once; normally it peaks at 32 fps.

This was tested on several CPUs, all fresh installs, in case the quantum differed on some platforms (e.g. HyperThreading, dual and quad CPU setups, single CPU, P4s, Athlons, etc., and multiple core types as well).

I am thinking of trying older versions of the server as well (1.91, 1.85 and 1.75, and even 1.46) to see the results, as I swear I've seen the older versions do 50 fps (at least more frequently or consistently anyway, in Windows on the same server hardware).


Hi people, I think you will all go crazy when you try this.

Go to your server and install The All-Seeing Eye:

www.udpsoft.com/eye

Use the wizard to configure it (auto-detect connection settings).

NOW YOU'VE GOT 50 FPS.

48 at the moment as I speak to you.

The All-Seeing Eye must be running to get 50 FPS; when you close it, you go back to 32 FPS.

Isn't it amazing?

Can someone explain it?

I think it's the tip of the century.

----------------------------

Other questions from me:

What is the command line to log the server console?

Can someone give me good settings for a P4 3 GHz + 1 GB DDR2700, with 1500 KB/s upload and 1500 KB/s download?


Huh? We have ASE on our server (it is used for packet sending and network testing using the wizard). If this is true, that may explain the 50 fps I posted above; I may have had it running the one odd time I saw 50 fps.

I will test this when I get home, but if it works like I posted before, I doubt high-end servers will see a difference - maybe the mid-range ones like 2.8s, but I doubt it. And if it does, then what is causing it?

