Mojo

Dedicated Server Latency Issue


Sorry if there is a thread for this, I looked but didn't see one that applied.

We ran a server today with about 54 clients connected. Here's the thing: the server performance was beautiful, but the lag blew chunks.

FPS = never dropped below 48

BW = 5 to 8 Mb/s at the peaks

CPU = 24-30%

Mem = 31% (total, including all MS services, 2 TS Servers, etc.)

The server had 2 gigabit NICs running bridged. When we pulled one NIC out of the bridge and disabled it, our latency issue went completely away.

If you have a dedicated server and you are seeing weird freezing, desync and lag spikes, check whether you have multiple NICs running; drop to a single NIC and see if that doesn't clear things up.



Did you test your packet loss to the server? An improper dual-NIC setup will lead to atrocious packet loss and performance, whatever application is running.


Here is an update on our attempts to get a large-scale battle to run as well as possible. We can now support 80 to 90 players pretty easily, in a PvP type of environment. I think the mode is important: the server behaves differently in COOP than in PvP.

By setting verifySignatures, onUnsignedData and doubleIdDetected in server.cfg, we have significantly decreased the number of errors in the RPT log. This was the first step to stable play. We followed this up with serious arma2.cfg testing.
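A minimal server.cfg sketch of those three settings, for anyone wanting to try the same thing (the kick handlers shown are typical examples, not necessarily the exact ones used on this server):

```
// server.cfg -- signature checking and duplicate-ID handling
verifySignatures = 1;                         // reject clients whose addons fail signature checks
onUnsignedData   = "kick (_this select 0)";   // runs when unsigned data is detected: kick the client
doubleIdDetected = "kick (_this select 0)";   // runs when two clients share a player ID: kick
```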

By significantly increasing MaxMsgSend, MaxSizeGuaranteed and MaxSizeNonguaranteed, we see a large improvement in the smoothness of the game (as you would expect when you go from a straw to a fire hose...). Reports say there is very little to no rubberbanding throughout the 3-hour mission cycle.
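For reference, those values live in arma2.cfg (the file passed with -cfg). A sketch with the stock defaults noted in comments; the actual numbers used on this server aren't stated in the post:

```
// arma2.cfg -- network tuning (values illustrative, not the poster's)
MaxMsgSend           = 512;   // messages sent per simulation cycle (default 128)
MaxSizeGuaranteed    = 1024;  // max size of guaranteed packets, bytes (default 512)
MaxSizeNonguaranteed = 512;   // max size of non-guaranteed packets, bytes (default 256)
```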

Admins need to be attentive; the bandwidth usage is significant. We see spikes as high as 75 Mb/s when clients JiP (join in progress).

Keep an eye on your server's BW usage (at least #monitor 1, if not a monitor on the server itself), as joins can create a bit of a surge when more than one client connects at a time. I monitor the server stats: CPU, BW reported at the server, number of clients, FPS, private bytes, I/O bytes, etc. I only allow about 4 or 5 connections at a time, sometimes as few as 1, depending on how things are going.
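The monitor itself is just an in-game admin chat command, for example:

```
#login adminPassword    (become admin; the password here is a placeholder)
#monitor 1              (report server FPS and bandwidth every 1 second)
#monitor 0              (switch the readout back off)
```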

When we are nearing 50 players we have an FPS of about 50. At 60 players we are at 40-45, at 70 about 25-30, at 80 about 15-20, and at 90 we are at 10-15. I hope we can improve this.

With 80 clients connected, our average BW usage is about 12-15 Mb/s. Large paradrops, building destruction, etc. push us to 20 Mb/s or so for 60 to 120 seconds, with a corresponding drop in FPS (a decrease of 1 to 2 frames). A client joining with ~80 clients already in the server increases us to about 30 Mb/s for the duration of the connection process; this varies from client to client and with how long the mission has been running. When we see multiple connections, or a very poorly connected client, outbound BW spikes to as much as 75 Mb/s for the duration. At that point we see small amounts of desync, less than 100 on all but the worst-connected clients (the really bad client connections always have desync).

We have several tools to combat the desync. The first is the #lock command: locking the server keeps new players from joining (duh), but more importantly it changes the server's status in the server browser to locked, so people temporarily stop trying to join. The second tool is a bit drastic: we have a script that can disable mouse and keyboard input. When we run it, an alert tells all players that a pause will go into effect in 10 seconds, so stop all vehicles and auto-hover all choppers (not much can be done for planes). This is a final, drastic measure for when desync climbs above 1000 (one thousand) on most clients, and to be fair, we haven't needed to use it in several days.
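The locking side of this is plain admin chat commands (the input-freeze script presumably builds on the SQF disableUserInput command, though the post doesn't show it):

```
#lock      (stop new players from joining; the server browser shows the server as locked)
#unlock    (re-open the server once desync settles)
```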

Overall, our server has the ability to run 80-90 players, so long as there is an admin to keep things on an even keel. We can run 60 with no admin.


We tried to re-enable the network bridge; this led to the same failure we saw at the start.


CPU = 24-30%

Thanks for your detailed reports and configs. May I ask what coop maps you are playing? Anything heavy like Domination or Evolution? (*ducks*)

Also, concerning CPU usage, we have not seen it use multiple cores yet, which was highly disappointing. :mad: Do you see the same thing? Either it is mostly single-threaded like the A1 dedicated server was, or I'm doing something wrong, possibly with the way I start it via a custom batch file and service handler. Basically it acts just like the A1 dedicated server, sticking to its own CPU/core. (We have a quad core.)


Nice info. On the bridged NICs, are the ports on the switch bridged as well? Having used Intel NICs and Cisco switches, I had to turn the bridge on at both the switch and the NICs. Just a thought.

I'm assuming you're trying to turn the two 1-gigabit NICs into a single 2-gigabit link? Otherwise I'm confused as to why you would bridge them, unless you're trying to use one as an internal NIC and the other as an external NIC; if so, then only the external NIC should have a gateway set.


We aren't running any COOPs. Dslyecxi's crew over at ShackTac tried our configs and had a lot of desync. The server performs much differently in COOP than in PvP. For specifics on what works for EVO and DOM, I suggest you contact Dslyecxi directly at http://dslyecxi.com/ Our data has been focused on getting as large a PvP setup as we can.

We use FireDaemon Trinity to handle our instances, and we set the affinity using that software. As to threads, I have been monitoring them for a few days now; as we speak we have 71 players in the server and are running 14 threads (two are ntdll.dll and rpcrt4.dll, the rest are ArmA2Server.exe). As to the cores, the ArmA2Server seems to utilize one core at a time: when it reaches about 50 or 60% of one core, it starts to use a second, and so forth. We too have a quad core, and I have yet to see it use more than 2. Of course, it is still early and new patches could help. The -cpuCount=X switch seems to be hit or miss.
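For anyone launching without FireDaemon, affinity and core count can also be set from a batch file. An illustrative launch line (the paths, port and affinity mask are examples, not the configuration used on this server):

```
:: affinity mask 3 = cores 0 and 1 (start /affinity requires Vista or later)
start "" /affinity 3 ArmA2Server.exe -config=server.cfg -cfg=arma2.cfg -cpuCount=2 -port=2302
```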

@morden: Yes, we were using the bridge to try and double our available bandwidth. Epic fail in ArmA 2, but ArmA never noticed. Go figure :)

