terox

All about MaxBandwidth


MaxBandwidth

Important: if you see information within this post that is inaccurate, please let me know ASAP so I can edit it.

This thread is part of a much wider discussion on server bandwidth optimisation: Tutorial: Server Bandwidth & Optimisation

Useful Links

BIS WIKI: basic.cfg

What the BIKI states

MaxBandwidth=<top_limit>;

Bandwidth the server is guaranteed to never have (in bps).

This value helps the server to estimate bandwidth available.

What is known

Is it sensible to set MaxBandwidth to the exact theoretical bandwidth available (e.g. for 100MBit, MaxBandwidth = 104857600 bits)? The example config on the Biki shows a much higher value (10Gbit), but this is not explained.

Yes, it is sensible. That said, you will not see any effect in most circumstances, as the bandwidth estimation is very unlikely to estimate the bandwidth as higher than the real available bandwidth.
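
For illustration only (this is a sketch, not an official recommendation), a basic.cfg entry for a dedicated 100 Mbit line could look like the following, using the decimal definition of Mbit discussed further down the thread:

// illustrative value for a 100 Mbit/s line: 100 * 10^6 bits per second
MaxBandwidth = 100000000;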

What we need to know:

  1. How is this value calculated correctly when running multiple ArmA3 servers on one box?
     (Is it simply (maximum value) divided by (number of A3 servers that can be run simultaneously)?)
  2. How would you stress test or test this in a benchmark mission scenario?
  3. Instead of using the theoretical max value, does setting the realistic max value improve or negatively affect server performance?

Calculation Formulae

(Not yet defined)

Edited by Terox


I'm confused. I read on some thread that Suma said to remove MinBandwidth and set MaxBandwidth to the top speed you can get (i.e. 100 Mbps = 104857600),

but you claim that MinBandwidth is what it's always going to have (104857600), so shouldn't we just have MinBandwidth?


fyi

1 kilobit is exactly 10^3 (1000) bits.

1 megabit is exactly 10^6 (1000000) bits.

so..

1 Mbit is 1000000

10 Mbit is 10000000

100 Mbit is 100000000

1 Gbit is 1000000000

converting bits to bytes (1 byte == 8 bits):

bits / 8 = bytes

/ 1024 = kilobytes

/ 1024 = megabytes

/ 1024 = gigabytes, and so on.

1 mebibit (Mibit, "mega binary bit"), on the other hand, is 1 048 576 bits (2^20).
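
To connect that back to the basic.cfg values quoted earlier (an illustrative comparison, not part of the original post):

// 100 Mbit/s, decimal (SI) prefix: 100 * 10^6 = 100000000 bps  ->  MaxBandwidth = 100000000;
// the 104857600 figure quoted above is 100 * 2^20, i.e. 100 Mibit/s, not 100 Mbit/s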

Edited by nuxil


You're not understanding my question.

The wiki basic.cfg says:

The greatest level of optimization can be achieved by setting the MaxMsgSend and MinBandwidth parameters.

Why did Suma say to remove MinBandwidth? If it's important for optimization, why remove it?

If I have a 100 Mbps connection, what should my MinBandwidth be set to?

What should my MaxBandwidth be set to?

Does the number of players you have on a server (in my case 75 players) factor in at all to the min and max settings?

Edited by piffaroni


piffaroni, I never answered your question and didn't intend to. I was merely trying to correct Terox and his values for what 10 Mbit/100 Mbit/1 Gbit is ;)


Because ArmA3 runs a 32-bit executable, there is a maximum limit to the bandwidth the server can use, even in the most perfect of environments.

This is because a 32-bit exe has a limit to how much it can process and how much RAM it can utilise.

Hence why the 1 Gb line has a strikethrough.

The theoretical limit is 100 Mbps.

(Big thanks to Inch for clarifying this)

Edited by Terox


I'd really like to read Inch's clarification, because it's just plain wrong if not absolutely absurd.

The limited 32-bit userspace virtual address space has – if at all – a negligible effect on socket performance.

I coded a quick'n'dirty example to put this to the test; it transmits 1 million MTU-sized UDP packets over the loopback¹ interface. The test is conducted 10 times to gather AVG ± STDEV.

32-bit speed: (3823.4 ± 49.1) Mbps

64-bit speed: (3924.6 ± 52.0) Mbps

Looks like the resulting ~4 Gbps are just slightly above your 100 Mbps theoretical limit.

The relative difference is – just as expected – negligible (~2.6%) and the STDEVs even almost touch.

Please do not post such blatantly false information if you lack an in-depth understanding of the technologies involved.

Some ppl looking for accurate basic.cfg information (which the wiki definitely lacks) might actually believe you and mess up their config even further.

Here comes the source² code, just in case anyone would like to verify my findings.

/*
* Copyright (c) 2014 Actium
* 
* Permission is hereby granted, free of charge, to any person obtaining a copy
* of this software and associated documentation files (the "Software"), to deal
* in the Software without restriction, including without limitation the rights
* to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
* copies of the Software, and to permit persons to whom the Software is
* furnished to do so, subject to the following conditions:
* 
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
* 
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
* AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
* OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
* THE SOFTWARE.
*/

#define _POSIX_C_SOURCE 200809L

#include <errno.h>
#include <netinet/in.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/socket.h>
#include <time.h>
#include <unistd.h>

#define ERR_CODE 1
#define BUFSIZE 1500
#define NUMPKTS 1000000

#define   LIKELY(x) __builtin_expect(!!(x), 1)
#define UNLIKELY(x) __builtin_expect(!!(x), 0)

#define Err(status, format, ...) do { \
   fprintf(stderr, "%s:%d: " format ": %s\n", __FILE__, __LINE__, ##__VA_ARGS__, strerror(errno)); \
   exit(status); \
} while (0)

int main(/*int argc, char *argv[]*/)
{
	socklen_t addrlen = sizeof(struct sockaddr_in);
	struct sockaddr_in dest_addr = {
		.sin_family = AF_INET,
		.sin_port   = htons(1234),
		.sin_addr   = { htonl(0x7F000001) }	/* 127.0.0.1 (loopback) */
	};

	int sockfd;
	if ((sockfd = socket(AF_INET, SOCK_DGRAM, 0)) == -1)
		Err(ERR_CODE, "socket(AF_INET, SOCK_DGRAM, 0) error");

	struct timespec before, after;
	if (clock_gettime(CLOCK_MONOTONIC_RAW, &before) == -1)
		Err(ERR_CODE, "clock_gettime(CLOCK_MONOTONIC_RAW, %p) error", &before);

	char *buf = (char *) malloc(BUFSIZE);
	if (buf == NULL)
		Err(ERR_CODE, "malloc(%u) error", BUFSIZE);

	for (int i = 0; i < NUMPKTS; i++) {
		if (UNLIKELY(sendto(sockfd, buf, BUFSIZE, 0, (struct sockaddr *) &dest_addr, addrlen) == -1))
			Err(ERR_CODE, "sendto(%d, %p, %u, 0, %p, %u) error", sockfd, buf, BUFSIZE, &dest_addr, (unsigned) addrlen);
	}

	if (clock_gettime(CLOCK_MONOTONIC_RAW, &after) == -1)
		Err(ERR_CODE, "clock_gettime(CLOCK_MONOTONIC_RAW, %p) error", &after);

	free(buf);
	close(sockfd);

	/* elapsed wall-clock time in seconds */
	double delta_s = (after.tv_sec - before.tv_sec)
	               + (after.tv_nsec - before.tv_nsec) / 1000000000.0;
	double mbps = (double) BUFSIZE * NUMPKTS * 8 / delta_s / 1000000;

	printf("%.0f Mbps\n", mbps);

	return 0;
}

¹) Using a physical NIC is pointless, since both versions easily saturate the link.

²) This is Linux code, since I don't do Windows network programming. But if your theoretical limit were indeed a theoretical limit, it would apply regardless of operating system.

PS: Please also correct your wrong understanding of SI-prefixed bitrates (Gbps, Mbps, etc.). Nuxil's absolutely right. See this or read IEEE 802.3. It might also surprise you that the actual throughput is even lower: the advertised bitrates only apply to OSI Layer 1.

Edited by Actium
Typo


Thanks for the heads up, no problems m8, I stand corrected.

I've been trying to get to grips with this stuff, which is appallingly documented on the B.I. wiki.

The aim of the thread, along with the other threads on the bandwidth topics, was to get folks like yourself to discuss the various optimisation settings and come up with formulas to correctly define the required settings.

The bandwidth settings have been a mystery to us for over 10 years; the wiki itself, as you stated, is somewhat lacking, and I have yet to see a clearly defined step-by-step guide on how to set these values.

I get a lot of pointers to information, some of which is above my head.

Based on your post then, you could help us by creating a formula that would allow an admin to define the min/max bandwidth settings when there is more than one ArmA server running off one box.


The BIKI's just awesome ... NOT. They managed to run it with an expired SSL certificate a few days ago.

Not too long ago I read a thread assuming that the BI developers themselves don't have a broad understanding of their engine's network configuration. I wouldn't be surprised if this were close to the truth.

Okay, back to the business at hand. Let's first quote the relevant sections of the basic.cfg BIKI:

Note

The greatest level of optimization can be achieved by setting the MaxMsgSend and MinBandwidth parameters. For a server with 1024 kbps we recommend the following values:

MaxMsgSend = 256;

MinBandwidth = 768000;

MinBandwidth=<bottom_limit>;

Bandwidth the server is guaranteed to have (in bps).

This value helps server to estimate bandwidth available.

Increasing it to too optimistic values can increase lag and CPU load, as too many messages will be sent but discarded.

Default: 131072

MaxBandwidth=<top_limit>;

Bandwidth the server is guaranteed to never have (in bps).

This value helps the server to estimate bandwidth available.

The following is my assumption on the topic, based on my technical background knowledge (B.Sc. in information technology and 10+ years of experience administrating dedicated Linux servers) and a little common sense. Considering the lack of information from the official side, I cannot and will not guarantee that it's close to perfect or even correct.

MinBandwidth:

Considering the – let's call it very conservative – default value, this is indeed the most important parameter for optimization, since any decent dedicated server will have at least a 100 Mbps link, although 1 Gbps appears to be the de facto standard nowadays. Since the "Bandwidth the server is guaranteed to have" is asked for and I understand it as the available bandwidth in a worst-case scenario, I'd use the following formula:

MinBandwidth = (MIN($GUARANTEED_DOWNSTREAM_BPS, $GUARANTEED_UPSTREAM_BPS) - $RESERVED_BANDWIDTH) / $NUM_ARMA_SERVERS;
// $GUARANTEED_{DOWN,UP}STREAM_BPS is the downstream/upstream bandwidth guaranteed by your ISP.
//	This need not necessarily be your link speed¹.
// $RESERVED_BANDWIDTH is the bandwidth reserved for anything else running on the same server
//	(e.g. TeamSpeak, webserver, other gameservers, etc.).
// $NUM_ARMA_SERVERS is the number of ArmA servers running with these exact settings.
//	If you wanna go pro, you could allocate more bandwidth to prioritized servers and less to the rest.
//	Just make sure the sum is always lower than your guaranteed bandwidth.

¹) For example my ISP hetzner.de uses 1 Gbps links, but only guarantees 200 Mbps bandwidth (although I've never had any trouble maxing out the 1 Gbps during benchmarks).
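
A worked example with purely hypothetical numbers (a symmetric link with 100 Mbps guaranteed both ways, 10 Mbps reserved for TeamSpeak and a webserver, and two equally weighted ArmA servers):

// MIN(100000000, 100000000) = 100000000
// (100000000 - 10000000) / 2 = 45000000
MinBandwidth = 45000000;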

MaxBandwidth:

This is the best-case scenario bandwidth to complement the worst case above. I assume setting this to anything higher than 100 Mbps won't have any noteworthy effect, since a game server should hardly ever need to send that much data to clients, but setting this to your maximum available bandwidth shouldn't hurt either.

However, I'd really like to know how much bandwidth one of these 100+ slot Altis Life servers usually eats up (average and peak values). The bandwidth usage may be even higher if a lot of AI units and human players are close together (since in this case the units' positions must be sent to every client).

Regardless of that, it's formula-time again:

MaxBandwidth = MIN($DOWNSTREAM_BPS, $UPSTREAM_BPS);
// ${DOWN,UP}STREAM_BPS is the best-case bandwidth available to your server.
//	Use your link speed or some (more or less generously rounded up) benchmark results.
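
And the matching best-case example for the same hypothetical box (a 1 Gbps link speed both ways, taken at face value):

// MIN(1000000000, 1000000000) = 1000000000
MaxBandwidth = 1000000000;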

In case anyone is of a different opinion, feel free to discuss it. I'd really appreciate hearing technical arguments or even hard facts (e.g. bandwidth usage statistics).


Actium, thank-you, very, very much.

I'm sure that I'm not the only one waiting for someone like you to come along; it does help clear stuff up a lot.

I'm sure it wouldn't hurt if you were to commit this info to the BI Wiki too.

Also thanks to Terox for going out of his way on these questions/issues.

Edited by Inch

