crashdome

Multicasting Network


I was just watching a video about going back to P2P networking in multiplayer games. It is a concept meant to address the problem of increasing player counts. What they didn't discuss, however, was how much of a pain it is to get working, and how it opens up a realm of problems, especially with NAT, cheating, etc...

...then I saw a comment from an individual regarding a hardware solution. Since bandwidth is the main issue for the server (incoming/outgoing), perhaps in the future we could address this with specially designed routers which duplicate packets. In the meantime, could we create an app in which a few lucky people act as routers?

What I am describing is, more or less, a few people acting as pseudo-NATs: the server transmits to those few people, and they in turn transmit to the others.

Currently, the server must send 20 packets for 20 players. Imagine for a moment that the server transmits only 5 packets, and the five receivers in turn each transmit these to 4 others, for a total of 20. You have instantly quadrupled your effective bandwidth.
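A throwaway sketch of that arithmetic (pure illustration; the player and relay counts are just the figures used in this thread):

```python
# Outgoing packets per tick for each role in the relay scheme.
# All numbers are illustrative; nothing here comes from the game.

def sends_per_tick(players: int, relays: int) -> dict:
    per_relay = players // relays          # clients served by each relay
    return {
        "server, direct": players,         # one packet per player today
        "server, relayed": relays,         # one packet per relay instead
        "each relay": per_relay,           # fan-out moved onto the relays
    }

print(sends_per_tick(players=20, relays=5))
# {'server, direct': 20, 'server, relayed': 5, 'each relay': 4}
```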

Simply have people connect to their "sub-server" and it passes traffic to the main server?
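Roughly, a sub-server could be as simple as this UDP relay sketch: it fans each server datagram out to its attached players and passes player traffic upstream. All addresses are hypothetical, and real code would also need handshakes, NAT traversal, and loss handling:

```python
import socket

SERVER = ("192.0.2.1", 2302)               # hypothetical main server
CLIENTS = [("192.0.2.11", 2302),           # hypothetical attached players
           ("192.0.2.12", 2302)]

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("0.0.0.0", 2303))               # the relay's own port

while True:
    data, addr = sock.recvfrom(4096)
    if addr == SERVER:                     # an update from the main server:
        for client in CLIENTS:             # fan it out to our players
            sock.sendto(data, client)
    else:                                  # traffic from one of our players:
        sock.sendto(data, SERVER)          # pass it up to the main server
```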

Is this possible under current design?

This causes a problem in that it creates a giant failure point: if a pseudo-router goes down, all the people connected to it drop. Another problem is that the information might differ from player to player, but if you listed all the sub-servers and provided automatic switching capability... you could designate specific routers as zones.
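The "automatic switching" part could be as crude as clients probing a published sub-server list and failing over to the next entry; a sketch of that idea (the addresses and the ping/echo probe are made up):

```python
import socket

SUB_SERVERS = [("192.0.2.11", 2303),       # hypothetical relay list,
               ("192.0.2.12", 2303)]       # e.g. one per "zone"

def pick_relay(timeout: float = 1.0):
    """Return the first sub-server that answers a probe, or None."""
    probe = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    probe.settimeout(timeout)
    for relay in SUB_SERVERS:
        try:
            probe.sendto(b"ping", relay)   # assumes relays echo probes back
            probe.recvfrom(64)
            return relay                   # this one is alive: use it
        except socket.timeout:
            continue                       # dead relay: try the next one
    return None                            # every relay is down
```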

ok... getting way ahead of myself.

Is this a bad idea?


IMHO the issue is now less about the physical bandwidth available to servers (100M access is more and more common for servers now, and even ISP connections are getting higher upstream) than about CPU processing on said server. And I'm not sure your solution would help that much, because the calculations still need to be done for the same number of objects.


Actually, even a 100M server can be overloaded quite easily.

Yes, processing becomes an issue also, but not if the information were already consolidated. A lot of the processing is in creating these packets. The pipeline might be big (100 Mbps), but that does little good if the PC can't shoot out one hundred 1k packets every second.

[EDIT] Also consider a server that must handle 100 people, divided into subdivisions of 10: the server has to send 10 packets for 100 people instead of 100 packets for 100 people. Agreed that the server still needs to process all those objects, but that is a realm outside the scope of this problem (bandwidth).


I've run a Linux server for OFP for quite a long time, and believe me, apart from MMO games, about which I have no clue, the game that will overload a 100Mb interface is yet to be born.

30+ players on OFP barely ever get over 10Mb/s. And that was with OFP's "crappy" netcode.

I had a BF1942 server on the same box; even the two together barely ever reached 10Mb/s. And that's not counting the TS servers.

If the CPU had to process even 50Mb/s of OFP data, it would be dead in no time. And not because of the IP stack on the server: I've seen FreeBSD handle 600Mb/s of packets (with processing on them, though less than OFP's encoding, of course) on GE interfaces without much CPU trouble.

I think the real stress is deciding what to send, and to whom. Suma stated something along these lines a while back. I'm not at all sure the OFP server duplicates all information to every player.


First of all, your title is slightly misleading, as this would not be multicasting (according to the standard).

Secondly, this would cause big synchronization problems. It would also increase latency severely, and you would need a TCP control protocol that would increase the total bandwidth usage. A server is likely to have higher upstream bandwidth than most DSL clients anyway.

Then we have the whole process of deciding who would be the "super-clients" or "re-distribution centers", if you want. Add to that that upstream bandwidth is the least of the problems. You are referring to PPS (packets per second), but these are not likely to be a problem in a game like ArmA. Also, unless you have a 486 with the included network card running MS-DOS 3.11, there are no problems with the IP stack either.

The biggest improvement would be achieved by the DS software supporting multiple cores. A DS programmed for Unix would also lower CPU usage to some extent, although I'm not familiar with how much calculation is needed by the DS.

To my understanding, Unix architectures are better at handling network traffic, stacks and so on, but I'm no Linux guru, so I can't back that up.

Although P2P is great for utilizing bandwidth from peers with low upstream, it's not very latency friendly. I know that Skype is experimenting with it, but I'm not familiar with the results or progress. Either way, I don't see any gain in pioneering this in a latency-sensitive setting with an unproven and immature technology just to combat a problem that is not critical.


Such a hardware offloading system can work only between nodes where most users are, e.g. EU - USA transit, server - ISP exchange point, server - national neutral exchange point, etc. Yet again, it will need some serious coding to avoid any packet loss or delays...

Quote: The biggest improvement would be achieved by the DS software supporting multiple cores. A DS programmed for Unix would also lower CPU usage to some extent, although I'm not familiar with how much calculation is needed by the DS.

A multi-threaded Linux dedicated server, now, THAT would be absolutely insane.


Forgive my laziness... from Wikipedia:

Multicast is the delivery of information to a group of destinations simultaneously using the most efficient strategy to deliver the messages over each link of the network only once

...

Once the receivers join a particular IP multicast group, a multicast distribution tree is constructed for that group.

Ok, a client/server setup is basically a one-node multicast already. What I am proposing is making it expandable.
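For reference, this is what joining a real IP multicast group looks like at the socket level, per the definition quoted above. It is a standard sketch using the usual socket options, not anything OFP/ArmA does, and as pointed out below it won't carry across ISPs:

```python
import socket
import struct

GROUP, PORT = "239.0.0.1", 5000            # administratively scoped group

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("", PORT))

# IP_ADD_MEMBERSHIP takes the group address plus the local interface.
mreq = struct.pack("4s4s", socket.inet_aton(GROUP),
                   socket.inet_aton("0.0.0.0"))
sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

data, addr = sock.recvfrom(1024)           # now receives the group's traffic
```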

Quote: Yet again, it will need some serious coding to avoid any packet loss or delays.

Doesn't OFP/ArmA already have code for clients to direct group-level information to other clients (albeit through the server)? Now imagine there is a single dedicated machine which handles 5 AI groups the way a player handles his own group. Rather, imagine a single sub-server handling a consolidated five players' groups WITH 5 AI groups.

The only synchronization needed is the relaying to the clients from the other servers.

Quote: Although P2P is great for utilizing bandwidth from peers with low upstream, it's not very latency friendly. I know that Skype is experimenting with it, but I'm not familiar with the results or progress. Either way, I don't see any gain in pioneering this in a latency-sensitive setting with an unproven and immature technology just to combat a problem that is not critical.

I am not discussing JUST bandwidth here. I am talking about CPU load. If multi-core is the magic bullet, then why hasn't it been implemented? Because of the difficulty. What I am merely suggesting is that the comm framework is already established; the sync framework for multi-core processing is not, and it is also extremely difficult to handle effectively. Even if it were multi-threaded, how do you break it up? Put AI on its own thread and break out physics and other sub-systems? That still does not address the problem that 300+ AI are running in a single thread. Granted, with multi-core the AI is now all by itself, but that just raises the limit to maybe 2x or 3x depending on the number of threads. Even then, each thread may have to wait for the others to finish to show any viable results.
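To illustrate that limit: even a clean split of AI groups across cores still ends each tick waiting on the slowest worker. A toy sketch of the pattern (step_group is a stand-in; nothing here reflects the actual engine):

```python
from multiprocessing import Pool

def step_group(group_id: int) -> int:
    """Placeholder for one AI group's simulation work this tick."""
    return group_id                        # real AI logic would go here

if __name__ == "__main__":
    groups = list(range(300))              # the "300+ AI" from the post
    with Pool(processes=4) as pool:
        # map() blocks until EVERY group is done, so the tick rate is set
        # by the slowest worker; the gain tops out around the 2x-3x
        # estimated above once synchronization costs are paid.
        pool.map(step_group, groups)
```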


Multicast is used for IPTV. It cannot be used for games etc., as multicast doesn't address individual IPs but multicast groups, and thus cannot be carried across ISPs. Furthermore, most ISPs don't allow subscribers to launch their own multicast streams.

When it comes to distributed CPU usage, it will be more difficult to code than multi-core support. In a game as latency-sensitive as an FPS, I can to some extent buy the idea of a server park handling the load if it's all in the same location, but not for clients all over the world. Also, with the client-CPU idea you open up a barn door for cheaters if you allow them to control AIs.


Forget about pure multicast. Multicast interconnection between ISPs on the Internet isn't done yet. It is not easy or automatic; the processes behind it are quite difficult to set up.


I'm not speaking of hardware multicasting... I only mentioned I got the IDEA from it.


Rgr that, but stop quoting Wikipedia about it, then (if what you call "hardware multicasting" is the IP multicast definition).

I still think the added transit time for every node you need to go through on the Internet will create more synchro issues than it solves CPU problems, if the number of packets per second at the game layer really is what hits the CPU the most.

I've not seen how Skype does it, but I guess the main use of its P2P network is for VoIP signaling, which has no real-time constraint.


Well, consider that a player who is handling 12 units must broadcast to the server, and the server must then relay to another 30 players.

Now imagine 30 players leading groups of 12, all broadcasting information about those units to the server, which in turn must relay it back to all the others.

We've established that bandwidth is not the main problem.

Now imagine that there are intermediate servers, each receiving information for 5 players. The SAME information the main server would normally get. In turn, it is broadcast to all the other intermediate servers, like a peer-to-peer network.

OK, no real use yet... still sending the same amount... no real noticeable improvement in bandwidth.

Now let's go one step further.

Let's take the 500 AI units accounted for by the server and distribute them to the intermediate servers, attaching a few AI leader units to each intermediate server as if it were a "client" controlling the leader and the sub-units.

Each intermediary could take, say, about 100 units, plus forward information on from the other actual clients.

The reason I think it will not make things worse is that it off-loads the server a small bit in terms of networking, BUT ALSO has the potential to act as another client in control of several groups of AI (only without the human). The latter removes a large portion of CPU usage.

In terms of multi-core support, people forget that multi-core != more than one server. Sure, we can perform calculations in parallel, but it's limited in scope and in access to objects, with threads potentially idling while waiting on others (the physics thread waiting on the network thread to finish, etc.).

Think of an intermediate server not really as a server, but rather as a dedicated "client". The bonus is the ability to consolidate several packets into one and forward it to the other intermediate servers, as would occur in a multicast network.
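The consolidation step could be as simple as length-prefixing the small per-group updates into one datagram body; a sketch under that assumption (the real OFP/ArmA wire format is not public):

```python
import struct

def consolidate(updates: list[bytes]) -> bytes:
    """Pack many small updates into one datagram body."""
    out = bytearray()
    for u in updates:
        out += struct.pack("!H", len(u)) + u    # 2-byte length prefix
    return bytes(out)

def split(datagram: bytes) -> list[bytes]:
    """Inverse of consolidate(), for the receiving intermediary."""
    updates, i = [], 0
    while i < len(datagram):
        (n,) = struct.unpack_from("!H", datagram, i)
        updates.append(datagram[i + 2 : i + 2 + n])
        i += 2 + n
    return updates
```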

It's complicated, yes, but I would imagine multi-core support to be more complicated and less beneficial. That is my own opinion, of course...
