darkpeace
Everything posted by darkpeace
-
Looking at that survey, it seems not to take into account the number of processors with 64-bit extensions. Yes, but it does report the version of Windows people are running, and provides enough information to figure out who has Athlon 64s, even if they ain't running WinXP x64 :P.
-
I doubt an x64 version of Armed Assault will be released, at least in the medium term, due to the sheer lack of AMD64 and Intel EM64T systems running Windows XP Professional x64 Edition or Windows Vista*. A quick look over some statistics (albeit from another gaming company) reflects this: http://www.steampowered.com/status/survey.html

* A requirement for the extra features, registers, etc. of 64-bit mode. Otherwise your 64-bit processor is just running in 32-bit protected mode (still with other nice features though :P).

OFP2 (or ArmA 2, whatever it gets named) has a release date by which the mass consumer will likely be transitioning to 64-bit platforms (including the operating system, unlike from 2003 to today).
-
Fully agree Malboeuf; getting a quad Opteron server in the coming weeks (2 deposits down), and hopefully I can add hosting large-scale Armed Assault LANs to its list of 'jobs'. 35,608 MIPS and 12,408 MFLOPS* over 4 cores (that's 35.6 GIPS and 12.4 GFLOPS). Heck, it'll host 4 OFP CTI games over LAN fine. (Dude, it is gonna be freaking awesome!)

* Those are real MFLOPS, not BS super array-vectorised SSE2 MFLOPS like certain gaming consoles 'advertise' themselves with. (Certainly not using extra per-core 'multithreading' optimizations in those figures either.)

Good times ahead, I hope. (Just got sick of having multiple 'low-end' [cough] boxes, and it is SOOOO damn cost effective now.)
-
Tool to rip required addons from mission files? Does one exist? Then I could just index which missions need which addons in a large database. From there I could say which addons come in which addon packs (build a list). Then there would be no more (or fewer) issues with new players finding those addons. Since BAS died, it's been hard times. Any ideas? Is one out there? One that can dump to text output? Could someone skilled enough with reading the mission files code one? The ability to find missions made for older versions would also be a damn useful tool. If anyone from development is reading, please at least include this in OFP2 (and improve on it if possible).
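As a starting point, something like the sketch below could pull the required-addons list out of a mission. This assumes the mission.sqm has already been extracted from the .pbo and is in plain-text (unbinarized) form, where the file contains an `addOns[]` block; the regex and the sample text are my own illustration, not a known tool.

```python
import re

def required_addons(sqm_text):
    """Extract the addOns[] list from a plain-text mission.sqm."""
    m = re.search(r'addOns\[\]\s*=\s*\{(.*?)\};', sqm_text, re.DOTALL)
    if not m:
        return []
    # Each addon name is a quoted string inside the braces
    return re.findall(r'"([^"]+)"', m.group(1))

sample = 'addOns[]=\n{\n\t"bis_resistance",\n\t"6g30"\n};'
print(required_addons(sample))  # → ['bis_resistance', '6g30']
```

Run over a folder of unpacked missions, the output could feed straight into the mission/addon database idea above.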
-
Problem: MPmissions keep downloading.
Cause: either different files or different timestamps.

Is there any chance of the latest original official MPmissions being released in a small fix pack? Of course, if the server files are outdated they will need updating, and the same goes for the clients. One 'fix' is just to remove the original MPmission files from all the clients so they download and are in sync with their 'fav' server. I am running a CRC32/MD5 comparison over the 2 sets of files and will post the differences. It is confusing a lot of new players, and it is just plain wasted time having to wait for some people to download the original MPmissions.
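For anyone wanting to run the same comparison themselves, a minimal sketch (the directory layout is whatever your two MPmissions folders are; the function names are mine):

```python
import hashlib
import os
import zlib

def checksums(path):
    """Return (CRC32 hex, MD5 hex) of a file, reading in chunks."""
    crc, md5 = 0, hashlib.md5()
    with open(path, 'rb') as f:
        for chunk in iter(lambda: f.read(65536), b''):
            crc = zlib.crc32(chunk, crc)
            md5.update(chunk)
    return format(crc & 0xFFFFFFFF, '08X'), md5.hexdigest()

def compare_dirs(a, b):
    """Print the MPmissions whose CRC32/MD5 differ between two folders."""
    for name in sorted(set(os.listdir(a)) & set(os.listdir(b))):
        if checksums(os.path.join(a, name)) != checksums(os.path.join(b, name)):
            print('FAILED', name)
```

Calling `compare_dirs('server/MPMissions', 'client/MPMissions')` lists exactly the files that would trigger a re-download.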
-
These are the 19 MPMissions I am aware of that commonly differ from client to server, and get downloaded by some clients every single time. NOTE: IF THERE ARE ANY MORE ORIGINAL MISSIONS THAT DIFFER, LET ME KNOW! Each of the following failed both the CRC32 and the MD5 check (38 failed checksums in total):
- 1-10_T_TeamFlagFight.Abel.pbo
- 1-16_Cooperative.Noe.pbo
- 1-4_C_ShadowKiller.ABEL.pbo
- 1-6_C_LostSquad.ABEL.pbo
- 1-7_C_OilWar.EDEN.pbo
- 1-8_C_DesertAmbush.ABEL.pbo
- 1-8_T_DemolitionSquad.NOE.pbo
- 1-9_T_Conquerors.cain.pbo
- 2-10_C_WarCry.Noe.pbo
- 2-11_T_HoldCastle.Noe.pbo
- 2-12_T_CaptureTheFlag4.Noe.pbo
- 2-5_Cooperative.Eden.pbo
- 2-8_HoldCity.Cain.pbo
- 2-8_T_CaptureTheFlag1.EDEN.pbo
- 2-8_T_CaptureTheFlag2.CAIN.pbo
- 2-8_T_CastleConflict.Noe.pbo
- 2-8_T_CityConflict.ABEL.pbo
- 2-8_T_RealPaintball.Intro.pbo
- 3-9_C_ReturnToEden.EDEN.pbo

Which version of the files should we be using on both server and client? I know there are various ways to patch to 1.96, depending on what you start with and which patches you choose to run, but I do not think the patches modify the MPmissions. Can anyone clarify this?
If a fix pack is not going to be made, just let me know which files should be used (on both client and server) and how to identify them (preferably by CRC32/MD5), and I will create my own fix pack for this long-standing problem, then notify the OFPwatch authors so it can be rolled out automatically at mass scale after we test it on Australian servers for a while.
-
Not that it matters, but a voted-in admin can use the #MONITOR <time in seconds to average over> command. Eg: #MONITOR 1 gives very fast readouts so you can see any spikes; #MONITOR 15 to #MONITOR 60 is good for load checking during a game; #MONITOR 300 gives averages over 5 minutes. If you get 15 fps or less using #MONITOR 300, then you're pushing the server way too hard with the current map, players, bandwidth and (flashpoint.cfg) settings as per DS-ADMIN.RTF, or a player is desyncing badly and you may want to consider banning them.

When they say SERVER FPS, what is really meant is "simulation cycles" on the server. DS-ADMIN and DS-USER explain this (they come with the server download packages). They are .RTF (Rich Text Format) files, so any decent document reader (e.g. MS Word) can view them. A "simulation cycle" on the server does not draw any video at all; it only really does physics calculations based on information that players send the server.

If load on the server increases, this is basically what happens:
- The server gives 50 fps at 99% CPU load or less.
- The server hits 100% CPU, and it gives under 50 fps.
- Say it would hit 200% load if that were possible; it thus gives 25 fps.
- If it would hit 333% load, it gives 15 fps.

If the server is running under 25 fps, I would start looking for an upgrade, a map optimization, or checking that players are not the cause of desync (there are other causes), and try again; failing that, the server CPU lacks grunt. (Thus I use a 3 GHz P4 with a 1 GHz FSB as my MFCTI server.) If the server CPU is hitting even just deep spikes to 15 fps, you need a faster CPU (and a dual-channel DDR memory interface) in your server. So long as the physics engine on the server is running at 25 fps (so 200% notional load, so to speak), no player will notice in general, especially if their client frame rate is 40, 50, 100 or even higher.
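The load-to-fps relationship above can be sketched as a quick calculation. This is just my understanding expressed as arithmetic, not engine code; the 50 fps cap and the inverse scaling are taken from the figures above.

```python
def server_fps(notional_load_pct):
    """Server caps at 50 fps; beyond 100% CPU, fps scales down inversely.
    notional_load_pct: the CPU the mission *would* need (can exceed 100)."""
    return min(50.0, 50.0 * 100.0 / notional_load_pct)

for load in (50, 100, 200, 333.33):
    print(f'{load:>7}% notional load -> {server_fps(load):.1f} fps')
```

So the #MONITOR readout is effectively an inverted CPU-load gauge once the server saturates: 25 fps means roughly "twice as much work as one core can do".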
If one player has a dodgy link and causes desync, the server needs to work in 4 dimensions, not just 3, as it needs to keep track of where everyone is "in time" relative to one another. This of course does not use much bandwidth at all, but it will cause the server CPU to hit 100% load, and thus the server gives under 50 fps. On high-end servers this can be worked around just by having a 3 or 4 GHz CPU. Personally I would have made it more like Quake III and just lagged the fool out of the game. This is why entering vehicles is delayed if the DRIVER, GUNNER or COMMANDER has a slow link. The slow-link players insist they use no bandwidth and don't affect performance - THEY ARE WRONG, as no CPU can process what we do in 4 dimensions and somehow keep the slow-link players from dropping forever; eventually it has to reach a breaking point, which I am sure many people have seen on low-end MFCTI servers with 5 players on dial-up.

As you can start to understand, server FPS has little bearing on client (player) FPS. Heck, if my server ran (or even had long spikes) at 160 fps during MFCTI like my client sometimes can, I would be damn happy indeed. There is a point where the server load becomes far higher than the client load. This is why the old "Battlefields" single-player map required a 1 GHz PC in its day - or better yet, host it on a dedicated server and join it. The server takes the load off the clients, so long as there is never any client-related desync. Thus admins like all players to have sub-85 ms pings with 512 kbps downstream and 128 kbps (if not higher) upstream to the server. It really does not work like other games, and it is FAR, FAR, FAR too friendly towards those on dial-up, or playing internationally, IMHO. Heck, it was made for LANs as far as I can see, just with an excellent (too excellent, for its day anyway) system for keeping players in sync.
-
If you're interested, just email UDPSoft. They ain't stupid people; they will know what is going on.
-
I still don't believe it. You might want to try emailing the guys over at UDPSoft and ask if it does anything strange to the timers.
-
WTF? You're saying that if All Seeing Eye (ASE) is installed, the server 'runs' at 50 fps? Is this consistent? Is the client also running ASE? Being from UDPSoft, it might do 'strange' things with the timing functions. I am aware of their work with Counter-Strike (during the late betas, before it went retail and downhill fast). I'm running Linux servers now anyway, so resource usage is lower, security is (generally speaking) better, and server fps is 50 unless running a heavy mission; even in CTI it can stay around 25 (20-30 fps), so she does well now. Needs some minor tweaks here and there, but she does well.
-
SuSE and Red Hat/Fedora are closer to Solaris than to plain Linux, IMHO; sort of a hybrid ground. It has pros and cons, one con of which was the Flashpoint server 'issues' with setup. I agree the 1.96a (Win32) server (extra-long ID protection) would be equally useful on a Linux server. Also, regarding the tolower program wrecking a distro, I personally just use "sh ofp-server-1.96.shar" each time to convert to lower case, as it only 'runs' from the server folder. I was even considering VMware, to run Linux under Linux :P (like Gentoo or an older Red Hat, etc) just to get it working. Using the LIBS trick really does screw over some parts of SuSE, but only temporarily, and nothing serious gets broken. Now to just slowly undo some other changes (suggested here) that I've made and see if it breaks the OFPR server.

Also, without -nomap the server does initially start by taking a large chunk of memory, but 'Info Centre / Memory' only shows:
- 363.8 MB used for 'Application Data' (Linux OS + OFPR server without -nomap)
- 600.5 MB disk cache (overkill)
- 50.5 MB disk buffers
- 9.2 MB free memory (this is normal, btw; the disk cache shrinks if memory is required)

EDIT: Oh, and swap partition usage was only 4 KB, of 800 MB or so. You really need to hammer it to make it page.

The above figures are about 4 hours (240 min) into an MFCTI 1.16a Nogova game (official Mike Melvin, aka mf256, release), so it ain't bad. The longest game we did was 20 hours, with scientific notation for both sides' final resource counts. I am sure Linux will hold its ground here as well. (16 fps, incorrectly[?] reports 586 MB usage, but this is on an Athlon [Barton] PR2800, so on the real server it will perform far better 4 hours in, once it is set up - see below.) I only just got it working on SuSE thanks to many people (Benu and Shorty mainly; thank-yous were sent). I'll have to try it with -nomap on the server and see how performance and memory usage are affected.
I still need to recompile the kernel sometime, I think; I doubt it is getting the most out of the CPU/system. I also need to compile some D-Link DGE-500T drivers (currently using the nForce2 onboard 10/100 LAN). There is no way to recompile the OFPR server though, is there? Besides the lower-case 'issue', is there anything else you think I may need to be aware of? (e.g. do any addons exist that require uppercase filenames to work?)

Once it is all working, I plan to port it over to a very high-end (1000 MHz FSB + dual Geil 550@500 CAS2.5, etc) Pentium 4... thus I am likely to need to know how to lock the process to CPU0 affinity (so it does not try to run over multiple CPUs). This would be better than turning off HyperThreading, as OS processes, etc (perhaps even TS2) can utilise virtual CPU1.

Damn good way to learn Linux really fast though; I've learned a lot of stuff, some of which will aid me. (Work certified SuSE for servers, so I figured I may as well learn it, and I learnt the problems associated with it.) Cheers to the OFP Linux community - thanks, guys. (About 40% of OFPR servers are Linux-based now.) It works, but it is 'far' from complete, although 80% of the rest I can figure out / read up on. Nice to know that 2x256 MB (dual-channel DDR) is ample for a Linux server as well, and using -nomap I suspect the 'Application Data' memory usage will be lower. What does -nomap do for servers anyway? It must be quite different, as I can't see it doing for a Linux server what it does for clients... (anyone?) I'll keep tuned to the forums. Thanks again.
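On the CPU0 affinity question: on Linux this can be done from the shell with taskset, or programmatically. A minimal Python sketch (Linux-only; `os.sched_setaffinity` is a real stdlib call, the wrapper function name is mine):

```python
import os

def pin_to_cpu0(pid=0):
    """Restrict a process (0 = the current one) to CPU 0 only.
    Linux-specific: uses the sched_setaffinity(2) syscall."""
    os.sched_setaffinity(pid, {0})
    return os.sched_getaffinity(pid)

print(pin_to_cpu0())  # → {0}
```

In practice you would pass the ofpserver PID rather than 0, or just launch the server under `taskset -c 0`; either way the kernel then keeps it off the other (real or virtual) CPUs.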
-
Yeah, I am having issues as well, even using ENV LD_ASSUME_KERNEL=2.4.1; (see the first few chapters of the SuSE 9.1 Professional Admin Guide for information, or maybe try Google on LD_ASSUME_KERNEL). I get a segmentation fault when not using -nomap (it tries to use around 256 MB when loading), and a sockets error when using -nomap (it uses exactly 4096 KB when loading). The server name has no spaces or hyphens, just: twilightofp. I heavily doubt the DNS/hostname is incorrectly set up; is there anything anyone knows of that might be causing the issue, though? Tried with the firewall off as well. It might be due to newer kernels reserving memory (or LD_ASSUME_KERNEL doing jack all for ofpserver) and only letting it use 4096 KB (without -nomap), which thus leads to problems. I am getting to the point where I am thinking of installing a 2.4.1 kernel under a http://www.vmware.com virtual machine. Any confirmed 100% working OFP 1.96 Linux distros we know of? (Do I need to dig up Red Hat 7.2, for example?) I read that someone in these forums was getting 55 fps using SuSE 9.0 (or 9.1) Professional... might be a load of bull, or maybe they know something we don't.
-
Phew, so it is mainly 1.91-and-earlier servers that are affected? We get bugger all of them in Australia, but they do join sometimes. Personally I don't look into stuff that can break Flashpoint (like cheats or ID cracks, etc). I even have 2 legal copies of the game (GOTY and Gold) and 2 player IDs.
-
Note: the below tends to go on a bit; I may 'optimize' it later :P

The server does not send a message indicating which player is the 'cause' of desync. FACT: if one player's link suffers or drops, the server starts hitting 100% CPU load very fast, until their link settles or they drop after 90 sec of "Losing connection" (which initially takes too long to appear, IMHO). (I have screenshots to prove this, where a player with 6 kb/sec joins, starts desyncing, and the server CPU load rises, pushing server fps way down to the point where it affects other players... very bad.)

These settings could eliminate desync (or at least the main cause of it). It would be nice if the below was configurable by admins:
- Minimum client FPS to play (weighted average sustained over 10 sec)
- Minimum bandwidth to play
- Minimum bandwidth to join and chat when a game is in progress
- Minimum/maximum server bandwidth given per player (not just in total, as is currently the case)
- Maximum ping to play
- Maximum ping to join and chat when a game is in progress
- Delay before "Losing connection" (it is too long by far; it should appear or flash yellow the instant it is noticed, and if it lasts over 3 sec, alert other players to the player with a potentially bad link)
- Time (in ms) a player's ping/bandwidth can be out of the limits set above before being kicked (setting it to 0 just drops the player with a "fix link" message straight away)

Obviously the ability to detect a resyncing ADSL/cable line would be useful, and it is not as difficult as it sounds. A player who has sustained over 256 kbps (or whatever the minimum bandwidth is set to) for 10 sec could trigger a flag on the server and thus be excluded from the 'if link drops then kick this player' rule, since they are obviously not on dial-up (using 256 kbps+ as the example) and within 5-120 sec their link should resync.
The option to override this would thus be useful, as in the "Time (in ms) a player's ping/bandwidth can be out of the limits set above before being kicked" setting above. I am sure there would be other useful settings to add to the list. Note that many of the above are similar to PunkBuster in RTCW. This would also stop bandwidth cheating using NetLimiter (sounds dumb, but it can cause a server to desync out and wreck a game). (Some players cap their upload/download speed to 'cheat', and yes it does work to some degree; the above could combat this.)

The above might sound ruthless to some, but used correctly these settings would reduce server CPU load (and thus boost server performance), so everyone wins, except the poor bastard causing desync, who should not be playing on your server anyway. It's 'funny' watching CPU load on a 2nd monitor when a dial-up or ISDN (single-channel 64 kbps) player joins a big battle server: within 5 min (usually much sooner) the server CPU hits 100% and stays there, there is initially 'some' desync on that player, and server fps drops like a stone. (This can be monitored with NetLimiter and Task Manager on a 2nd monitor.) The same is true during an ADSL or cable line resync, sometimes lasting over 90 sec (the bane of most ADSL owners). However, a simple firmware upgrade (or downgrade) can fix that issue straight away (e.g. recent D-Link DSL-504 firmwares sometimes drop the link to retrain, and sometimes this takes 90 sec as it occurs twice for an unknown reason). Server performance dives, and in big battles it goes below 8 fps, to the "point of no return"... all because *one player* thought they wouldn't do any harm by joining, or lied about their connection type.
Once at 8 fps the server is taking 125 ms per simulation cycle, and this starts to carry the desync over to all other players; at this point a #reassign or #shutdown is usually done by the server admin (who is surely sick of burning money on a server that some idiot desyncs). Surely admins need more control over their assets; some pay AU$3000 a year just to host a box, and we don't want people wrecking that for us. The upside is that servers could be made more (or less) tolerant of low-bandwidth/high-ping players to suit the desires of the admin (depending on what size battles they wish to host, what drain on the CPU they can handle, etc).

Q1) Why does a desyncing player cause the OFPR_Server.exe process to start hammering the CPU (100% load) and thus lead to a server fps reduction, sometimes to the point where it cannot recover (usually around or under 8 fps)? Likely because the server is keeping track of where everyone is in time 'relative' to everyone else, so the load increases exponentially.

Using NetLimiter to graph bandwidth to/from players, it can be noted that some players who lack the bandwidth to play larger battles do not drop; they just keep getting "Losing connection" over and over until the game ends (and the game is just one big lagfest because of it). A desyncing player (where the cause is a slow link, not an ADSL/cable line resync) should simply get an in-game message: "You lack the required bandwidth to play here, please upgrade your link to a higher speed before returning". Ideally the ID would then be banned for 5+ minutes to deter them from joining again. Thankfully I have noticed the low (almost 0 at times) upload requirement for players, and I commend the effort put into this.
This would be fantastic at LANs and online alike. (I have seen LAN players with dodgy NICs, chipset drivers, NIC drivers, cables, etc, have their connection drop in and out; ideally they should be warned beforehand, and if they fail to comply, punished by disconnection from the server.) The same goes online: admins want the above features, badly, to put an end to the main cause of desync (ignorant players on slow links, or from other countries with stupidly high pings). The only other causes of desync are non-optimized missions, low-performance servers being pushed too hard by some missions, and players on max detail getting under 15 fps, which can cause some desync.*

* Join a client running at only 3-8 fps to a high-load server and see if your server performance suffers.
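To make the wishlist concrete, the settings above might look something like this as a config fragment. To be absolutely clear: none of these option names exist in the real server config; this is a purely hypothetical sketch of what an admin-configurable version could look like, in the style of flashpoint.cfg.

```
// HYPOTHETICAL additions to flashpoint.cfg -- none of these options exist today
MinClientFPS = 15;              // weighted average sustained over 10 s
MinBandwidthPlay = 256000;      // bps required to play
MinBandwidthLobby = 64000;      // bps required to join/chat mid-game
MaxBandwidthPerPlayer = 512000; // per-player cap, not just a server total
MaxPingPlay = 150;              // ms, to play
MaxPingLobby = 300;             // ms, to join/chat mid-game
LinkWarnDelay = 3000;           // ms before "Losing connection" is shown to others
LinkKickDelay = 0;              // ms out of limits before kick; 0 = instant "fix link" drop
ResyncGracePeriod = 120000;     // ms grace for flagged ADSL/cable line resyncs
```

The `ResyncGracePeriod` line corresponds to the "flag players who sustained 256 kbps+" idea above, so a retraining ADSL line is not kicked like a dial-up user would be.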
-
Speaking of which, is there a giant blacklist of dodgy or constantly abusive players/idiots you can subscribe to? It would be nice on a few local servers; sure, it would not stop the determined asshole, but it would stop the average fool. Of course, checks would be made against the local player ID, in case a duplicate is found (because they can pick their player ID these days).
-
Good to hear. Bastards like that should be shot (in real life).
-
That old speed cheat for Half-Life wouldn't help in this respect, would it? I would not know, as I do not have it, nor have I ever tried it. I do recall it modifying such, or similar, 'variables' (cough) in regards to the way the Windows OS handles sleep time. Just an idea, that's all.

PS: The quantum in WinXP Pro 'appears' to be 15.625 ms, which permits 64 such timeslices to occur each second. However, the server seems to get one 'and a bit', which appears to round up to 2 x 15.625 ms (or 31.25 ms, which permits only 32 such 'combined timeslices' to occur each second). Also, in Windows XP I have seen it do 50 fps (on 1.96), yet that was only once, and normally it peaks at 32 fps. This was tested on several CPUs, all fresh installs, in case the quantum differed on some platforms (e.g. HyperThreading, dual and quad CPU setups, single CPU, P4s, Athlons, and multiple core types as well). I am thinking of trying older versions of the server too - 1.91, 1.85 and 1.75 (and even 1.46) - to see the results, as I swear I've seen the older versions do 50 fps (at least more frequently or consistently, anyway, in Windows on the same server hardware).
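The quantum arithmetic above checks out; a two-line sanity check (the 15.625 ms figure is the observed value from the post, not something I have measured):

```python
QUANTUM_MS = 15.625  # observed WinXP Pro scheduler quantum, per the post

slices_per_sec = 1000 / QUANTUM_MS          # one quantum per cycle
combined_per_sec = 1000 / (2 * QUANTUM_MS)  # cycle spills into a second quantum

print(slices_per_sec)    # → 64.0
print(combined_per_sec)  # → 32.0, matching the observed 32 fps ceiling
```

So if each simulation cycle runs slightly over one quantum and the scheduler rounds it up to two, a 32 fps cap falls out of the numbers exactly as described.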
-
Imagine the server capabilities if they were combined, lol. Anyway, when it boils down to it, BIS should release the source code, or at least part of it (although with VBS1 I don't know if contractual agreements would permit this). Then some really neat changes and improvements could be made.
-
Dude, I joined your server; get over it. My IP is not static. You don't have my player ID. You don't have my player name. Isn't using admin privileges to obtain an IP needlessly an abuse of permissions? Anyway, if you're using a decent browser this shouldn't get sampled down: I've removed any information that might identify me, for privacy reasons (e.g. time joined, all player names, etc). The JPEG is progressive, at quality factor 30; 1024x768 progressive made it smaller in this case :P. I've been there; you just don't know who or when.
-
Incorrect. I logged on a few days ago when it was empty, did #VOTE ADMIN 1 then #MONITOR 1, checked peak fps while idle in the lobby, and Alt+F4'd out. You don't have my player ID, and my player name is not "DarkPeace". Your server peaked at 32 fps, as I suspected it would. My server is on a LAN and has a 1000 MHz FSB with RAM to match, thus it does not desync as you claim it does; you are VERY CONFUSED. Why you keep saying it does I have no ****ing idea, I really don't; this thread has nothing to do with what you consistently bring up.

As outlined above: I went out and built an Athlon XP PR2800+ server for lab testing, as it is similar to another server that is online, which some other people I know can sustain and afford/justify (they are saving for an Athlon 64 2.4 GHz+). They got 2U rack space, so an overclocked box was *not* an option (duh). Your recommendation of getting a 4 GHz CPU will not work in their scenario (ever try cooling a 4 GHz CPU in a 2U rack? Don't say yes even if you have; it is so off-topic I can't believe you bring it up in every one of your replies). The point is that different servers need different configurations for MFCTI, as I am sure you are aware; however, you seem most uncooperative in sharing any meaningful advice with other parties when requested. How does saying that a PR2800+ Athlon (built to match another server for testing here, so that server can sit 800 km away during lab tests while players use the real one) is slower than a 4 GHz Pentium 4 Prescott help anyone? It doesn't; in fact most people are aware it is slower. I only built the server to match an already existing one. Then you say my LAN server (which is a 3 GHz/1000 FSB P4 Northwood) lags? Did you even read the above, or do you just have a macro that posts the same reply every time?
Better change it for when P4 Gallatin core and Athlon 64 FX 2.6 GHz+ servers are common, as they will be faster than your current server (which I assume is in a server tower, or perhaps a 4U rack in a farm). I am trying to help people with a lower-end server, so I went out and built a similarly specced server with my own cash. (I made it my new fallback PC to learn SuSE Linux 9.1 on for work, and put an R9800 Pro in, so it's not exactly the same, but I had to justify forking out the dollars myself for testing, you know.) Now that I have pointed out that there are clearly 3 different servers, do you follow me yet? Just to make it 100% clear: the 3 GHz server does not desync; the PR2800+ does desync a tiny, insignificant amount on LAN during lab testing.

I noticed immediately, when locking the client's bandwidth to match dual ISDN or below (over LAN, so ping was low, but the data rate was set to each of: Gigabit Ethernet, Fast Ethernet, Ethernet, 2048/2048, 1024/1024, 1536/256, 512/512, 512/128, 384/96, 256/64, 128/128, 96/96, 64/64, 56/48, 56/33.6, 56/28.8, 48/33.6, 48/28.8), that at lower speeds the server CPU usage rose heaps; even at 128/128, when the going got tough, the server 'hiccuped', as you put it. Thus it is obviously not identical to the real world; however, ping does not affect OFP as much as you say, since players from the other side of the globe with high data rates and moderate (cable) pings get no desync on your server (useful information I can deduce). The only problem with the lab tests was that the ping remained near zero while the data rate was locked, so they were not 100% representative of the lower speeds (as most dial-up users get over 125 ms pings). I did watch, as you suggested, the CPU load and bandwidth monitors during the tests, and the CPU load spikes (which push server fps down when it hits 100%); the lower speeds (below 384/96) caused the server CPU to spike in MFCTI testing.
My point is that the settings I requested can help the fight against desync in a manner different to yours (not everyone can get a 4 GHz server hosted in a 2U or 4U rack, which are our only options in Australia, and many other admins may benefit from the above settings). With them you could streamline the player base of a server, and stop the clients that hit the server CPU the hardest from playing on a given server. RTCW already has the above features (to a degree). Now, I am sure anyone with a server hitting 100% load would benefit from the features in the 1st post (going back to the topic of the thread, which is not that a certain 4 GHz server has no desync, or that a Gigabit Ethernet server does, which you are mistaken about). I bet I can guess your reply already: the same *incorrect* assumption that any non-4 GHz server desyncs (oh, and that yours is the fastest CTI server online). Yet the fact remains it has nothing to do with the requests I made in post #1 of this thread.

The majority of servers are slower than yours, correct? True. Thus the slower the server, the more it would benefit from the suggestions in post #1. True. Thus if another highly demanding server process came along that hit a 4, 5 or even 6 GHz CPU really hard, the settings would help. True.

I've answered my own post, as no one else helped. No animals were harmed, except the dignity of a cow. Seriously though, BIS should charge you for all the advertising you do for that server.
-
I did the search, and no such comparison was made (as per my request above). 3 years ago is quite a while; was the comparison you spoke of (far above) a Win98 vs Linux benchmark on the old RN dual Xeon server? I did not know Win98 supported dual CPUs / MPS 1.1 or 1.4 (it doesn't, but Linux does). That answers my question (as far as I am concerned, the comparison was made before the cache coherency issue was disclosed). That's all I wanted to know, and it has been dragged out to 10+ posts, all a waste of time for any reader. Not even worth summarizing; I'll perform my own tests.

FYI: my 3000 MHz (Northwood core Intel Pentium 4) on a 1000 MHz FSB using dual Geil DDR550+ does not ****ing lag, especially on a 1 Gbps backbone LAN, plugged into the backbone, with half the players on Gigabit anyway. I mean, seriously, if you think that the above setup lags, you need a reality check (then again, you did imply that 2 x 15.625 ms was shorter than 20 ms, and also advised that the fps issue has nothing to do with timeslice length).

I asked a few simple questions. You replied saying my server lagged (it is on a LAN and has 0 ms desync 100% of the time; the testing server above, the cruddy Athlon XP PR2800+, also rarely hits 100% CPU in MFCTI). You then just kept saying Windows was faster, and briefly explained (in layman's terms) the dual-CPU issue (neither of my servers is dual CPU; the comment referred to OFP2 and BIS's progress on it). Then you said my server lagged a few more times. Basically, all the above posts are just ads for your server, to which both Cage and myself get 250 ms+ pings from Australia (using 32-byte packets); we also have several bandwidth bottlenecks between here and there via the international links, etc (long explanation short: we do not get 1536 kbps downstream from the States). Thus our desire to set up local servers and LAN communities in our own country. Sure beats playing on a server several time zones away, which you keep saying is the fastest in the world.
I assure you that it isn't, since it hits 100% CPU and stays there, and PEAKS at 32 fps (thus it sustains less). My guess is your min/max bandwidth settings are really, really high, so high that the CPU hits 100% load. But it's fast and doesn't lag for local American players; good for you. Thanks for hijacking this thread and boosting your post count. FFS: LAN lagging... god, you come up with some crap.
-
No mention of large XML pics there; I just wanted to delete OGG files, dude. (Wasn't there a dimension limit, like 64x64 or 128x128? And it only takes JPG and PAA, i.e. highly compressed image formats, anyway.) I didn't know that OFPR automatically blocked custom files over 65536 bytes (64 KB); I'll have to test this out. Thanks for the tips.
-
Please provide the URLs along with the explanation when providing a response; it is just forum ethics. Were the benchmarks performed recently, with a Windows server and a Linux server both running 1.96, both locked to CPU0-only affinity, and equally compared, with the Windows server outperforming the Linux one? I have never seen such a thread in the forums, even when searching; there is little real information available, just word of mouth (usually the loudest, most frequent mouth too :P). (Sure, there are some dodgy comparisons, but none where the setup was equal on both platforms, mainly regarding the CPU affinity; as I've stated multiple times, this is where my interest lies. You also never provided an example of how the performance scaled down from 50 fps as tougher and tougher test missions were loaded for benchmarking.) I asked for an explanation of your Linux testing, how it was done, etc. I still have not seen this; if you provide a link/URL to it, then it will be far easier for all.

I suspect this scenario: Linux was tried once, with CPU affinity spread over all CPUs, and as such suffered poor performance; Windows was then tried multiple times with CPU affinity locked to 1 CPU. (That is about as rigged as a test can get.)

- I have several servers I use for testing.
- They are for LANs.
- You will notice above that I don't boast about a certain server being the fastest in the world, etc.
- I have tried several different setups using most configurations available today (see http://www.tweaktown.com/document.php?dType=article&dId=647 for current Pentium 4 CPUs, including the Gallatin; the AMD CPUs most are aware of; I have yet to try an Athlon 64/FX or Opteron 1xx series in my own testing).
- I also don't use offensive language such as your repeated "Get a clue" comments, at all.
-I have provided a testing method, and other ideas, and have yet to hear any real feedback regarding comparisons.
-I find a 3GHz/1000FSB (2400/800/HT overclocked) quite ample for MFCTI 0.99 to 1.16a core based missions (sadly many are based on 1.1 yet use their own internal revision, such as 1.157, etc), especially once CPU affinity is set to real CPU0 (HyperThreading is enabled in the BIOS yet affinity is locked; it boosts performance slightly, and other tasks can be set to use virtual CPU1, vs having the CPU appear to the OS as just CPU0).

I only asked for info, which was not provided over several posts. I am starting to get the feeling I have been doing this longer than you have, and that somehow your Linux setup / configuration was flawed (you mentioned 'multi CPU' on the Linux server, yet did not mention the affinity settings used in your benchmarks).

I also plan on testing a 2.6GHz Athlon 64 (1MB L2 cache, dual-channel DDR) at various clock speeds to see where it 'equals' a P4, so I can make better judgements (the Athlon performs quite well, especially considering its vastly lower clock speed). I suspect 2.6GHz+ is where it starts to tread on Intel's heels; even with 1MB L2 cache I suspect it may beat out the P4 Gallatin (Northwood + 2MB L3 cache).

Besides being uninformative (and taking my time to read, since I learned nothing given you provide no details on your test), I am also starting to find your posting method rather (or very) offensive.

Considering that if the above settings existed you could use them on your own server to make it even better than it is now, I am sure you'd agree that would be cool (assuming they implement them and you try them out, which prob won't happen).
-
You can also run a loop every minute or so (very low performance impact) that searches for and deletes the user-uploaded OGG files, as they never added an option to exclude users' custom face and/or sound files, only to limit them by size (and yeah, a 60KB limit tends to block most user-created OGG files, so long as they are larger than 60KB; face files are normally around 16KB though).
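A minimal sketch of that cleanup loop in Python. The directory name is a placeholder (point it at wherever your OFPR server writes received custom files), and since faces are JPG/PAA, matching only `*.ogg` leaves them alone:

```python
import pathlib

def purge_custom_oggs(root: pathlib.Path) -> int:
    """Delete every .ogg file under root; return how many were removed."""
    removed = 0
    for ogg in root.rglob("*.ogg"):
        ogg.unlink()
        removed += 1
    return removed

# Run it once a minute, e.g.:
#   while True:
#       purge_custom_oggs(pathlib.Path("players"))  # placeholder path
#       time.sleep(60)
```

You could equally schedule the single pass from cron instead of a sleep loop; either way the scan is cheap.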
-
Good to hear, but you didn't answer my question at all or provide any evidence. You also advised that the timeslice issue has nothing to do with it, when Suma has advised otherwise. In my previous thread Suma advised that the fps will not be configurable by server admins (which would require massive changes to OFPR_Server.exe).

So a server that sustains 50fps all the time is faster than one that can sustain only 32fps all the time (duh). This holds true so long as CPU load is under 99% (duh), and especially when **CTI IS NOT CONCERNED** (if your CTI is peaking your CPU that is another matter, which I won't discuss, as it can be set up so that the CPU does not become a bottleneck).

I know you will rebut this by saying your 4GHz CPU is not a bottleneck; however if it hits 99% load then it must, by definition, be classed as a bottleneck, and your 4GHz CPU does hit 100% in the examples you are providing, thus the sub-32fps performance. I mean, sure, 32fps does not 'lag' as such... not that noticeably anyway... but then a constant 50fps would 'lag' even less noticeably (a 20ms server simulation cycle plus player ping on a 100mbps LAN is not noticeable, especially if desync for all players is 0ms, not 1, not 5, not 100000, just 0 all the time).

The 3GHz server I refer to has a 1000 FSB and dual-channel 550 (at 500MHz) DDR RAM (www.geilusa.com), with faster timings due to the lower speed. So your server is less than 25% faster than it, not more; I know how much difference the extra FSB and memory throughput make, otherwise I wouldn't have spent the cash on the RAM...
Of course, compared to a stock 3000/800/HT Northwood with 512KB L2 cache, it would be more than 25% faster, I do agree with you (you know me better than to build an uber server with under 1000MHz mate).

I am not 'complaining' about lag either; I just made a post with a few suggestions to remove desync (it's tolerable in some ways on normal maps, but more control would be highly desirable). These requests would be easier to implement than multiple CPU support, or the ability to limit the server fps to whatever you want (255fps anyone?), as that requires massive changes, as Suma advised. But, BIS being BIS, I doubt they will incorporate even 1 of the above requests (I mean, if they don't help you then I don't really care, especially if they help me help 200+ other admins on more normal servers, and benefit the LAN community with more fair play / comps / etc).

My thread was then 'hi-jacked' (no offence intended) and spiralled out of control, mostly into free advertising for a certain 4GHz server with only a 961 FSB, with no real evidence or proof about Linux being slower, just a repeated "everyone knows Win2K is faster".

Q) Is that Win2K Server, Advanced Server, or Workstation? (Each has a different timeslice length enforced by the kernel, and the RoughNeck server still peaks at 32fps, prob averages around 28fps, so I can estimate your timeslices are 15.625ms, and thus you are likely using Win2K Workstation.)

I did say I was deliberately testing on a lower end server (to help other admins with a similar server). It is obvious that an Athlon XP PR2800+ could never have a 1000 FSB, and using high speed (500+) medium latency (CAS 2.5) memory would be folly on an Athlon XP anyway, especially an nForce 2 (Abit NF7-S) variant. (Strange how I got 50 - 26fps out of such a low end machine; perhaps other admins on more of a budget than ourselves could benefit from this.)

What was it you mentioned about multiple (not multy) CPUs on a Linux server?
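A quick sanity check on that 15.625ms estimate. Assuming (my assumption, not confirmed anywhere) that the server's frame pacing can only resolve whole scheduler quanta, the reachable fps ceilings are 1000 / (n × quantum); a hard 32fps cap fits exactly 2 quanta per frame, while a clean 20ms (50fps) cycle does not land on the 15.625ms grid at all:

```python
# Candidate fps ceilings if frame timing snaps to whole scheduler quanta.
quantum_ms = 15.625  # assumed Win2K Workstation timer granularity

caps = [round(1000 / (n * quantum_ms), 2) for n in range(1, 5)]
print(caps)  # [64.0, 32.0, 21.33, 16.0]
```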
(After advising that multiple CPUs do not work well for OFPR_Server, due to cache coherency issues, and it being limited to less than 50% on each CPU, as in [1/(# of CPUs)]% load on each CPU, which is pointless, but it was never designed around that I guess.)

You could just lock the Linux OFPR_Server process (affinity) to CPU0 and watch performance shoot up like a rocket to 50-55fps (I think it's really 50fps, the 'calibration' just tends to 'wobble' at the start).

Using multiple CPUs on Linux and Windows = roughly equal if under 32fps, otherwise Linux is faster.
Locking to CPU0 on Linux and Windows = Linux faster.
Locking CPU0 on Windows, and using multiple CPUs on Linux = Windows faster (but the benchmark is not equal, since CPU affinity was only set on Windows, not Linux, which really fudges the results, does it not?).

(I assume you do *fair* benchmark comparisons, and don't lock affinity to CPU0 in Windows then run the Linux server process over 2 or 4 CPUs, which would degrade performance, as we both agree.)

I am well aware of the cache coherency issues over multiple CPUs, as above. It's just that in your last post you mentioned multiple CPUs and Linux (basically throwing a spanner in the works for one set of benchmarks by streamlining Windows while using Linux defaults, so benchmark = unfair = not really all that useful a comparison between the two Operating Systems).

I still have yet to see an explanation of your Linux testing, regarding how the two scaled (both with locked CPU0 affinity) once they went under 50fps (or 32fps in Win2K), or rather once they hit 100% CPU. Did the Linux server suddenly drop to sub-32fps as well as the Windows server, or did they both scale more evenly (as expected) from 50fps (or 32fps in Win2K) down to 3fps as tougher and tougher test missions were used? I ask as I am really interested in the results of your test (which I suspect was rigged as explained above, but I will know soon enough).
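For reference, the CPU0 pinning described above can be sketched like this on Linux. The server binary name and arguments are placeholders for your own install, and os.sched_setaffinity is Linux-only; the equivalent shell one-liner would be `taskset 0x1 ./server -config=server.cfg`:

```python
import os
import subprocess

def run_pinned_to_cpu0(cmd: list[str]) -> subprocess.Popen:
    """Start cmd with its CPU affinity restricted to CPU0 only."""
    def pin() -> None:
        # 0 = "this (child) process"; {0} = the set containing only CPU0.
        os.sched_setaffinity(0, {0})
    # pin() runs in the child between fork and exec, so the affinity
    # mask is inherited by the server binary itself.
    return subprocess.Popen(cmd, preexec_fn=pin)

# e.g. run_pinned_to_cpu0(["./server", "-config=server.cfg"])  # placeholder
```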