
Posts posted by darkpeace


  1. Looking at that survey, it seems not to take into account the number of processors with 64-bit extensions.

    Yes, but it does report the version of Windows people are running, and provides enough information to figure out who has Athlon 64s, even if they ain't running WinXP x64 :P.


  2. I doubt an x64 version of Armed Assault will be released, at least in the medium term, due to the sheer lack of AMD64 and Intel EM64T systems running Windows® XP Professional x64 Edition or Windows® Vista*.

    * A requirement for the extra features, registers, etc. of 64-bit (long) mode. Otherwise your 64-bit processor is just running in 32-bit protected mode (still with other nice features though :P).

    A quick look over some statistics (albeit from another gaming company) reflects this:

    http://www.steampowered.com/status/survey.html

    OFP2 (or ArmA 2, whatever it gets named) has a release date around when the mass consumer will likely be transitioning to 64-bit platforms (operating system included, unlike from 2003 to today).


  3. Fully agree, Malboeuf. Getting a quad Opteron server in the coming weeks (2 deposits down) and hopefully I can add hosting large-scale Armed Assault LANs to its list of 'jobs'.

    35,608 MIPS and 12,408 MFLOPS* over 4 cores.

    Heck, it'll host 4 OFP CTI games over LAN fine :)

    (That's 35.6 GIPS and 12.4 GFLOPS) :) (dude, it is gonna be freaking awesome! ;))

    *Those are real MFLOPS, not BS super-array-vectorised SSE2 MFLOPS like certain gaming consoles 'advertise' themselves with. (Certainly not using extra per-core 'multithreading' optimizations in those figures either.)

    Good times ahead, I hope.

    (Just got sick of having multiple 'low-end' [cough] boxes, and it is SOOOO damn cost effective now.)


  4. These are the 19 MPMissions I am aware of that commonly differ from client to server, and get downloaded by some clients every single time.

    NOTE: IF THERE ARE ANY MORE ORIGINAL MISSIONS THAT DIFFER LET ME KNOW !

    FAILED CRC32 1-10_T_TeamFlagFight.Abel.pbo

    FAILED MD5 1-10_T_TeamFlagFight.Abel.pbo

    FAILED CRC32 1-16_Cooperative.Noe.pbo

    FAILED MD5 1-16_Cooperative.Noe.pbo

    FAILED CRC32 1-4_C_ShadowKiller.ABEL.pbo

    FAILED MD5 1-4_C_ShadowKiller.ABEL.pbo

    FAILED CRC32 1-6_C_LostSquad.ABEL.pbo

    FAILED MD5 1-6_C_LostSquad.ABEL.pbo

    FAILED CRC32 1-7_C_OilWar.EDEN.pbo

    FAILED MD5 1-7_C_OilWar.EDEN.pbo

    FAILED CRC32 1-8_C_DesertAmbush.ABEL.pbo

    FAILED MD5 1-8_C_DesertAmbush.ABEL.pbo

    FAILED CRC32 1-8_T_DemolitionSquad.NOE.pbo

    FAILED MD5 1-8_T_DemolitionSquad.NOE.pbo

    FAILED CRC32 1-9_T_Conquerors.cain.pbo

    FAILED MD5 1-9_T_Conquerors.cain.pbo

    FAILED CRC32 2-10_C_WarCry.Noe.pbo

    FAILED MD5 2-10_C_WarCry.Noe.pbo

    FAILED CRC32 2-11_T_HoldCastle.Noe.pbo

    FAILED MD5 2-11_T_HoldCastle.Noe.pbo

    FAILED CRC32 2-12_T_CaptureTheFlag4.Noe.pbo

    FAILED MD5 2-12_T_CaptureTheFlag4.Noe.pbo

    FAILED CRC32 2-5_Cooperative.Eden.pbo

    FAILED MD5 2-5_Cooperative.Eden.pbo

    FAILED CRC32 2-8_HoldCity.Cain.pbo

    FAILED MD5 2-8_HoldCity.Cain.pbo

    FAILED CRC32 2-8_T_CaptureTheFlag1.EDEN.pbo

    FAILED MD5 2-8_T_CaptureTheFlag1.EDEN.pbo

    FAILED CRC32 2-8_T_CaptureTheFlag2.CAIN.pbo

    FAILED MD5 2-8_T_CaptureTheFlag2.CAIN.pbo

    FAILED CRC32 2-8_T_CastleConflict.Noe.pbo

    FAILED MD5 2-8_T_CastleConflict.Noe.pbo

    FAILED CRC32 2-8_T_CityConflict.ABEL.pbo

    FAILED MD5 2-8_T_CityConflict.ABEL.pbo

    FAILED CRC32 2-8_T_RealPaintball.Intro.pbo

    FAILED MD5 2-8_T_RealPaintball.Intro.pbo

    FAILED CRC32 3-9_C_ReturnToEden.EDEN.pbo

    FAILED MD5 3-9_C_ReturnToEden.EDEN.pbo

    38 checksums failed

    Which version of the files should we be using on both server and client ?

    I know there are "various" ways to patch to 1.96, depending on what you start with, and which patches you choose to run, but I do not think the patches modify the MPmissions.

    Can anyone clarify this ?

    If a fix pack is not going to be made, just let me know which files should be used (on both client and server) and how to identify them (preferably by CRC32/MD5). I will then create my own fix pack for this long-standing 'problem' and notify the OFPwatch authors so it can be implemented automatically at mass scale, after we test it on Australian servers for a while. (The sketch below shows how the checksum lists can be generated.)
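
    In case anyone wants to reproduce the comparison, here is a minimal sketch in Python (the missions folder path is a placeholder - adjust it for your install) that prints one CRC32/MD5 line per .pbo. Run it on both client and server and diff the two outputs.

    ```python
    import hashlib
    import os
    import zlib

    # Placeholder path; point this at the MPMissions folder on each machine.
    MISSIONS_DIR = "C:/Program Files/Codemasters/OperationFlashpoint/MPMissions"

    def checksums(path, chunk_size=64 * 1024):
        """Return (CRC32, MD5) of one file, read in chunks."""
        crc = 0
        md5 = hashlib.md5()
        with open(path, "rb") as f:
            while chunk := f.read(chunk_size):
                crc = zlib.crc32(chunk, crc)
                md5.update(chunk)
        return f"{crc & 0xFFFFFFFF:08X}", md5.hexdigest()

    for name in sorted(os.listdir(MISSIONS_DIR)):
        if name.lower().endswith(".pbo"):
            crc32, md5 = checksums(os.path.join(MISSIONS_DIR, name))
            print(crc32, md5, name)
    ```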


  5. Problem: MPmissions keep downloading.

    Cause: either different file contents or different timestamps.

    Is there any chance of the "latest" original official MPmissions being released in a small fix pack ?

    Of course if the server files are outdated they will need updating. The same for the clients.

    One "fix" is just to remove the original MPmission files from all the clients so they download them and are in sync with their 'fav' server.

    I am running a CRC32/MD5 comparison over the 2 sets of files and will post the differences.

    It is confusing a lot of new players, and just plain wasting time having to wait for some people to download the original MPmissions.


  6. Not that it matters, but a voted-in admin can use the #MONITOR <time in seconds to average over> command.

    Eg: #MONITOR 1 gives very fast readouts and lets you see the spikes, if any. #MONITOR 15 through #MONITOR 60 are good for load checking during a game. #MONITOR 300 is if you want averages over 5 minutes; if you get 15fps or less using #MONITOR 300, then you're pushing the server way too hard with the current map, players, bandwidth, or (flashpoint.cfg) settings as per DS-ADMIN.RTF, or a player is desyncing badly and you may want to consider banning them.

    When they say SERVER FPS, what is really meant is "simulation cycles per second" on the server.

    DS-ADMIN and DS-USER are the relevant documents (if you have the server download packages).

    They are .RTF (Rich Text Format) files, so any decent document reader (e.g. MS Word) can view them.

    A "simulation cycle" on the server does not draw any video at all, it only really does physics calculations based on information that players send the server.

    If load on server increases this is basically what happens:

    Server gives 50fps when at 99% CPU load or less.

    Server hits 100% CPU, it gives under 50fps.

    Let's say it would hit 200% load if possible; thus it gives 25fps.

    If server would hit 333.33% load (if possible) it would give 15 fps.

    If the server is running under 25 fps, I would start looking for an upgrade or a map optimization, check that players are not the cause of desync (there are other causes), and try again; failing that, the server CPU lacks grunt. (Thus I use a 3GHz P4 with a 1GHz FSB as the MFCTI server.)

    If the server CPU is hitting 100% (even just deep spikes down to 15fps) you need a faster CPU (and a dual-channel DDR memory interface) in your server.

    So long as the physics engine (on the server) is running at 25fps (so 200% load, so to speak) no player will notice in general, especially if their 'client' framerate is 40, 50, 100, or even higher.
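
    In other words, the behaviour above amounts to a simple inverse model: the server holds its 50fps peak until the CPU saturates, then the simulation rate drops in proportion to the notional load. A minimal sketch of that model (the formula is just my reading of the numbers above, nothing official):

    ```python
    def server_fps(notional_load, peak_fps=50.0):
        """Peak fps while CPU load is under 100%, then inverse scaling."""
        return peak_fps / max(1.0, notional_load)

    for load in (0.5, 0.99, 2.0, 10.0 / 3.0):
        print(f"{load:7.2%} load -> {server_fps(load):5.1f} fps")
    # 50.00% -> 50.0, 99.00% -> 50.0, 200.00% -> 25.0, 333.33% -> 15.0
    ```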

    If one player has a dodgy link and causes desync, the server needs to work in 4 dimensions, not just 3, as it needs to keep track of where everyone is "in time" relative to one another. This of course does not use much bandwidth at all, but it will cause the server CPU to hit 100% load, and thus the server gives under 50 fps.

    On high end servers this can be 'worked around' just by having a 3 or 4 GHz CPU. Personally I would have made it more like Quake III and just lagged the fool out of the game.

    This is why entering vehicles is delayed if the DRIVER, GUNNER or COMMANDER has a slow link.

    The slow link players insist they use no bandwidth and don't affect performance - THEY ARE WRONG, as no CPU can process what we do in 4 dimensions and somehow keep the slow link players from dropping forever; eventually it has to reach a "break point", which I am sure many people have seen on low end MFCTI servers with 5 players on dial-up.

    As you can start to understand, server FPS has little bearing on client (player) FPS.

    Heck, if my server ran (or even had long spikes) at 160fps during MFCTI like my client sometimes can, I would be damn happy indeed.

    There is a point where the server load becomes far higher than the client load. This is why the old "Battlefields" single player map required a 1GHz PC in its day; better yet, host it on a dedicated server and join it.

    The server takes the load off the clients, so long as there is never any client related desync.

    Thus admins like all players to have sub-85ms pings with 512kbps downstream and 128kbps (if not higher) upstream to the servers. It really does not work like other games, and is FAR, FAR, FAR too friendly towards those on dial-up, or playing internationally, IMHO.

    Heck, it was made for LANs as far as I can see, just with an excellent (too excellent) system for keeping players in sync (for its day anyway).


  7. WTF ?

    You're saying if All Seeing Eye [ASE] is installed the server 'runs' at 50fps ?

    Is this consistent ?

    Is the client also running ASE ?

    Being from UDPSoft it might do 'strange' things with the timing functions.

    I am aware of their work with Counter-Strike (during the late Betas, before it went Retail and downhill fast).

    Running Linux servers now anyway, so resource usage is lower, security is (generally speaking) better, and server FPS is 50 unless running a heavy mission; even in CTI it can stay around 25 (20-30 fps), so she does well now.

    Needs some minor tweaks here and there, but she does well.


  8. SuSe and RedHat/Fedora are closer to Solaris than Linux IMHO.

    Sort of a hybrid ground, it has pros and cons, one con of which was the Flashpoint server 'issues' with setup.

    I agree the 1.96a (Win32) server (extra long id protection) would be equally useful to a Linux server.

    Also regarding the tolower program wrecking a distro, I personally just use "sh ofp-server-1.96.shar" each time to convert to lower case, as it only 'runs' from the server folder.

    I was considering VMware, to run Linux under Linux :P (like Gentoo or an older Red Hat, etc) just to get it working....

    Using the LIBS trick really does screw over some parts of SuSe, but only temporarily, and nothing serious gets broken.

    Now to just slowly undo some other (suggested here) changes I've made and see if it breaks OFPR Server.

    Also, without using -nomap the server does initially start by taking a large chunk of memory, but 'Info Centre / Memory' only shows:

    363.8 MB used for 'Application Data' (Linux OS + OFPR Server without -nomap)

    600.5 MB Disk Cache (overkill)

    50.5 MB Disk Buffers

    9.2 MB Free Memory (this is normal btw, disk cache shrinks if memory is required)

    EDIT: Oh, and swap partition usage was only 4 kB of the 800MB or so. You really need to hammer it to make it page.

    The above figures are about 4 hours (240 min) into an MFCTI 1.16a Nogova (official Mike Melvin, aka mf256) release, so it ain't bad.

    The longest game we did was 20 hours, with scientific notation for both sides' final resource counts. I am sure Linux will hold its ground here as well. (16 fps, incorrectly[?] reports 586 MB usage, but this is on an Athlon [Barton] PR2800+, so on the real server it will perform far better 4 hours in, once it is set up - see below.)

    I only just got it working on SuSe thanks to many people (Benu and Shorty mainly) (thank-yous were sent :) )

    I'll have to try it with -nomap on the server, see how performance and memory usage are affected.

    I still need to recompile the kernel sometime, I think; I doubt it is getting the most out of the CPU / system.

    Also need to compile some D-Link DGE-500T drivers (currently using nForce2 onboard 10/100 LAN).

    There is no way to recompile the OFPR Server though, is there ? (even with the above limitations)

    Besides the lower case 'issue', is there anything else you think I may need to be aware of ? (eg: do any addons exist that require uppercase filenames to work, etc ?)

    Once it is all working I plan to port it over to a very high end (1000 MHz FSB + Dual Geil 550 @ 500 CAS2.5, etc) Pentium 4.... thus I am likely to need to know how to lock the process to CPU0 affinity (so it does not try to run over multiple CPUs) - see the sketch below. This would be better than turning off HyperThreading, as OS processes, etc (TS2 even, perhaps) can utilise virtual CPU1.
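
    A minimal sketch of one way to do that pinning on Linux (the server binary name and config flag here are assumptions - substitute your own launch line):

    ```python
    import subprocess

    # Hypothetical server binary and config flag; adjust for your install.
    SERVER_CMD = ["./server", "-config=server.cfg"]

    # taskset (util-linux) restricts the process to CPU 0, leaving the other
    # (virtual) CPUs free for the OS, TS2, etc. An already-running process
    # can be pinned instead with: os.sched_setaffinity(pid, {0})
    subprocess.run(["taskset", "-c", "0"] + SERVER_CMD)
    ```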

    Damn good way to learn Linux really fast though; I've learned a lot of stuff, some of which will aid me. (Work certified SuSe for servers, so I figured I may as well learn it, and I have learnt the problems associated with it.)

    Cheers to the OFP Linux community - Thanks Guys.

    (About 40% of OFPR Servers are Linux based now :) )

    It works, but it is 'far' from complete, although 80% of the rest I can figure out / read up on.

    Nice to know that 2x256MB (Dual DDR) is ample for a Linux server as well, and using -nomap I suspect the 'Application Data' memory usage will be lower.

    What does -nomap do for servers anyway ? It must be quite different, as I can't see it doing the same thing for the server under Linux that it does for clients..... (anyone ?)

    I'll keep tuned to forums,

    Thanks again :)


  9. Yeah, I am having issues as well, even using; ENV LD_ASSUME_KERNEL=2.4.1; (See the first few chapters of the SuSe 9.1 Professional Admin Guide for information; maybe try Google as well on LD_ASSUME_KERNEL)

    Get Segmentation Fault when not using -nomap (and it tries to use around 256mb when loading)

    Get Sockets error when using -nomap (and it uses exactly 4096 kb when loading)

    Server name has no spaces or hyphens, just; twilightofp

    I heavily doubt the DNS / hostname is incorrectly setup, is there anything anyone knows of that might be causing the issue though ?

    Tried with Firewall off aswell.

    Might be due to newer kernels reserving memory (or LD_ASSUME_KERNEL is doing jack all for ofpserver), and only letting it use 4096 kB when loading, which thus leads to problems.

    I am getting to the point where I am thinking of installing a 2.4.1 kernel under a http://www.vmware.com virtual machine.

    Any confirmed 100% working Linux distros for OFP 1.96 that we know of ? (Do I need to dig up Red Hat 7.2, for example ?)

    I read that someone in these forums was getting 55fps using SuSe 9.0 (or 9.1) Professional... might be a load of bull, maybe they know something we don't.


  10. Phew, so it is mainly servers on 1.91 and earlier that are affected ?

    We get bugger all in Australia, but they do join sometimes; personally I don't look into stuff that can break Flashpoint (like cheats or ID cracks, etc).

    I even have 2 legal copies of the game (GOTY and Gold) and 2 player IDs.


  11. Speaking of which, is there like a giant blacklist of dodgy or constantly abusive players / idiots you can subscribe to ?

    Would be nice on a few local servers, sure it would not stop the determined asshole, but it would stop the average fool.

    Of course checks would be made against local player id, in case a duplicate is found (because they can pick their player id these days)


  12. That old speed cheat for Half-Life wouldn't help in this respect would it ?

    I would not know as I do not have it, nor have I ever tried it.

    I do recall it modifying such, or similar, 'variables' (cough) with regard to the way the Windows OS handles sleep time.

    Just an idea, that's all :D

    PS: The quantum in WinXP Pro 'appears' to be 15.625 ms, which permits 64 such timeslices to occur each second.

    However it seems to get one 'and a bit', which appears to round up to 2 x 15.625ms (or 31.25ms, which permits only 32 such 'combined timeslices' to occur each second).
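
    The arithmetic behind those figures, for anyone who wants to check it (this just restates the quantum numbers above, nothing more):

    ```python
    quantum_ms = 15.625                 # apparent WinXP Pro scheduler quantum
    print(1000 / quantum_ms)            # 64.0 timeslices per second
    print(1000 / (2 * quantum_ms))      # 32.0 'combined' slices per second -
                                        # matching the observed 32fps peak
    ```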

    Also, in Windows XP I have seen it do 50fps (on 1.96), yet it was only once; normally it peaks at 32fps.

    This was tested on several CPUs, all fresh installs, in case the quantum differed on some platforms (eg: HyperThreading, dual and quad CPU setups, single CPU, P4s, Athlons, etc, and multiple core types as well).

    I am thinking of trying older versions of the server as well - 1.91, 1.85 and 1.75 (and even 1.46) - to see the results, as I swear I've seen the older versions do 50fps (at least more frequently or consistently anyway) in Windows on the same server hardware.


  13. wow, 2 Malbos, hahaha, look at all that typing hehe

    Imagine the server capabilities if they were combined

    lol

    Anyways when it boils down to it BIS should release the source code, or at least part of it (although with VBS1 I don't know if contractual agreements would permit this).

    Then some really neat changes and improvements could be made :)


  14. Dude I joined your server, get over it.

    My IP is not static

    You don't have my player ID

    You don't have my player name

    Isn't using admin privilege to needlessly obtain an IP an abuse of permissions ?

    Anyways, if you're using a decent browser this shouldn't get sampled down:

    [screenshot: RNjoin.jpg]

    I've removed any information that might identify me for privacy concerns.

    eg: time joined, all player names, etc

    JPEG is progressive, at factor 30, 1024x768

    progressive made it smaller in this case

    :P

    I've been there, you just don't know who or when.


  15. Incorrect. I logged on a few days ago when it was empty:

    #VOTE ADMIN 1

    #MONITOR 1

    Checked peak fps while idle in lobby

    Alt+F4'd out

    You don't have my player ID

    My player name is not "DarkPeace" :P

    Your server peaked at 32fps as I suspected it would.

    My server is on a LAN, and has a 1000 FSB with RAM to match, thus it does not desync as you claim it does; you are VERY CONFUSED. Why you keep saying it does I have no ****ing idea, I really don't - this thread has nothing to do with what you consistently bring up.

    As outlined above:

    ------------------

    I went out and built an Athlon XP PR2800+ server for lab testing, as it is similar to another server that is online, which some other people I know can afford / justify (they are saving for an Athlon 64 2.4GHz+). They got 2U rack space, so an overclocked box was *not* an option (duh).

    Your recommendation of getting a 4GHz CPU will not work in their scenario (ever try cooling a 4GHz CPU in a 2U rack ? Don't say yes even if you have; it is so off topic I can't believe you bring it up in every one of your replies).

    Point is, different servers need different configurations for MFCTI, as I am sure you are aware; however you seem most uncooperative in sharing any meaningful advice with other parties when requested.

    How does saying a PR2800+ Athlon (built to match another server for testing here, so the server can sit 800km away during lab tests while players use the real one) is slower than a 4GHz Pentium 4 Prescott help anyone ? ....

    It doesn't; in fact most people are aware it is slower. I only built the server to match an already existing one.

    Then you say my LAN server (which is a 3ghz/1000fsb P4 Northwood) lags ?

    Did you even read the above ? Or do you just have a macro that replies with the same stuff every time ?

    Better change it for when the P4 Gallatin core and Athlon 64FX 2.6ghz+ servers are common, as they will be faster than your current server (which I assume is in a server tower or perhaps a 4U rack in a farm)

    I am trying to help people with a lower end server, so I went out and built a similarly specced server with my own cash (I made it my new fallback PC to learn SuSe Linux 9.1 on for work, and put an R9800 Pro in, so it's not exactly the same, but I had to justify forking out the dollars myself for testing, you know).

    Now that I have pointed out that there are clearly 3 different servers, do you follow me yet ?

    Just to make it 100% clear: the 3GHz server does not desync; the PR2800+ does desync a tiny, insignificant amount on LAN during lab testing.

    I noticed immediately, when locking the bandwidth to match dual ISDN or below for the client (over LAN, so ping was low, but the data rate was set to each of: GigaEthernet, Fast Ethernet, Ethernet, 2048/2048, 1024/1024, 1536/256, 512/512, 512/128, 384/96, 256/64, 128/128, 96/96, 64/64, 56/48, 56/33.6, 56/28.8, 48/33.6, 48/28.8), that at lower speeds the server CPU usage rose heaps; even at 128/128, when the going got tough, the server 'hiccuped' as you put it.

    Thus it is obviously not identical to the real world; however ping does not affect OFP, as you say, since players from the other side of the globe with high data rates and moderate (cable) pings get no desync on your server :) (useful information I can deduce)

    The only problem with the lab tests was that the ping remained near zero while the data rate was locked, so it was not 100% representative of the lower speeds (as most dial-up users get over 125ms pings).

    I did watch, as you suggested, the CPU load and bandwidth monitors during the test, and the CPU load spikes (which push server fps down if it hits 100%); the lower speeds (below 384/96) caused the server CPU to spike in MFCTI testing.

    My point is, the settings I requested can help the fight against desync in a manner different to yours (not everyone gets a 4GHz server hosted in a 2U or 4U rack, which are our only options in Australia, and many other admins may benefit from the above settings).

    With them you could streamline the player base to a server, and stop the clients that hit the server CPU the hardest from playing on a given server.

    RTCW already has the above features (to a degree)

    Now, I am sure anyone with a server hitting 100% load would benefit from the features in the 1st post (going back to the topic of the thread, which is not that a certain 4GHz server has no desync, or that a Gigabit Ethernet server does - you are mistaken on that).

    I bet I can guess your reply already. Same *incorrect* assumptions that any non-4GHz server desyncs (oh, and that yours is the fastest CTI server online).

    Yet the fact remains it has nothing to do with the requests I suggested in post #1 of this thread.

    The majority of servers are slower than yours, correct ? - True

    Thus the slower the server, the more it would benefit from the suggestions in post #1 - True

    Thus if another highly demanding server process came along that hit a 4, 5 or even 6GHz CPU really hard, the settings would help - True

    I've answered my own post, as no-one else helped.

    No animals were harmed, except the dignity of a cow.

    Seriously though, BIS should charge you for all the advertising you do for that server.


  16. Did the search, and no such comparison was made (as per my request above).

    3 years ago is quite a while; was the comparison you spoke of (far above) a Win98 vs Linux benchmark on the old RN Dual Xeon server ?

    I did not know Win98 supported dual CPUs / MPS 1.1 or 1.4 (it doesn't, but Linux does).

    That answers my question (as far as I am concerned the comparison was made before the cache coherency issue was disclosed)

    That's all I wanted to know, and it has been dragged out to like 10+ posts, all a waste of time to any reader.

    Not even worth summarizing; I'll perform my own tests.

    FYI: My 3000MHz (Northwood core Intel Pentium 4) on a 1000 FSB using Dual Geil DDR550+ does not ****ing lag, especially on a 1Gbps backbone LAN, plugged into the backbone, with half the players on Gigabit anyway.

    I mean, seriously, if you think that the above setup lags, you need a reality check (then again, you did imply that 2x15.625ms was shorter than 20ms, and also advised that the fps issue has nothing to do with timeslice length).

    I asked a few simple questions

    You replied saying my server lagged (it is on a LAN and has 0ms desync 100% of the time; the testing server above, the cruddy Athlon XP PR2800+, also rarely hits 100% CPU in MFCTI).

    You then just kept saying Windows was faster, and briefly explained (in layman's terms) the dual CPU issue (neither of my servers is dual CPU; the comment referred to OFPR2 and BIS's progress on it).

    Then you said my server lagged a few more times.

    Basically all the above posts are just ads for your server, which from Australia both Cage and myself get 250ms+ to (using 32 byte packets); we also have several bandwidth bottlenecks between here and there via the international links, etc, etc (long explanation short: we do not get 1536kbps downstream from the States).

    Thus our desire to set up local servers and LAN communities in our own country.

    Sure beats playing on a server several time zones away, which you keep saying is the fastest in the world. I assure you that it isn't, since it hits 100% CPU and stays there, and PEAKS at 32fps (thus it sustains less).

    My guess is your Min/Max bandwidth settings are really, really high - so high that the CPU hits 100% load.

    But it's fast, and doesn't lag for local American players; good for you.

    Thanks for hi-jacking this thread, and boosting your post count.

    FFS: LAN lagging... god, you come up with some crap.


  17. We started our dedi-server but I noticed that other servers can check on large ogg files and kick those players etc etc.

    Please reply with suggestions on how to config our server to make it as pleasant as possible.

    No mention of large XML pics there; I just wanted to delete OGG files, dude. (Wasn't there a dimension limit, like 64x64 or 128x128 ? And it only takes JPG and PAA, which are highly compressed image formats, anyway.)

    I didn't know that OFPR automatically blocked custom files over 65536 bytes (64 kb), I'll have to test this out.

    Thanks for the tips :)


  18. Please provide the URLs with the explanation when providing a response; it is just forum ethics.

    Were the benchmarks performed recently, where a Windows server and a Linux server, both running 1.96 and both locked to CPU0 affinity only, were equally compared, and where the Windows server outperformed the Linux one ?

    I have never seen such a thread in the forums, even when searching; there is little real information available, just word of mouth (usually the loudest, most frequent mouth too :P).

    (Sure, there are some dodgy comparisons, but none where the setup was equal on both platforms, mainly regarding the CPU affinity; as I've stated multiple times, this is where my interest lies. You also never provided an example of how the performance scaled down from 50fps as tougher and tougher test missions were loaded for benchmarking.)

    I asked for an explanation of your Linux testing - how it was done, etc. I still have not seen this; if you provide a link / URL to it, it will be far easier for all.

    I suspect this scenario currently:

    -Linux was tried once, with CPU affinity spread over all CPUs, and as such suffered poor performance. Windows was then tried multiple times with CPU affinity locked to 1 CPU (that is about as rigged as you can get, test-wise)

    -I have several servers I use for testing

    -They are for LANs

    -You will notice above I don't boast about a certain server being the fastest in the world, etc

    -Tried several different setups above using most configurations available today (see: http://www.tweaktown.com/document.php?dType=article&dId=647 for current Pentium 4 CPUs, including the Gallatin, plus the AMD CPUs most are aware of; I have yet to try an Athlon 64/FX or Opteron 1xx series in my own testing)

    -I also don't use offensive language such as your repeated "Get a clue" comments, at all.

    -I have provided a testing method, and other ideas, and have yet to hear any real feedback regarding comparisons

    -I find a 3GHz/1000FSB (2400/800/HT overclocked) quite ample for MFCTI 0.99 to 1.16a core based missions (sadly many are based on 1.1 yet use their own internal revision such as 1.157, etc).

    Especially once CPU affinity is set to real CPU0 (HyperThreading is enabled in BIOS, yet affinity is locked; it boosts performance slightly, and other tasks can be set to use virtual CPU1, vs having the CPU appear to the OS as just CPU0).

    I only asked for info that was not provided over several posts. I am starting to get the feeling I have been doing this longer than you have, and that somehow your Linux setup / configuration was flawed (you mentioned 'multy CPU' on the Linux server, yet did not mention the affinity settings used in your benchmarks).

    I also plan on testing a 2.6GHz Athlon 64 (1MB L2 cache, Dual DDR) at various clock speeds to see where it 'equals' a P4, so I can make better judgements (as the Athlon performs quite well, especially considering its vastly slower clock speed). I suspect 2.6GHz+ is where it starts to tread on Intel's heels; even with 1MB L2 cache I suspect it may beat out the P4 Gallatin (Northwood + 2MB L3 cache).

    Besides being uninformative (and taking my time to read, while I learn nothing since you provide no details of your test), I am also starting to consider your posting method rather (or very) offensive.

    Considering that if the above settings existed you could use them on your own server to make it even better than it is now, I am sure you'd agree that would be cool (assuming they implement them and you try them out, which prob won't happen) :)


  19. You can also run a loop every minute or so (very low performance impact) that searches for and deletes the user-uploaded OGG files, as they never added an option to exclude users' custom face and/or sound files, only to limit by size. (And yeah, 60kb tends to block most user-created OGG files, so long as they are larger than 60kb; face files are normally around 16kb though.) See the sketch below.
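
    A minimal sketch of such a loop (the custom-files directory is hypothetical - point it wherever your server stores uploaded files):

    ```python
    import os
    import time

    # Hypothetical path to where the server stores players' uploaded files.
    CUSTOM_DIR = "/opt/ofp/users"

    while True:
        for root, _dirs, files in os.walk(CUSTOM_DIR):
            for name in files:
                if name.lower().endswith(".ogg"):
                    path = os.path.join(root, name)
                    try:
                        os.remove(path)      # drop user-uploaded sounds
                        print("deleted", path)
                    except OSError:
                        pass                 # file vanished or in use; skip
        time.sleep(60)                       # once a minute, negligible load
    ```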


  20. Good to hear, but you didn't answer my question at all or provide any evidence. You also advised that the timeslice issue has nothing to do with it, when Suma has advised otherwise.

    In my previous thread Suma advised that the fps will not be configurable by the server admin (as that would require massive changes to OFPR_Server.exe).

    So a server that sustains 50fps all the time is faster than one that can sustain only 32fps all the time (duh). This holds true so long as CPU load is under 99% (duh).

    This is especially so when **CTI IS NOT CONCERNED** (and especially if your CTI is peaking your CPU, which I won't discuss, as it can be set up so that the CPU does not become a bottleneck).

    I know you will rebut this saying your 4GHz CPU is not a bottleneck; however if it hits 99% load then it must by definition be classed as a bottleneck, and your 4GHz CPU does hit 100% in the examples you are providing, thus the sub-32fps performance. I mean, sure, 32fps does not 'lag' as such... not that noticeably anyway... but then constant 50fps would 'lag' even less noticeably (a 20ms server simulation cycle plus player ping on a 100Mbps LAN is not noticeable, especially if desync ms for all players is 0 - not 1, not 5, not 100000, just 0 all the time).

    The 3GHz server I refer to has a 1000 FSB and Dual DDR550 (at 500MHz) RAM (www.geilusa.com), with faster timings due to the lower speed. So your server is 'less than' 25% faster than it, not more; I know how much difference the extra FSB and memory throughput make, otherwise I wouldn't have spent the cash on the RAM....

    Of course, compared to a stock 3000/800/HT Northwood with 512kB L2 cache, it would be more than 25% faster; I do agree with you there (you know me better than to build an uber server with under a 1000MHz FSB, mate :) ).

    I am not 'complaining' about lag either; I just made a post with a few suggestions to remove desync (it is good in some ways for normal maps; however more control would be highly desirable). These requests would be easier to implement than multiple CPU support, or the ability to limit the server fps to whatever you want (255fps, anyone ?), as that would require massive changes, as Suma advised.

    But, BIS being BIS, I doubt they will incorporate even 1 of the above requests. (I mean, if they don't help you then I don't really care, especially if they help me help 200+ other admins on more normal servers, and benefit the LAN community for more fair play / comps / etc.)

    My thread was then 'hi-jacked' (no offence intended) and spiralled out of control, mostly into free advertising for a certain 4GHz server with only a 961 FSB, with no real evidence or proof about Linux being slower, just the repeated comment "everyone knows Win2K is faster".

    Q) Is that Win2K Server, Adv Server, or Workstation ?

    (Each has a different timeslice length enforced by the kernel, and the RoughNeck server still peaks at 32fps, prob averaging around 28fps, so I can estimate your timeslices are 15.625ms; thus you are likely using Win2K Workstation.)

    I did say I was deliberately testing on a lower end server (to help other admins with a similar server). It is obvious that an Athlon XP PR2800+ could never have a 1000 FSB, and using high speed (500+), medium latency (CAS 2.5) memory would be folly on an Athlon XP anyway, especially an nForce2 (Abit NF7-S) variant.

    (Strange how I got 50 - 26fps out of such a low end machine; perhaps other admins on more of a budget than ourselves could benefit from this.)

    What was this you mentioned about multiple (not "multy") CPUs on a Linux server ? (after advising that multiple CPUs do not work well for OFPR_Server, due to cache coherency issues, and it being limited to less than 50% on each CPU - as in, roughly [100 / (# of CPUs)]% load on each CPU - which is pointless, but it was never designed around that, I guess)

    You could just lock the Linux OFPR_Server process (affinity) to CPU0 and watch performance shoot up like a rocket to 50-55fps (I think it's really 50fps; the 'calibration' just tends to 'wobble' at the start).

    Using multiple CPUs on Linux and Windows = roughly equal if under 32fps, otherwise Linux is faster

    Locking to CPU0 in Linux and Windows = Linux faster

    Locking CPU0 in Windows, and using multiple CPUs in Linux = Windows faster (but the benchmark is not equal, since CPU affinity was only set on Windows, not Linux, which really fudges the results, does it not ?)

    (I assume you do *fair* benchmark comparisons, and did not lock affinity to CPU0 in Windows, then in Linux run the server process over 2 or 4 CPUs, which would degrade performance, as we both agree.)

    I am well aware of the cache coherency issues over multiple CPUs, as above.

    Just in your last post you mentioned multiple CPUs and Linux (basically throwing a spanner in the works for one set of benchmarks - streamlining Windows while using Linux defaults, so the benchmark = unfair = not really all that useful a comparison between the two operating systems).

    I still have yet to see an explanation of your Linux testing, regarding how the servers scaled (both locked to CPU0 affinity) once they went under 50fps (or 32fps in Win2K), or rather once they hit 100% CPU. Did the Linux server suddenly drop to sub-32fps, as well as the Windows server, or did they both scale more evenly (as expected) from 50fps (or 32fps in Win2K) down to 3fps as tougher and tougher test missions were used ?

    I ask as I am really interested in the results of your test (which I suspect was rigged as explained above, but I will know soon enough)


  21. Here is a question for you Mr RN Malboeuf:

    Does a Linux server peaking at 50fps (at 99% or less CPU load) scale the same as a Windows server peaking at 32fps ?

    eg:

    -At 50% load they will both run at peak fps (50 vs 32)

    -At 99% load they will both run at peak fps (50 vs 32)

    -At 100%+ load they will start to slow down

    *Now permit loads above 100% (for simplicity's sake, have it just mean *below peak fps*, where 100% is peak fps, and more means it runs proportionally slower, as documented)

    -At 200%* load would the Linux Server run at 25fps, while the Windows server ran at 16fps ?

    -At 400%* load would the Linux Server run at 12.5fps while the Windows server ran at 8fps ?

    If not (as in, they scale the same past 100% load, aka both do 16fps at 200%* normal load), then please explain why, since you obviously understand this far better than anyone else.

    If so (as in, Windows and Linux scale differently past 100% normal load), then please explain your answer as well.

    I would be most interested in your reply to this.

    Please don't use the 'you can't have more than 100% load' argument; as I defined it above, under 100% CPU load = peak fps (duh), but once you hit 100% the server 'slows down its outputs per second' to compensate...

    So past 100% CPU load, do Windows and Linux scale the same, or differently ? Only you have the knowledge to answer this question, as you say you have extensively tested it.

    I look forward to your reply / explanation. :)
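
    (For clarity, here is the hypothesis from the list above in code form - this just encodes the scaling model as I defined it, it is not measured behaviour:)

    ```python
    def fps_under_load(peak_fps, load):
        """Scaling model defined above: peak fps up to 100% load, then inverse."""
        return peak_fps / max(1.0, load)

    for load in (0.5, 0.99, 2.0, 4.0):
        linux = fps_under_load(50, load)    # Linux server peaking at 50fps
        windows = fps_under_load(32, load)  # Windows server peaking at 32fps
        print(f"{load:4.0%} load: Linux {linux:5.1f} fps, Windows {windows:5.1f} fps")
    # 200% load: Linux 25.0 fps, Windows 16.0 fps
    # 400% load: Linux 12.5 fps, Windows  8.0 fps
    ```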


  22. This is going nowhere fast.

    Cage, like me, is I suspect one of the few with 1536/256 ADSL; I would call him my equal in any local OFPR game - he is an exemplary soldier. How he performs on an overseas server, however, remains to be seen.

    Frankly I am surprised he gets 100ms to an American server (considering the speed of light, electricity, and other laws of physics, he should be getting around 250 - 300 ms).

    The servers several time zones away look empty to me too, and fewer are in use. Wow, that makes perfect sense: this would be because you are around 6 or so hours behind them, while we are 10 hours ahead of them. I doubt you even play on any non-American servers, let alone servers in another time zone.

    Eg: The US servers look empty to me now mate (different TimeZones would be the obvious reason), but I am sure if I didn't sleep for 24 hours and recorded the results every 60 sec then the servers (in any country) would be far, far from idle.

    Using the same logic you applied above, that means only 6 USA servers are active out of 57, and the other 51 should be decommissioned. You cannot seriously be thinking the European servers (in the homeland of Flashpoint, no less) are idle so often.

    Do you honestly think people would put resources into hosting idle / empty servers ? The ISPs would deactivate them within 2 months of almost no use, for sure.

    I assure you, I have little (if any) catching up to do, and am yet to see even one shred of proof that Windows servers are faster (especially those stuck peaking at 32fps).

    If a server peaks at 50fps (20ms timeslices), and another server peaks at 32fps (2x15.625ms timeslices), it is very obvious which one will be faster. It isn't really a question of operating system; however, statistically speaking, almost 100% of Windows servers peak at 32fps, and almost 100% of Linux servers peak at 50fps.

    When your argument requires 31.25ms < 20ms, the logic alone is flawed.

    The same basic fundamentals of math apply to the claim that 32fps gives the same performance as 50fps.

    For the same reason, reducing floating point accuracy over distance may improve performance (or decrease CPU load, and thus provide a smoother MFCTI experience).

    Now, forgetting whether your server can do anything: it is only 25% faster than a 3GHz server, and only has 1MB cache.

    MinErrorToSend=0.02 (double the default) might help in CTI/RTS missions, whereas MinErrorToSend=0.005 (half the default) might help in smaller close quarters combat on fast servers.

    Thus having multiple server / flashpoint cfg files is the way to go, especially at LANs, where bandwidth is rarely an issue - see the sketch below.
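
    For example, something like this (a sketch only - the file names are made up, 0.01 as the default MinErrorToSend follows from the doubling/halving above, and the server is launched with the matching -cfg= file):

    ```
    // flashpoint-cti.cfg - large CTI/RTS missions: tolerate more error,
    // send fewer updates, save CPU and bandwidth
    MinErrorToSend=0.02;

    // flashpoint-cqb.cfg - small close-quarters missions on a fast LAN box:
    // tolerate less error for smoother movement
    MinErrorToSend=0.005;
    ```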

    Now, you can say this does not matter on your server; well, that's 1 of 276 servers, and the other 275 server admins no doubt are after settings they can also experiment with and record results for, and their servers may only be 2.4GHz, give or take.

    I am currently testing this (deliberately) on an Athlon XP PR2800+ (2083MHz, Barton, 512 kB L2, 333 FSB), and 2 hours into "MFCTI 1.16A Nogova, Heavy Resistance, High Income, Weather" (out since the 17th, from Mike Melvin) the server CPU sometimes spikes to 100%, but sits at a rather nice 97%, thus giving 30-32fps on a Windows server.

    Now the same test on a Linux server, with the CPU at 97% load, would yield 30 - 50fps (30fps during the spikes and 50fps when under 100% load).

    So keeping the server around 97% lets the peak fps occur more often (duh), thus raising the average and the noticeable (player perceived) performance.

    Consider that this is on an older spec server, 2 hours into a heavy game.

    When I am shown a Win2K server peaking at 50fps I will want to know how it was done.

    Ideally I think the trick is running around 97% average load, so on a decent server (far, far faster than the above) an almost constant 40-50fps can be achieved in MFCTI.

    Also, I still have yet to see the proof / formula you speak of that indicates that 32fps =(same performance as)= 50fps.

    Considering your 4GHz server is running under 32fps (you say it hits 100% load often, thus it can't be at 32fps in MFCTI), and almost all your players are on high speed American cable with low ping and high bandwidth, I would conclude, based on the facts above, that Linux can be faster when/if configured properly.

    Just like a Windows server, it takes ages to set up, tweak, and record results (#MONITOR 1, 60, 120, 300 & 1800 used, over several games) to balance settings and CPU at that 'magic' 97%.

    So, as is obvious to anyone, there is usually more than one solution to a problem. Some are efficient and scale well (over a range of CPU grades); others throw raw CPU power, bandwidth, mass advertising (it helps convince the masses, I'll give it that much), resources, etc at it.

    Both are perfectly acceptable responses to a problem (neither party is ignoring it, which would be the worst thing to do).

    :) :) :)
