
Leon86

Member · Content Count: 2263

Posts posted by Leon86


  1. The DPI war with gaming mice is really pointless, crazy even. Even the highest-sensitivity players will never benefit from anything above 2000 DPI. The highest DPI settings are always faked: the mouse outputs more counts than the sensor actually reads, or even worse, the counts are created in the driver. This is why high DPI always gets you jitter.

    If you're using a normal sensitivity, it's best to set the DPI low, in the 400/800 range, so the mouse has a clean output and the counts don't oversaturate the USB connection, otherwise you'll have jitter again (rough numbers are sketched below).
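
    To put the saturation argument in numbers: a mouse reports movement as per-poll count deltas, and if the delta field is, say, a signed 8-bit value (an assumption for illustration; many modern mice use wider 16-bit deltas), there is a hard cap on hand speed before counts clip. A minimal sketch in Python:

        # Sketch: max hand speed a mouse can report before the per-poll
        # delta saturates. Assumes a signed 8-bit delta (+/-127 counts
        # per report); many modern mice use 16-bit deltas, so treat
        # these numbers as illustrative only.

        def max_speed_m_per_s(dpi, polling_hz, max_delta=127):
            counts_per_second = polling_hz * max_delta   # most counts reportable per second
            inches_per_second = counts_per_second / dpi  # counts -> inches of travel
            return inches_per_second * 0.0254            # inches -> metres

        for dpi in (400, 800, 1600, 5600):
            print(f"{dpi:>5} dpi @ 500 Hz: {max_speed_m_per_s(dpi, 500):.2f} m/s")
        # 400 dpi tolerates ~4 m/s of hand movement before clipping;
        # at 5600 dpi the same report format clips around 0.29 m/s,
        # well within a normal fast swipe.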


  2. hey, please, I need a fast answer,

    Arma 3 is on sale at Steam and I really want to play Altis Life

    my PC has

    AMD Athlon II X4 640

    4 GB RAM

    Nvidia GeForce GTS 450 1 GB

    do I have a chance to play it online with low graphics without much lag? like 30 fps would be really good

    please, I need to know this weekend while it's on sale

    thanks for answers

    That depends on Altis Life; I'm not sure how CPU-heavy it is, and it might vary from server to server.

    See if you can find some info on performance in Altis Life specifically, and what CPUs those players are using.


  3. You usually set your in-game mouse sensitivity to a very low level and turn your DPI up.

    I've been playing mainly FPS shooters for quite a few years, and plenty of competitive players prefer to be able to do a 360° turn within ~20 cm of mouse movement for better aiming accuracy, but if you play a fast-paced shooter, you have to reduce that movement distance.

    As an example, I play all my FPS games at 5600 DPI, which is not common.

    5600 DPI is ridiculous, well beyond what any mouse sensor will accurately register. I recommend finding out what the native DPI of the mouse sensor actually is, then setting it to that (or half) and adjusting game sensitivity until you get your preferred cm/360; the arithmetic is sketched below.
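
    For reference, the cm/360 conversion is simple to compute. The sketch below assumes an engine where each mouse count turns the view by sensitivity × 0.022 degrees (the Quake/Source convention; Arma uses its own sensitivity scale, so the yaw constant here is an assumption):

        # Sketch: convert DPI + in-game sensitivity to cm per 360-degree
        # turn, and solve for the sensitivity that hits a target cm/360.
        # Assumes yaw = 0.022 degrees per count (Quake/Source convention);
        # other engines use different constants.

        YAW_DEG_PER_COUNT = 0.022

        def cm_per_360(dpi, sens, yaw=YAW_DEG_PER_COUNT):
            counts_for_360 = 360.0 / (sens * yaw)  # counts needed for a full turn
            inches = counts_for_360 / dpi          # counts -> inches of mouse travel
            return inches * 2.54                   # inches -> cm

        def sens_for_cm360(dpi, target_cm, yaw=YAW_DEG_PER_COUNT):
            return 360.0 * 2.54 / (dpi * yaw * target_cm)

        print(cm_per_360(800, 2.0))       # ~26 cm per 360 at 800 dpi, sens 2.0
        print(sens_for_cm360(800, 20.0))  # sens ~2.6 for the ~20 cm/360 quoted above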


  4. Regarding the GCN availability, I don't think that is the real reason. You can't play Arma 3 on much below that level anyway.

    You don't need GCN/Kepler to play Arma; it runs fine on my GTX 470, it should run similarly on a 5870 or 6870, and more than fine on a 6970. I've even seen someone on this forum claim he runs it on a 3770K's onboard video; if that's tolerable, it'll probably still be very playable on a GTX 280 or a 4890.

    Anyway, supposedly they'll have a new round of developer invites for Mantle soon-ish; a public SDK "later this year" is not very specific, unfortunately.


  5. Really? That would be a no. Or do you mean two cores as "multiple"? Draw calls are the number one cause of CPU stalls. There are several tools out there; you can check it yourself. Even Frostbite and Crytek quit their optimization for multithreading, due to the ever-increasing cost of calls (draw and cull) when using more cores... It wasn't worth the effort, so they only took it so far and started up Mantle. It would be awesome if BIS went with some lower-level API, just for cooler effects. Use the video card power we have fully. Basically, lighten the load.

    Meh, I get 60% CPU use with peaks to 80% on my quad-core in Agia Marina without AI, at about 80 fps. With a big battle you can get it to stay under 30, but if you then press escape, freezing the simulation, you get something like 60 fps, even though the draw calls remain the same. A low-level API for massive view distance would be cool, but I'd rather have the simulation go faster first (although some people here have already lost hope that will ever happen). That escape test also puts a rough number on the simulation cost, sketched below.
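
    Assuming the simulation and rendering share the main thread serially (a simplification; Arma's actual threading is more involved), the frame-time difference between the live and frozen states is the per-frame simulation budget:

        # Sketch: estimate per-frame simulation cost from the "press
        # escape" test described above. Assumes simulation and rendering
        # run serially on the main thread, a simplification of the engine.

        def sim_cost_ms(fps_live, fps_frozen):
            frame_live = 1000.0 / fps_live      # full frame time: sim + render
            frame_frozen = 1000.0 / fps_frozen  # render-only frame time
            return frame_live - frame_frozen

        # Big battle: ~30 fps live, ~60 fps with the simulation frozen.
        print(sim_cost_ms(30, 60))  # ~16.7 ms/frame spent on simulation,
                                    # as much as the entire rendering pass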


  6. And why does performance in the Star Swarm benchmark really explode with Mantle? It, too, is a battle simulation, similar to Arma in many respects.

    They probably have a fully multithreaded simulation, so every bit of CPU time saved on the DirectX overhead can be put to use making the simulation go faster. Or they have a simpler simulation, meaning the rendering overhead is more of a bottleneck. Or both, which is most likely. A toy model below illustrates the difference the threading makes.
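
    If simulation work spreads across all cores, CPU time freed from API overhead goes straight into simulation throughput; if the simulation is pinned to one thread, it largely doesn't. The perfect-scaling assumption and all the numbers below are illustrative only:

        # Sketch: toy frame-time model comparing a fully multithreaded
        # simulation to a single-threaded one when API overhead shrinks.
        # Perfect scaling across cores is assumed; real engines fall short.

        CORES = 4

        def fps_threaded_sim(sim_ms, api_ms, cores=CORES):
            # Simulation and submission work spread evenly over all cores,
            # so every millisecond of API overhead removed buys frame time.
            return 1000.0 / ((sim_ms + api_ms) / cores)

        def fps_serial_sim(sim_ms, api_ms):
            # Simulation pinned to one thread, API submission on another:
            # the slower of the two sets the frame time.
            return 1000.0 / max(sim_ms, api_ms)

        SIM, API_DX, API_MANTLE = 30.0, 10.0, 2.0  # ms of CPU work per frame

        print(fps_threaded_sim(SIM, API_DX), fps_threaded_sim(SIM, API_MANTLE))
        # 100 -> 125 fps: the saved overhead feeds the simulation.
        print(fps_serial_sim(SIM, API_DX), fps_serial_sim(SIM, API_MANTLE))
        # ~33 -> ~33 fps: the single simulation thread is still the wall.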


  7. The framerate of Arma 3 on these systems would probably increase by 30-50%, which would make it playable.

    Unlikely; the battle simulation is what's holding back performance, not the rendering overhead. On an empty map, performance is fine. Or make a big battle in the editor, get low fps, press escape, freezing the battle simulation, and watch the fps go up dramatically.

    It would only allow a higher view distance at roughly the same performance, not increase performance.


  8. I have another idea; maybe this one is good too: make it possible to plug and unplug chips that hold the BIOS and/or motherboard driver information. Then there'd be no need to use a disc after you buy a new mobo, plus you could easily install a new mobo without having to format the damn computer, which would save time for people who have tons of data on their PCs. That way your data plugs in and runs separately from the hard drive.

    Off-topic, but you really don't have to reinstall for a new mobo. I changed a Windows XP system from a Pentium 4 to a Core 2 Duo, and changed my own system from a P5K to a P5E to a P55-GD65, all on the same Windows install.

    Well, there we have it. BI is officially not giving a damn, and the whole talk about old AMD cards not supporting Mantle is absolutely laughable. BI, you're not even giving a damn about current hardware utilization, so why use AMD HD 5xxx cards as an excuse? But thanks anyway for admitting that you simply DO NOT CARE.

    Mantle is not the only way to reduce CPU overhead on rendering. And it's not just the 5000-series cards; the 6000 series and all Nvidia cards are unsupported too. They could also use DX11.1, which would make more sense, since Windows 8 has a higher market share than AMD GCN hardware.


  9. It will be interesting to see Arma 3's performance with Mantle support.

    Going by the comparison of the BF4 engine with Star Swarm, whose hardware demands are more comparable to Arma's than to BF4's, Arma 3 would gain much more than BF4 does if it implemented Mantle support.

    I would like to hear some official dev thinking on Mantle and its possible usage.

    from http://forums.bistudio.com/showthread.php?147533-Low-CPU-utilization-amp-Low-FPS/page253&highlight=mantle

    missed my point, at the moment it's available for like 3% of DX11.1+ GPUs, hence unimportant for deployment unless you're paid to do so ...

    now if it becomes available for all AMD DX11.x cards, or hell, even DX10.x, then it will become relevant faster
