Addon Optimization

it doesn't seem to matter if you use lots of textures or just a few big ones as long as the total area is the same.

It does matter, and in fact I think this is one of the reasons why OFP originally had such inconsistent performance, as it uses a large number of small textures.

The reason why it matters is simply the way Direct3D works: by using many textures instead of a few (or just one), you get much more vertex/index buffer and texture switching when the model is drawn, as it has to be drawn in separate parts. For example:

To draw a tree model with three textures twice, we would need to do something like this (pseudocode):

Code Sample:

    // Draw first tree
    Transform()
    SetTexture(0)
    SetVertexBuffer(0)
    DrawTriangles()
    SetTexture(1)
    SetVertexBuffer(1)
    DrawTriangles()
    SetTexture(2)
    SetVertexBuffer(2)
    DrawTriangles()

    // Draw second tree
    Transform()
    SetTexture(0)
    SetVertexBuffer(0)
    DrawTriangles()
    SetTexture(1)
    SetVertexBuffer(1)
    DrawTriangles()
    SetTexture(2)
    SetVertexBuffer(2)
    DrawTriangles()

If the model had only one texture, we would only need to do:

Code Sample:

    SetTexture(0)
    SetVertexBuffer(0)

    // Draw first tree
    Transform()
    DrawTriangles()

    // Draw second tree
    Transform()
    DrawTriangles()

As you can see from the number of steps necessary to draw the same thing, there is much more work done when you have many textures in your model. With the speed of today's processors you won't notice a difference with just a few models, but when you have a lot of things to draw (trees, houses, units, terrain), the amount of texture and vertex buffer switching can increase massively if you do not combine the textures, and that can kill performance.
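To make the batching idea concrete, here is a minimal, self-contained C++ sketch (the `Device` struct is an invented stand-in for the Direct3D calls above, it only counts state changes, and per-instance transforms are ignored for simplicity; this is an illustration, not OFP engine code). Sorting the draw batches by texture means each texture is bound a handful of times per frame instead of once per model part:

    #include <algorithm>
    #include <cstdio>
    #include <vector>

    // Invented stand-in for the Direct3D device; it just counts state changes.
    struct Device {
        int textureSwitches = 0;
        int bufferSwitches = 0;
        void setTexture(int)      { ++textureSwitches; }
        void setVertexBuffer(int) { ++bufferSwitches; }
        void drawTriangles(int)   { /* issue the draw call */ }
    };

    struct Batch {
        int texture;       // texture this piece of geometry uses
        int vertexBuffer;  // vertex buffer holding the geometry
        int triangles;     // number of triangles to draw
    };

    // Sort by texture (then buffer) so identical state ends up adjacent,
    // letting us skip redundant SetTexture/SetVertexBuffer calls.
    void drawAll(Device& dev, std::vector<Batch> batches) {
        std::sort(batches.begin(), batches.end(),
                  [](const Batch& a, const Batch& b) {
                      if (a.texture != b.texture) return a.texture < b.texture;
                      return a.vertexBuffer < b.vertexBuffer;
                  });
        int lastTex = -1, lastBuf = -1;
        for (const Batch& b : batches) {
            if (b.texture != lastTex) {
                dev.setTexture(b.texture);
                lastTex = b.texture;
            }
            if (b.vertexBuffer != lastBuf) {
                dev.setVertexBuffer(b.vertexBuffer);
                lastBuf = b.vertexBuffer;
            }
            dev.drawTriangles(b.triangles);
        }
    }

    int main() {
        // 100 trees, each split into 3 texture parts, as in the pseudocode above.
        std::vector<Batch> batches;
        for (int tree = 0; tree < 100; ++tree)
            for (int part = 0; part < 3; ++part)
                batches.push_back({part, part, 50});
        Device dev;
        drawAll(dev, batches);
        std::printf("texture switches: %d, buffer switches: %d\n",
                    dev.textureSwitches, dev.bufferSwitches);
    }

With these invented numbers, the naive loop from the first pseudocode block would issue 300 SetTexture calls for the 100 trees; after sorting, the sketch issues 3. The same triangles get drawn either way; only the state-change overhead differs, and combining the three textures into one would reduce it to a single switch.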


In regard to binarize problems, make sure that you do NOT have a #include preprocessor command anywhere in the config of the addon you're trying to binarize. If you do, the model selections will end up screwed up.
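As a purely hypothetical illustration (the file and class names are invented), the fragment below shows the kind of config that trips up binarize, and the fix is simply to paste the included file's contents in by hand before running it:

    // config.cpp - problematic for binarize: relies on the preprocessor
    #include "myTankWeapons.hpp"

    class CfgPatches {
        class MyTankAddon {
            units[] = {"MyTank"};
            requiredVersion = 1.85;
        };
    };

    // Fix: delete the #include line and paste the actual contents of
    // myTankWeapons.hpp in its place, then run binarize.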


As much as we would all like to have simple rules for optimising models, it's really not that easy to come up with them.

The reason for this is that the process of rendering depends on many things, and it's all of those things together that make it run smoothly or not.

Take the discussion about whether vertex or face count is more important.

On one hand it seems that most of the calculations need to be done on vertices, but on the other hand this work is mostly done on the GPU, which normally has enough power to handle it. As Vectorboson pointed out, usually the fragment/pixel shaders are the bottleneck. But then OFP isn't really using fragment shaders that could become a bottleneck.

In addition, it's not just the GPU that needs to deal with vertices; the CPU also has to grab them and send them to the GPU.

So if you have many vertices, it could be the CPU not being able to send them fast enough to the GPU, it could be the bandwidth of your graphics port not letting enough vertices through, or it could be the vertex shaders of the GPU that can't cope with the amount of vertices to process.

Setting up a test that isolates those factors is hard to do just by throwing some models into OFP, because you don't see what is really going on with your CPU, bandwidth and GPU alongside all the other work those parts need to cope with.

Now let's look at polygons. Polygons aren't that easy to handle either.

As Kegetys explained, there is a penalty for switching textures, so what you want to do is sort your faces according to which texture they use and then render them all out in one go. In addition you need to sort your faces for transparency, as some of them need to be drawn before others so as not to screw things up. In many cases there is no optimal way to render faces if you have several textures and transparency on your model.
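A minimal sketch of that ordering (the `Face` record and its fields are invented for illustration): opaque faces are grouped by texture so they batch well, while transparent faces must be ordered back to front regardless of texture, which is exactly why they resist batching:

    #include <algorithm>
    #include <vector>

    struct Face {
        int texture;       // texture used by this face
        bool transparent;  // does it need alpha blending?
        float depth;       // distance from the camera this frame
    };

    void sortFacesForDrawing(std::vector<Face>& faces) {
        std::sort(faces.begin(), faces.end(),
                  [](const Face& a, const Face& b) {
                      if (a.transparent != b.transparent)
                          return !a.transparent;        // all opaque faces first
                      if (!a.transparent)
                          return a.texture < b.texture; // opaque: batch by texture
                      return a.depth > b.depth;         // transparent: back to front
                  });
    }

Note how the transparent half is sorted by depth rather than by texture: if the transparent faces use several textures, texture switches in that part of the draw order are unavoidable, which is the conflict just described.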

So just looking at the poly count doesn't help much there.

Textures.

Those aren't that easy either.

First you have the mentioned penalty for switching textures, so you might want to have fewer textures. But then comes transparency, and for some models it might be better to keep the transparent parts on a separate texture because of visibility problems (I think Suma explained those problems some time ago).

In addition, the different texture formats with their different bit depths for the color channels pose another difficulty when deciding whether to use one texture with transparency or to split things up.

And as if that weren't enough, there's the problem of mapping: sometimes your faces fit better on a single texture, while at other times you waste more texture space that way.

The conclusion of all this is that any attempt to come up with simple tests leading to simple rules for vertices, polygons and textures is flawed from the beginning.

The only way to handle this is to really understand all the parameters that influence your performance and then try to see which one is the bottleneck in a particular case.

But there's one single and really simple rule that should always be valid:

Too much of anything is never a good thing.

Guest RKSL-Rock
Again, the whole "lag" terminology isn't quite right here.

The addons themselves do not cause "lag" (latency); that is down to the quality of the network connection.

If having an un-optimised addon in the game causes greater latency when playing a networked session, then there is something critically wrong with the net code.

What you were most probably experiencing was slowdown (a reduction in the FPS count).

Well, I think you are right about the use of "lag" as a definition. In this description, which perhaps isn't the best-defined example, it is still true. Especially as not everyone has a good connection and OFP's netcode does seem to have its odd moments. To explain myself a bit more:

When we did the testing, yes, we experienced FPS drop, but we also noticed that the aircraft were jumping around a bit, even hanging and then 'warping' to a totally new position in space half a second later, which is what I would define as lag/desyncing.

Regardless of definitions or terms, it is a fact that some addons out there do cause the OFP engine and MP a lot of problems, especially when you get a group of them together. Optimising the mesh and removing any unseen surfaces etc. does seem to improve performance, as Nephilim has already stated. Models with a higher number of vertices logically and practically take longer to draw and manipulate, and when you have a number of machines of different specs and network lines at varying latencies, then it's going to get "laggy".

It does matter, and in fact I think this is one of the reasons why OFP originally had such inconsistent performance, as it uses a large number of small textures.

The reason why it matters is simply the way Direct3D works…

As you can see from the number of steps necessary to draw the same thing…

OK, I retract that comment, and perhaps I can restate it a different way:

When we tested it we didn't notice any difference between using large texture tiles and lots of smaller textures.

…as I said before, none of our "tests" were definitive, but we couldn't see any difference in the way our machines or connections performed. We carefully monitored bandwidth usage and CPU loading with benchmarking tools but never really saw an issue with the texture tests.


When we tested it we didn’t notice any difference between using large texture tiles and lots of smaller textures.

…as I said before, none of our "tests" were definitive, but we couldn't see any difference in the way our machines or connections performed. We carefully monitored bandwidth usage and CPU loading with benchmarking tools but never really saw an issue with the texture tests.

Testing should always be done in a realistic environment.

If you just throw several instances of the same model on a default BIS island, then that's not what I call a realistic environment.

If you are using several high resolution textures, then you should expect others to do the same, and then you might end up with several different addons with high resolution textures as a realistic environment for testing. And that test will probably look a bit different than when you only test several instances of the same model (like I explained above in response to Nephilim's original post).

Also, to determine the maximum vertex/poly count or the maximum texture size for an addon, it's not sufficient to test your addon in a 'clean' environment. It might be easier to see which parts make more of a difference in a 'clean' environment, but things can look very different when the other factors come into play.

Test your addons on a custom island, with other units and vehicles of comparable quality in vertices, faces, textures and scripts, in a mission that uses scripts and everything you would expect, and with a config modification that is commonly used. Also, don't make the mistake of only using mid-end addons, because if you push the limits with your addon, I guarantee you that others will too, so in the end you will have much more to deal with than just your single addon that already pushes the limits.

In the end it's the playability that matters and not how pretty an addon is.

There's not much use for a pretty addon if you can't do more with it than look at it or take screenshots. And there are already more than enough of such addons.

It's not the screenshot threads or addon makers that keep this community alive, but the people who actually play the game!

And those get fewer and fewer with every addon they can't use.

-cut-

As you can see from the number of steps necessary to draw the same thing, there is much more work done when you have many textures in your model. With the speed of today's processors you won't notice a difference with just a few models, but when you have a lot of things to draw (trees, houses, units, terrain), the amount of texture and vertex buffer switching can increase massively if you do not combine the textures, and that can kill performance.

Ah, nice to know. I bet this can add a couple of fps.

Guest RKSL-Rock

Testing was done in a realistic environment.

Para-dropping units from the Hercs into a ground battle.

The same test scenario was used each time: BIS T80s and BMPs vs Abrams and M2 Bradleys, 4 infantry squads a side, on Nogova.

The same routes flown each time.

Everyone had the same view distance and settings on their machines.

2nd round of testing was on the "laggiest" island we could agree on - Tonal. Same scenario, just merged onto the Tonal map.

3rd round: FDF Malwotsitcalled with FDF and CSLA units and armour.

Whilst not the most rigorous testing plan, it wasn't meant to be. 3 out of the 4 of us, including me, have software/game testing experience and are fully aware of the impact of environments on performance. You know my background (I've told you all about it on BAS IRC) and also my views on proper addons and testing. The point of our tests (as you know, because I told you at the time of the argument when I left BAS) was testing for playability and the viability of using multiple large textures on large models. While they were never going to be definitive, we did establish some information and limits that let us set a top end for vertices/polys and texture sizes.

What may have been true 2 years ago about performance and poly size isn't necessarily true any more. Kit has moved on a lot, and while not everyone has an expensive superfast PC, the average spec has increased, making it easier to run "pretty" addons en masse. While I do agree with you about playability being extremely important, we are unlikely to agree on the limits you would like to impose on models and textures; both you and I went down this road before and it wasn't the 'prettiest' conclusion.


Actually you're contradicting yourself a bit when you first state that you did your testing in a realistic environment by using 2-year-old addons to determine a top end for addons, and then say that it's easier to run "prettier" addons en masse. Why not use a test environment that reflects exactly that?

Even if there aren't any addons of that quality around now, it's easy to add some more polys and higher resolution textures to existing addons, or even set up some quick dummy addons for the tests.

But as you said, we probably won't agree there and I don't want to drag this out any further.

Thanks for mentioning the circumstances of your tests, as this makes it much easier to draw conclusions from the results you mentioned.

The main reason why I mentioned realistic test environments is that most of the time addon makers don't seem to bother about testing performance at all, or choose unrealistic scenarios where they end up with unrealistic results, so I thought it would be worth mentioning in this thread.

PS

I know that Tonal is "lagging", but if I remember right it was the first project of that dimension at the time, so it's only natural that it has flaws. No excuse there.

What Tonal doesn't have is a huge amount of high resolution textures, which would be more of a limiting factor when it comes to testing texture sizes for new addons.

Guest RKSL-Rock
Actually you're contradicting yourself a bit when you first state that you did your testing in a realistic environment by using 2-year-old addons to determine a top end for addons, and then say that it's easier to run "prettier" addons en masse. Why not use a test environment that reflects exactly that?

It's not a contradiction at all; it's called establishing a baseline. By using BIS addons we set a performance datum with our models. We record the results and then change one factor at a time. First the number of planes, then the island, repeating the test and checking the results. Then the units on the ground, and so on; that way we built up a "relative scale" of performance for any events.

So not a contradiction, just good methodology. The reason the testing was cut short was lack of time more than anything else. The feeling was that we were seeing a trend in the results, and continuing to test with the models at the level they were at seemed pointless.

Even if there aren't any addons of that quality around now, it's easy to add some more polys and higher resolution textures to existing addons, or even set up some quick dummy addons for the tests.

Why dummy something up when you have something real to test? At the time of testing a lot of people were discussing polys vs vertices and textures vs performance. Considering the stage I was at with the Hercules, it made perfect sense to wait until we had the addon closer to its finished state before continuing testing.

Thanks for mentioning the circumstances of your tests, as this makes it much easier to draw conclusions from the results you mentioned.

The main reason why I mentioned realistic test environments is that most of the time addon makers don't seem to bother about testing performance at all, or choose unrealistic scenarios where they end up with unrealistic results, so I thought it would be worth mentioning in this thread.

Fair comment.

PS

I know that Tonal is "lagging", but if I remember right it was the first project of that dimension at the time, so it's only natural that it has flaws. No excuse there.

What Tonal doesn't have is a huge amount of high resolution textures, which would be more of a limiting factor when it comes to testing texture sizes for new addons.

It wasn't intended as a go at BAS, just a statement of fact - personally it's one of my favourite islands, but some people seem to have more than their fair share of problems running it, which makes it ideal for 'stress tests'.


I thought I'd explain a bit more about what I meant, since after reading my post again I don't think I got my point across well.

I'm not doing this for the sake of having a personal argument with you, Rock, but because I think it can add to the overall discussion of this thread. So please don't get me wrong here.

My point is actually that it doesn't make much sense to set a new standard for parts of OFP without taking an overall look at it.

I'm not denying that people have better hardware now than they had a year or two ago, and that it's possible to have higher vertex/poly counts and higher resolution textures today.

Neither am I arguing about where exactly these new limits are!

My argument is only about what methods to use to determine a new standard that isn't flawed.

Let's take the case of an addon maker doing some in-depth tests for a new addon he is planning, to determine what poly count or texture size is possible with today's hardware in OFP.

For those tests he uses the highest quality addons available today and finds out that there is still headroom to push the limits.

Therefore he decides to take those results and create his addon according to his findings.

Another addon maker, inspired by the work of the first one, does exactly the same with the same in-depth tests and obviously comes to the same conclusion as the first one.

He then also creates his addon according to his findings.

A third one does exactly the same, and if everyone does all those in-depth tests and creates their addons to fill out the headroom they found while testing, then we have a problem.

Because suddenly we have vehicles, units, islands and config mods that, even if they were made with a really good testing effort, won't work together properly and will cause performance problems. Because each of those addons was built to its own standard that completely filled up the previously existing headroom.

Therefore any attempt to create a new standard for poly count or texture size limits that is tested by measuring out the headroom with current addons is definitely flawed, if you don't take the results and distribute the headroom across everything in OFP.

For establishing a new standard you have to take the same approach as if you were creating a new game: look at the available hardware and then judge what poly counts and texture sizes would be reasonable for each asset you have, instead of just concentrating on one single part of the whole thing.

Only then will you come up with a new standard that works.

Now the other problem is making others accept this new standard, which I think might even be the bigger one.

One (rather easy) method to counter this is to leave enough headroom in your own work for others to use, so that things don't get screwed up too fast.

A sacrifice of your own quality for the sake of future playability.

Why dummy something up when you have something real to test? At the time of testing a lot of people were discussing polys vs vertices and textures vs performance. Considering the stage I was at with the Hercules, it made perfect sense to wait until we had the addon closer to its finished state before continuing testing.

The reason why I would prefer dummy assets is to eliminate unknown variables.

If you take BIS vehicles or other addons, you would have to examine those first to be able to judge what factors they are adding to your tests.

As we established in this thread, it's not a single reason why an addon can cause performance problems, but a whole bunch of possible things that can go wrong.

By using dummy assets you know exactly what you're dealing with, and therefore your results will be more accurate.

Setting up those dummies shouldn't take longer than examining existing assets.

That's exactly the reason why people do proofs of concept with dummies rather than with finished assets: they're faster to make and they limit the variables for testing.

Guest RKSL-Rock
My point is actually that it doesn't make much sense to set a new standard for parts of OFP without taking an overall look at it.

I'm not denying that people have better hardware now than they had a year or two ago, and that it's possible to have higher vertex/poly counts and higher resolution textures today.

Neither am I arguing about where exactly these new limits are!

My argument is only about what methods to use to determine a new standard that isn't flawed.

I don't think anyone is out to set the 'new standard', just to establish what will work best. Personally I have 4 machines that run OFP/VBS:

My main PC - AMD X2 4200+, 4 GB RAM & ATI X800GTO2 256 MB

Laptop - Toshiba Portégé S100 with P4M 760, 1 GB RAM & Nvidia GO 6200

My backup - P3 700, 1 GB RAM & ATI 9200 128 MB

Dedicated server - P4 2.4, 2 GB RAM & ATI 9600 Pro 256 MB

I think those 4 give a 'fair' cross-section of specs, and I have the ability to test on lots of different kit before releasing anything.

Let's take the case of an addon maker doing some in-depth tests for a new addon he is planning, to determine what poly count or texture size is possible with today's hardware in OFP.

For those tests he uses the highest quality addons available today and finds out that there is still headroom to push the limits.

Therefore he decides to take those results and create his addon according to his findings.

Another addon maker, inspired by the work of the first one, does exactly the same with the same in-depth tests and obviously comes to the same conclusion as the first one.

He then also creates his addon according to his findings.

A third one does exactly the same, and if everyone does all those in-depth tests and creates their addons to fill out the headroom they found while testing, then we have a problem.

Because suddenly we have vehicles, units, islands and config mods that, even if they were made with a really good testing effort, won't work together properly and will cause performance problems. Because each of those addons was built to its own standard that completely filled up the previously existing headroom.

I understand what you are saying; I just don't believe it really happens like that. While most of the community makes loud noises about how wonderful such addons are, they really don't get used in large numbers in any environment.

If you look at some of the really high-end models that have been made recently, they really do push the performance limits - OWP's Mi-8 high-res pack, Franze's AH-64 and others. These are addons that, when used in missions, do cause problems for low-spec machines and in MP games (for varying reasons), but the fact is that while they are huge models with huge amounts of detail, they aren't used like soldiers. There are very few of them in the game, and they are often seen from a distance - and if they are 'optimised' and LODded properly it shouldn't make a difference.

However, I will concede that some addons run so many scripts that it wouldn't make a difference if the model was only 100 polys/verts; it would still increase the server load and cause 'lag' and desync, or however you describe it.

Now, if we're talking about units and small arms, then I'll agree it could easily get out of hand. I think we've seen weapons go from ~800 polys to over 4000 in some cases in the last 2 years - but on release most of the community points out that it's too much, and they get reworked or ignored. I really don't think it's as much of a problem as you think.

Your point does, however, make the case for optimising addons properly, as in the quick guide Nephilim kindly created. Rather than setting a new standard, we should be looking at optimising the new addons, both scripts and models.

Therefore any attempt to create a new standard for poly count or texture size limits that is tested by measuring out the headroom with current addons is definitely flawed, if you don't take the results and distribute the headroom across everything in OFP.

For establishing a new standard you have to take the same approach as if you were creating a new game: look at the available hardware and then judge what poly counts and texture sizes would be reasonable for each asset you have, instead of just concentrating on one single part of the whole thing.

Only then will you come up with a new standard that works.

The 'standard' is set to change with ArmA and again with Game 2. Without being rude, I think you are worrying about arranging deck chairs on the Titanic. At this point it's better to encourage and teach "best practices" than to try to restrict addon makers to an old standard.

Now the other problem is making others accept this new standard, which I think might even be the bigger one.

One (rather easy) method to counter this is to leave enough headroom in your own work for others to use, so that things don't get screwed up too fast.

A sacrifice of your own quality for the sake of future playability.

Judging from the response of this community to standardising anything, I think you'll be in for a seriously hard battle to win over the majority, as you say. But I doubt people will want to dial back the "quality" or detail of their addons to conform to someone else's idea of a playable addon. If anything, that refusal is one of the reasons this community is still here after 4 years. So many people in this group have refused to accept the standards, and we've seen the quality, playability and features of addons shoot through the roof in the last 2 years.

Personally, as long as the addons we make can be used by the majority, I don't think you should try to limit anyone.

The reason why I would prefer dummy assets is to eliminate unknown variables.

If you take BIS vehicles or other addons, you would have to examine those first to be able to judge what factors they are adding to your tests.

As we established in this thread, it's not a single reason why an addon can cause performance problems, but a whole bunch of possible things that can go wrong.

By using dummy assets you know exactly what you're dealing with, and therefore your results will be more accurate.

Setting up those dummies shouldn't take longer than examining existing assets.

That's exactly the reason why people do proofs of concept with dummies rather than with finished assets: they're faster to make and they limit the variables for testing.

The main difference between me and you is that I wasn't trying to prove a concept; I was testing a product at the midpoint of its development. The issue of performance hadn't even come up when I started making the C-130. It was only later that texture size and poly/vert count became a possible issue, at which point there was little sense in creating a dummy model when we had a real project to test. It would only have been a waste of resources.

As for factors, if you really want to set up a fair test, I agree you need to eliminate some factors to establish a baseline, e.g. remove all scripts and use plain dummy textures - then add one script back at a time to see how it affects performance. But that is rather long-winded and unlikely to happen with most addon makers unless they are hunting for a bug.

Anyway, I think we've gone off the optimizing point into testing for too long now - let's get back on topic.


Personally, I think we are still right on the topic of this thread.

People were asking for and suggesting optimal numbers for various things, and I think we established that it's not easy to come up with such simple guidelines if they are really to mean something, because performance problems can be the result of lots of different things.

So how do we find meaningful numbers, or establish whether our addons are on track? Right, by testing them. That's how we got here.

Then we discussed testing and found that it's not even easy to test things properly, with all the mess of addon making and the little knowledge we have about the OFP engine.

The whole point of this exercise was to show that we need to be careful with any number floating around about poly count and texture size and question the reasons behind it. Instead of just blindly following some number paradigm, we should rather try to understand how things work, to determine for ourselves how accurate those paradigms are.

I also think you pretty much nailed it with your comment about testing when you say that it doesn't really happen like that.

I'm also not "arranging deck chairs on the Titanic" here. (I almost fell off my chair laughing when I read that and pictured it; thanks for that.)

The reason why I haven't done those tests myself yet is that I'm too lazy to do it, while I don't really think that creating more addons will bring this game any further. At least not anymore.

So I'm not actually trying to get people to do extensive testing on their addons and come up with a new standard for addons that everyone agrees on.

But there's no other way to show people that the numbers and guidelines are no more than rough guesstimates without going into a lengthy discussion like the one we just had.

I actually do think that most people are willing to trade some polygons and high-resolution textures for more playability and better gameplay, because that's probably why most of us bought OFP in the first place!

When OFP came out, it wasn't the prettiest game. What really made the difference was the gameplay it offered.

Right after the demo was released and people found out how much could actually be done with it by editing, many saw how much potential this game had.

I don't know whether it was on purpose that BIS released this demo with editable files, but beside offering amazing gameplay for those times, I think it was one of the main reasons why OFP became so popular.

It was just wonderful to see how everyone was running around like kids at Christmas in Lustypooh's forum, getting excited with every new thing that was found to be editable.

Not for the sake of editing just another game, but because people could imagine what could be done with that diamond of a game.

Then, after playing the campaign and getting over the fact that OFP wasn't really editable anymore with the PBOs that were in the final game, the first addons were created, and if you can remember, those were pretty crappy at the time.

But they were made to make the missions that everyone had been trying to do since the demo, but couldn't, because units and other things were missing.

Addons were made because people wanted them for specific missions, and missions were at their peak back then.

But soon people found out that creating addons was more rewarding than making missions. If you're making a high quality mission, the chances are quite low that you are going to play it yourself after you're finished. So you're making such a high quality mission mainly for others rather than for yourself.

Also, addons are much easier to comment on than missions, and while many people played the missions that were made by others, the makers didn't really get that much feedback. Whereas if you made addons there was a ton of feedback and recognition, and after all, you could even play with your own addon when it was finished without instantly getting bored.

(And I dare to say that from my own experience, making high quality missions and campaigns that offer a wealth of gameplay is harder than creating similar quality addons)

That, I think, was the turning point for OFP. Those effects actually set in quite early, and it took quite a while to get to the extremes we have today, but it was noticeable.

Today we still have a lot of addon makers, but there are very few left who create high-quality missions.

The majority of people in the forums are either making addons themselves or are here to spend their time toying around with new addons and taking screenshots.

There are still a few who actually play the game, but you don't see them posting that much.

Does anyone remember the last "war story" thread that was posted, for example?

Instead, there's a ton going on in the addon related forums and the screenshot threads.

From my point of view the discussion about optimising addons is a bit academic these days, as I think it would have much more effect if the creating community focused more on missions and gameplay than on creating ever more addons of ever higher quality that almost no one uses.

I personally don't know a single person who isn't making addons but is still playing OFP. Everyone I know was pretty much excited when playing the campaign, but they got fewer and fewer as the missions got fewer and fewer.

And let's not kid ourselves here. The few of us in the addon making community are not the reason why BIS is still working on ArmA and Game 2; it's because they think other people would like to buy such a game to play it.

Maybe one of the admins can post some numbers on how many visitors the OFP site and the forums get compared to the number of people regularly posting here. I think there's a huge difference.

Another indicator of why I think people would trade some poly count and texture resolution for gameplay is my impression that it wasn't necessarily the addons of the highest quality that were a huge success, but the ones that offered more and better gameplay.

Nam Pack, FDF, CSLA,... (sorry to everyone I missed, but you get the point) - they all offered more than just addons. Other addons were better or equal in quality, but they weren't really that successful.

Why are config replacements so popular?

Because they let you play existing missions with better addons.

People could just download the addons and make missions themselves, but they seem to prefer downloading the same addons with a config mod so they can play all the old missions.

A ton of typing, just to show that there might indeed be a chance that addons aren't really that important, and that many of the people lurking around rather than posting in the forums might actually prefer to trade some polygons and textures for actual gameplay.

So my impression is that it's not just me who is arranging deck chairs on the Titanic.

ArmA poses a great chance to change things for the better, but we have to do something to make that happen.

I personally saw in the VBS community that if addon makers just continue as they are used to, nothing changes, and in no time you're at exactly the same point: tons of addons and no one left to play.

Don't get me wrong, I think optimising addons is important, but as shown in this thread, it's either based on more or less inaccurate assumptions, or it requires quite an amount of work to do right, which not many of us are ready to put in in our free time just for fun.

So my plea is to think more about gameplay and less about how to cram more polygons and more textures into addons.

But coming back to optimising addons, what I would like to see is more discussion about how to create decent models of people (soldiers, civilians,...) that don't just fold when run with the original animations.

That's one major area where I think huge improvements could be made in terms of quality.

There are some nice efforts from DMA in creating new animations, but it's always both the model and the animation that make the quality. And the models generally seem to be the weaker part so far.

(I promise to make shorter posts from now on.)

Share this post


Link to post
Share on other sites

Btw, is it possible to also post the ArmA/VBS1 LOD chart for trees? I would like to see how they set the LODs too; maybe that could help a bit as well.


Romolus knows the deal.


Some more remarks on the LODs and their numbers:

The system of numbers has to be seen as relative.

Within the basic rules, there is a certain range within which one can move the values around.

It's highly important that you do NOT treat the numbers as a unit like feet or meters! It's just a relative system for handling the LODs.

The switching itself depends on the distance, the load on the CPU and GPU (intensive scripts can also lead to poor graphics), the size of the object and so on.

It's more helpful to look at the range between two LOD values. My personal method is as follows:

First, one should create the LODs this way:

- 1st LOD: high detail for the closest view
- 2nd LOD: best view with the lowest possible polycount (half the polycount if possible)
- 3rd LOD: half the polycount of the one above (some alpha-channel textures)
- 4th LOD: half the polycount again, with the best possible view from afar (more alpha textures)
- 5th LOD: ultra-low polycount and/or alpha texture shapes

Ok, for the first case let's assume that the object is a static like a building. For that we should start with 1.000 if the polycount of the first LOD is like, or near, a comparable BIS house. If the 1st LOD is more detailed, we should start with 0.500. But why?

It's as easy as this: the higher the value, the further away the 1st LOD can stay visible, but also the longer it gnaws at the power of the engine. That's why we created the also good-looking but optimised 2nd LOD. That is the one we should bring up as soon as possible. To do this, we give it the value 1.000, or 2.000 if the 1st LOD has a moderate polycount. The 3rd LOD should have double the value of the LOD before it - as easy as that.

But now we have to think about how things will look when the 4th LOD appears. It may no longer look good enough to be wanted in our close view.

Better to keep it more distant; then, if OFP "thinks" the load is OK, it will draw the better 3rd LOD for a longer time. For that we can give the 4th LOD a value 4 times higher to get this result. Don't worry: if OFP is of the opinion that the CPU/GPU is getting too hot, it will switch to the next LOD earlier.

The very last LOD you can set 3-4 times higher again than the LOD before it. That's the LOD that OFP can draw at a very far distance, so we will never see how ugly it is.

Here is the list of possible values:

- 1.000
- 2.000
- 4.000
- 12.000 (you can try higher, but you have to test)
- 30.000 (see above)

Ok, just before we start the second case, I will say another important thing: please always keep in mind that the number of objects increases dramatically the further out you get from the player's position! That's one of the reasons why LODs exist. It's not only the fact that we can see only a few pixels of a far-away object.

It's also the fact that there are many thousands of objects far around us that OFP has to calculate and draw. That's why we should always create a very, very low-detail LOD in every model, because of the huge number of objects on the map and within the far range of view.

Just imagine what OFP would have to do if only the 1st LOD existed?! But believe me when I say that many models out there are exactly like this!
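To put a rough sense of scale on that (the figures are invented, purely for illustration): if 4,000 trees are within view range and each were drawn with a 1,000-face 1st LOD, that's 4,000,000 faces per frame from the trees alone. If all but the nearest 50 fall back to a 20-face last LOD, it becomes 50 × 1,000 + 3,950 × 20 = 129,000 faces - roughly a thirtieth of the work.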

Ok, let's move on. Now let's say our model is a pistol proxy or something similarly small. And let's assume, too, that the polycount is similar to the house we talked about.

But now it's highly recommended to start with much lower values, around 0.250. The next ones would be 0.500, 1.000, 2.000 and lastly 4.000.

Well, now we have a completely different structure, and it should be clear why. What do you think you will see of a pistol in a hand or a holster when it's more than 5 or 10 meters away? Right - just a few pixels. So it doesn't matter if the LODs switch quickly to the simplest model.

Of course you can play around with the values a little, but not too much. If you go too far outside the borders, you will get crappy values after binarizing your model. You should always check with ODOL Explorer that the values match the ones you gave your model in O2.

If the LODs and their numbers go dramatically wrong, you will get a slideshow or lousy graphics, and all your efforts to bring out a good-looking addon will have been totally in vain.

One good way to avoid this is multiplayer testing.

Another method to test the LODs and their values is to create a simple cube above each LOD in the model and colour each one differently. That way you can easily see when the LODs switch, even if it happens far, far away.

Ok, let's stop here. I hope I could help you understand a little how the system works and why it is so highly important to OFP, and later on to ArmA too.


Maybe a stupid question, but since LODs are being discussed atm:

I was wondering whether an extra (last) LOD without ANY faces/points is worth adding. It might sound strange, but I'm not sure whether the last LOD keeps getting drawn/calculated, or whether the view distance cuts off the remaining objects. Depending on the last LOD number and the set view distance, there might still be a 'zone' that could perhaps be covered by adding an empty LOD, so that it defines a 'clear zone' until the view distance cuts off the remaining objects (I suppose it does).

Depending on the object, the last LOD usually still has several faces/points. A very low amount, of course, but depending on the number of objects on a map, I still think it is important enough to know.

Hope my question was clear enough. Anyway, I'm asking this in relation to the XXX TP3 island pack, which features some insane cities and still copes with them (ok, playable at a view distance of about 900 m, but that is fair and good enough for city fights). Something made me wonder whether they included empty (last) LODs for the buildings as well.

Maybe another (strange and hard to explain) question:

In what relation (optimisation vs model looks) does it pay off to keep vertex points in their original places across the LODs? Meaning (no time to add pics, but I can if wanted): if you have a corner on an object, based on a 4-point corner setup, and in a reduced-face-count LOD you have the option to leave out the corner, you can do this in several ways.

1. Simply remove the 2 outer points of the corner, so that you keep the 2 remaining points at their original locations (x,y,z)

or

2. Remove/merge them all and relocate that vertex so it fits the original shape (= a different x,y location).

Maybe not the best example (2 corners vs 1... in the end that is not what I'm looking for). What I'm trying to find out (since apparently more people are defending the vertex-over-poly issue) is whether different x,y,z locations have an influence on the overall calculation. As pointed out, vertex location controls a lot of different things (mapping, lighting, etc.).

So I was wondering whether it might be better to keep vertex locations in their original places as much as possible. If moving them has an impact in the form of more (re)calculations, I suppose we can talk about another optimisation option, if you know what I mean.

Small details, but this is what optimization is all about, I think.


From my experience, you have to be very careful when making empty LODs like you suggested. A short case study:

Grass object, size 50x50 meters.

Version 1 LODs:

- 0.500 - 1920 faces
- 1.000 - 960 faces
- 2.000 - 554 faces
- 4.000 - 202 faces
- 6.000 - 0 faces

I observed (using LOD markers) that it will NEVER switch to the last LOD, probably because of the object's size.

Solution:

- LODs 0.500 - 4.000 same as above
- 6.000 - 4 faces

Now it switches correctly to the last LOD at a certain distance, thus saving a lot of processing power.

However, small objects like the furniture proxied inside BIS houses do have an empty last LOD, and apparently it works.


Thanks ag_smith for that small/quick case study.

I'm glad you more or less understood what I meant by using an empty LOD and why (the grey zone between the last LOD and the view distance).

Also, thank you for finding an official sample that makes use of what I meant - strange, though, that it works for BIS and not for you.

Carry on.

(Since the empty one didn't work for you... maybe 1 vertex or 1 face will do the trick? Just to give the engine something to work with and reduce the calculations to a minimum....)


One sentence on the empty LODs:

Sometimes (especially when creating small plants) I do add empty LODs. BUT: in every case I put a single vertex into the LOD! In this case OFP WILL switch to the last LOD.

That's also the trick when creating empty function LODs. When making little plants, a fire geometry often isn't needed, but if you simply added a geometry LOD it would act as the fire geometry too. Adding an empty fire geometry with only a single vertex prevents this.

....

Maybe another (strange and hard to explain) question:

In what relation (optimisation vs model looks) does it pay off to keep vertex points in their original places across the LODs? Meaning (no time to add pics, but I can if wanted): if you have a corner on an object, based on a 4-point corner setup, and in a reduced-face-count LOD you have the option to leave out the corner, you can do this in several ways.

1. Simply remove the 2 outer points of the corner, so that you keep the 2 remaining points at their original locations (x,y,z)

or

2. Remove/merge them all and relocate that vertex so it fits the original shape (= a different x,y location).

Maybe not the best example (2 corners vs 1... in the end that is not what I'm looking for). What I'm trying to find out (since apparently more people are defending the vertex-over-poly issue) is whether different x,y,z locations have an influence on the overall calculation. As pointed out, vertex location controls a lot of different things (mapping, lighting, etc.).

So I was wondering whether it might be better to keep vertex locations in their original places as much as possible. If moving them has an impact in the form of more (re)calculations, I suppose we can talk about another optimisation option, if you know what I mean.

Small details, but this is what optimization is all about, I think.

If I understand you right (I hope so), I don't pay very much attention to the vertices. Why?

I always concentrate on reducing the polycount, and I do this regularly by merging vertices and pushing them into the right new position to get the correct shape again.

----------

Let me show some pics to illustrate what I said before about LODs and their numbers. The example I use here is based on a model from the Nogova Mine LTD by JörgF, which I have changed into a car repair shop.

Don't wonder about the immense polycount: this building contains a complete (and I mean really complete) inventory.

If you enter it you will find tools, a lifter, a welder, a repair ditch, a kitchen, a bureau, a Skoda, a Ural wreck, and so on.

But that makes it a very good example of how to handle the polycounts and their numbers.

[Image: lod17wz.jpg]

This first LOD is made for fast machines and the best looks. As you can see, I set a low number so that gamers with slower machines can drop this LOD easily via the preferences if needed. It won't appear if the settings are adjusted for lower-end machines. Giving high-detail models a very low first number is a method to include both a high-res and a low-res variant in one model.

[Image: lod28hf.jpg]

The second LOD has only 50 percent of the faces, yet it still gives a very good view. Not a single object is missing. As said before, it is the low-res variant (of the 1st LOD) for slow machines. The main method to reach this was merging vertices and deleting faces. As you can see, both LODs have nearly the same appearance.

[Image: lod36wu.jpg]

Now it's getting interesting. Take a look at the LOD value: there's a jump. That means the moderately detailed model before it (the 2nd LOD) will stay visible a little longer. But if OFP decides it would be too heavy, it will switch anyway! That wouldn't matter at all, because the 3rd LOD still shows every relevant object you can see from the outside. And look at the polycount: 3 times less! Normally the model is already at a certain distance by then, as the small picture shows. (Ignore the little "mistakes"; in-game you won't see them anymore.)

[Image: lod42py.jpg]

Again the LOD value makes a big jump, to make sure the model will still appear at a far distance. And if the load is high, that doesn't matter either. The most important details are still included, so there's no problem seeing it a bit closer.

The main method is still merging vertices and deleting faces. But for this model it could be a very good idea to work with alpha textures to get better results. The only reason I didn't was that this model will be placed only once on the whole island. There is really no need to exaggerate the optimising. But if it were a shanty, it would indeed make sense.

[Image: lod51yi.jpg]

Now the very last LOD. This one I made from scratch! It wouldn't have been a very good idea to keep working only by merging and deleting faces. This LOD is made for very far views and for fast planes. Only high-contrast parts are still present, to keep the basic character of the view.

After all this, it's important to check all the vertices/faces after merging and deleting (especially the lighting on enterable buildings). Sometimes it's necessary to create a new face to get the lighting right. That is more important than saving it.

As a result, I don't pay attention to keeping every vertex in the same position. The only decisive things for me are the correct view, proper function and the best results.


I decided to add a few example pictures for my question above. Again, I know this isn't the best example of an end result (the number of points and faces isn't equal in the two different reduction methods):

[Image: example1.jpg]

Normal (smooth) corner in 2D.

[Image: example2.jpg]

Same object, with just the 2 outer corner points deleted. All other points still have exactly the same x,y,z values.

[Image: example3.jpg]

Same object, but with the corner points merged, reduced to 1 NEW x,y value.

OK, again, I know the last method will, in the overall big picture, be the best reduction method (1 face/2 points less compared to example 2, times the number of corners in the object), but again, that isn't what I'm looking for. What I want to know is how it compares in terms of the engine's (CPU-RAM-GFX) calculation of vertex positions and everything that comes with it.

I don't know how this gets calculated. Does a new vertex position undergo a new calculation (and therefore a small impact on CPU/GFX) because it needs to be recalculated, or does the engine 'remember' previous vertex locations, so that example 2 needs fewer REcalculations? Ok, again, not the best example, as of course example 3 will need fewer calculations anyway (fewer vertices/faces).

So what I'm after is: do repositioned vertex points have a bigger impact on the CPU/GFX than original (unedited) vertex locations?

Sorry for the crappy example and my poor post info. I think I will need to rethink that example, but I hope you guys/girls understand what I'm looking for: CALCULATIONS of vertex points (original vs newly positioned x,y,z values). I don't have a clue how it works (does it always recalculate all vertex positions in every LOD, or does it 'see and recognise' identical x,y,z values more easily?) and which is most intensive for the CPU/GFX. In the end it is all about these kinds of calculations, so it is something I would like to know.


Imho this won't make any difference.

In fact the GPU always (meaning each frame) has to render every face anyway, so it's pretty useless to think about details like that.

And even if it did make a difference, it would be a very, very small one.

Ok, I don't know this for sure, but I personally wouldn't pay any attention to it.

