Have a good read if you are interested in AI or the technical side of the project!
- OOP -
Almost all code is written in my own OOP implementation for SQF, OOP-Light. I think this is the first mission of this class written entirely in OOP, correct me if I am wrong. In a few words, the OOP macros create namespaces for individual objects in missionNamespace by concatenating the object reference and variable name in the form objName_varName. It supports basic OOP features such as classes, methods, static members, and inheritance. There are also more advanced features: member attributes accessible at run time, public object creation, and a lot of run-time assertions (wrong class names, null objects, wrong class member names) which can be disabled for release builds.
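To illustrate the idea, here is roughly what such member access could boil down to once the macros are expanded - a hypothetical sketch, not the actual OOP-Light macro definitions:

```sqf
// Hypothetical sketch of the objName_varName concept, not the real macros.
// An 'object' is just a unique string; its members live in missionNamespace.
private _objRef = "o_AIUnit_N123"; // object reference (a unique string)

// A SET_VAR(obj, varName, value) macro could expand to something like:
missionNamespace setVariable [_objRef + "_currentGoal", "GoalUnitRelax"];

// ...and a GET_VAR(obj, varName) macro to:
private _goal = missionNamespace getVariable (_objRef + "_currentGoal");
// _goal is now "GoalUnitRelax"
```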
The benefit of OOP has been huge. First of all, it has allowed us to write code by operating with data structures, which happens naturally a lot, whatever kind of component is being written. We were able to organize the code base properly. Class inheritance was hugely beneficial for the AI OOP classes (more about them later) and for the UI OOP classes. Finally, member variable attributes have let me do automated (de)serialization of objects for transmission across the network and for storage of saved games. Even with these macros, though, it is still SQF with all its disadvantages. Looking back, I think I should have gone with Intercept or something similar, because IDE support for the code base matters a lot in such a big project.
- AI -
Although this is definitely not a general-purpose AI addon, and it doesn't aim to be one, half of all the code in the scenario is actually AI code. As you have probably noticed, most features are AI-related; AI is the main feature of the mission.
- Low Level AI: Goal-Oriented Action Planning (GOAP) -
Low level AI is composed of all AI levels except for the topmost commander level. There are several logical unit classes in the mission: Unit (soldier, vehicle, drone, ammo box), Group (several units of any kind), and Garrison (several groups and ungrouped vehicles). Each of these objects runs one of the AI classes: AIUnit, AIGroup or AIGarrison, all inherited from a common AI_GOAP class.
I chose the Goal-Oriented Action Planning (GOAP) architecture for the low level AI. GOAP was first used in the game F.E.A.R. in the early 2000s. It is much better than traditional FSMs in a lot of ways. It is well described in lots of articles by Jeff Orkin, for instance "Three States and a Plan: the AI of F.E.A.R." (Jeff's page has lots more resources), and you can also find the F.E.A.R. AI source code in the F.E.A.R. SDK. Having OOP in this project also helped a lot with implementing this architecture.
Now let me describe GOAP briefly.
Step 1. Goal selection
The game world as seen by our AI agent (a unit, group or garrison in our case) is formally described as a world state structure (just an array of values). On each update a current goal is chosen (each goal is an OOP class) based on the world state and any other factors: we calculate the relevance of all potential goals and pick the most relevant one. The goal describes the desired world state in world state terms.
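A minimal sketch of what one such update could look like, assuming each goal class exposes a relevance function as a static member (the goal names and the _calculateRelevance member are made up for illustration):

```sqf
// Hypothetical sketch of goal selection; class names and the
// "_calculateRelevance" static member are illustrative only.
private _worldState = [true, false, false]; // e.g. [hasWeapon, inVehicle, atDestination]
private _possibleGoals = ["GoalUnitRelax", "GoalUnitDefendSelf", "GoalUnitMove"];

private _bestGoal = "";
private _bestRelevance = -1;
{
    // Each goal class calculates its relevance from the world state
    private _fnc = missionNamespace getVariable [_x + "_calculateRelevance", { 0 }];
    private _relevance = [_worldState] call _fnc;
    if (_relevance > _bestRelevance) then {
        _bestRelevance = _relevance;
        _bestGoal = _x;
    };
} forEach _possibleGoals;
// _bestGoal now holds the most relevant goal for this update
```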
Step 2. Action Planning
All actions are configured to modify one or more world state properties, so actions can be chained together by the planner. The planner uses the A* algorithm to find an action plan that reaches the goal world state from the current world state. Normally one would run A* to find a route between nodes on a 2D plane, for instance; here it's similar, but the space can be 5D or 10D or whatever we like, depending on the number of world state properties, and actions connect the 'nodes'. The final generated action plan is also sorted by action precedence, to make the plan make more sense in some cases. The planner can also resolve parameters for actions: for instance, if a 'move' action is specified to set the 'pos' world state property to a desired value, the planner can pass that position, derived from the goal's desired world state, as a parameter to the action.
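To make this concrete, here is a heavily simplified sketch of the idea in plain SQF: a boolean-only world state, actions described by preconditions and effects, and a plain breadth-first search standing in for real A* (which would also track action costs and visited states). All names here are made up:

```sqf
// World state: [atDestination, inVehicle]. Each action is
// [name, preconditions, effects]; both are lists of [index, value] pairs.
private _actions = [
    ["ActionGetOut", [[1, true ]], [[1, false]]], // in vehicle -> on foot
    ["ActionMove",   [[1, false]], [[0, true ]]]  // on foot -> at destination
];
private _current = [false, true]; // in a vehicle, not at destination
private _goal = [[0, true]];      // desired: atDestination == true

// Breadth-first search over world states (real A* would weight by cost).
private _plan = [];
private _found = false;
private _open = [[_current, []]]; // queue of [state, plan so far]
while {!_found && count _open > 0} do {
    (_open deleteAt 0) params ["_state", "_planSoFar"];
    if (_goal findIf { !(_state select (_x select 0) isEqualTo (_x select 1)) } == -1) then {
        _plan = _planSoFar; // goal world state reached
        _found = true;
    } else {
        { // expand: apply every action whose preconditions hold in _state
            _x params ["_name", "_pre", "_eff"];
            if (_pre findIf { !(_state select (_x select 0) isEqualTo (_x select 1)) } == -1) then {
                private _newState = +_state;
                { _newState set _x } forEach _eff;
                _open pushBack [_newState, _planSoFar + [_name]];
            };
        } forEach _actions;
    };
};
// _plan is now ["ActionGetOut", "ActionMove"]
```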
Actions are actually mini-FSMs with a few states, such as active, inactive, completed, and failed. Action-class objects make agents do something, like moving or getting into vehicles.
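Schematically, it could look like this (the state names come from the text above; everything else is an illustrative sketch):

```sqf
// Hypothetical sketch of one action's mini-FSM, for a 'move' action.
private _processMoveAction = {
    params ["_state", "_unit", "_destination"];
    switch (_state) do {
        case "INACTIVE": {
            _unit doMove _destination; // make the agent do something
            "ACTIVE"
        };
        case "ACTIVE": {
            if (!alive _unit) then { "FAILED" } else {
                if (_unit distance _destination < 10) then { "COMPLETED" } else { "ACTIVE" }
            }
        };
        default { _state }; // COMPLETED and FAILED are terminal
    }
};
// Called on every AI update while this is the current plan step:
// _state = [_state, _soldier, _destPos] call _processMoveAction;
```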
The multi-level AI operation is achieved in the following manner. All agents have internal and external goals. The relevance of internal goals is calculated all the time; these mainly include relax behaviors (low relevance, for when there is nothing else to do), self-defence, and escaping severe danger (grenades and such, although not implemented in the mission right now). External goals behave the same as internal goals, except that they are generally added by some higher-level entity: the Commander (or a player commander) sends goals to Garrisons, Garrisons send goals to Groups while performing some action, and Groups send goals to Units.
Performance and optimizations
SQF performance leaves an extremely small computation budget, so the original GOAP architecture had to be optimized.
Most goals map uniquely to a single action: for instance, GoalUnitRepairVehicle directly matches ActionUnitRepairVehicle. This is true for most Unit and Group level goals/actions, so there is no need to run the costly A* planner for them (although it is still possible in the framework). In such cases the action is specified as a 'predefined action' for the goal.
It is also possible to bypass the A* planner and write the action plan manually, which is sufficient in most cases, for instance: (1. if in a vehicle, get out of the vehicle; 2. move to the destination).
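In a sketch, such a bypass could look like this (the member name is made up, following the objName_varName convention from the OOP section):

```sqf
// Hypothetical sketch: a goal can carry a 'predefined action' and only
// falls back to the costly A* planner when it has none.
private _goal = "GoalUnitRepairVehicle";
private _predefined = missionNamespace getVariable [_goal + "_predefinedAction", ""];
private _plan = if (_predefined != "") then {
    [_predefined] // trivial one-step plan, e.g. ["ActionUnitRepairVehicle"]
} else {
    [] // ...run the A* planner over the world state here (see above)...
};
```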
In the end, the planner is a fascinating concept, but I have only used it for managing convoys in the Garrison AI, both because of performance (we don't need to update garrisons as often as groups) and because garrison action plans are more complex. The planner helps with producing new action plans, but a plan can also be written manually in about the same time it takes to tweak action costs until the planner produces a plan that makes sense.
GOAP is a nice architecture for low-level AI, much better than an FSM, because new goals and actions can be added easily, avoiding the general clusterduck of 1000 links in an FSM. This is mostly due to goal relevance being reevaluated on each update, with the highest-relevance goal chosen every time. In an FSM the same behavior typically results in links from every state to every state with higher priority.
- Commander AI -
The Commander AI was made by billw.
The Commander AI operates entirely above the garrison level, sending goals to garrisons. The core component of the Commander AI is the WorldModel object, which represents the way the game world is perceived by the commander. The WorldModel contains GarrisonModels, representing the Garrisons to be commanded, and LocationModels, representing the Locations it is aware of.
The planning consists of several steps:
1. Synchronize the WorldModel with the real game world, by iterating all garrison and location models.
2. Make a copy of the WorldModel - the Future WorldModel.
3. Project currently active actions onto the Future WorldModel. For instance, if we are currently running an action which captures Location B, then Location B is marked as captured in the future. This way we prevent generating multiple 'capture' actions for the same location while the current one is active.
4. Generate actions. The planner generates all actions it considers possible. For instance, if we have 10 garrisons, it can generate a 'reinforce' action from each of them to each of the others. This typically results in hundreds of potential actions being generated.
5. For the number of actions we want the Commander to start (a sketch of this loop follows the list):
a. Calculate the 'score' of all actions. Scoring depends on lots of variables: distance between source and destination, resources available, the types of the locations, known enemy activity, the Commander's active strategy, and so on. Importantly, scoring is done using the Future WorldModel, so as to consider the results of all currently in-progress actions.
b. Take the top scoring action, activate it, and apply it to the Future WorldModel. This allows scoring on the next iteration to take this new action into account. This is important in cases where two actions take resources from the same location: if the first action uses up the resources, the second action must be re-scored to reflect that.
c. If there are potential actions left, and we want to activate more, go back to a.
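Here is a sketch of that scoring loop, with stand-in stubs where the real WorldModel and scoring code would be (all names here are hypothetical, not billw's actual API):

```sqf
// Hypothetical stand-ins for the real objects from steps 1-4:
private _futureWorld = [];                                    // Future WorldModel
private _potentialActions = ["ReinforceAtoB", "CaptureLocC"]; // generated actions
private _fnc_scoreAction = { params ["_world", "_action"]; random 10 }; // stub
private _fnc_applyToFuture = { params ["_world", "_action"]; };         // stub

private _maxNewActions = 2;
for "_i" from 1 to _maxNewActions do {
    if (count _potentialActions == 0) exitWith {};
    // a. (re)score every remaining action against the Future WorldModel
    private _scored = _potentialActions apply {
        [[_futureWorld, _x] call _fnc_scoreAction, _x]
    };
    _scored sort false; // descending by score
    // b. activate the best action and apply it to the Future WorldModel, so
    //    the next iteration's scores account for the resources it consumes
    (_scored select 0) params ["_score", "_action"];
    [_futureWorld, _action] call _fnc_applyToFuture;
    _potentialActions deleteAt (_potentialActions find _action);
    // c. the loop continues while we want more actions and candidates remain
};
```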
Each commander's action is actually an FSM (finite state machine) - not BI's FSM, but an FSM made with our OOP. Each Commander Action consists of several ActionStateTransitions. Examples of such transitions are: giving a 'move' order to a garrison, splitting a garrison in two, giving an 'attack' action to a garrison, etc. Actions have an important feature - 'action variables', implemented as an array of values. They let one transition provide data to another: for instance, when we split a garrison to give it a 'move' order, the 'split garrison' transition must provide the new garrison ID to the 'move garrison' transition. This is analogous to the "blackboard" method used in other AI systems, such as behaviour trees.
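A sketch of the action-variables idea (all names invented for illustration):

```sqf
// Hypothetical sketch: 'action variables' as a shared array, letting one
// ActionStateTransition pass data to a later one (a simple blackboard).
private _actionVars = [""]; // slot 0 reserved for the new garrison ID

private _astSplitGarrison = {
    params ["_vars"];
    private _newGarrisonId = "g_Garrison_N42"; // would come from the real split
    _vars set [0, _newGarrisonId];             // publish for later transitions
};
private _astMoveGarrison = {
    params ["_vars", "_targetPos"];
    private _garrisonId = _vars select 0; // read what the split produced
    // ...give the 'move' order to that garrison here...
};

[_actionVars] call _astSplitGarrison;
[_actionVars, [1000, 2000, 0]] call _astMoveGarrison;
```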
Performance and results
Although FSMs have some limitations, the simplicity of the garrison actions makes them sufficient in this case. The system can manage an arbitrary number of actions, and we have had a lot of fun observing bots planning and executing attacks on each other. The performance is also fine: one iteration of planning generally takes 0.5...2 minutes (despite the huge number of potential actions). This is acceptable for the Commander AI, as it deals only in strategic decisions that occur on timescales of minutes to hours.
- Why did I write all that? -
Well, as you can see, I like AI topics, and I hope someone reading this will get inspired to extend the existing set of goals and actions with new ones, or to base their own AI mod on this. I am sure that the AI framework we've made is the biggest achievement of the whole project.
- Scheduling -
With the execution options available in SQF (PFH or spawn), it's quite challenging to manage such a high amount of computation, where individual steps can sometimes take more than several milliseconds. I have decided to base the whole mission framework (and the AI too) on the SQF scheduler. Several 'threads' are spawned: Commander AI threads (one per commander), a Main thread with the Garrison AI and all Garrison operations, a Group AI thread, and several other auxiliary threads. AI processing is scheduled in a slightly different manner, though: typically a thread runs the 'process' method of one of its assigned AI objects (the one updated longest ago), then processes up to a certain number of messages from its queue, then repeats. Communication between threads is done through message queues. It all works well enough, but has the drawback of being totally asynchronous with the game frame rate, meaning that at low frame rates we get horrible latencies, for instance in response to user input. At 40+ FPS it runs well enough though. On a dedicated server I also cheat a bit and use startLoadingScreen (which increases the scheduler limit from 3 ms to 50 ms) for a while if the queue of threads gets too big. Alternatively I could have used this SQF command, if it were implemented.
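Schematically, one such thread could look like this (a simplified sketch; the variable names and the message handling are made up):

```sqf
// Hypothetical sketch of one scheduler 'thread'. The queues live in
// missionNamespace so other threads can reach them (arrays are by reference).
missionNamespace setVariable ["mainThread_aiObjects", []];
missionNamespace setVariable ["mainThread_msgQueue", []];

0 spawn {
    private _aiObjects = missionNamespace getVariable "mainThread_aiObjects";
    private _msgQueue  = missionNamespace getVariable "mainThread_msgQueue";
    while {true} do {
        // 1. Run 'process' on the AI object that was updated longest ago
        if (count _aiObjects > 0) then {
            private _ai = _aiObjects deleteAt 0; // crude oldest-first rotation
            // ...call the object's 'process' method here...
            _aiObjects pushBack _ai;
        };
        // 2. Handle up to N messages sent by other threads
        for "_i" from 1 to 10 do {
            if (count _msgQueue == 0) exitWith {};
            private _msg = _msgQueue deleteAt 0;
            // ...dispatch _msg to its handler here...
        };
        sleep 0.001; // yield back to the SQF scheduler
    };
};
```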