bad benson 1733 Posted September 12, 2015

> Those are some nice animations and transitions, but damn, that AI played out pretty badly. Enemy soldiers just stand next to enemies, totally unresponsive or very late to act. I wonder why he places them so close together from spawn, as that can make the AI spazz out; it's just unnatural.

Exactly my thoughts. I bet they wouldn't be half bad if he made them engage each other from afar, but who knows what the editor allows you to do. Maybe they have a very short spotting distance, or you can't place waypoints or something. But like this? It looked impressive in some ways, but I don't think there is any AI out there that can handle such chaos gracefully; there are simply too many priority targets at once.

What I also liked a lot, aside from the animations, is the screams. Does anyone remember DSAI for ArmA 1/2? I loved that so much. ArmA 3's radio shouts come closer, but not close enough for my taste. In that video, the screams combined with the sound engine made for a great dramatic atmosphere, and all of it procedural.
gammadust 12 Posted September 12, 2015

> Maybe. :) Then again, ArmA is no Super Mario (I'm sure you've seen plenty of AI playing 2D side-scrollers). The environment can't be learned, since it's not static (or deterministic) but highly dynamic. We can't just use a "simple" fitness function optimizing some score (like distance reached in a 2D level).

I disagree here. While you're right that ArmA is not a side-scroller, machine learning has shown success with far more complexity than that. Reinforcement learning is not necessarily constrained to a single reward signal (i.e. longest distance reached), and it is no longer limited to a small number of environment states either: the agent also keeps a memory of past state > action > reward experiences. Machine learning already addresses effectively infinite state spaces.

> The question is what the AI is supposed to be optimizing/maximizing. Combat effectiveness? Some combined score? Or should some error (deviation from "human behaviour") be minimized? I'm leaning towards the latter, in some supervised fashion.

Here is what the AI should mimic: take this positive reward if successful, take this negative reward if failed.

> Exactly. But there is no single "human behaviour". There are many different roles that could be trained. Training data should therefore be plug'n'play (per unit or group), and there needs to be a streamlined workflow to let mission designers easily train new roles as they see fit (besides having a set of pretrained roles from BIS). Imagine if missions (partially) came with their own custom-trained AI. Bananas! :D

I did imagine; I can't wait for it.
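To make the "memory of state > action > reward" idea concrete, here is a minimal sketch of tabular Q-learning on a toy corridor environment. Everything in it (the 5-state corridor, the reward values, the hyperparameters) is a made-up illustration, not anything from ArmA or the video; it just shows the loop the post describes: a positive reward on success, a negative reward on failure, and stored value estimates that accumulate past experience.

```python
import random

# Toy corridor: agent starts at cell 0, goal at the right end.
# Reaching the goal gives a positive reward; stepping off the
# left edge gives a negative one (the "success/failure" rewards
# mentioned in the discussion).
N_STATES = 5
ACTIONS = (-1, +1)  # move left, move right

def step(state, action):
    """Return (next_state, reward, done) for one transition."""
    nxt = state + action
    if nxt >= N_STATES - 1:
        return N_STATES - 1, 1.0, True   # success: positive reward
    if nxt < 0:
        return 0, -1.0, True             # failure: negative reward
    return nxt, 0.0, False

random.seed(0)
q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, eps = 0.5, 0.9, 0.1        # learning rate, discount, exploration

for _ in range(2000):
    s, done = 0, False
    while not done:
        # epsilon-greedy: mostly exploit remembered rewards, sometimes explore
        if random.random() < eps:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: q[(s, act)])
        s2, r, done = step(s, a)
        # the "state > action > reward" memory: fold this experience
        # into the stored value estimate for (state, action)
        best_next = 0.0 if done else gamma * max(q[(s2, b)] for b in ACTIONS)
        q[(s, a)] += alpha * (r + best_next - q[(s, a)])
        s = s2

# The learned greedy policy heads right from every non-terminal state.
policy = {s: max(ACTIONS, key=lambda act: q[(s, act)])
          for s in range(N_STATES - 1)}
```

The same update rule scales to far richer state spaces once the table is replaced by a function approximator, which is the point being made above: the limit is not the number of states, but choosing what the reward (or, in the supervised variant, the deviation from recorded human behaviour) should measure.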