Latinman - I tested your mission examples. I am not sure whether this has been pointed out already (I lost track after the 13th page).
What I noticed from these examples is that the AI engine only seems to consider objects either once they have moved (a meter is enough) or once another - already known - object gets very close to them, so they get spotted.
To stick to your example mission: Soldier 1 doesn't see Soldier 2. But if you have Soldier 1 move even further away, then Soldier 2 will immediately see Soldier 1, and vice versa.
Movement seems to be the key to the awareness levels managed by the AI engine. This would also explain why the suicide runner keeps running towards the enemy even though he gets shot at: he simply doesn't know of his opponent until he gets really close, because his opponent goes prone but never really moves.
It's different if you join the group of either of the soldiers. You will notice that you yourself announce the presence of the enemy soldier even when he is standing further away. For some reason you as a player immediately become part of the "collective consciousness", whereas your buddy next to you does not - because he hasn't moved yet, and neither have the potential objects he should know about. So for some reason you as a player know about objects invisible to all others.
Hence my assumption that something needs to be improved in the AI init process. If the AI soldiers rely on the closestObject function to count potential opponents, then the AI engine must ensure that this function knows about all objects present in a given vicinity right after a mission starts - and not only after each object has moved or is almost staring them in the face.
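To make the hypothesis concrete, here is a toy model of the awareness rule I am describing: an entity only becomes known once it has moved, or once an already-known entity gets very close to it. This is a sketch in Python rather than the game's scripting language; the class, the radius value, and both rules are my assumptions about the behavior, not the engine's actual code.

```python
SPOT_RADIUS = 5.0  # hypothetical "very close by" distance, made up for illustration

def distance(a, b):
    return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

class Awareness:
    """Hypothesized awareness model: knowledge spreads only via movement or proximity."""

    def __init__(self):
        self.known = set()  # names of entities this AI knows about

    def update(self, entities, moved):
        """entities: name -> (x, y) positions; moved: names that moved this tick."""
        # Rule 1 (assumed): any entity that moved becomes known.
        self.known |= moved & entities.keys()
        # Rule 2 (assumed): an unknown entity close to an already-known one becomes known.
        for name, pos in entities.items():
            if name in self.known:
                continue
            if any(distance(pos, entities[k]) <= SPOT_RADIUS
                   for k in self.known if k in entities):
                self.known.add(name)

ai = Awareness()
world = {"soldier1": (0.0, 0.0), "soldier2": (100.0, 0.0)}

ai.update(world, moved=set())
assert "soldier2" not in ai.known      # static entities stay unknown at mission start

ai.update(world, moved={"soldier1"})   # soldier1 moves a meter
assert "soldier1" in ai.known          # movement triggers awareness

world["soldier2"] = (3.0, 0.0)         # soldier2 ends up next to the known soldier1
ai.update(world, moved=set())
assert "soldier2" in ai.known          # spotted via proximity to a known object
```

The point of the toy model is the first assertion: under these two rules nobody is ever registered at mission start, which matches the behavior I saw. The fix I am suggesting would amount to seeding `known` with everything in the vicinity during init.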
My 2 cents,