Rydygier

Neural network - working SQF example?


Some time ago I decided to try to understand the concepts of neural networks (mostly out of curiosity; before that I knew nearly nothing specific about them). After rather extensive googling in both my native tongue and English, it seemed I had gathered enough info to grasp the very basics. I even broke through the involved math, or at least I thought so, and felt ready to model a neural network in SQF. Which I did. Problems started, however, when it came to proper training of such a network. While I'm interested mostly in unsupervised learning, I tried both unsupervised and supervised (though I probably didn't 100% grasp the math of backpropagation). The results were, IMO, either partially promising or a failure (depending on network architecture, input values, learning parameters...), and in general not what I wanted or expected.
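For reference, here is the supervised update I was trying to apply, reduced to its simplest case: the delta rule for a single sigmoid output neuron. This is just a minimal sketch; RYD_NN_deltaStep is an illustrative helper, not part of my network code below:

```sqf
RYD_NN_deltaStep = //delta rule for one sigmoid output neuron (illustrative sketch)
	{
	params ["_inputs","_weights","_target","_learnV"];

	//forward pass: weighted sum, then sigmoid
	private _sum = 0;
		{
		_sum = _sum + (_x * (_weights select _forEachIndex));
		}
	foreach _inputs;
	private _out = 1/(1 + exp (-_sum));

	//error term: (target - output) * sigmoid derivative, which is out * (1 - out)
	private _delta = (_target - _out) * _out * (1 - _out);

	//weight update: w := w + learnV * delta * input
		{
		_weights set [_forEachIndex,((_weights select _forEachIndex) + (_learnV * _delta * _x))];
		}
	foreach _inputs;

	_out
	};
```

Full backpropagation would then push `_delta` backwards through the hidden layers, weighted by each synapse, but the per-neuron update has this same shape.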

 

Anyway, I've now returned to this. My problem is that I lack a 100%-certainly-working SQF reference (and I'm not familiar enough with any other programming language to benefit from the examples available on the internet), so I don't know whether I'm doing something wrong, whether I'm expecting something the network can't deliver, or whether I'm misusing the network somehow.

 

Is there any good soul who has successfully implemented and trained a neural network with good results using SQF and could share their code here?

 

What I have so far (unsupervised part only):

 



RYD_NN_activationF = //multiplication by weights, summation, activation function
	{
	params ["_neuron","_sumAll"];
	
	private _sum = _neuron select 2;
	private _func = _neuron select 4;
	
	private _act = switch (_func) do
		{
		case ("linear"):
			{
			_sum
			};
		
		case ("unipolar"):
			{
			if (_sum <= 0) then//unipolar activation function
				{
				0
				}
			else
				{
				1
				}			
			};
			
		case ("bipolar"):
			{
			if (_sum <= 0) then//bipolar activation function
				{
				-1
				}
			else
				{
				1
				}			
			};
			
		case ("relu"):
			{
			if (_sum <= 0) then//reLU
				{
				0
				}
			else
				{
				_sum
				}				
			};
			
		case ("hard_tanh"):
			{
			switch (true) do//hard tanh
				{
				case (_sum < -1): {-1};
				case ((_sum >= -1) and {(_sum < 1)}): {_sum};
				default {1};
				};			
			};
			
		case ("sigmoid"):
			{
			(1/(1 + exp (-_sum)))
			};
			
		case ("tanh"):
			{
			(((exp (_sum)) - (exp (-_sum)))/((exp (_sum)) + (exp (-_sum))))
			};
			
		case ("softmax"):
			{
			((exp (_sum))/_sumAll)
			}
		};
	
	_neuron set [3,_act];
	};

RYD_NN_propForward = //forward propagation - updates the neurons' output values
	{
	params ["_network","_bias"];
	
	_iLayer = _network select 0;
	_hLayers = _network select 1;
	_oLayer = _network select 2;
		
		{
		_thisLayer = _x;
		
		_inputs = if (_foreachIndex == 0) then//what feeds the inputs
			{			
			_iLayer
			}
		else
			{
			_inputs2 = [];
			
				{
				_inputs2 pushBack (_x select 3)
				}
			foreach (_hLayers select (_foreachIndex - 1));		
			
			_inputs2
			};
			
		_sumAll = 0;
			
			{
			_sum = 0;
			_x set [0,_inputs];//store the current input values for this neuron
			_neuron = _x;
			
				{
				_sum = _sum + (_x * ((_neuron select 1) select _foreachIndex));
				}
			foreach _inputs;//summing the products of inputs and weights
			_sum = _sum + _bias;
			_x set [2,_sum];//store the sum in the target neuron
			_sumAll = _sumAll + (exp _sum);
			}
		foreach _thisLayer;
			
			{			
			[_x,_sumAll] call RYD_NN_activationF;
			}
		foreach _thisLayer;//inputs processed into the neuron's output value, neuron updated
		}
	foreach _hLayers;//update the output values of the hidden-layer neurons
	
	_inputs = [];
	
		{
		_inputs pushBack (_x select 3)
		}
	foreach (_hLayers select ((count _hLayers) - 1));//the inputs for the output-layer neurons are the output values of the last hidden layer
	
	_sumAll = 0;
	
		{
		_sum = 0;
		_x set [0,_inputs];//store the current input values for this neuron
		_neuron = _x;
		
			{
			_sum = _sum + (_x * ((_neuron select 1) select _foreachIndex));
			}
		foreach _inputs;//summing the products of inputs and weights
		_sum = _sum + _bias;
		_x set [2,_sum];//store the sum in the target neuron
		_sumAll = _sumAll + (exp _sum);
		}
	foreach _oLayer;
	
		{		
		[_x,_sumAll] call RYD_NN_activationF;	
		}
	foreach _oLayer//update the output-layer neuron values
	};
	
RYD_NN_autoLearn = //self-learning - unsupervised synapse weight correction (weights of synapses connecting activated, high-value neurons get strengthened - consolidating and reinforcing the pre-randomized character of the network)
	{
	params ["_network","_learnV","_treshold","_method"];
	
	_iLayer = _network select 0;
	_hLayers = _network select 1;
	_oLayer = _network select 2;
	
	switch (_method) do
		{
		case ("hebb"):
			{
				{
				if (_foreachIndex > -1) then
					{
					_layer = _x;
					
						{
						_nextVal = _x select 3;//neuron output

						if (_nextVal > _treshold) then
							{
							_prevVals = _x select 0;//neuron inputs
							_synapses = _x select 1;//input weights

								{
								if (_x > _treshold) then
									{
									//_add = _learnV * _nextVal * _x;//pure Hebb
									_add = _learnV * _nextVal * (_x - (_nextVal * (_synapses select _foreachIndex)));//Oja's rule
									//_add = _learnV * (_x - (_synapses select _foreachIndex));
									//_add = 1 + (_learnV/(abs (_synapses select _foreachIndex)));
									_synapses set [_foreachIndex,((_synapses select _foreachIndex) + _add)]
									}
								}
							foreach _prevVals
							}
						/*else
							{
							_prevVals = _x select 0;//neuron inputs
							_synapses = _x select 1;//input weights

								{
								if ((_x < _treshold) and {not ((_synapses select _foreachIndex) == 0)})  then
									{
									//_rem = _learnV * _nextVal * _x;
									//_rem = _learnV * (_x - (_synapses select _foreachIndex));
									//_rem = 1 + (_learnV * 0.1/(abs (_synapses select _foreachIndex)));
									//_synapses set [_foreachIndex,((_synapses select _foreachIndex)/_rem))]
									}
								}
							foreach _prevVals
							}*/
						}
					foreach _layer
					}
				}
			foreach (_hLayers + [_oLayer])
			};
			
		case ("wta")://winner takes all
			{
				{
				_layer = _x;
				_max_Ix = 0;
				_max = (_layer select 0) select 2;
				
					{
					if ((_x select 2) > _max) then
						{
						_max = _x select 2;
						_max_Ix = _foreachIndex
						}
					}
				foreach _layer;

				_nextVal = (_layer select _max_Ix) select 3;//output of the winning neuron

				_prevVals = (_layer select _max_Ix) select 0;//neuron inputs
				_synapses = (_layer select _max_Ix) select 1;//input weights

					{
					//_add = _learnV * _nextVal * _x;
					//_add = _learnV * (_x - (_synapses select _foreachIndex));
					_add = _learnV * _nextVal * (_x - (_nextVal * (_synapses select _foreachIndex)));//Oja's rule
					_synapses set [_foreachIndex,((_synapses select _foreachIndex) + _add)]
					}
				foreach _prevVals;
				}
			foreach (_hLayers + [_oLayer])
			};
		};
	};
	
RYD_NN_NetworkWeaver = 
	{
	params ["_inputN","_arr","_wSpread"];
	
	private _input = [];
	private _hidden = [];
	private _output = [];
	
	for "_i" from 1 to _inputN do
		{
		_input pushBack 0
		};
		
	private _lCnt = (count _arr) - 1;
		
		{
		private _hCol = [];
		private _hRows = _x select 0;
		private _actF = _x select 1;
		
		for "_j" from 1 to _hRows do
			{
			private _weightsN = if (_foreachIndex == 0) then
				{
				_inputN
				}
			else
				{
				(count (_hidden select ((count _hidden) - 1)))
				};
			
			private _weights = [];
			
			for "_k" from 1 to _weightsN do
				{
				_weights pushBack (random _wSpread);
				//_weights pushBack ((random (_wSpread * 2)) - _wSpread);//not Gaussian
				//_weights pushBack ((random [0,_wSpread,_wSpread * 2]) - _wSpread);//Gaussian
				};
				
			private _intake = [];
			
			for "_k" from 1 to _weightsN do
				{
				_intake pushBack 0;
				};			
				
			_hCol pushBack [_intake,_weights,0,0,_actF];
			};
		
		if (_foreachIndex == _lCnt) then
			{
			_output = _hCol
			}
		else
			{
			_hidden pushBack _hCol
			}
		}
	foreach _arr;

	[_input,_hidden,_output]
	};

//e.g.: [RYD_NN_network,[0,0],_learnV,_treshold,_bias] call RYD_NN_useNetwork
RYD_NN_useNetwork = 
	{
	params ["_network","_cFeed","_learnV","_treshold","_bias"];
	
	_network set [0,_cFeed];
	
	[_network,_bias] call RYD_NN_propForward;
	[_network,_learnV,_treshold,"wta"] call RYD_NN_autoLearn;
	
	_out = [];
	
		{
		_out pushBack (_x select 3)
		}
	foreach (_network select 2);

	_out	
	};
	
RYD_NN_printNetwork = 
	{
	params ["_network"];
	
	private _input = _network select 0;
	private _hidden = _network select 1;
	private _output = _network select 2;
	
		{
		diag_log format ["I%1: %2",(_foreachIndex + 1),_x];
		}
	foreach _input;
	
	diag_log "----------";
	diag_log "HIDDEN";
	
		{
		_col = _x;
		_colN = _foreachIndex;
		
		diag_log "";
		diag_log format ["COLUMN %1",_colN];
		
			{
			diag_log format ["N%1%2: %3",_colN,(_foreachIndex + 1),_x];
			}
		foreach _col;
		}
	foreach _hidden;
	
	diag_log "----------";
	diag_log "OUTPUT";
	
		{
		diag_log format ["I%1: %2",(_foreachIndex + 1),_x];
		}
	foreach _output;
	};

	
//SOME TEST EXECUTION	
RYD_NN_network = [2,[[4,"sigmoid"],[4,"sigmoid"]],1] call RYD_NN_NetworkWeaver;
[RYD_NN_network] call RYD_NN_printNetwork;

_learnV = 0.2;
_treshold = 0.5;
_iterations = 100;
_bias = 0;
_feed = []; 

startLoadingScreen ["NN","RscDisplayLoadCustom"];	
for "_i" from 1 to _iterations do
	{
	diag_log "";
	diag_log format ["i: %1",RYD_NN_network select 0];
	//diag_log format ["h1: %1",RYD_NN_network select 1];
	//diag_log format ["h2: %1",RYD_NN_network select 1];
	_out = [];
	
		{
		_out pushBack (_x select 3)
		}
	foreach (RYD_NN_network select 2);
	diag_log format ["o: %1",_out];
	
	//_cFeed = selectRandom _feed;
	//_cFeed = [(selectRandom [-1,1]),(selectRandom [-1,1]),(selectRandom [-1,1]),(selectRandom [-1,1])];
	//_cFeed = [(random 1),(random 1)];
	_cFeed = [-100,100];
	[RYD_NN_network,_cFeed,_learnV,_treshold,_bias] call RYD_NN_useNetwork;

	progressLoadingScreen (_i/_iterations);
	};
	
endLoadingScreen;

[RYD_NN_network] call RYD_NN_printNetwork;

test mission

 

(With the above code I'm, for now, simply trying to make the NN classify numerical input data (2D coords, for example) into a few classes by returning a distinctive numerical output per category.)
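One thing I suspect with test feeds like [-100,100]: a sigmoid saturates for sums that large, so its output pins at 0 or 1 and every input starts to look the same after the first layer. Scaling the inputs into a small range first may help; a minimal sketch, assuming the data range is known in advance (RYD_NN_normalize is an illustrative helper, not part of the code above):

```sqf
RYD_NN_normalize = //scale each value into [-1,1], given the known min/max of the data
	{
	params ["_values","_minV","_maxV"];
	_values apply {(2 * ((_x - _minV)/(_maxV - _minV))) - 1}
	};

//usage: [[-100,100],-100,100] call RYD_NN_normalize gives [-1,1]
```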


AFAIK this was the only thread to touch it somewhat: 

However, I doubt you can generate enough data, and your network seems far too simplistic, to bring meaningful results in the very complex world of Arma.

Also, one likely lacks sufficient low-level engine access to get meaningful data in the first place.

 

Anyhow, maybe someone with practical experience can provide a more solid judgement.


Good points @.kju.

 

In terms of machine learning in Arma, it would make more sense to apply it at the higher levels of the AI, think commander level or above.

There are way too many variables to take into account at unit level, since a unit is able to do basically anything the player can (walk, swim, dive, climb ladders, drive land vehicles, planes, submarines, boats etc.), not to mention handle various weapon systems.

 

Also, its usefulness depends on the game mode and number of players. I can imagine that a properly trained machine-learning AI would wreck any player or coordinated group; even the more advanced FSMs out there already do a pretty decent job at that.

 

Cheers

 


Thanks. From what I've learned so far about the usefulness of neural networks, I tend to agree: it would be difficult to put a NN to any practical use in Arma, at least from the SQF level (although in the linked thread, IIRC, someone achieved that with some external programming). Still, one could imagine a simplistic kind of tactical awareness, where the code recognizes a situation visually, similar to how letter recognition works. But at this point that is pure fantasy; currently I'm just feeling that special kind of itching curiosity about whether I'll be able to make it work in any form at all. Arma's SQF is simply the best (the only) tool I have at hand.


As it happens, I began fiddling with Machine Learning very recently... ^^

 

The first thing that comes to mind is that you probably need a network of neural networks if you want to achieve something meaningful in such a complex environment: on its own, a neural network is a super dumb and super specialised machine, often handling one narrow task before another one takes over (that's how I understand it, anyway).

 

It's tough to even imagine what can be done, but I can picture an AI going through all SQF commands to gather huge amounts of data (although that alone would be a headache of its own), which could then be fed to another network trained to solve tasks in mission-making or system design. But in any case, I can't see this being achieved without some amount of external programming.

 

FYI: the Finnish government recently released a free course on ML: https://www.elementsofai.com/

Google's Colab is also a good place to start, I reckon. 😉 

 

EDIT: also, as @.kju mentioned, the lack of low-level engine access is a big hurdle: it leaves a lot of grey areas as to what should be considered when defining training guidelines.

At best it is a limiting factor, but there is also the risk of training a network biased by false or vague assumptions (which is already becoming an emergent problem IRL).


Man, I wish I could think that deep <..but but but I made jboy finger!...>. You have some great minds opining here, so if anybody can do it, you guys can. Good luck, fellas.


I found this resource very good; it takes you from basic algorithms and graphs all the way through to ML and NN.

He also has a playlist covering vectors, forces, creating a simple physics engine, particles, boids, steering behaviours, agents and automata, which ultimately leads into the first linked playlist.

 

The guy is rather quirky, but if you can get past that, the resources are a good start.

I've been learning general ML/NN theory on and off for the last year. I doubt I'll ever incorporate it into Arma, although the relatively new vector and matrix commands would be a huge benefit if doing so.
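For example, a single dense layer's weighted sums could be done in one call; a sketch assuming Arma 3 v2.10+, where matrixMultiply was introduced (weights as rows, input as a column matrix):

```sqf
private _weights = [[0.1,0.2],[0.3,0.4]];          //2 neurons x 2 inputs
private _input = [[1],[0.5]];                      //input column vector as a 2x1 matrix
private _sums = _weights matrixMultiply _input;    //2x1 matrix of weighted sums
private _out = _sums apply {1/(1 + exp (-(_x select 0)))};  //element-wise sigmoid
```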

