Hello folks,
first of all: I did not read the whole thread, so if any of my questions have already been answered, I would appreciate a small link to the page/post. Thank you.
So BI implemented a kind of worker (the headless client) to offload AI calculations onto another process and utilize another machine's resources. Of course, that process can also run on the same machine as the server process and thereby utilize more CPU cores, which is useful because the server itself only creates a single AI thread. You can even run multiple headless clients and make use of multiple machines at once. The point is:
1. You implemented a socket-based protocol for inter-process communication so AI calculations can be offloaded.
2. You have a client (which, as far as I know, is basically the normal game) that just acts as a worker.
3. It seems to work for most users.
And yet you decided not to use that exact same protocol (which would admittedly be a bit silly, but still better than not being able to use multiple threads at all) to simply start n threads, communicate the way you already can, and run the needed calculations in each thread. This is actually the easy step compared to offloading to other machines, so why did you decide against it?
You would instantly be able to utilize all cores when needed, while also simplifying server administration. Are there reasons I am not seeing right now not to do it this way?
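To make the idea concrete, here is a rough sketch of what I mean (POSIX C++, nothing to do with BI's actual protocol or code): the "headless clients" become plain worker threads inside the server process, and the server talks to them over local socket pairs with the same kind of message passing it would otherwise send over the network. TaskMsg, ResultMsg and ai_step() are made-up placeholders for whatever the real protocol carries.

```cpp
// Sketch: reuse a socket-style message protocol, but run the "workers" as
// threads inside the same process instead of separate headless clients.
// All names here are illustrative assumptions, not the actual Arma protocol.
#include <sys/socket.h>
#include <unistd.h>
#include <cstdint>
#include <cstdio>
#include <thread>
#include <vector>

struct TaskMsg   { uint32_t group_id; float dt; };        // hypothetical "update this AI group" message
struct ResultMsg { uint32_t group_id; float new_state; }; // hypothetical result message

// Stand-in for the real AI calculation a headless client would perform.
static float ai_step(uint32_t group_id, float dt) {
    return static_cast<float>(group_id) * dt; // dummy work
}

// Worker: does exactly what a headless client does, but as a thread talking
// over one end of a local socket pair instead of a network socket.
static void worker(int fd) {
    TaskMsg task{};
    while (read(fd, &task, sizeof(task)) == (ssize_t)sizeof(task)) {
        ResultMsg res{task.group_id, ai_step(task.group_id, task.dt)};
        if (write(fd, &res, sizeof(res)) != (ssize_t)sizeof(res)) break;
    }
    close(fd);
}

int main() {
    const int n = 4; // one worker thread per core you want to use
    std::vector<std::thread> threads;
    std::vector<int> server_fds;

    for (int i = 0; i < n; ++i) {
        int fds[2];
        if (socketpair(AF_UNIX, SOCK_STREAM, 0, fds) != 0) return 1;
        server_fds.push_back(fds[0]);         // server keeps one end
        threads.emplace_back(worker, fds[1]); // worker thread gets the other
    }

    // The server dispatches AI tasks in the same message format it would send
    // to a remote headless client, just over the local socket pair.
    for (int i = 0; i < n; ++i) {
        TaskMsg task{static_cast<uint32_t>(i + 1), 0.05f};
        if (write(server_fds[i], &task, sizeof(task)) != (ssize_t)sizeof(task)) return 1;
    }
    for (int i = 0; i < n; ++i) {
        ResultMsg res{};
        if (read(server_fds[i], &res, sizeof(res)) == (ssize_t)sizeof(res))
            std::printf("group %u -> %f\n", res.group_id, res.new_state);
        close(server_fds[i]); // closing makes the worker's read() return 0, so the thread exits
    }
    for (auto& t : threads) t.join();
    return 0;
}
```

The transport and the message format stay the same; only the endpoint moves from a separate process on another machine to a thread in the same process, so all cores of the server box get used without any extra processes to administer.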
Kind regards,
Rene