mr_centipede 31 Posted September 23, 2013

Just wondering for purely academic purposes: the tree that I learned in C++ was created using pointers. Can it be created in sqf?
mr_centipede 31 Posted September 24, 2013

Ok, just to answer my own question: I think we can, using an array structure as below:

array = [ [_key, _value, _leKey, _riKey] ];

The _key will hold the index of that particular array item (probably not needed, but might be useful for readability), and _leKey and _riKey will hold the indices that point to the left and right array items respectively. The traversal part I haven't thought through yet. Is this stuff useful? I don't really know myself... at this point in time.
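A minimal sketch of how such an array of [_key, _value, _leKey, _riKey] nodes could be filled in SQF, assuming numeric keys and using -1 for "no child". The function name and details are invented for illustration, not taken from any existing library:

```sqf
// Each node: [_key, _value, _leftIndex, _rightIndex]; -1 means "no child".
// Assumes numeric keys; invented names, illustration only.
TREE_insert = {
    private ["_tree", "_key", "_value", "_node", "_new", "_i", "_cur", "_side", "_child"];
    _tree  = _this select 0;
    _key   = _this select 1;
    _value = _this select 2;
    _node  = [_key, _value, -1, -1];
    _new   = count _tree;
    if (_new == 0) then {
        _tree set [0, _node];               // first node becomes the root
    } else {
        _i = 0;
        while {_i != -1} do {
            _cur   = _tree select _i;
            _side  = if (_key < (_cur select 0)) then {2} else {3};
            _child = _cur select _side;
            if (_child == -1) then {
                _cur set [_side, _new];     // link parent to the new node
                _tree set [_new, _node];
                _i = -1;                    // done
            } else {
                _i = _child;                // descend
            };
        };
    };
};

// usage:
// _tree = [];
// [_tree, 5, "five"] call TREE_insert;
// [_tree, 2, "two"] call TREE_insert;    // becomes the left child of the root
```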
Larrow 2779 Posted September 24, 2013 (edited)

Ah, the fun of multi-dimensional arrays and recursion..... runs away quickly... good luck. Very unlikely to be beneficial in the long run implemented through SQF, but as an experiment, purely for academic purposes of course, it will definitely twist your noodle.... Erm, good luck :D

Edited September 24, 2013 by Larrow
xendance 3 Posted September 24, 2013

Sure you can make binary trees with arrays. http://en.wikipedia.org/wiki/Binary_tree#Arrays
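For completeness: the layout from that article stores a complete binary tree in one flat array with no index fields at all, purely by position. A quick SQF illustration (helper names made up):

```sqf
// Complete binary tree packed into a flat array, level by level:
// the children of node _i sit at 2*_i+1 and 2*_i+2, its parent at floor((_i-1)/2).
_tree = ["F", "B", "G", "A", "D", "", "I"];   // level-order: F; B G; A D _ I

TREE_left   = { 2 * (_this select 0) + 1 };
TREE_right  = { 2 * (_this select 0) + 2 };
TREE_parent = { floor (((_this select 0) - 1) / 2) };

// e.g. the children of "B" (index 1) sit at indices 3 and 4: "A" and "D"
```

The catch is that a sparse tree wastes slots (note the empty placeholder above), which is why this layout is mostly used for heaps and other complete trees.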
rübe 127 Posted July 28, 2014

"Just wondering for purely academic purpose. The tree that I learned in C++ was created using pointers. Can it be created in sqf?"

Yeah, but should you? Or maybe there is a reason why the CBA guys implemented their dictionary (or associative array, or "hash") as indirect indexing (sort of) by means of two arrays: one for the keys and one for the values (with searching done by the "find" method).

Well, I've checked (for science!):

First, I implemented a dictionary, pretty much like the CBA guys did, with two arrays: one for the keys and one for the values. We can safely assume that the "find" function has to traverse one item after the other to find something, so O(n). Clearly we should easily be able to do better, huh?

Second, I implemented an AVL tree, which is a strictly balanced binary search tree (at the cost of slower insertion, but access/searching should be blazingly fast; O(log n), that is...).

The results? Let's have a look. Timings are given in milliseconds (ms) and have been measured with diag_tickTime. The same data (randomly created keys and values of strings; not unique, so some entries might have been overwritten, although longer keys were used with larger dictionaries...) has been used for the different algorithms. The x-n timings are the total for n operations; the x-1 timings are the average computation time for a single operation.
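For reference, the two-array dictionary described above amounts to something like this (a rough sketch with invented function names; the real CBA functions differ in detail):

```sqf
// Dictionary as two parallel arrays: _dict = [_keys, _values].
// Lookup is a linear scan via find, hence O(n). Invented names, sketch only.
DICT_set = {
    private ["_keys", "_values", "_key", "_i"];
    _keys   = (_this select 0) select 0;
    _values = (_this select 0) select 1;
    _key    = _this select 1;
    _i = _keys find _key;
    if (_i == -1) then {
        _keys set [count _keys, _key];              // append new key...
        _values set [count _values, _this select 2]; // ...and its value
    } else {
        _values set [_i, _this select 2];           // overwrite existing entry
    };
};

DICT_get = {
    private ["_dict", "_i"];
    _dict = _this select 0;
    _i = (_dict select 0) find (_this select 1);
    if (_i == -1) then { "" } else { (_dict select 1) select _i };  // "" when absent
};

// usage:
// _dict = [[], []];
// [_dict, "alpha", 42] call DICT_set;
// [_dict, "alpha"] call DICT_get;
```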
For the dictionary (indirect indexing, 2 arrays):

n      set-n     set-1  get-n     get-1  remove-n  remove-1
25     0         0      0         0      0.977     0.039
100    0.977     0.01   0.977     0.01   1.953     0.02
500    11.719    0.023  11.719    0.023  8.789     0.018
1000   33.203    0.033  37.109    0.037  24.414    0.024
5000   689.453   0.138  665.039   0.133  399.414   0.08
10000  2592.77   0.259  2533.2    0.253  1400.39   0.14

And the numbers for the AVL tree (recursive search/get; remove not implemented yet):

n      set-n     set-1  get-n     get-1
25     2.93      0.117  3.906     0.156
100    105.469   1.055  23.437    0.234
500    209.961   0.42   229.492   0.459
1000   587.891   0.588  548.828   0.549
5000   3715.82   0.743  3614.26   0.723
10000  8400.39   0.84   8161.13   0.816

:rolleyes: oookay, this is not very good (just look at the get-1 numbers, since we're interested in faster access/search times only for now). Not at all.

The first idea is to get rid of the recursion; the search/get method is easily rewritten in an iterative way. What now?

n      set-n     set-1  get-n     get-1
25     3.906     0.156  3.906     0.156
100    124.023   1.24   23.437    0.234
500    216.797   0.434  162.109   0.324
1000   562.5     0.562  449.219   0.449
5000   3851.56   0.77   2995.12   0.599
10000  8258.79   0.826  6475.59   0.648

Uhm, yeah. That's obviously better. But overall the result is still underwhelming, if not discouraging. To access/search for an item in a list of size 10'000, the dictionary (based on the find method) needs 0.25 milliseconds on average. With an AVL tree of the same size it is 0.65 milliseconds! I mean, sure, you can see how the AVL tree is catching up as the tree grows; starting with n=100 and dividing the get-1s of them, we get: 23.4, 14.1, 12.1, 4.5, and finally the AVL tree is only 2.6 times slower. But that's with 10'000(!) items!

Well, a problem here is certainly the comparison method. For the AVL tree I had to write a custom string comparator that first has to convert the strings to arrays... so that's most likely one of the bigger problems/brakes here.

So... what have we learned?
Yes, we can easily implement pretty much any data structure, certainly binary search trees too. But should you? Most likely not. No.

If it's about performance/speed: measure your stuff. No premature optimization; write your code clearly, but always keep an eye on the timings and on how things scale (if it's about data structures anyway).

Hence, and along those lines: a dictionary based on two arrays/indirect indexing as described is most likely more than adequate for the things you do... or are you working with lists/dictionaries larger than 10'000 items? No? Well, good news then: chances are this is not your problem and your script wastes time somewhere else... go fix that instead. :cool:

Recursion is lovely, and code can often be written much more readably with it. But if speed is an issue, try to get rid of recursion/too many function calls; it easily kills your fun. Then again, if speed is no issue, go for the recursive variant if it reads nicer.

And last but not least, it would be really nice if BIS could finally implement associative arrays/dictionaries in sqf, backed by a hash table, an RB or AVL tree, or whatnot.

TL;DR: stop wasting your time. ;) If you really wanna learn more about data structures, I'd suggest you do it in C (or maybe Java or something).
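To illustrate the recursion point: an iterative search over the array-backed node layout from earlier in the thread could look roughly like this. Numeric keys are assumed (string keys would need a custom comparator, as noted above, since < does not order strings in SQF), and the function name is invented:

```sqf
// Iterative search over [_key, _value, _leftIndex, _rightIndex] nodes;
// -1 marks "no child". Returns the node index, or -1 if the key is absent.
// Sketch only; assumes numeric keys.
TREE_findIndex = {
    private ["_tree", "_key", "_i", "_found", "_node", "_nodeKey"];
    _tree  = _this select 0;
    _key   = _this select 1;
    _found = -1;
    _i = if (count _tree > 0) then {0} else {-1};
    while {_i != -1} do {
        _node    = _tree select _i;
        _nodeKey = _node select 0;
        if (_key == _nodeKey) then {
            _found = _i;
            _i = -1;   // stop: no call stack to unwind, unlike the recursive version
        } else {
            // descend left when smaller, right otherwise
            _i = _node select (if (_key < _nodeKey) then {2} else {3});
        };
    };
    _found
};
```

The loop keeps only a single index as state, which is exactly what makes it cheaper than a chain of function calls in SQF.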