07-30-2020, 03:58 PM
mrraow's experience matches my own. We ran evaluation games for almost all games where both of these AIs are applicable, and stored in each game's AI metadata file which algorithm yielded the best performance. This is how Ludii decides which algorithm to use when users select just the "Ludii AI". For a decent number of games, that turned out to be Alpha-Beta.
Note that so far we haven't really handcrafted heuristics for any games (the only exceptions being a few variants of Chess, where we manually assigned some material values). We just have a few different heuristics implemented to cover a decent range of common cases, ran a bunch of evaluation rounds across all these games to figure out which heuristics to start from, and then tried to train them a bit further (which sometimes yielded improvements, and sometimes didn't). So the time sink isn't really much of a concern here... at least not in terms of human time. It does take computational resources, of course, but whenever we happen to have those available and don't need them for other purposes, we might as well use them for this.
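The heuristic-selection step (evaluation rounds to pick a starting heuristic, before any training) could look something like the round-robin sketch below. Everything here is an illustrative assumption: the heuristic names, the hidden "strength" values, and `play_game` are stand-ins for real evaluation games between agents using those heuristics.

```python
import itertools
import random

def play_game(h_a, h_b, rng):
    """Stand-in for a real evaluation game between two heuristics.
    Each heuristic is a (name, strength) pair; the hidden strength
    biases a coin flip. Returns 1 if h_a wins, else 0."""
    p_a = h_a[1] / (h_a[1] + h_b[1])
    return 1 if rng.random() < p_a else 0

def pick_starting_heuristic(candidates, games_per_pair=100, seed=0):
    """Round-robin: every pair of candidate heuristics plays a batch of
    games; the heuristic with the most total wins becomes the starting
    point for further training."""
    rng = random.Random(seed)
    wins = {name: 0 for name, _ in candidates}
    for a, b in itertools.combinations(candidates, 2):
        for _ in range(games_per_pair):
            if play_game(a, b, rng):
                wins[a[0]] += 1
            else:
                wins[b[0]] += 1
    return max(wins, key=wins.get)

# Hypothetical candidate heuristics with hidden strengths.
candidates = [("Material", 0.8), ("Mobility", 0.5), ("CentreProximity", 0.3)]
```

Calling `pick_starting_heuristic(candidates)` then selects whichever heuristic accumulated the most wins across all pairings; that winner is what you would hand to the subsequent training step.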