04-09-2020, 09:02 AM
1) I don't think we're really planning to do much with those single-agent RL algorithms ourselves. They happened to work famously well on Backgammon, but in most other adversarial settings they tend not to do too well. So we're focusing on other training approaches, like AlphaZero-style training, ourselves. We certainly hope that other people will find it relatively easy to set up their own training runs with their own algorithms though! The only complicated part should probably be finding good state / action representations for general games (though that's also something we're looking into). We'll try to ramp up documentation on doing all kinds of stuff like this soonish, but in the meantime, if anyone already wants to try, they can always ask for pointers here!
2) Yes, we're currently training agents for all games. Not with Deep Neural Networks though, because we're literally talking about hundreds of games and that'd get a bit problematic in terms of hardware... we're mostly working with simpler features ourselves. We're also looking at collaborations involving a bit more hardware, though probably not for ALL games.
3) If such agents are easy to re-implement, it would certainly be possible to do so using our API for agents (https://github.com/Ludeme/LudiiExampleAI). I don't know if we'll make it a priority to implement such agents ourselves directly in Ludii, since we're really more interested in general game playing AIs for a variety of research goals in our project. We could consider including them if others implement them, though (if they'd like us to "officially" include them -- of course, authors of these kinds of AIs could also retain the right to distribute them themselves if they prefer). If we're talking about really complex agents (say, Stockfish for Chess)... I do think it would be really nice if we could get the programs communicating without having to re-implement such an agent in our program from scratch, but I'm really not sure at this moment about a timeline for getting something like that working.
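To give a rough idea of what "plugging in your own agent" looks like, here's a minimal self-contained sketch of the general shape such an agent takes. Note that all of the type names below (Agent, GameState, Move, RandomAgent) are hypothetical stand-ins, not the real Ludii classes -- see the LudiiExampleAI repo linked above for the actual AI base class and method signatures.

```java
import java.util.List;
import java.util.concurrent.ThreadLocalRandom;

public class AgentSketch {

    // Hypothetical stand-ins for the engine's move / state types.
    record Move(int from, int to) {}

    interface GameState {
        List<Move> legalMoves();
    }

    // The general shape of an agent: initialise once per game,
    // then pick one legal move each turn within a time budget.
    static abstract class Agent {
        protected int playerID = -1;

        void init(final int playerID) {
            this.playerID = playerID;
        }

        abstract Move selectAction(final GameState state, final double maxSeconds);
    }

    // Simplest possible agent: uniformly random over the legal moves.
    static class RandomAgent extends Agent {
        @Override
        Move selectAction(final GameState state, final double maxSeconds) {
            final List<Move> legal = state.legalMoves();
            return legal.get(ThreadLocalRandom.current().nextInt(legal.size()));
        }
    }

    public static void main(String[] args) {
        // Tiny fake state with two legal moves, just to exercise the agent.
        final GameState state = () -> List.of(new Move(0, 1), new Move(0, 2));
        final Agent agent = new RandomAgent();
        agent.init(1);
        final Move chosen = agent.selectAction(state, 1.0);
        System.out.println("Chose move: " + chosen);
    }
}
```

The real API naturally passes much richer game/context objects, but the contract is the same: initialise, then return one legal move per call, which is what makes dropping in anything from a random player to a full search agent straightforward.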