The capability of acting (and winning) in games is often used in artificial intelligence as an indicator or measure of more general ability. However, as challenges escalate, notable efforts are forced to compromise due to technical limitations: interfaces of simulated environments are often inconsistently adapted for artificial agents, which introduces uncertainty into comparisons with humans. A review of selected works in the field of deep reinforcement learning in real-time strategy games highlights the need for a new benchmark environment, one that better emphasises the role of strategic elements by enabling more equivalent interfaces and is also suitable for experiments on distributed systems. Such an environment is realised as a team-based competitive game, in whose description specific technical and theoretical problems are examined on the cases of imitation and reinforcement learning.