You may find additional publications (as well as citation metrics) on my Google Scholar profile.
- IVCNZ: "Evaluating Learned State Representations for Atari", Adam Tupper and Kourosh Neshatian. In 2020 35th International Conference on Image and Vision Computing New Zealand (IVCNZ), Nov 2020.
Deep reinforcement learning, the combination of deep learning and reinforcement learning, has enabled the training of agents that can solve complex tasks from visual inputs. However, these methods often require prohibitive amounts of computation to obtain successful results. To improve learning efficiency, there has been a renewed focus on separating state representation and policy learning. In this paper, we investigate the quality of state representations learned by different types of autoencoders, a popular class of neural networks used for representation learning. We assess not only the quality of the representations learned by undercomplete, variational, and disentangled variational autoencoders, but also how the quality of the learned representations is affected by changes in representation size. To accomplish this, we also present a new method for evaluating learned state representations for Atari games using the Atari Annotated RAM Interface. Our findings highlight differences in the quality of state representations learned by different types of autoencoders and their robustness to reduction in representation size. Our results also demonstrate the advantage of using more sophisticated evaluation methods over assessing reconstruction quality.
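To illustrate the relationship the abstract describes between representation size and reconstruction quality, here is a minimal, hypothetical sketch (not code from the paper): the optimal *linear* undercomplete autoencoder can be obtained in closed form from the SVD (equivalently, PCA), and random data stands in for flattened Atari frames.

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy stand-in for flattened, centred game frames (the paper uses Atari frames).
X = rng.normal(size=(256, 64))

def linear_ae_mse(X, latent_dim):
    """Reconstruction error of the optimal linear undercomplete autoencoder,
    obtained in closed form from the SVD (equivalent to PCA)."""
    U, S, Vt = np.linalg.svd(X, full_matrices=False)
    V_k = Vt[:latent_dim]          # encoder: project onto top-k components
    X_hat = (X @ V_k.T) @ V_k      # decoder: map the code back to input space
    return float(np.mean((X - X_hat) ** 2))

# Shrinking the representation degrades reconstruction quality.
print(linear_ae_mse(X, 32) < linear_ae_mse(X, 2))   # prints True
```

As the abstract notes, reconstruction error alone can be a misleading quality measure, which is why the paper evaluates representations against Atari ARI annotations instead.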
- GECCO: "Evolving Neural Network Agents to Play Atari Games with Compact State Representations", Adam Tupper and Kourosh Neshatian. In Proceedings of the 2020 Genetic and Evolutionary Computation Conference Companion, Nov 2020.
Recent success in solving hard reinforcement learning problems can be partly credited to the use of deep neural networks, which can extract high-level features and learn compact state representations from high-dimensional inputs, such as images. However, the large networks required to learn both state representation and policy using this approach limit the effectiveness and benefits of neuroevolution methods that have proven effective at solving simpler problems in the past. One potential solution to this problem is to separate state representation and policy learning and only apply neuroevolution to the latter. We extend research following this approach by using NEAT to evolve small policy networks for Atari games that learn from compact state representations provided by the recently released Atari Annotated RAM Interface (Atari ARI). Our results show that it is possible to evolve agents that exceed expert human performance using these compact state representations, and that, for some games, successful policy networks can be evolved that contain only a few or even no hidden nodes.
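The core idea above, evolving a tiny policy over compact state features rather than pixels, can be sketched as follows. This is a hypothetical illustration, not the paper's code: a simple (1+16) hill climber over the weights of a fixed linear policy stands in for NEAT (which additionally evolves network topology), and a two-feature toy task stands in for ARI annotations.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-in for compact ARI-style state features: two scalars per state.
states = rng.uniform(-1.0, 1.0, size=(200, 2))
# Hypothetical task: the "right" action is 0 when feature 0 exceeds feature 1.
targets = (states[:, 0] <= states[:, 1]).astype(int)

def fitness(W):
    """Fraction of states on which the linear policy picks the right action."""
    actions = (states @ W).argmax(axis=1)
    return float(np.mean(actions == targets))

# (1+16) hill climber over policy weights -- a minimal stand-in for NEAT.
best_W = np.zeros((2, 2))
best_fit = fitness(best_W)
for _ in range(300):
    children = best_W + rng.normal(scale=0.25, size=(16, 2, 2))
    fits = [fitness(c) for c in children]
    if max(fits) > best_fit:
        best_fit = max(fits)
        best_W = children[int(np.argmax(fits))]

print(f"best fitness: {best_fit:.2f}")
```

Because the state features are already compact, the evolved policy needs only a handful of weights, which mirrors the paper's finding that some games can be solved with few or no hidden nodes.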
- "Evolutionary Reinforcement Learning for Vision-Based General Video Game Playing", Adam Tupper. University of Canterbury, Aug 2020.
Over the past decade, video games have become increasingly utilised for research in artificial intelligence. Perhaps the most extensive use of video games has been as benchmark problems in the field of reinforcement learning. Part of the reason is that video games are designed to challenge humans, and as a result, developing methods capable of mastering them is considered a stepping stone to achieving human-level performance in real-world tasks. Of particular interest are vision-based general video game playing (GVGP) methods. These are methods that learn from pixel inputs and can be applied, without modification, across sets of games. One of the challenges in evolutionary computing is scaling up neuroevolution methods, which have proven effective at solving simpler reinforcement learning problems in the past, to tasks with high-dimensional input spaces, such as video games. This thesis proposes a novel method for vision-based GVGP that combines the representational learning power of deep neural networks and the policy learning benefits of neuroevolution. This is achieved by separating state representation and policy learning and applying neuroevolution only to the latter. The method, AutoEncoder-augmented NeuroEvolution of Augmented Topologies (AE-NEAT), uses a deep autoencoder to learn compact state representations that are used as input for policy networks evolved using NEAT. Experiments on a selection of Atari games showed that this approach can successfully evolve high-performing agents and scale neuroevolution methods that evolve both weights and topology to domains with high-dimensional inputs. Overall, the experiments and results demonstrate a proof-of-concept of this separated state representation and policy learning approach and show that hybrid deep learning and neuroevolution-based GVGP methods are a promising avenue for future research.
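The separated two-stage pipeline the abstract describes can be sketched in a few lines. This is purely illustrative: in AE-NEAT the encoder is a trained deep autoencoder and the policy is a network evolved by NEAT, whereas here both are random matrices, just to show how the two stages compose at inference time.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical stand-ins for the two learned components of the pipeline.
frame_dim, latent_dim, n_actions = 84 * 84, 32, 4
W_encode = rng.normal(scale=0.01, size=(frame_dim, latent_dim))
W_policy = rng.normal(size=(latent_dim, n_actions))

def act(frame):
    """Separated pipeline: frame -> compact representation -> action."""
    z = frame @ W_encode          # state representation stage (autoencoder)
    logits = z @ W_policy         # policy stage (NEAT-evolved network)
    return int(np.argmax(logits))

frame = rng.uniform(size=frame_dim)   # stand-in for a preprocessed Atari frame
action = act(frame)
print(0 <= action < n_actions)        # prints True
```

The key design choice is that only the small policy stage is evolved; the high-dimensional pixel input never reaches the neuroevolution search space.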
- IVCNZ: "Pedestrian Proximity Detection Using RGB-D Data", Adam Tupper and Richard Green. In 2019 International Conference on Image and Vision Computing New Zealand (IVCNZ), Dec 2019.
This paper presents a novel method for pedestrian detection and distance estimation using RGB-D data. We use Mask R-CNN for instance-level pedestrian segmentation, and the Semiglobal Matching algorithm for computing depth information from a pair of infrared images captured by an Intel RealSense D435 stereo vision depth camera. The resulting depth map is post-processed using both spatial and temporal edge-preserving filters and spatial hole-filling to mitigate erroneous or missing depth values. The distance to each pedestrian is estimated using the median depth value of the pixels in the depth map covered by the predicted mask. Unlike previous work, our method is evaluated on, and performs well across, a wide spectrum of outdoor lighting conditions. Our proposed technique is able to detect and estimate the distance of pedestrians within 5m with an average accuracy of 87.7%.
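The distance-estimation step described above, taking the median depth over the pixels covered by a predicted instance mask, can be sketched as follows. This is a simplified illustration with a synthetic depth map, not the paper's code; the real pipeline first applies edge-preserving filtering and hole-filling, while here missing (zero) depth values are simply ignored.

```python
import numpy as np

# Synthetic depth map in metres; zeros mark missing-depth pixels.
depth = np.full((4, 6), 8.0)   # background
depth[1:3, 2:5] = 4.2          # region occupied by the pedestrian
depth[2, 3] = 0.0              # a hole the real pipeline would fill

# Boolean instance mask, as predicted by Mask R-CNN in the paper.
mask = np.zeros((4, 6), dtype=bool)
mask[1:3, 2:5] = True

def pedestrian_distance(depth, mask):
    """Median depth over the masked pixels, ignoring missing (zero) values."""
    values = depth[mask]
    values = values[values > 0]
    return float(np.median(values))

print(pedestrian_distance(depth, mask))   # prints 4.2
```

Using the median rather than the mean makes the estimate robust to the erroneous depth values that stereo matching produces near object boundaries.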