In this project, we investigate two topics that link AI and the law, and study their implications for New Zealand. The first topic is the use of AI algorithms in government departments. Several issues arise here: possible biases in AI systems, the need for explainable systems, and questions about how human operators use these systems. The second topic is the effect of AI technologies on employment. The issues here relate to the use of AI tools in recruiting and monitoring employees, as well as cases where AI systems potentially displace human employees. We are also studying the effects of AI systems on whole professions, as well as on individuals. For each topic, we consider whether additional regulation is needed in New Zealand, to ensure AI is maximally beneficial for the country. More information is available at
How does the brain represent the geometry of 3D objects? Most researchers considering this question focus on vision. However, infants first learn about 3D objects through the haptic system -- that is, by tactile exploration of objects. In this project, we develop a neural network model that learns aspects of the structure of a 3D cuboid, using input from the motor system that controls a simulated hand navigating on its surfaces. It does this with a simple unsupervised network that learns to represent frequently experienced sequences of motor movements. The network learns an approximate mapping from agent-centred (i.e., egocentric) movements to object-centred (i.e., allocentric) locations on the cuboid's surfaces. We also show how this mapping can be improved by the addition of tactile landmarks, by the presence of asymmetries in the cuboid, and by incorporating the agent's own configuration. We then investigate how the learned geometry of the cuboid can support a reinforcement learning scheme that enables the agent to learn simple paths to goal locations on the cuboid.
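The role of tactile landmarks can be illustrated with a toy dead-reckoning sketch. This is not the project's unsupervised network: the cuboid's side faces are simplified to a one-dimensional ring, the agent follows a scripted tour rather than exploring, and all names and constants are illustrative. The point it shows is that an allocentric position estimate built purely from egocentric motor input accumulates error, while a landmark at a known location repeatedly re-anchors it.

```python
# Toy illustration (not the project's model): dead reckoning from
# egocentric moves drifts, and a tactile landmark re-anchors it.
RING = 8                    # the cuboid's side faces, unrolled into a ring
LANDMARK = 0                # allocentric position of a tactile landmark
BIAS = 0.04                 # systematic error in each motor step (assumed)

true_pos = 0                # actual allocentric position on the ring
est_corrected = 0.0         # estimate that is reset at the landmark
est_drifting = 0.0          # estimate from motor input alone

for step in range(80):      # ten full laps around the cuboid
    true_pos = (true_pos + 1) % RING
    est_corrected = (est_corrected + 1 + BIAS) % RING
    est_drifting = (est_drifting + 1 + BIAS) % RING
    if true_pos == LANDMARK:              # touching the landmark pins the
        est_corrected = float(LANDMARK)   # estimate to a known location

def ring_error(est, true):
    """Shortest distance between two positions on the ring."""
    d = abs(est - true) % RING
    return min(d, RING - d)

print(ring_error(est_corrected, true_pos))   # ~0.0: landmark-corrected
print(ring_error(est_drifting, true_pos))    # ~3.2: uncorrected drift
```

The same re-anchoring idea generalises to the full 2D surface, where asymmetries in the cuboid play an analogous role to the landmark.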
Surprise has been cast as a cognitive-emotional phenomenon that impacts many aspects of cognition, from creativity to learning to decision-making. Why are some events more surprising than others? Why do different people find the same event differently surprising? In this project, we seek a workable definition of "surprise" and apply it in reinforcement learning. A surprise-driven agent can learn to explore without any external reward from the environment. This is done by building a model of the environment: "surprise" is the inconsistency between the model's prediction and the observed outcome. Agents learn in a reinforcement learning setting by maximising this "surprise".
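A minimal sketch of this idea, under simplifying assumptions (a deterministic five-state chain environment, a tabular model, and a fixed tour of actions rather than a learned policy -- none of which is the project's actual formulation): surprise is the mismatch between the model's prediction and the observed outcome, used as an intrinsic reward, and it vanishes once the model of the environment is accurate.

```python
# Sketch: surprise as prediction error of a learned environment model.
N_STATES = 5

def env_step(state, action):
    """Deterministic chain environment: move left/right, clamped at ends."""
    return max(0, min(N_STATES - 1, state + action))

model = {}        # learned model: (state, action) -> predicted next state
surprises = []
state = 0
# A fixed tour that covers every (state, action) pair, repeated twice.
tour = [-1] + [+1] * 4 + [+1] + [-1] * 4
for action in tour * 2:
    next_state = env_step(state, action)
    predicted = model.get((state, action))
    surprise = 0.0 if predicted == next_state else 1.0  # intrinsic reward
    surprises.append(surprise)
    model[(state, action)] = next_state   # update the environment model
    state = next_state

# Every transition is surprising once; none is surprising twice.
print(sum(surprises[:10]), sum(surprises[10:]))   # 10.0 0.0
```

An agent maximising this signal is pushed toward transitions its model cannot yet predict, which is exactly what drives exploration in the absence of external reward.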
The overarching aim of this project is to develop a next-generation data infrastructure linking farm management, genetic improvement and traceability. The objective here is to prove the feasibility of using image analysis, particularly recent advances in deep learning, for parentage assignment in livestock, using sheep as an exemplar. For example, given a facial image of a lamb and multiple pictures of possible fathers (rams) and mothers (ewes), the goal is to correctly identify the lamb's parents. Rather than requiring manual identification of phenotypes for each new sheep, we aim to create and train a convolutional neural network (CNN) model capable of detecting features for matching parents with their offspring directly from images of animal faces.
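One plausible way such a system could work (an assumption for illustration, not the project's committed design): the CNN maps each face image to an embedding vector, and parentage assignment picks the candidate parent whose embedding is most similar to the lamb's. The sketch below fakes the CNN stage with made-up three-dimensional vectors and hypothetical animal IDs, keeping only the matching step:

```python
import math

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Hypothetical embeddings a trained CNN might produce for face images;
# the values and IDs below are illustrative only.
lamb = [0.9, 0.1, 0.4]
rams = {"ram_07": [0.8, 0.2, 0.5],
        "ram_12": [0.1, 0.9, 0.1]}

# Assign the candidate sire with the most similar embedding.
best = max(rams, key=lambda name: cosine(lamb, rams[name]))
print(best)   # ram_07
```

In practice the embedding network would be trained (e.g. with a verification or metric-learning objective) so that related animals land close together, and the same matching step would be run separately over candidate rams and ewes.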
People are more interesting than machines. How do our brains and minds work? AI gives us computational tools to explore these questions, from modelling brain function at a cellular level to exploring aspects of cognition like memory and language. Work on this project has focused on learning and forgetting in artificial neural networks and real brains, and on the nature of real and false memories in dynamical neural network systems. After a long break, it is time to get this project moving again.
Neural networks can achieve extraordinary results on a wide variety of tasks. However, when they attempt to learn a number of tasks sequentially, they tend to learn the new task while destructively forgetting previous ones. This is known as the catastrophic forgetting problem, and it is an essential problem to solve if we want to build artificial agents that can learn continuously. This project aims to solve catastrophic forgetting by having the network rehearse previous tasks while learning new ones. Previous tasks are rehearsed from examples produced by a single generative model that is also sequentially trained on all previously learnt tasks.
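The rehearsal scheme can be sketched with a deliberately tiny stand-in: a two-weight linear model in place of a network, weight decay standing in for the interference that causes forgetting in real networks, and a per-task Gaussian over inputs in place of the single shared generative model. As in generative replay, the generator produces pseudo-inputs and a snapshot of the old model labels them; every name and constant here is illustrative.

```python
import random

random.seed(0)

def train_step(w, x, y, lr=0.1, decay=0.01):
    """One SGD step on squared error; weight decay stands in for the
    interference that erodes old knowledge in real networks."""
    pred = w[0] * x[0] + w[1] * x[1]
    for i in range(2):
        w[i] -= lr * ((pred - y) * x[i] + decay * w[i])

# Task A: y = 2 * x0   |   Task B: y = 3 * x1  (orthogonal inputs)
task_a = [((a / 10, 0.0), 2 * a / 10) for a in range(1, 11)]
task_b = [((0.0, b / 10), 3 * b / 10) for b in range(1, 11)]

# Phase 1: learn task A.
w = [0.0, 0.0]
for step in range(500):
    x, y = task_a[step % 10]
    train_step(w, x, y)

# Fit a crude generative model of task A's inputs (a 1D Gaussian here;
# the project trains a single generative model shared across all tasks).
mean = sum(x[0] for x, _ in task_a) / 10
std = (sum((x[0] - mean) ** 2 for x, _ in task_a) / 10) ** 0.5
w_old = list(w)   # snapshot of the old model, used to label pseudo-samples

def generate_pseudo_sample():
    a = random.gauss(mean, std)
    return (a, 0.0), w_old[0] * a   # old model provides the label

# Phase 2: learn task B, with and without generative rehearsal.
w_naive, w_replay = list(w), list(w)
for step in range(1000):
    x, y = task_b[step % 10]
    train_step(w_naive, x, y)        # new task only: task A fades away
    train_step(w_replay, x, y)
    train_step(w_replay, *generate_pseudo_sample())  # rehearse task A

# Task A's weight (target 2.0): faded in the naive run, retained with replay.
print(round(w_naive[0], 2), round(w_replay[0], 2))
```

Both runs learn task B (the second weight approaches 3), but only the rehearsing run keeps the first weight near its task-A value, which is the core behaviour the project relies on at network scale.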
Despite its recent ascendancy in numerous aspects of AI, deep learning has not yet satisfyingly improved the incredibly poor track record of text recognition from historical documents. A set of handwritten journals and letters from the Hocken Collections at the University of Otago Library provides a rare opportunity to advance machine learning while creating a tool for text search across a culturally significant body of documents. This is not about creating a solution to a specific problem. It is about necessity being the mother of the next wave of inventions in deep learning.