My project during my MSc in Intelligent Systems and Robotics was based around deep machine learning: specifically, using deep reinforcement learning to develop an AI system that could play Atari 2600 games, essentially exploring what Google DeepMind had done and published.
As part of this I started using frameworks such as Theano to implement these ‘deep learning’ algorithms. Now, my experience of doing an MSc project is cramming as much learning and practical work as possible into a short space of time. On reflection you spot the holes in your knowledge and think of things you would do differently, but you generally never get the time to do them. Fortunately I am the type of person who likes to learn, and I could free up a little time (one day per week for a handful of weeks), so I got together with my good friend (and fellow PhD student) Brian. Together we spent that time exploring and developing different deep machine learning architectures.
In fact, we managed to cover quite a lot. We rebuilt my entire MSc project, recreated Google DeepMind's Deep Q-Network and did some extra work on image classification (which we got published at a peer-reviewed conference).
The work for the paper involved training a convolutional neural network to classify images. We took about 2,000 images of 9 different locations/landmarks/POIs in Hull. We then developed a system that could identify the location shown in each image with a very good level of accuracy.
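To give a flavour of what a network like this does, here is a minimal sketch of a single convolutional layer followed by a 9-way softmax classifier, written from scratch in numpy. It is purely illustrative: the filter and dense weights are random, the image is random noise, and the real system had more layers and trained its weights on the Hull dataset.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv2d(image, kernel):
    """'Valid' 2-D convolution (cross-correlation, as in most DL libraries)."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def softmax(z):
    """Turn raw scores into a probability distribution over classes."""
    e = np.exp(z - z.max())
    return e / e.sum()

# Toy forward pass: one 3x3 filter, ReLU, global average pool,
# then a dense layer mapping to 9 classes (one per Hull location).
image = rng.random((32, 32))                 # stand-in for a greyscale photo
kernel = rng.standard_normal((3, 3))         # one learned filter (random here)
feature = np.maximum(conv2d(image, kernel), 0)  # ReLU activation
pooled = feature.mean()                      # global average pooling
W = rng.standard_normal(9)                   # dense weights, one per class
b = np.zeros(9)
probs = softmax(pooled * W + b)              # probability per location
predicted_class = int(np.argmax(probs))      # most likely location index
```

Training adjusts the filter and dense weights so that, for each photo, the probability mass lands on the correct location.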
We also applied a dimensionality reduction technique to project the images into a lower dimension, for previewing. Below is an example of what that looks like.
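The post doesn't name the technique, so as a hedged illustration here is the simplest such method, PCA, projecting some stand-in feature vectors down to two dimensions ready for a scatter-plot preview. The feature matrix here is random noise; in practice you would feed in the network's feature vectors for each image.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for per-image feature vectors: 200 samples, 64 dimensions each.
features = rng.standard_normal((200, 64))

def pca_project(X, n_components=2):
    """Project the rows of X onto their top principal components."""
    X_centered = X - X.mean(axis=0)
    cov = np.cov(X_centered, rowvar=False)
    # eigh returns eigenvalues in ascending order, so take the last columns
    eigvals, eigvecs = np.linalg.eigh(cov)
    top = eigvecs[:, -n_components:][:, ::-1]
    return X_centered @ top

coords = pca_project(features)  # shape (200, 2): one (x, y) point per image
```

Each image becomes a single 2-D point, so similar images cluster together in the preview plot.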
Here is the paper if you fancy a read.
The citation will be something like this:
Stamford, J. and Peach, B. (2016) “Scene Detection using Convolutional Neural Networks”, 2nd IET International Conference on Technologies for Active and Assisted Living