

Traditional education has as much to fear from the potential of the internet as the worlds of publishing, record shops, book shops, and now even game shops.  Performing an activity at scale (in both volume and geography) reduces everyone's share of the overhead, and a new university called Udacity has sprung out of Stanford to take advantage of this new world.

Udacity presents students with short video lectures interspersed with HTML forms that take in answers to questions.  The focus of the university is programming, and the web application even includes a program interpreter, so students can write code which the servers interpret, test and mark.  The content of the courses is genuinely challenging and lets students learn in their own time, even though the courses keep a fast weekly pace.

The potential of this is quite astonishing – I was part of the beta course in AI (run directly by Stanford).  They had over 85,000 students enroll, and over 23,000 finished the course.  The course was globally available and free, so programmers from 57 countries took part, with translations crowdsourced by the students.

I have now recommended the university to at least 10 programmers (from complete beginners to experienced masters) and every single one has come back with positive feedback.  They offer courses ranging from the first steps of programming to the details of how to create the brains of a self-driving car (which is excellent).  The courses are taught by the absolute cream of the profession (e.g. Sebastian Thrun, who pioneered the self-driving car in the DARPA Grand Challenges and now heads the Google self-driving car team).

They are of exceptional quality, and it is hard not to draw a line between the appalling lack of proper ICT training in the UK and the supply now provided by offshore websites and entrepreneurs, and wonder about the state of UK universities in 20 or 30 years' time.  My suspicion is that they will become places of networking and research rather than learning.  What is not clear is where the funding for those activities will come from, which in most cases starts with some form of tax or international investment.

This is not an area where the UK can afford to be left behind – the internet can shift the balance of power before most of us have even noticed there was a threat.

Neuro Evolving Robot Operatives

One of the interesting applications of the NEAT algorithm is to generate artificial brains via the use of Artificial Neural Networks.  These are approximations to the way that our brains are wired up, consisting of a collection of neurons that have firing thresholds, and synapses that link the neurons together.  When you add in some sensor neurons that feed the network with location, sight, sound and so on, the artificial brain can, by modification and distortion of the signals across the network, fire signals to effector neurons, which can control an agent's behavior, movement, speech etc.
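
To make the wiring a little more concrete, here is a tiny sketch in Python of a hand-wired network of threshold neurons: two sensors feed two hidden neurons, which feed a single effector.  The topology, weights and what the effector 'means' are all invented purely for illustration.

    # A minimal sketch of an artificial neural network built from threshold
    # neurons: sensor values are pushed through weighted synapses, and a neuron
    # fires (outputs 1.0) when its weighted input exceeds its threshold.
    # The wiring and weights below are made up for illustration only.

    def neuron(inputs, weights, threshold):
        """A simple step-threshold neuron."""
        activation = sum(i * w for i, w in zip(inputs, weights))
        return 1.0 if activation > threshold else 0.0

    def tiny_brain(sensors):
        """Two sensor neurons -> two hidden neurons -> one effector neuron."""
        hidden_a = neuron(sensors, weights=[0.9, -0.4], threshold=0.2)
        hidden_b = neuron(sensors, weights=[-0.3, 0.8], threshold=0.2)
        effector = neuron([hidden_a, hidden_b], weights=[0.4, 0.4], threshold=0.5)
        return effector  # 1.0 might mean "turn left", 0.0 "do nothing"

    # Example: the sensors might encode, say, distance to a wall and to a target.
    print(tiny_brain([1.0, 0.1]))  # only hidden_a fires, so the effector stays quiet
    print(tiny_brain([1.0, 1.0]))  # both hidden neurons fire, so the effector fires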

Of course if you randomly wired together a set of neurons and synapses, real or artificial, the ‘brain’ created would most likely do nothing or work itself into a crash state.  However if you start with a very small number of neurons and links, then slowly add more under a training regime, you can get some startlingly intelligent behavior – in precisely the same way as nature develops embryos into thinking creatures.

Another parallel between real-world and artificial embryo development is that the artificial neural networks are stored as a DNA-like string, which is unpacked to create the network.  The most interesting behavior appears when individuals with successful brains reproduce with other fit individuals.  In this manner it is possible to create networks that have the best of both parents, and over enough generations it’s possible to create networks displaying highly complex behavior.
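
To give a flavour of the 'DNA string' idea, here is a much-simplified sketch of a genome made of connection genes, and a crossover between two parents.  This shows only the principle – real NEAT also carries node genes, enabled flags and speciation – and the genes below are made up for illustration.

    import random

    # A heavily simplified "DNA" encoding of a network: each gene describes one
    # synapse as (gene_id, from_neuron, to_neuron, weight).  The neuron names
    # and weights are invented purely for illustration.
    parent_a = [(1, "sensor1", "hidden1", 0.8),
                (2, "sensor2", "hidden1", -0.5),
                (3, "hidden1", "effector", 1.2)]

    parent_b = [(1, "sensor1", "hidden1", 0.3),
                (2, "sensor2", "hidden1", 0.9),
                (4, "sensor2", "effector", 0.4)]   # a gene parent_a does not have

    def crossover(a, b):
        """Combine two genomes: matching genes (same id) are picked at random
        from either parent; unmatched genes are inherited as they are."""
        genes_a = {g[0]: g for g in a}
        genes_b = {g[0]: g for g in b}
        child = []
        for gene_id in sorted(set(genes_a) | set(genes_b)):
            if gene_id in genes_a and gene_id in genes_b:
                child.append(random.choice([genes_a[gene_id], genes_b[gene_id]]))
            else:
                child.append(genes_a.get(gene_id) or genes_b.get(gene_id))
        return child

    print(crossover(parent_a, parent_b))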

The video game N.E.R.O. (Neuro Evolving Robot Operatives) has been created to show how these neural networks can create individual and team behaviors in a virtual world.

The general idea is to take untrained robots (which will make generally random actions) and reward individuals which display promising behavior.  These are then more likely to reproduce in the next generation, and over time the promising behavior develops into the operatives taking actions which could easily be mistaken for human play.

The NERO engine has been open sourced, so it’s possible for anyone to develop their own AIs, or present new challenges for existing agents.

This technique has not, to my knowledge, been used in any commercial games to date, although, given that it allows characters to develop over time, it could be used to create games that endlessly adapt to the player's style, presenting new challenges within every game.

ObjectiveNEAT Posted

As a learning exercise I recently wrote an implementation of NEAT (NeuroEvolution of Augmenting Topologies) in Objective-C, the language of choice for iOS devices.

The godfather of NEAT and many derived AI techniques, Ken Stanley, has picked this up and published the project to the NEAT AI groups here and here.

It’s always nice for something you’ve created to be passed on to a wider group.  My hope is that other developers will build upon the code and create some iPhone or iPad games or apps that use the AI technique to develop around the user, presenting new challenges or interpreting a user's intentions through learning over time (like Siri).

The original source can be found here, with documentation here.


Procedural Textures

Generating textures for games and virtual worlds is a time-consuming task: a big-budget film or game may have well over 50% of its development budget assigned to creating 3D models and the images which cover them to make them realistic.

I’ve talked about procedural content generation before, but to give you an idea of its capabilities, below is a picture generated by the Graphic Designer Toolbox.

It looks pretty creepy, but the interesting thing is that it was created by a few blocks of mathematical algorithms rather than designed by hand.  Simply swapping in different algorithms can create entirely different effects, from clouds to cities, mountains, deserts, and many other instantly recognisable features.  The effort required to build a toolbox of such features may be large, but once you have the toolbox and can begin chaining effects, a single skilled designer can very quickly outperform a team. Modern films such as Avatar used this along with other PCG techniques to generate a whole alien world's flora and fauna – without such methods it simply would not have been possible.
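
To show the principle (this is not how the Graphic Designer Toolbox works internally, just the general idea), here is a small Python sketch that builds a greyscale texture purely by chaining mathematical functions: a hashed lattice noise, a few octaves of 'turbulence' stacked on top, and a sine warp to turn the result into a wood-grain-like pattern.  All the constants are arbitrary choices.

    import math

    def lattice_noise(x, y):
        """Deterministic pseudo-random value in [0, 1) for an integer lattice point."""
        n = (x * 374761393 + y * 668265263) & 0xFFFFFFFF
        n = ((n ^ (n >> 13)) * 1274126177) & 0xFFFFFFFF
        return (n & 0xFFFF) / 65536.0

    def smooth_noise(x, y):
        """Bilinearly interpolate the lattice noise for smooth variation."""
        x0, y0 = int(math.floor(x)), int(math.floor(y))
        fx, fy = x - x0, y - y0
        top = lattice_noise(x0, y0) * (1 - fx) + lattice_noise(x0 + 1, y0) * fx
        bottom = lattice_noise(x0, y0 + 1) * (1 - fx) + lattice_noise(x0 + 1, y0 + 1) * fx
        return top * (1 - fy) + bottom * fy

    def turbulence(x, y, octaves=4):
        """Sum the noise at several frequencies -- the classic chaining trick."""
        total, amplitude, frequency = 0.0, 1.0, 1.0
        for _ in range(octaves):
            total += smooth_noise(x * frequency, y * frequency) * amplitude
            amplitude *= 0.5
            frequency *= 2.0
        return total / 1.875  # roughly normalise back to [0, 1]

    def wood_grain(x, y):
        """Warp the turbulence with a sine to get ring-like grain."""
        return 0.5 + 0.5 * math.sin((x + 4.0 * turbulence(x, y)) * math.pi)

    # Write a tiny 64x64 texture as a plain PGM file so it can be viewed anywhere.
    size = 64
    with open("grain.pgm", "w") as f:
        f.write(f"P2 {size} {size} 255\n")
        for j in range(size):
            row = [int(255 * wood_grain(i / 8.0, j / 8.0)) for i in range(size)]
            f.write(" ".join(str(v) for v in row) + "\n")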

In 2002 Ken Stanley and Risto Miikkulainen presented an evolutionary method for developing solutions using Neural Networks – NeuroEvolution of Augmenting Topologies, or NEAT. This technique, while relatively new, takes inspiration from natural evolution and has been used in a number of very interesting applications from agent control through to creation of new music and art forms.

It’s an area of research that is highly relevant to my work, and to better understand the topic I’m putting together an implementation in Objective-C, the preferred language for Mac and iOS devices.  As an introduction to the topic, and for other students, I’ve put together a brief presentation on the subject.

Presentation on NEAT

Since then Ken has moved on to the Evolutionary Complexity Research Group (EPLEX) at the University of Central Florida and extended the research in a number of ways, the most interesting to me being in the search for novelty.  Ken has a website that shows some of the directions and algorithms, and to get an idea of where it is heading I’d recommend his Keynote speech from the 2010 SPLASH conference.

Plant Growth

Generating plants and foliage has been one of the primary successes of computer graphics over the last few decades; most people would be surprised at how often the plants in a film or advert are virtual copies of the real thing.

A simple way to create plants is using Lindenmayer Systems, which take a starting string and, over a number of steps, rewrite the original string based upon simple rules.

So for example we could start with the character A, and every time we iterate we replace each A with AB and each B with A. This gives us a series that looks like:

A
AB
ABA
ABAAB
ABAABABA
ABAABABAABAAB

and so on.  Visually we could map the characters to directions for an image drawer (things like turn left, turn right, draw forwards) and instantly we’ve produced some nice little snowflakes.
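
Here is a minimal Python sketch of the rewriting itself: it prints the A/B series above and then interprets a string of drawing commands with the standard turtle module.  The drawing rules used are the classic Koch snowflake, purely as an example.

    import turtle

    def rewrite(axiom, rules, steps):
        """Apply the L-system production rules to the axiom a number of times."""
        s = axiom
        for _ in range(steps):
            s = "".join(rules.get(ch, ch) for ch in s)
        return s

    # The A -> AB, B -> A example from above.
    for n in range(6):
        print(rewrite("A", {"A": "AB", "B": "A"}, n))

    # Interpreting characters as drawing commands: F = draw forward,
    # + = turn left, - = turn right.  These rules draw the Koch snowflake.
    def draw(commands, angle=60, step=4):
        for ch in commands:
            if ch == "F":
                turtle.forward(step)
            elif ch == "+":
                turtle.left(angle)
            elif ch == "-":
                turtle.right(angle)
        turtle.done()

    draw(rewrite("F--F--F", {"F": "F+F--F+F"}, 3))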

We can generate all sorts of interesting shapes by making tiny additions to those initial rules, such as adding the ability to jump spaces without drawing or to change colour – all in all a very simple way to make some familiar-looking patterns.

Where things get more dynamic is when we allow the algorithm to maintain a stack of visited positions and orientations (transformation matrices), which lets the drawing branch, a bit like a tree.

Move into the realm of 3D and we begin to get some very lifelike trees.  Of course trees are not made up of identical sticks from root to branch, so we need to add in a little stochastic sampling (randomness in the growth), as well as the idea that the tree will grow seasonally and that a trunk will grow differently to a branch, a leaf or a flower; however, all of this can be managed with some imagination.
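
As a sketch of the branching idea, we can add two symbols, [ and ], which push and pop the turtle's position and heading on a stack, and give one rule two alternative productions so no two runs grow exactly the same shape.  The plant rule below is a textbook example from The Algorithmic Beauty of Plants; the stochastic twist and the angle are my own illustrative choices.

    import random
    import turtle

    def rewrite(axiom, rules, steps):
        """Stochastic rewriting: a symbol may have several possible productions."""
        s = axiom
        for _ in range(steps):
            s = "".join(random.choice(rules[ch]) if ch in rules else ch for ch in s)
        return s

    # A classic branching plant (axiom X); X has two slightly different
    # productions, so each run grows a slightly different shape.
    rules = {
        "X": ["F+[[X]-X]-F[-FX]+X", "F-[[X]+X]+F[+FX]-X"],
        "F": ["FF"],
    }

    def draw(commands, angle=25, step=4):
        stack = []                       # saved (position, heading) pairs
        turtle.left(90)                  # grow upwards
        turtle.speed(0)
        for ch in commands:
            if ch == "F":
                turtle.forward(step)
            elif ch == "+":
                turtle.left(angle)
            elif ch == "-":
                turtle.right(angle)
            elif ch == "[":              # branch: remember where we are
                stack.append((turtle.position(), turtle.heading()))
            elif ch == "]":              # end of branch: jump back
                position, heading = stack.pop()
                turtle.penup()
                turtle.goto(position)
                turtle.setheading(heading)
                turtle.pendown()
        turtle.done()

    draw(rewrite("X", rules, 5))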

To get an idea of how far this has come, check out the virtual arboretum managed by SpeedTree, a company which specializes in dreaming up virtual flora.  To get some more details of how the magic happens, I’d recommend The Algorithmic Beauty of Plants by Przemyslaw Prusinkiewicz and Aristid Lindenmayer.


Genetic Algorithms

Genetic Algorithms give computers the ability to solve all manner of problems by following the simple principles of natural evolution: breeding virtual programs, allowing them to compete at solving a problem, then selecting the best to reproduce and create a new generation of possible solutions.
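
For anyone who has not seen one before, here is a minimal sketch of that loop using the textbook 'OneMax' problem (evolve a bit string of all 1s) so that the fitness function stays trivial.  The population size, mutation rate and selection scheme are arbitrary illustrative choices.

    import random

    GENOME_LENGTH = 20
    POPULATION = 30
    GENERATIONS = 40

    def fitness(genome):
        """Toy problem (OneMax): the more 1s in the genome, the fitter it is."""
        return sum(genome)

    def tournament(population):
        """Pick the better of two random individuals -- simple selection pressure."""
        return max(random.sample(population, 2), key=fitness)

    def crossover(a, b):
        """Single-point crossover: a prefix from one parent, the suffix from the other."""
        point = random.randint(1, GENOME_LENGTH - 1)
        return a[:point] + b[point:]

    def mutate(genome, rate=0.02):
        """Occasionally flip a bit."""
        return [1 - g if random.random() < rate else g for g in genome]

    population = [[random.randint(0, 1) for _ in range(GENOME_LENGTH)]
                  for _ in range(POPULATION)]

    for generation in range(GENERATIONS):
        population = [mutate(crossover(tournament(population), tournament(population)))
                      for _ in range(POPULATION)]
        print(generation, fitness(max(population, key=fitness)))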

I first came across this concept while reading The Blind Watchmaker by Richard Dawkins, a book I cannot recommend highly enough – while it focuses on the evolution of life, it provides insight on all manner of subjects.  In the book Richard Dawkins introduces Biomorphs, a computer program he wrote that would take a simple genotype (a coded sequence similar to a DNA string) and use it as the seed for a development algorithm, resulting in a phenotype (in this case images which bore striking resemblances to insects or plants).  A few examples from his book:

Without going into the technical details, genetic algorithms such as these have the potential to solve all sorts of problems faster than random searches or even human design.  Some applications include:

  • Acoustics, in the design of (amongst other things) opera houses and high-fidelity speaker enclosures for optimum sound reproduction
  • Engineering, for example car and wind turbine shapes for aerodynamics
  • Astronomy, in the identification of undiscovered star systems
  • Electrical Engineering, in all manner of chip design optimisations
  • Financial Markets, in many cases outperforming traders predicting market changes
  • Playing games such as Checkers (Draughts), to the level that within a relatively small number of generations they can beat all but the most expert players.
  • Military and other logistical planning, both areas where they are used to aid human decisions by summarizing a near-infinite number of variables.
  • Data mining or search, particularly in areas where the data sets are so vast that patterns simply cannot be put forward or tested by simpler techniques (including skilled humans) with any hope of success.
  • Recognition of patterns and objects in images.
  • Developing agent and team behavior, used in many computer games and real world competitions such as RoboCup.

My interest in the field is as an underpinning to the main subjects of my research, right now focusing on the use of agent intelligence and procedural content generation to create digital content such as environments, worlds, graphics, music and virtual characters and agents.

These require a slightly different use of Genetic Algorithms than the above examples.  Generally GAs are able to find a solution to a given problem when it can be expressed formally (such as a mathematical sum), but in general they are not optimised for finding a solution that is based upon taste, fashion or other qualities that are difficult to quantify. They do, however, provide a mechanism to help a human designer create and evaluate such content, an area which traditional procedural content generation techniques have been criticized for being unable to address.  An early example of this comes courtesy of Karl Sims, from a contribution to SIGGRAPH five years after The Blind Watchmaker was published.
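
One way to fold taste back into the loop, very much in the spirit of Dawkins' Biomorphs and Karl Sims' work, is to let a person stand in for the fitness function: the program breeds variations and the human simply picks the one they like.  Here is a toy sketch where the 'phenotype' is just a list of numbers standing in for whatever would actually be rendered (an image, a melody, a creature).

    import random

    # Interactive evolution: the human plays the role of the fitness function.
    # Each genome is a short list of parameters that would normally drive some
    # renderer; here we simply print the numbers and ask the user to choose.

    def mutate(genome, amount=0.3):
        """Produce a slightly different copy of the parent."""
        return [g + random.uniform(-amount, amount) for g in genome]

    parent = [random.uniform(0, 1) for _ in range(5)]

    for generation in range(3):                      # keep the demo short
        children = [mutate(parent) for _ in range(4)]
        for i, child in enumerate(children):
            print(f"{i}: {[round(g, 2) for g in child]}")
        choice = int(input("Which one do you like best (0-3)? "))
        parent = children[choice]

    print("Final design parameters:", [round(g, 2) for g in parent])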


To get an idea of the state of the art, have another look at Avatar – pretty much 90% of what you see on screen was created using these generative graphics techniques.

Software agents are as much creatures of their environments as all creatures on Earth are. We have been created by constant evolution, brought about by surviving our environment, competing with other entities and succeeding in procreating new generations.

In order to create and evaluate software agents we need to create environments to test them in.

  • International Aerial Robotics Challenge – there have been six challenges set to date, of increasing complexity, and it’s absolutely amazing what the state of the art in robotics can achieve. The last solved challenge asked teams to build a completely autonomous drone which, without prior knowledge of the environment, flew into an unmanned nuclear facility, identified and located the control room, and sent pictures of the control instrumentation out to a second unmanned drone.  The next challenge is to enter an unknown territory and locate a USB key holding sensitive data.
  • The DARPA Grand Challenge – creating a driverless car which will traverse unknown terrain, which has been successfully completed with a variety of different technologies, all of them worthy of further research.  The current challenge is set in an urban environment with other motor vehicles to be successfully negotiated.
  • The NASA Centennial Challenge – to find and retrieve samples of scientific interest in an unknown environment
  • The RoboCup challenge, which requires teams made up of autonomous robots to win a league against other robots.  The organizers are attempting to lead the way for a robotic team to beat the most recent winners of the World Cup by 2050.  They’ve got a long way to go.

Of more immediate interest are virtual challenges as these are more accessible for direct testing.

  • The Google AI Challenge – this is the third challenge that asks entrants to submit AI bots to compete with each other in virtual worlds, and the first I have entered.  The first year (Winter 2010) was inspired by Tron: competitors needed to take control of a light cycle and cause the other players to crash.  The second was based upon the game PlanetWars, developed by Galcon: competitors needed to take over planets given a number of spaceships.  This year's is based upon ants, and competitors need to develop a hive mind which will grab the most food while destroying other ant hives, all of which takes up valuable resources.  Entrants build AI bots in pretty much any language they like, then submit them to the competition's central servers to be pitted against other agents.
  • Core Wars – a game that could only be thought up by a computer scientist.  Build a set of instructions which are let loose in a simulated computer memory, and which then compete against other instruction sets to take over the whole memory.  It sounds very geeky, but it attracts some ingenious entrants.
  • Mario AI Challenge – using an adapted version of Super Mario 3 for the Nintendo, build a software bot that can complete any level thrown at it.  As interesting as the entrants are, the game environment (Super Mario Infinite) is an excellent example of Procedurally Generated Content (more on this later).
  • The StarCraft AI Challenge – StarCraft is a real-time strategy game that is celebrated as one of the most hectic and evenly balanced computer games of all time.  In Asia, specifically Korea, it is an international sport.  Most video games can be seen as challenges for AI, with a bot taking over the role of the human player.  All that is needed is an API so that the AI can understand what is happening in the game and relay instructions back to it (although in some cases a lack of an API can be overcome by screen scraping, which is itself an area of research, since in some real-world cases it may be easier to translate a video output than to retrofit an API).  The interesting thing about StarCraft is the wealth of information that needs to be processed, the number of available strategies (pretty much infinite) and the real-time nature of the game – this is far more complex than chess, which at least has the luxury of pondering the next move for half an hour.  Here’s a video of a software AI playing (and beating) one of the top-ranking players worldwide.
  • Many games open up APIs to allow AI builders to modify or completely recreate the AIs that play against the human player – one very sophisticated and challenging example is Civilization IV, and devoted fans have taken to building an open-source version called FreeCiv, which allows whole worlds to be built with independent AIs for those with the time.

Pretty much every video game genre, from poker through space and trading simulations, to role-playing games, first-person shooters and platform games, can be used as a test bed for developing and evaluating AI techniques.

Computer games make use of a vast number of artificial intelligence routines, and many people are attracted to the video games industry as the ultimate creative and technical software challenge.  While the computer game industry is very competitive, and as such most of the technical development experience doesn’t make it into the public domain, occasionally people like to give something back to help out others.

There are a number of online resources for this, including GamaSutra (the art of making games), AIGameDev (run by Alex, a very chilled and wise guy), AI Wisdom and others; the games industry also has a regular magazine called, obviously, Game Developer.

In order to get a picture of the state of the art in game development, here is a quick review of the last few years' Game Developer magazines, looking for articles relating to Procedurally Generated Content or Virtual Agents:

  • December 2004, page 40: optimizing pathfinding at low levels
  • June 2006, page 30: blob physics
  • December 2007, page 16: Multi User Dungeons and MMOs
  • February 2008, page 16: creating waves in game
  • March 2008, page 32: MapZone, creating procedural textures
  • April 2008, page 36: Intelligent Mistakes, AI
  • September 2008, page 48: game economies
  • March 2009, page 14: using particle systems for terrain modeling (the basic concept is like comets hitting the earth)
  • June/July 2009, page 14: Procedurally Generated Content in space games
  • August 2009, page 40: procedurally generated flowers
  • September 2009, page 7: building worlds with voxels (3D pixels)
  • February 2010, page 40: Procedurally Generated Content
  • February 2010, page 48: procedurally generated plants, trees and models
  • August 2010, page 8: how to build social games
  • January 2011, page 48: techniques for better AI
  • February 2011, page 24: interview about Minecraft, a breathtaking use of procedurally created worlds
  • April 2011, page 21: Procedurally Generated Content with Minecraft and Perlin noise
  • Career Guide Fall 2011, page 12: useful pathfinding techniques
  • August 2011, page 6: random generation of structures

Apologies for not putting in any more context; however, if you’re interested then you’ll know where to go to find out more…

Intelligent Systems: Week 1

One of the reasons to keep this website is to record progress through a PhD in Intelligent Systems, being studied with the Informatics department of King's College London.

Found a great crash course in AI which is being run by Stanford University, Palo Alto, California.  The lecturers are Sebastian Thrun (research professor at Stanford, Google Fellow) and Peter Norvig (Director of Research at Google, previously head of the Computational Sciences Division at NASA, fellow of the American Association for Artificial Intelligence).  The course is run online through interactive YouTube videos, and through tests and examinations they are able to use the scalability of the internet to bring their experience to 145,000 students in 18 different languages.  Have no doubt, this is the future of education.

www.ai-class.com

The Informatics department at King's has many specialists in Multi-Agent Systems, so I have been reading the much-recommended

An Introduction to MultiAgent Systems, by Michael Wooldridge

As a quick rundown: agent systems, and software agents in general, are a specific class of system where a piece of software has the ability to monitor an environment (be it a stock exchange, a flight to Mars or a computer game) and take actions based upon its own beliefs, desires and intentions.

Environments can consist of almost anything; however, they can generally be classified on the following scales:

  • Accessible vs Inaccessible – how much of the environment can be ‘seen’ by the Agent
  • Deterministic vs Non-deterministic – do actions change the world in a predictable way? Rolling a die is an example of a non-deterministic action
  • Static vs Dynamic – is the environment changing through the actions of other agents or entities
  • Discrete vs Continuous – are there a set number of states in the environment (e.g. chess) or infinite combinations (the real world)

We can also add a few more from other sources:

  • Non-hostile vs Hostile – are all agents attempting to bring about the same result, or are there agents working against our goals?

Agents generally need to display and balance the following capabilities:

  • Being Reactive to changes in the environment
  • Being Proactive to set goals and carry out plans to achieve the goal
  • Being Social, working with or around other entities to achieve goals
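
A toy control loop that balances those three capabilities might look something like the sketch below; the one-dimensional world, the goal and the message format are all invented purely for illustration.

    # A toy agent loop: it reacts to what it senses, proactively works toward a
    # goal, and (very crudely) is social by broadcasting its intention to another
    # agent's inbox.  Everything here is invented for illustration.

    def sense(world, position):
        """Reactive part: what the agent can see of the environment."""
        return {"obstacle_ahead": world.get(position + 1) == "obstacle"}

    def deliberate(beliefs, position, goal):
        """Proactive part: pick an action that should move us toward the goal."""
        if beliefs["obstacle_ahead"]:
            return "wait"
        return "move" if position < goal else "stay"

    def broadcast(inboxes, intention):
        """Social part: tell the other agents what we intend to do."""
        for inbox in inboxes:
            inbox.append(intention)

    world = {3: "obstacle"}             # a one-dimensional corridor with one obstacle
    position, goal = 0, 5
    other_agent_inbox = []

    for tick in range(10):
        if tick == 4:
            world.pop(3, None)          # the obstacle is cleared partway through
        beliefs = sense(world, position)
        action = deliberate(beliefs, position, goal)
        broadcast([other_agent_inbox], f"tick {tick}: I will {action}")
        if action == "move":
            position += 1
        print(f"tick {tick}: position={position}, action={action}")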

There are many subjects covered in the book that provide insight into how agents can be built, including:

  • Symbolic reasoning agents & software that can form and manipulate a model of the world
  • Practical reasoning agents, how an agent forms beliefs, desires & intentions, then designs and evaluates plans to achieve goals
  • Reactive and hybrid agents, the idea that intelligence is an emergent property of enough simple logical reactions
  • An overview of the applications of agents in the real world, including Stanley, a car which drove itself over 200 km through the Mojave Desert to win the DARPA Grand Challenge
  • Where it gets interesting – methods by which agents can communicate and work together to bring differing capabilities to solve a problem

To give an idea of what the practical outputs of this are:

  • Approximately 70% of trades made on most stock exchanges are carried out by software agents that are constantly evaluating the market.
  • Google employs a vast array of software agents to run their search indexing and other services
  • Apple’s iPhone 4S has an integrated agent called Siri which will understand voice commands and take actions at the user's request.

An area of personal interest is the use of biologically inspired methods for programming agents, i.e. looking at emulating the ways nature solves problems.  All sorts of creatures use group collaboration to solve problems in swarms, hives, flocks, herds or packs.  Craig Reynolds provided some seminal work in this area, and has published a number of research papers that still attract much interest in a variety of industries:

Craig Reynolds: Flocks, Herds, and Schools: A Distributed Behavioral Model

In particular this paper presents a number of simple rules which can be implemented across a society of individuals to generate behavior which is startlingly close to flocks of birds or shoals of fish.  If time permits some examples will be posted here to demonstrate.
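
In the meantime, here is a minimal sketch of the three rules (separation, alignment and cohesion) for point agents in two dimensions.  The weights, radii and speed limit are arbitrary values chosen for illustration rather than anything taken from the paper.

    import random

    # Reynolds-style flocking: each boid steers using three local rules --
    # separation (don't crowd neighbours), alignment (match their average
    # velocity) and cohesion (drift toward their centre).

    class Boid:
        def __init__(self):
            self.x, self.y = random.uniform(0, 100), random.uniform(0, 100)
            self.vx, self.vy = random.uniform(-1, 1), random.uniform(-1, 1)

    def step(boids, radius=20.0, crowd=5.0, max_speed=2.0):
        updated = []
        for b in boids:
            near = [o for o in boids if o is not b
                    and (o.x - b.x) ** 2 + (o.y - b.y) ** 2 < radius ** 2]
            vx, vy = b.vx, b.vy
            if near:
                n = len(near)
                # Cohesion: steer toward the average position of neighbours.
                vx += 0.01 * (sum(o.x for o in near) / n - b.x)
                vy += 0.01 * (sum(o.y for o in near) / n - b.y)
                # Alignment: steer toward the average velocity of neighbours.
                vx += 0.05 * (sum(o.vx for o in near) / n - b.vx)
                vy += 0.05 * (sum(o.vy for o in near) / n - b.vy)
                # Separation: push away from any neighbour that is too close.
                for o in near:
                    if (o.x - b.x) ** 2 + (o.y - b.y) ** 2 < crowd ** 2:
                        vx += 0.1 * (b.x - o.x)
                        vy += 0.1 * (b.y - o.y)
            speed = (vx * vx + vy * vy) ** 0.5
            if speed > max_speed:                    # keep velocities sensible
                vx, vy = vx * max_speed / speed, vy * max_speed / speed
            updated.append((vx, vy))
        for b, (vx, vy) in zip(boids, updated):      # synchronous update
            b.vx, b.vy = vx, vy
            b.x += vx
            b.y += vy

    flock = [Boid() for _ in range(30)]
    for _ in range(200):
        step(flock)
    print("Flock centre:",
          sum(b.x for b in flock) / len(flock),
          sum(b.y for b in flock) / len(flock))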