MadSci Network: Neuroscience
Query:

Re: How neurons know how to fly a fighter jet?

Date: Fri Mar 5 11:22:01 2010
Posted By: Archis Gore, Software Development Engineer
Area of science: Neuroscience
ID: 1267742090.Ns
Message:

I don't have the context of the specific experiments you mention, but I can certainly answer your core question: why do learning systems learn only in a particular direction?

First, all of artificial intelligence and its related problem domains have their roots in mathematics, especially game theory and optimization theory. The most basic concept you encounter in these problems is the "objective function". This is the function that determines why you're doing anything you're doing at all; the objective function is what you strive to "achieve".

When you devise a system, you do so to get as close to the objective as possible. You're trying to optimize the solution.
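As a minimal sketch of what "optimizing toward an objective" means (the objective here is a toy of my own invention, not from any actual experiment), consider a simple hill-climbing search that keeps any random change that improves the score:

```python
import random

def objective(x):
    # Toy objective: squared distance from a target value of 10.
    # Lower is better; the optimum is x = 10.
    return (x - 10) ** 2

def hill_climb(start, steps=1000, step_size=0.1, seed=0):
    """Greedy local search: accept any random step that improves the objective."""
    rng = random.Random(seed)
    x = start
    for _ in range(steps):
        candidate = x + rng.uniform(-step_size, step_size)
        if objective(candidate) < objective(x):
            x = candidate
    return x

best = hill_climb(start=0.0)
print(round(best, 2))  # close to the optimum at 10
```

The system never "knows" what it is doing; it only knows whether a change moved it closer to the objective.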

When you speak of mice learning to fly a plane, I can assure you that if we dug up the specific publication, there would be at least as many mice that didn't learn. You're completely correct in assuming that robots might learn to simply spin in place, hug walls, or sleep on the floor. There could be, and in fact there are, a million things that they do begin to do. Try giving a plane to any random human, and that's exactly what they might do too, which is why we have highly trained pilots flying our commercial aircraft.

You've answered your own question: there is in fact a lot of training that goes into developing these systems before they can do what is claimed. The training is directed toward the objective function. In one case the objective would be "don't crash the plane"; in the other, "avoid collisions".

The systems in question do initially start out doing quite random things. But then the experimenter discourages bad behaviour through punishment and reinforces good behaviour through rewards. It's like training a rat to follow a green light and avoid a red one by placing cheese in front of the green light and delivering a possible electric shock if it goes near the red one.
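The reward-and-punishment loop above can be sketched in a few lines (the reward values and episode count are hypothetical illustrations, not taken from any real study). The agent picks a light, receives cheese (+1) or a shock (-1), and nudges its estimate of each choice toward what it actually experienced:

```python
import random

def train(episodes=500, lr=0.1, epsilon=0.1, seed=0):
    rng = random.Random(seed)
    values = {"green": 0.0, "red": 0.0}    # agent's learned value of each light
    rewards = {"green": 1.0, "red": -1.0}  # experimenter's reward schedule
    for _ in range(episodes):
        # Explore occasionally; otherwise exploit the best-known choice.
        if rng.random() < epsilon:
            choice = rng.choice(["green", "red"])
        else:
            choice = max(values, key=values.get)
        # Move the estimate a small step toward the reward actually received.
        values[choice] += lr * (rewards[choice] - values[choice])
    return values

v = train()
print(v["green"] > v["red"])  # True: the agent comes to prefer green
```

Nothing in the code mentions lights or cheese as concepts; the "preference" emerges purely from which choices were rewarded.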

Yes, there is always some sort of human intervention involved. It may not always be in the training process, but it is certainly in the selection process.

In some cases, there may be no training at all, and yet human intervention still exists. A human may not train a brain so much as select one that suits their purpose. They may connect a million rats to a plane and pick the ones that don't crash it, or simulate a million randomly chosen combinations of neurons until they find a combination that does what they want. This is exactly how genetic algorithms work.
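Selection without training can be sketched as follows (the "controllers" here are just random bit strings, a deliberate simplification of mine): generate many random candidates, score each against the objective, and keep whichever happens to score best. No individual candidate learns anything.

```python
import random

def score(candidate, target):
    # Fitness: how many positions match the behaviour we want.
    return sum(c == t for c, t in zip(candidate, target))

def select_best(population_size=10000, length=16, seed=0):
    """Pure selection: draw random candidates and keep the fittest one seen."""
    rng = random.Random(seed)
    target = [1] * length  # the behaviour the experimenter wants
    best = None
    for _ in range(population_size):
        candidate = [rng.randint(0, 1) for _ in range(length)]
        if best is None or score(candidate, target) > score(best, target):
            best = candidate
    return best, target

best, target = select_best()
print(score(best, target), "of", len(target))
```

With enough random candidates, the best one looks impressively close to the goal, even though each candidate was generated blindly. That is the sense in which "a rat brain" that flies a plane may say little about rats in general.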

What all this simply means is that there exists "a rat brain" that was able to do something. It doesn't mean that any rat would do the same, or could be trained to do the same.








MadSci Network, webadmin@madsci.org
© 1995-2006. All rights reserved.