2016-08-09

A psychologist's view on Artificial Intelligence, Cognitive Systems, Deep Learning and similar disruptive events.

AlphaGo challenged Lee Sedol this year and won four out of five matches in the game of Go. Google made its Deep Learning library TensorFlow available last year, with explicit permission to use it for commercial purposes. IBM's Watson technology has left the laboratory and is now a commercial product available to everyone. Nearly every day now I read news worth spreading about disruptive AI-related events in the world. I finally thought it was time to start a blog.

Where do I come from?

I was a Star Wars kid and a Trekkie. In my teenage years, Blade Runner was one of the most inspiring movies for me. It is based on the novel 'Do Androids Dream of Electric Sheep?' by Philip K. Dick and asks whether artificially created beings can have real emotions and should be treated like real beings. That raised an interesting question for me: what might the quality of this 'realness' be?

After a short interlude of studying physics I switched my subject to psychology, because it was (and still is) such a young science, full of promise for new discoveries. In the early 90s I discovered that one of the first courses on 'Artificial Neural Networks' would be held at the Justus Liebig University Giessen, at the department of Psychology. I had heard vague rumors about these things, and that they might be the technology future R2D2s and C3POs would be based on. When I entered the lecture hall five minutes before the designated time, there was only one other guy there, with long hair and an electric guitar slung over his shoulder. I had expected the lecture hall would be bursting with students! It turned out there were only three of us in the course, including our professor, Gert Haubensak. We started with the book 'Parallel Distributed Processing: Explorations in the Microstructure of Cognition' by James McClelland and David Rumelhart. Soon the theory alone became too dry, and we implemented our own neural networks based on the source code in the book 'Explorations in Parallel Distributed Processing: A Handbook of Models, Programs, and Exercises'. The course was held for several years to come, and we were constant participants. So a small AI research nucleus formed at the psychology faculty in Giessen.

After extended study years I was eager to get a research job in the promising, emerging field of cognitive science. I was quite dumbfounded to learn that there seemed to be no need for a psychologist in this field. There were some computer science labs in Germany applying this technology, but they had no use for a psychologist. The only psychologist with a research group in this field was Professor Dietrich Dörner at the University of Bamberg. He and his team developed the PSI theory of action regulation, intentions and emotions, which was quite cool back then and quite tempting for me. Unfortunately his research team was small and he could not take everyone in. So looking for a job was quite a sobering experience.

The Neuroscience years.

I don't know how many job interviews I had where I presented my research with Artificial Neural Networks, to no avail. So many that I stopped caring and just viewed each one as a lottery ticket. To my luck, Professor Mark W. Greenlee gave me a position as a research assistant with the possibility to work on my PhD thesis. However, my research field was no longer Artificial Neural Networks but neuroscience research on the human brain: specifically, investigating the human visual system with Dynamic Causal Modelling and fMRI experiments. It seemed a far cry from AI research back then, but in the years to come the connections between these research fields became more visible.

Artificial Neural Networks try to mimic the principles of biological nerve cells in a very simplified way. In fact, the incredible complexity of a single neuron is boiled down to a handful of mathematical equations that approximate its computational behavior, creating a crude simulation of a wetware computer.
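To give a feel for just how drastic this simplification is, here is a minimal sketch of such an artificial neuron in Python: a weighted sum of inputs plus a bias, squashed through a sigmoid activation. The weights and inputs are arbitrary illustrative values, not taken from any of the models mentioned above.

```python
import math

def neuron(inputs, weights, bias):
    """A simple artificial neuron: weighted sum of inputs plus a bias,
    passed through a sigmoid activation function."""
    activation = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1.0 / (1.0 + math.exp(-activation))

# With these (arbitrary) values the weighted sum is exactly zero,
# so the sigmoid returns its midpoint:
print(neuron([1.0, 2.0], [0.5, -0.3], 0.1))  # → 0.5
```

A biological neuron, with its dendritic trees, ion channels and spike timing, is reduced to three lines of arithmetic; networks are built by wiring thousands of these units together.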

My PhD thesis investigated the interaction of regions in the human brain, each region containing hundreds of millions of biological neurons. Until a few years ago, simulating such vast numbers in a computer was considered a theoretical question at best.

Mind, Brain & Chips


But the computing power problem was tackled over the last decade in two ways. In a Google Tech Talk, Geoffrey Hinton jokingly boasted how he had made his computations 100,000 times faster: he optimized the algorithm to run 100 times faster, which took him 17 years, and in that time computers themselves got 1,000 times faster.

Furthermore, the technology of graphics accelerators, widely used for enhancing 3D computer gaming experiences, was harnessed for Artificial Neural Networks, speeding up the calculations drastically. Since this year, the first commercial off-the-shelf supercomputer for Artificial Neural Network computations is available: NVIDIA's DGX-1.

In 2015, IBM introduced TrueNorth, a computer chip whose architecture resembles an artificial neural network. I wrote a short post about SAP HANA & TrueNorth in the SAP SCN community.

The other approach to speeding things up is to understand the vast complexity of mammalian brain regions and to boil it down into mathematical equations that model a brain region's behavior. In 2007, I had the pleasure of visiting the lab of Professor Gustavo Deco at the Pompeu Fabra University in Barcelona. I was introduced to a fascinating field of research that gave hope that one day it will be possible to describe large parts of our mind's workings through mathematics.
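Dynamic Causal Modelling, mentioned earlier, is one example of this approach: the activity of an entire region is collapsed into a single state variable, and the influence regions exert on each other into a few coupling parameters, following the bilinear state equation dz/dt = (A + u·B)z + C·u. Here is a minimal Python sketch of that idea for two regions, using simple Euler integration; all coupling values are purely illustrative and not taken from any real experiment.

```python
# Sketch of DCM's bilinear neuronal state equation for two regions:
#   dz/dt = (A + u * B) z + C * u
# A: intrinsic coupling, B: input-modulated coupling, C: driving input.

def dcm_step(z, u, A, B, C, dt=0.01):
    """One Euler integration step of the two-region bilinear model."""
    dz = [0.0, 0.0]
    for i in range(2):
        for j in range(2):
            dz[i] += (A[i][j] + u * B[i][j]) * z[j]
        dz[i] += C[i] * u
    return [z[i] + dt * dz[i] for i in range(2)]

A = [[-0.5, 0.0], [0.4, -0.5]]  # region 2 receives input from region 1
B = [[0.0, 0.0], [0.2, 0.0]]    # the 1 -> 2 connection is modulated by u
C = [1.0, 0.0]                  # external input drives region 1 only

z = [0.0, 0.0]
for _ in range(100):            # simulate 1 second of stimulation (u = 1)
    z = dcm_step(z, 1.0, A, B, C)
print(z)                        # region 1 responds first, region 2 follows
```

Two coupled differential equations stand in for hundreds of millions of neurons; in real DCM studies, the coupling parameters are estimated from fMRI data rather than written down by hand.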

Professor Klaas Enno Stephan heads the Translational Neuromodeling Unit of the University of Zürich and ETH Zürich. He and his team research the system dynamics of the brain, deepening the understanding of it as a cognitive system and making numerical testing of mathematical models possible.

So the year 2016 is quite a fitting year to start a blog about Artificial Intelligence, Cognitive Systems & Theories, Deep Learning, Neuro-science, -computation & -economics, Brain Computer Interfaces, and so on.

Or in short about Mind, Brain & Chips!

Especially because 2016 marks the 60th anniversary of the term Artificial Intelligence, which was officially coined at the Dartmouth Conference in 1956.

HAPPY BIRTHDAY AI!

Wait! You wrote Deep Learning?

Yes, but I am too old and too tired to be impressed by the latest hype speak. For me, Deep Learning systems are Artificial Neural Networks. I know that some computer scientists would strongly disagree, pointing out that Deep Learning systems have a much higher degree of complexity and variation in their underlying algorithms.

From the viewpoint of a neuroscientist, a mammal brain is a mammal brain, even when a biologist would strongly disagree and point out the differences in behavior and abilities between a rhesus monkey and a human being. The psychologist in me also shrugs and thinks of decades of cognitive psychology research in which models were made that consisted of many boxes connected by arrows (like here). Whether these systems are computed by single CPUs, massively parallel computing architectures or vast assemblies of stone pebbles makes little difference to me.

But I will get used to the term Deep Learning.
Promised.
