I thought it would be fun to write down some of my life goals so that I can see in a few years time whether any of them have come true.
I'm generally the happiest when I'm pursuing several of my values at the same time. The strongest of my values (that I'm prepared to talk about on my blog) are adventure, digital intelligence, music, education, sport, politics and travel.
Money makes it easier to pursue lots of values at the same time, so right now I'm focused on turning my three education companies - edu20.com, edu20.org and edu20.net - into a cash cow that can fund my efforts in other areas. I really enjoy working on innovations in education, so I intend to keep running these companies for the satisfaction as well as the income.
If this comes to pass and these companies start generating significant profit, I plan on adopting a "mobile lifestyle" where I will stay for 1-2 months in different areas of the world. I would select each destination largely based on the AI/Cognitive Science conferences that are scheduled in that area. For example, if a major AI conference were scheduled in Rome, I might stay in an Italian villa for 2 months around that time. Conferences are often held in really great parts of the world, so following them almost guarantees a constant stream of interesting locations. Some of these conferences are near where my family lives, so I'll also be able to be closer to them more often. And since my businesses are web-based and my music equipment is portable, it's easy to take everything with me.
I would like the main goal for the rest of my life to be the pursuit of digital intelligence. I think this work is something that's done best by a small team, so I hope to be able to hire some super-smart team members in the next few years that I can work with to achieve this goal. I haven't decided yet whether our work will be open source, but I think it's likely that it will be.
Composing and performing music is the thing that's actually brought me the most joy in the past, so I also plan on devoting more time to music. With any luck I will blog my first new composition later this year!
Well, that's all for this post. The world can change on a dime, so it's entirely possible that none of this will come to pass. It will be interesting to see what pans out...
It's very common for people to wonder whether a brain is a kind of computer. And, by analogy, whether a mind is a program that runs in the brain. I thought I'd jot down some of my thoughts on these questions.
A human brain is comprised of over a hundred billion neurons and over a trillion glial cells. A neuron is a special type of cell that is good at receiving, processing and transmitting information. Like most other kinds of cells, a neuron has a full complement of cellular machinery including DNA and mitochondria. In addition, it has an axon and dendrites that allow it to form complex connections with large numbers of other neurons.
The neurons in a brain's cortex are organized into two crumpled sheets, one per hemisphere, each roughly 35cm x 35cm when flattened out and about 2mm thick. Within each sheet, the neurons are arranged into 6 layers. The hemispheres are connected by a couple of bundles of nerve fibers, the largest of which is called the corpus callosum.
How does a brain think? Well, it's clear that it's not born with a lot of pre-programmed knowledge. Instead, the neurons seem to embody some general purpose algorithms for learning, creating, evaluating, and acting. These algorithms are implemented on the cellular level, but only achieve their power when operating in a large network of cells.
In computer science terms, a brain is a massively parallel computer whose processors are highly interconnected and running general purpose algorithms. The algorithms contain very little specific knowledge, but instead allow a brain to learn from its environment.
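As a loose illustration of a general-purpose algorithm that contains no specific knowledge, here's a toy sketch of Hebbian learning ("cells that fire together, wire together") in Python. The network size, learning rate, and activity pattern are all arbitrary assumptions, and real neurons are vastly more complicated:

```python
N = 8                               # tiny "network"; a brain has ~10^11 neurons
w = [[0.0] * N for _ in range(N)]   # connection strengths, initially blank
lr = 0.1                            # learning rate (arbitrary)

# Present the same activity pattern repeatedly. The Hebbian update rule
# carries no knowledge of the pattern itself - it only strengthens the
# connection between any two neurons that are active at the same time.
pattern = [1, 0, 1, 0, 1, 0, 1, 0]
for _ in range(20):
    for i in range(N):
        for j in range(N):
            if i != j:
                w[i][j] += lr * pattern[i] * pattern[j]

# Co-active neurons end up strongly connected; all other weights stay at zero.
```

The update rule knows nothing about what it's shown, yet the network ends up encoding the pattern in its connection strengths - which is the flavor of "learning from the environment" I have in mind.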
Given that the brain does indeed seem to be a kind of computer, why do so many people react negatively to the analogy? The reason is that a typical computer is very simple and not like the brain at all. Most PCs have just a few processors compared with the brain's hundred billion processors. Similarly, most PC software is written by humans to do a specific task, compared with the brain's processors that run general purpose algorithms for learning.
Another difference is that a PC program is typically written in a computer language like Java or C that runs on a silicon-based CPU. The "program" that a neuron "runs" is written in DNA, proteins, and cellular structures, and its "memory" is methylation and dendritic connections.
The old adage "computers can only do what you program them to do" is thus both true and highly misleading. In the case of the brain, it has been "programmed" (by evolution) to do just a few things, including how to learn. There is no reason why we cannot program digital computers with the same algorithms and thus create machines that can learn, think and act.
One of the cool things about a brain is that its algorithms are capable of learning techniques that in turn make the brain more efficient. For example (as I mentioned in my recent post about Genius), a brain can learn how to speed read and then use this new knowledge to accelerate its own learning. These additional techniques do not change the algorithms running on individual neurons; instead, they reshape the connections between neurons so that the higher-level algorithms run at the network level. The brain is thus capable of supporting many different algorithms operating on many different layers of abstraction, all bootstrapped from the initial algorithms running at the cell level.
My guess is that the algorithms running on individual neurons are very simple. I bet that when we figure out how to create a digital mind the amount of code running on each cell will be very small, maybe just 100K or so. However, this code will be profound and will embody principles of information processing that we are only currently catching glimpses of.
This post was inspired by a conversation I had last weekend with a good friend. It was all about geniuses and how they come to be.
One thing that many geniuses seem to have in common is a very stimulating and nurturing environment from a young age. They are often taught to read by the age of 3, and are constantly challenged, encouraged, and provided with insightful ways for looking at the world.
For example, Richard Feynman's father would often take him for walks, and one day he started pointing out various birds and telling Richard their names. Then he told him the names of some of the birds in other languages like French and Japanese. Finally, he explained that Richard could learn the name of every bird in every language and still not have a clue about the important stuff, like how birds flew, lived, and communicated. The point of the lesson was that it's far more valuable to understand how something works than to simply know its name.
That particular lesson stayed with Richard for his whole life, and it enabled him to learn new subjects much faster than his peers. For example, he studied Biology for fun later in life and would join university classes as a newbie. When he started, he would ask really basic questions in class that made his fellow students snigger with embarrassment. However, he learnt very fast and was soon asking questions that were far beyond his fellow students. When asked how he managed this, he remarked that he spent his time trying to understand the workings of biology and very little time trying to remember the various complicated names for things. I'm sure there was more to it than just that, but it shows that the techniques he learnt early in life continued to serve him well.
It struck me that early techniques for learning have an effect similar to compound interest. Once you learn a technique, you can use it to pick up new techniques faster, and so on. The difference in learning speed might only be a small amount per month, but compounded over many years it can have a huge impact. For example, if you learn to read at an early age, you can start to read books that contain information about how to learn faster. One of these books might teach you how to speed read, and that new technique would in turn allow you to read further books faster!
Another factor is motivation. Animals are genetically wired to be curious and to enjoy learning. So the more you learn and understand the world, the more pleasure you get, especially if you have a support system that encourages you and provides a stimulating environment. And the smarter you get, the more likely it is that you can solve a problem that no-one has solved before, which provides even more motivation!
Feynman was very fortunate to have a family that provided him with many insightful lessons and techniques that improved the way his mind worked. The question is: what kind of advantage does someone have when they learn to read early and are provided with useful learning techniques?
Let's compare someone who learns to read at age 6 and is not given any learning techniques versus someone who learns to read at age 3 and is given plenty of techniques. I'm going to assume that due to these differences, the former can increase their intellectual capacity at 1% a month and the latter at 2% a month. What is the result of these differences by age 23?
The "normal" person increases capacity by 1.01 ^ (12 * 20) = 11x.
The "boosted" person increases capacity by 1.02 ^ (12 * 20) = 115x.
The difference becomes even more extreme if you assume a 3% a month increase:
The "extra boosted" person increase capacity by 1.03 ^ (12 * 20) = 1200x.
The result of this increased intellectual capacity is the ability to tackle problems that most people cannot even begin to comprehend. If you look at the kinds of problems that, say, theoretical physicists try to solve, they are so advanced that it would not surprise me if their mental powers were 100x greater than the average person's.
My conclusion at the end of this analysis is that early reading combined with a good dose of learning techniques provides the extra boost that, when compounded over time, provides a child with an incredible advantage.
Many people have a hard time imagining that a digital intelligence could be creative. To address this, we first need a definition of "creativity". I like this one:
Creativity is a process involving the generation of new ideas or concepts, or new associations between existing ideas or concepts.
Clearly, there is a wide spectrum of creative acts, ranging from figuring out a faster way to tie your shoelaces all the way to formulating the theory of general relativity. Most people can easily imagine a digital intelligence performing the former, but not the latter.
Before continuing, I'd like to point out that creativity doesn't require any intelligence at all. The most obvious example of this is Evolution, which works due to random mutation and natural selection. Advantageous mutations accumulated over billions of years, and the human brain is just one result of this highly "creative" process.
The main disadvantage of Evolution as a creative process is that it takes a very long time. Humans couldn't wait for a million years before randomly coming up with creative ways to hunt prey! So brains have evolved ways to speed up the creative process.
Although no-one knows the exact mechanisms for creativity, I think there are some strong clues about how it works. And funnily enough, it seems to build upon the process of Evolution!
Creativity requires at least two things:
It requires the ability to evaluate creations. For example, evolution uses natural selection to evaluate creations, weeding out bad ones and allowing good ones to survive.
It requires the ability to generate new creations. For example, evolution generates new creations via random mutation of previous creations.
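Just to show how little machinery these two requirements demand, here's a toy evolutionary loop in Python that "creates" a target word purely by random mutation and selection (the target word and alphabet are arbitrary choices for the demo):

```python
import random

random.seed(0)

TARGET = "creativity"          # the "environment" that evaluates creations
ALPHABET = "abcdefghijklmnopqrstuvwxyz"

def fitness(candidate):
    # Evaluation: how many characters already match the target.
    return sum(a == b for a, b in zip(candidate, TARGET))

def mutate(candidate):
    # Generation: random mutation of a previous creation.
    i = random.randrange(len(candidate))
    return candidate[:i] + random.choice(ALPHABET) + candidate[i + 1:]

# Start from a completely random string and let selection do the rest.
best = "".join(random.choice(ALPHABET) for _ in range(len(TARGET)))
steps = 0
while best != TARGET:
    child = mutate(best)
    if fitness(child) >= fitness(best):   # selection: the fitter creation survives
        best = child
    steps += 1
```

Random mutation plus a survival test is enough to reliably reach the target - with no intelligence anywhere in the loop, just as with Evolution itself.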
Now let's see how humans often create things. I'm going to focus on music, since I'm an amateur musician/composer (there are links to some of my songs at the top of this blog on the right).
If you give a baby a tiny piano, it will usually bang out notes randomly and have a great time. This is because pure randomness is the initial strategy that babies have for creating pretty much anything. However, now and again it might play a particular sequence of notes, or play notes in a particular rhythm that creates a greater-than-usual pleasure. This pleasure might be internally generated (since humans do seem to prefer certain mathematical arrangements of pitch/timing), or externally generated (from a parent that claps their hands and smiles when the baby plays a lucky combination).
Regardless of where the positive feedback comes from, the baby's brain tries to figure out statistically what it did that caused the feedback. This is a basic algorithm built into the brain and doesn't require any conscious effort from the baby. As the baby continues to play random notes and get occasional positive feedback, certain patterns will be tried more often. For example, if the baby realizes that keeping a constant time between notes gets positive feedback, the baby will often use this rule when generating new music.
So what's happening here is that the baby starts off with just one strategy: try stuff randomly. As its creations are evaluated via internal and external feedback, it starts to add new strategies, such as "keep the time between notes the same" and "follow each note with the note right above it".
Each strategy is like a little agent, and is continuously trying to create music according to its own rules. The music that the baby plays is the outcome of all the individual strategies fighting between themselves to generate a particular set of notes.
As the baby learns more strategies related to sequences, chords and tempo, the "try stuff randomly" strategy loses its influence over the generation of individual notes, because the other strategies generate better sequences. However, the "try stuff randomly" strategy is still useful for sequencing other strategies. For example, it might pick the "follow each note with the note right above it" strategy for 4 beats, then randomly pick the "follow each note with the note right below it" strategy for the next 4 beats. The combination of these two strategies might elicit a positive response, and a new strategy would be added that remembers that particular combination of lower-level strategies.
The brain thus starts with one strategy (randomness) and uses it to bootstrap new strategies. Successful strategies are used more, and unsuccessful strategies are used less.
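Here's a minimal sketch of this bootstrapping idea in Python. The strategies, the reward rule, and all the numbers are invented for illustration - the point is just that feedback shifts weight from pure randomness toward strategies that earn positive responses:

```python
import random

random.seed(1)

# Three toy "strategy agents" that each propose the next note (0-11, one octave):
def random_note(prev):   # the initial pure-randomness strategy
    return random.randrange(12)

def step_up(prev):       # "follow each note with the note right above it"
    return (prev + 1) % 12

def step_down(prev):     # "follow each note with the note right below it"
    return (prev - 1) % 12

# All strategies start with equal weight; positive feedback raises a strategy's
# weight, negative feedback lowers it (floored so no strategy dies entirely).
weights = {random_note: 1.0, step_up: 1.0, step_down: 1.0}

prev = 0
for _ in range(500):
    strategy = random.choices(list(weights), weights=weights.values())[0]
    note = strategy(prev)
    # Toy "listener": rewards small steps between consecutive notes.
    reward = 1.0 if abs(note - prev) <= 1 else -1.0
    weights[strategy] = max(0.1, weights[strategy] + 0.05 * reward)
    prev = note
```

After enough practice, the step-wise strategies carry far more weight than pure randomness - yet randomness is what got the whole process started.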
As time goes on, the child creates more sophisticated strategies for creating increasingly complex music. A particularly interested child will spend many hours honing their creative skills, absorbing strategies from other artists as well as creating ones of their own.
The richness of music created by a musician is thus primarily the result of the richness of the strategies that they create. Indeed, some musicians accumulate so many great strategies that they can create great music with very little effort.
The very best artists are often the ones who have pushed the envelope by using new strategies not discovered by anyone else. The Beatles are a great example of this; their music was revolutionary back in the 1960s and there was nothing else like it (I am a huge Beatles fan).
Many strategies have proven to be general and very useful for the creative process. One of the best general strategies is "challenge your assumptions". Often the barrier to solving a problem is an assumption you are making that is invalid. For example, before Einstein came up with the theory of relativity, everyone assumed that time flowed at the same rate everywhere in the Universe. Einstein challenged this assumption and was able to reformulate physics by adopting a different perspective.
Now how does this all apply to digital intelligence? Well, I think that the process behind human creativity can be extracted as a set of algorithms that could be executed by a computer. These algorithms are not the strategies for, say, writing music, but the algorithms for creating strategies and letting them interact. My guess is that the algorithms are very simple, and could probably be written down in just a few pages of computer code. However, let them operate a trillion times a second and I think you'd see some serious creativity!
To summarize, I don't think there's any magic behind creativity. Evolution has done a fantastic job, but it took billions of years. Brains have a mechanism for creating strategies that guide the creative process, still harnessing randomness but directing it towards promising areas. People with good strategies can create faster and deeper than people without them, and people with a fantastic set of strategies are considered "geniuses". But it's still the same basic underlying concept, and it's a concept that can be used by a digital intelligence.
After I wrote this piece, I did some research on the web to see if anyone had used this approach for creating music by computer. And funnily enough, I stumbled across "Songs of the Neurons" which is music created by a "Creativity Machine". According to the site, the music was generated by some specially configured neural networks that embody the principles that I outlined above.
Here's a video that showcases the capabilities of the "Creativity Machine", with a soundtrack called "Crying in Vacuum", one of the "Songs of the Neurons". Part of me can't believe that the music was created by a computer - it just sounds too good!
The long-term vision for edu2.0 has always been a networked community of educators and students. The earliest adopters of our site were individual teachers, then schools, and now school districts. So the network is broadening as it grows.
We just released the first version of our "country home pages", which will be the focal point for edu2.0 for each particular country. The URL of each country home page is the country code + .edu20.org.
Right now, each page is very simple and just shows the location of all the schools within that country that are using edu20.org. You can hover over a school to see its name or click on a school to see its home page. We have a lot of plans for country pages, but this is a good start.
Here's a screen shot of the country page for the United States:
We added more than 6,000 new users to EDU 2.0 in the last month, which makes it our biggest month so far. It's particularly encouraging because most schools are currently on vacation, so June and July are typically very slow for us.
Here's a chart of our growth, which shows us breaking the 56,000 mark a week ago: