Tuesday, July 15, 2008
On the Gaia BRAIN
I also agree more with what the people on the Edge wrote:
I see possibilities of computer-aided petabyte analysis like a microscope that enables us to see phenomena normally hidden from the human eye. Computer-aided petabyte analysis can uncover correlations previously undiscoverable by humankind, but we will still try to model the underlying principles and try to understand what is going on. Statistical facts will not eradicate systematic modeling. The beauty of proof over evidence is that it can predict farther into the future, as we saw with the laser, the memristor or black holes, all predicted years before a real-world representation was discovered.
Combined, this will lead to an unparalleled explosion of invention and innovation in the coming years.
What I see emerging in the networked globe is that we are currently creating a new brain: the Gaia Brain, or the One Machine (Kevin Kelly).
1. We have billions of computers networked many-to-many, and the storage capacity is practically unlimited. The network cables are the neurons, as Kevin Kelly sums up here.
The quality of a silicon-based intelligence will be fundamentally different from that of a protein-based one. The comparison has to take place at a higher level of abstraction.
(What is interesting, though, is my observation that anything you imagine or ideate has already been thought of and published on the internet by someone else [often only weeks or days before I thought of it]; you only have to find it.)
2. On top we have the World Wide Web with links and gigabit transfer speeds (much faster than the roughly 100 m/s signal speed of a human brain). The links and email are the axons and neurons used to transfer concepts. Concepts are stored and grouped in webpages and databases.
3. Google, del.icio.us and others are working the reinforcement cycles like axons and dendrites. Important "learnings" are reinforced by the number of links pointing to a specific piece of information, increasing its visibility across the globe (a toy sketch of this reinforcement follows after this list). There is short-term knowledge and long-term knowledge.
4. The Gaia Brain does not forget, which is a new quality. The Gaia Brain has all the information ever published available in 0.25 seconds.
5. We have all the memories a human brain can hold: visual, auditory, structured, unstructured and so on.
6. The social networks take this even one step higher by linking representations of ourselves to other concept holders.
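Here is the toy sketch promised in point 3: a simplified PageRank-style power iteration over an invented mini-web, where a page gains weight through the links pointing at it, much like repeated activation strengthens a synapse. It only illustrates the reinforcement idea; it is not Google's actual algorithm, and the link graph is made up.

```python
# Toy link-based reinforcement: pages that many other pages link to
# accumulate importance. Simplified PageRank-style power iteration
# over an invented mini-web.

links = {                      # hypothetical link graph: page -> pages it links to
    "blog": ["wiki", "news"],
    "wiki": ["news"],
    "news": ["wiki"],
    "spam": ["spam"],          # links only to itself
}

damping = 0.85
rank = {page: 1.0 / len(links) for page in links}   # start with equal weight

for _ in range(50):                                  # power iteration
    new_rank = {page: (1 - damping) / len(links) for page in links}
    for page, outgoing in links.items():
        share = damping * rank[page] / len(outgoing)
        for target in outgoing:
            new_rank[target] += share                # reinforcement via inbound links
    rank = new_rank

for page, score in sorted(rank.items(), key=lambda kv: -kv[1]):
    print(f"{page:5s} {score:.3f}")                  # "wiki" and "news" end up strongest
```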
Apparently, creating statistical representations of facts (Google Translate, Farecast) helps in conceiving a new quality that will complement the classical neural-net theories.
The new thing is that statistical approaches at the petabyte scale do not need a structured model of the underlying system in order to work. These representations produce fairly valuable results, although they will still need a human interpreter to assess the quality of the source.
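To make the "no structured model" point concrete, here is a toy sketch in Python: given a tiny invented parallel corpus, it picks a translation for a word simply by counting which target word co-occurs with it most reliably across aligned sentence pairs. No grammar, no dictionary, no linguistic model, just counting; this is nowhere near what Google Translate does at petabyte scale, but the principle is the same.

```python
from collections import Counter, defaultdict

# Tiny invented German-English "parallel corpus" (sentence-aligned pairs).
corpus = [
    ("das haus ist gross", "the house is big"),
    ("das auto ist gross", "the car is big"),
    ("das haus ist klein", "the house is small"),
    ("das auto ist klein", "the car is small"),
]

# Count co-occurrences and overall target-word frequencies.
cooc = defaultdict(Counter)   # source word -> target words seen alongside it
tgt_freq = Counter()          # target word -> number of sentences it appears in
for src_sentence, tgt_sentence in corpus:
    tgt_words = set(tgt_sentence.split())
    tgt_freq.update(tgt_words)
    for src_word in set(src_sentence.split()):
        cooc[src_word].update(tgt_words)

def translate(word):
    """Pick the target word that co-occurs with `word` most reliably,
    i.e. with the highest ratio of co-occurrence to overall frequency."""
    candidates = cooc.get(word)
    if not candidates:
        return word
    return max(candidates, key=lambda t: candidates[t] / tgt_freq[t])

print(translate("haus"))   # -> house
print(translate("gross"))  # -> big
```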
This could lead to a working Gaia Brain where we humans have no idea how it works, and it will take years and years until we do. First we would have to understand how our own brain works, then see how the Gaia Brain works, while in the meantime the new findings are applied to the Gaia Brain to improve it.
Kelly writes: To keep things going the Machine uses approximately 800 billion kilowatt hours per year, or 5% of global electricity.
If we compare this to the human brain, which consumes around 20% of the body's energy, this could be a starting point for a prediction of the Gaia Brain's growth and development. What will happen once the Gaia Brain consumes 20% of the planet's energy?
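A quick back-of-the-envelope calculation with Kelly's figures (the 800 billion kWh and the 5% share are his; the 20% extrapolation is just arithmetic):

```python
# Kelly: the One Machine uses ~800 billion kWh/year, about 5% of global
# electricity. What would a human-brain-like share of 20% look like?
machine_kwh_per_year = 800e9          # ~800 billion kWh/year
share_today = 0.05                    # 5% of global electricity
global_kwh_per_year = machine_kwh_per_year / share_today

brain_like_share = 0.20               # a human brain uses ~20% of the body's energy
machine_at_brain_share = brain_like_share * global_kwh_per_year

print(f"Global electricity:        {global_kwh_per_year:.2e} kWh/year")
print(f"Gaia Brain at a 20% share: {machine_at_brain_share:.2e} kWh/year "
      f"({machine_at_brain_share / machine_kwh_per_year:.0f}x today)")
```

So reaching the brain's 20% share would mean roughly a fourfold increase over today's consumption.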
Maybe we will see that Zeno's paradox holds true: Achilles cannot outrace the tortoise. We can use but never understand the new Brain we have created.
Eventually we will connect ourselves to the Brain with cybernetic components, so we will have immediate parallel access to any resource in the world. We will form the Gaia Brain 2.0.
That will be the Omega Point.
Wednesday, June 25, 2008
On Free
In today's Web 2.0 there is an asymmetry in place:
People who contribute are not paid for the value they generate, but the platform provider is.
In this case it's blogspot.com, i.e. Google, that gets paid, but not me.
You only need 1% of the visitors to contribute in order to create a community. That community is monetized in the six ways mentioned in Chris Anderson's article on Free (http://www.wired.com/techbiz/it/magazine/16-03/ff_free?currentPage=1), but we, who do the work and create the value, are not paid anywhere today.
Amazon, Facebook, Wikipedia, you name it.
Sunday, June 22, 2008
On Innovation
Do we invent mathematics or do we discover it?
“Mathematical objects are just concepts; they are the mental idealizations that mathematicians make, often stimulated by the appearance and seeming order of aspects of the world about us, but mental idealizations nevertheless.” (Roger Penrose)
While the mathematical language is a human invention, the objects it describes seem to have an existence of their own.
Examples are phenomena like the laser and black holes, which were calculated and predicted to exist by Einstein and Schwarzschild decades before they were actually found in the real world.
Mathematics is based on pure logic, and there is reason to believe mathematics may be true without any dependence on the universe. That means one only has to uncover it, using the current state of the body of knowledge in a given field and bringing some predisposition, manifested in one's experience and personal knowledge as well as the scientific method.
In my opinion the same holds true for any kind of invention.
One can think of any body of knowledge as the area inside a rubber band, where the rim limits everything we know on a given topic at a certain time.
So when we discover things around us, we push a little bump into the edge of the rubber band based on the knowledge we have acquired, increasing the area just a notch. If we publish it, this then becomes the new area of the rubber band. Rubber band 2.0, so to speak.
If we look at the last 2000 years, we can see that access to knowledge was long limited, but that access to information and knowledge has been steadily increasing with Gutenberg, radio, television, and finally the internet.
So in short: cultural advancement has been heavily dependent on the speed of information diffusion. And the groundbreaking inventions have always reduced the time a process needs to produce the wanted result and have it consumed by an individual: book printing, giving more people access to knowledge and education; the light bulb, extending the time available to produce and consume; the steam and combustion engines, extending the radius of operation; telegraph and telephone, diffusing information almost instantly and reducing the time to make decisions; or the internet, where anyone can consume information on anything. And that evolution is not yet complete; on the contrary, if you take the work by Ray Kurzweil, it is gaining speed fast.
I believe innovation works much the same way. You start with the body of knowledge on any given topic and then you connect some dots that were disconnected at first.
Example (I could have thought of that)
If inventions are discovered rather than invented, it is only a matter of time until someone invents a given thing. It is not a question of who will invent it, as it will be invented anyway.
A nice example is that many groundbreaking inventions have been made around the same time by different people. And there is the saying that "no invention is named after its original inventor" (### source).
The more people collaborate on innovation, the faster its output will be produced. If you can't think of it, maybe someone else has the winning idea.
If you add that all of human knowledge is available to you, that puts any individual with access to that knowledge in a historically unique position.
Tuesday, June 17, 2008
On Google IO, San Francisco
I remember back then SUN was big, and anyone who could afford it would buy some SUN servers. Only, $100k didn't buy a lot of SUNs back then.
So I assume they got the idea of building Google on consumer motherboards and hard drives, with the first Google FS on top of Linux and MySQL. It had to be cheap, so Windows and Oracle were out of the question. This decision eventually led to three distinct approaches, contrary to what datacenter people and admins of mission-critical hardware would have advised in those days.
1. The software was free, no license fees attached.
2. Failure (of hardware) is acknowledged as something that happens regularly.
3. Redundancy is not bad after all (despite what you learned about normalization in college).
As we all know, automating the handling of failure without disrupting service led to the Google cloud, which today may be the largest (rumor has it Google handles about 200 petabytes of data today) and most robust database in the world.
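To make points 2 and 3 above concrete, here is a toy sketch in Python of the principle: store each data block on several cheap machines and read from whichever replica happens to be up. Machine names, replica count and failure rates are invented for illustration; this mimics the idea, not GFS.

```python
import random

# Toy "failure is normal, redundancy is fine" storage: every block lives on
# several cheap machines, and a read simply tries replicas until one answers.
REPLICAS = 3
machines = {f"machine{i}": {} for i in range(6)}   # hypothetical cluster of 6 cheap boxes

def store(block_id, data):
    """Write the block to REPLICAS randomly chosen machines."""
    for name in random.sample(list(machines), REPLICAS):
        machines[name][block_id] = data

def read(block_id, failure_rate=0.3, attempts=5):
    """Read the block, tolerating machines that happen to be down right now."""
    for _ in range(attempts):
        for name, disk in machines.items():
            if random.random() < failure_rate:
                continue                            # this machine is down; try the next
            if block_id in disk:
                return disk[block_id]
    raise IOError("all replicas unavailable (retry later)")

store("chunk-42", b"some crawled web pages")
print(read("chunk-42"))                             # still served despite flaky machines
```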
On top they parallelized MySQL with BigTable and MapReduce, making it also the fastest database on the web. According to Jeff Dean, around 1000 servers are hit in parallel whenever they receive a query. One half of those servers looks up the links, the other half looks up the documents and assembles the result with text snippets based on that query; the first 10 hits are returned in a quarter of a second.
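To illustrate the scatter-gather shape of such a query (fan out to many shards in parallel, merge the partial results, return the top hits), here is a toy sketch in Python. The shards, documents and scoring are invented; it mimics the idea described above, not Google's actual serving system.

```python
from concurrent.futures import ThreadPoolExecutor

# Toy scatter-gather search: fan the query out to all index shards in parallel,
# let each shard score its local documents, then merge and keep the top hits.
SHARDS = [
    {"doc1": "the gaia brain does not forget", "doc2": "kevin kelly on the one machine"},
    {"doc3": "statistical models at petabyte scale", "doc4": "the gaia brain and energy"},
    {"doc5": "map reduce and big table", "doc6": "brain like reinforcement via links"},
]

def search_shard(shard, query):
    """Score each document in one shard by counting matching query terms."""
    terms = query.lower().split()
    hits = []
    for doc_id, text in shard.items():
        score = sum(text.count(term) for term in terms)
        if score > 0:
            hits.append((score, doc_id))
    return hits

def search(query, top_k=10):
    with ThreadPoolExecutor(max_workers=len(SHARDS)) as pool:
        partials = pool.map(lambda s: search_shard(s, query), SHARDS)  # scatter
    merged = [hit for partial in partials for hit in partial]          # gather
    return sorted(merged, reverse=True)[:top_k]                        # merge + top 10

print(search("gaia brain"))   # -> [(2, 'doc4'), (2, 'doc1'), (1, 'doc6')]
```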
So what do you do if you have the largest, fastest and most robust database in the world? You apply to software the same principles that made you successful with hardware in the first place.
These are:
1. It's free (open source). Google has a marginal cost of zero for additional query execution and hard drive space.
2. Redundancy – There will be a lot of projects around the Google offerings (the APIs, App Engine and Google Apps) that basically do the same thing.
3. Failure – A lot of those open source projects will be abandoned after a while and only a few will make it, but you only need a few big ones like Google Earth or Gmail.
Any project based on Google open source makes Google stronger.
According to an analysis by Don Dodge of Microsoft and Bradley Horowitz of AltaVista (http://dondodge.typepad.com/the_next_big_thing/2008/06/social-networks-1-rule-or-the-community-pyramid.html), on a network-effect participation site only one percent of the visitors actively contribute over a longer period. 10% chip in a little effort, like commenting, and the vast majority only consumes. That's all you need to have a globally successful web service. According to him, those numbers are consistent across Wikipedia, Facebook and others.
If you assume that every Google developer is a member of the 1% keeping the important projects alive, the open source community would be the 10%, with a spill-over into the 1%. That is a leverage of 1 to 100 for Google, at zero cost, to fuel the ad engine.
16,000 developers at Google's core
160,000 developers working with it
1,600,000 consumers.
Google needs to fuel the ad engine, so anything that serves ads will do.
You have two options to extend your reach. One is to build new channels for users to consume the offerings; that is what happens when you add mobile (Android), offline capabilities and translation.
The other is to extend the reach within each channel, with Earth, Apps, the APIs, and all those Google Labs contenders. Now that Google has opened all the APIs for read/write access, the community will work through the permutations of coupling each service with every other service. The growth is exponential.
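To give a feeling for how fast that space grows, here is a small sketch. The service list is invented; the counting is the point: pairwise couplings grow quadratically, and mashups of arbitrary subsets of services grow exponentially.

```python
from itertools import combinations

# Invented list of open read/write APIs; only the counting matters here.
services = ["search", "maps", "mail", "calendar", "docs", "translate", "earth", "app_engine"]

n = len(services)
pairs = len(list(combinations(services, 2)))   # couplings of exactly two services
mashups = 2**n - n - 1                         # subsets of size >= 2: any multi-service mashup

print(f"{n} services -> {pairs} pairwise couplings, {mashups} possible multi-service mashups")
# 8 services -> 28 pairwise couplings, 247 possible multi-service mashups
```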
So if that bet wins, we can only imagine where Google is headed. If they can keep their logistics up and their HDD latency competitive with the upcoming flash drives, I predict that the first functional AI on Earth will rely on Google. So Mr. Kurzweil scores again.