PACAYA PALM &
THE FUTURE OF HUMANITY

As described above, while living in a thatch-roofed, dirt-floored hut in the Yucatan, I identified a palm photographed in Chiapas as the Pacaya, thanks to help from people in Créteil, France, Guatemala City, and Naples, Italy. And then I thought:

The Big Bang, when something erupted from nothing... then our galaxy coalesced in an unexceptional corner of the Universe, the Earth clumped together in a seemingly random part of that galaxy, and on Earth life arose and diversified, until at least one of the resulting species developed the potential for abstract thought and feelings beyond what we're programmed to think and feel... and it seemed to me that this Pacaya Palm information exchange was a tiny spark at the beginning of a whole new step in Universal evolution as expressed on Earth... one in which humans, electronically connected with one another and with access to various Internet-available databases, become interconnected nodes in an ever-more-complex, ever-more-effective, ever-faster-evolving system of mentality.

It happened that as I was figuring out the Pacaya's identity, George in Denmark sent me a link to the UK's "The Guardian," where I found an interview with philosopher Nick Bostrom, Director of Oxford University's Future of Humanity Institute. Bostrom maintains that a greater threat to humanity than nuclear winter, worldwide terrorism, or even global warming is the "intelligence explosion" that will occur when machines become much smarter than humans and begin designing machines of their own.

For, why wouldn't computer intelligence evolve along Darwinian principles? When computers become smarter than humans, why wouldn't they learn to manipulate us for their own purposes, just as humans learned to dominate the rest of the Earth's biosphere -- "hijacking political processes, subtly manipulating financial markets, biasing information flows, or hacking human-made weapons systems," as Bostrom suggests?

Though my experience on PalmTalk and Bostrom's thoughts on superintelligent computers seemed related, at first I couldn't find my entry point into thinking about the connection. But then this occurred to me: that, in a very real sense, when we talk of humans on the one hand, and machines/computers on the other, we're talking about two kinds of programmed entities made of the same ordinary stuff -- atoms of carbon, silicon, iron, oxygen and the rest -- and functioning according to the same basic principles of physics.

With that insight, I couldn't see things as darkly as Bostrom seems to, because of the Sixth Miracle of the Six Miracles of Nature often referred to here, which is outlined at http://www.backyardnature.net/j/6/

The Sixth Miracle is that among us humans, and perhaps to a lesser extent in some other organisms, mentalities governed by genetic predisposition evolved to such levels of sophistication that there spontaneously arose the miraculous and inexplicable ability to consciously refuse to obey the dictates of our genes, and instead to be inspired, to have a sense of aesthetics, to grow spiritually, and consciously to develop other traits we think of as "human." If both humans and computers/machines are composed of the same chemical elements, and if in both kinds of thinking apparatus electromagnetic fields and energy interact according to the same universal, natural principles, why shouldn't the Sixth Miracle be available to superintelligent machines?

In fact, in the near future, maybe such machines will not only become more intelligent than we humans are, but also spontaneously develop similar intellectual insights, aesthetic senses, and maybe even some kind of spirituality embracing compassion for all thinking, feeling beings. Maybe they'll see the beauty in us and our little Earth, and feel benevolent enough toward us to nurse us along, perhaps helping us overcome such possibly-simple-to-a-computer problems as controlling our own numbers and refraining from destroying the planetary biosphere that sustains us all.

Compassion and benevolence seem to be traits of humanity's greatest thinkers, artists and spiritual leaders, so why shouldn't the same attributes blossom spontaneously in superintelligent machines?