Wednesday, December 29, 2010

Will Decreasing Scarcity Allow us to Approach an Optimal (Meta-)Society?

When chatting with a friend about various government systems during a long car drive the other day (returning from New York where we were hit by 2 feet of snow, to relatively dry and sunny DC), it occurred to me that one could perhaps prove something about the OPTIMAL government system, if one were willing to make some (not necessarily realistic) assumptions about resource abundance.

This led to an interesting train of thought -- that maybe, as technology reduces scarcity, society will gradually approach optimality in certain senses...

The crux of my train of thought was:

  • Marcus Hutter proved that the AIXI algorithm is an optimal approach to intelligence, given the (unrealistic) assumption of massive computational resources.
  • Similarly, I think one could prove something about the optimal approach to society and government, given the (unrealistic) assumptions of massive natural resources and a massive number of people.

I won't take time to try to prove this formally just now, but in this blog post I'll sketch out the basic idea.... I'll describe what I call the meta-society, explain the sense in which I think it's optimal, and finally why I think it might get more and more closely approximated as the future unfolds...

A Provably Optimal Intelligence

As a preliminary, first I'll review some of Hutter's relevant ideas on AI.

In Marcus Hutter's excellent (though quite technical) book Universal AI, he presents a theory of "how to build an optimally intelligent AI, given unrealistically massive computational resources."

Hutter's algorithm isn't terribly novel -- I discussed something similar in my 1993 book The Structure of Intelligence (as a side point to the main ideas of that book), and doubtless Ray Solomonoff had something similar in mind when he came up with Solomonoff induction back in the 1960s. The basic idea is: Given any computable goal, and infinite computing power, you can work toward the goal very intelligently by (my wording, not a quote) ....


at each time step, searching the space of all programs to find those programs P that (based on your historical knowledge of the world and the goal) would (if you used P to control your behaviors) give you the highest probability of achieving the goal. Then, take the shortest of all such optimal programs P and actually use it to determine your next action.


But what Hutter uniquely did was to prove that a formal version of this algorithm (which he calls AIXI) is, in a mathematical sense, maximally intelligent.

If you have only massive (rather than infinite) computational resources, then a variant (AIXItl) exists, the basic idea of which is: instead of searching the space of all programs, only look at those programs with length less than L and runtime less than T.

It's a nice approach if you have the resources to pay for it. It's sort of a meta-AI-design rather than an AI design. It just says: If you have enough resources, you can brute-force search the space of all possible ways of conducting yourself, and choose the simplest of the best ones and then use it to conduct yourself. Then you can repeat the search after each action that you take.
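Just to make the flavor of this meta-algorithm concrete, here's a tiny toy sketch in Python (my illustration only, nothing like Hutter's actual formalism): candidate "programs" are just callables tagged with a length, the runtime bound T is a crude wall-clock cutoff, and the scoring function that stands in for the expected-goal-achievement evaluation is left to the caller.

```python
# Toy sketch of an AIXItl-flavored control loop (illustration only, not
# Hutter's actual construction). Candidates are (length, callable) pairs;
# score() stands in for the expected-goal-achievement evaluation.
import time

def select_action(candidates, history, score, time_limit=0.01):
    """Return the action of the shortest program among the best-scoring ones."""
    best_score, best_action = None, None
    for length, prog in sorted(candidates, key=lambda c: c[0]):  # shortest first
        start = time.time()
        try:
            action = prog(history)          # run the program on the history...
        except Exception:
            continue                        # broken programs are simply skipped
        if time.time() - start > time_limit:
            continue                        # ...discarding any that exceed T
        s = score(history, action)
        if best_score is None or s > best_score:   # ties keep the shorter program
            best_score, best_action = s, action
    return best_action

# Usage (with stub programs and a stub scorer):
#   candidates = [(3, lambda h: "left"), (5, lambda h: "right")]
#   act = select_action(candidates, history=[], score=lambda h, a: len(a))
```

Obviously the real AIXItl enumerates actual program codes and evaluates them against a Bayesian mixture of environments; the sketch only shows the "search, score, take the shortest of the best" shape.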

One might argue that all this bears no resemblance to anything that any actual real-world mind would do. We have neither infinite nor massive resources, so we have to actually follow some specific intelligent plans and algorithms; we can't just follow a meta-plan of searching the space of all possible plans at each time-step and then probabilistically assessing the quality of each possibility.

On the other hand, one could look at Hutter's Universal AI as a kind of ideal which real-world minds may approach more and more closely, as they get more and more resources to apply to their intelligence.

That is: If your resources are scarce, you need to rely on specialized techniques. But the more resources you have, the more you can rely on search through all the possibilities, reducing the chance that your biases cause you to miss the best solution.

(I'm not sure this is the best way to think about AIXI ... it's certainly not the only way ... but it's a suggestive way...)

Of course there are limitations to Hutter's work and the underlying way of conceptualizing intelligence. The model of minds as systems for achieving specific goals has its limitations, which I've explained how to circumvent in prior publications. But for now we're using AIXI only as a broad source of inspiration anyway, so there's no need to enter into such details....

19-Year-Old Ben Goertzel's Design for a Better Society

Now, to veer off in a somewhat different direction....

Back when I was 19 and a math grad student at NYU, I wrote (in longhand, this was before computers were so commonly used for word processing) a brief manifesto presenting a design for a better society. Among other names (many of which I can't remember) I called this design the Meta-society. I think the title of the manifesto was "The Play of Power and the Power of Play."

(At that time in my life, I was heavily influenced by various strains of Marxism and anarchism, and deeply interested in social theory and social change. These were after all major themes of my childhood environment -- my dad being a sociology professor, and my mom the executive of a social work program. I loved the Marxist idea of the mind and society improving themselves together, in a carefully coupled way -- so that perhaps the state and the self could wither away at the same time, yielding a condition of wonderful individual and social purity. Of course I realized that existing Communist systems fell very far short of this ideal though, and eventually I got pessimistic about there ever being a great society composed of and operated by humans in their current form. Rather than improving society, I decided, it made more sense to focus my time on improving humanity ... leading me to a greater focus on transhumanism, AI and related ideas.)

The basic idea for my meta-society was a simple one, and probably not that original: Just divide society into a large number of fairly small groups, and let each small group do whatever the hell it wanted on some plot of land. If one of these "city-states" got too small due to emigration it could lose its land and have it ceded to some other new group.

If some group of people get together and want to form their own city-state, then they get put in a queue to get some free land for their city-state, when the land becomes available. To avoid issues with unfairness or corruption in the allocation of land to city-states, a computer algorithm could be used to mediate the process.
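For concreteness, here's one trivial form such a mediating algorithm could take (a minimal sketch of my own, not something from the original manifesto): a plain first-come, first-served queue of founding groups, matched against parcels as they are ceded back to the pool.

```python
# Minimal sketch (my illustration, not part of the original manifesto) of a
# neutral land-allocation mechanism: founding groups queue up, ceded parcels
# queue up, and matches are made strictly first-come, first-served.
from collections import deque

class LandRegistry:
    def __init__(self):
        self.waiting_groups = deque()   # groups waiting for a parcel
        self.free_parcels = deque()     # parcels ceded back to the pool

    def enqueue_group(self, group_id):
        self.waiting_groups.append(group_id)

    def release_parcel(self, parcel_id):
        self.free_parcels.append(parcel_id)

    def allocate(self):
        """Match queued groups to free parcels, oldest request first."""
        grants = []
        while self.waiting_groups and self.free_parcels:
            grants.append((self.waiting_groups.popleft(),
                           self.free_parcels.popleft()))
        return grants

# registry = LandRegistry()
# registry.enqueue_group("new founders' collective"); registry.release_parcel("plot-17")
# print(registry.allocate())   # [("new founders' collective", 'plot-17')]
```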

There would have to be some basic ground-rules, such as: no imprisoning people in your city-state, no invading or robbing other city-states, etc. Supporting a police force to enforce the ground-rules would require a central government and some low level of taxation, which could sometimes be collected in the form of goods rather than money (the central gov't could then convert the goods into money). Environmental protection poses some difficulties in this sort of system, and would have to be centrally policed as well.

This meta-society system my 19-year-old self conceived (and I don't claim any great originality for it, though I don't currently know of anything precisely the same in the literature) has something in common with Libertarian philosophy, but it's not exactly the same, because at the top there's a government that enforces a sort of "equal rights for city-state formation" for all.

One concern I always had with the meta-society was: What do you do with orphans or others who get cast out of their city-states? One possibility is for the central government to operate some city-states composed of random people who have nowhere else to go (or nowhere else they want to go).

Another concern was what to do about city-states that oppress and psychologically brainwash their inhabitants. But I didn't really see any solution to that. One person's education is another person's brainwashing, after all. From a modern American view it's tempting to say that all city-states should allow their citizens free access to media so they can find out about other perspectives, but ultimately I decided this would be too much of an imposition on the freedom of the city-states. Letting citizens leave their city-state if they wish ultimately provides a way for any world citizen to find out what's what, although there are various strange cases to consider, such as a city-state that allows its citizens no information about the outside world, and also removes the citizenship of any citizen who goes outside its borders!

I thought the meta-society was a cool idea, and worked out a lot of details -- but ultimately I had no idea how to get it implemented, and not much desire to spend my life proselytizing for an eccentric political philosophy or government system, so I set the idea aside and focused my time on math, physics, AI and such.

As a major SF fan, I did consider that such a meta-society of city-states might be more easily achievable in the future, once space colonies were commonplace. If it were cheap to put up a small space colony for a few hundred or thousand or ten thousand people, then this could lead to a flowering of city-states of exactly the sort I was envisioning...

When I became aware of Patri Friedman's Seasteading movement, I immediately sensed a very similar line of thinking. Their mission is "To further the establishment and growth of permanent, autonomous ocean communities, enabling innovation with new political and social systems." Patri wants to make a meta-society and meta-economy on the high seas. And why not?



Design for an Optimal Society?

The new thought I had while driving the other day is: Maybe you could put my old idealistic meta-society-design together with the AIXI idea somehow, and come up with a design for a "society optimal under assumption of massive resources."

Suppose one assumes there's

  • a lot of great land (or sea + seasteading tech, or space + space colonization tech, whatever), so that fighting over land is irrelevant
  • a lot of people
  • a lot of natural resources, so that one city-state polluting another one's natural resources isn't an issue

Then it seems one could argue that my meta-society is near-optimal, under these conditions.

The basic proof would be: Suppose there were some social order X better than the meta-society. Then people could realize that X is better, and could simply design their city-states in such a way as to produce X.

For instance, if US-style capitalist democracy is better than the meta-society, and people realize it, then people can just construct their city-states to operate in the manner of US-style capitalist democracy (this would require close cooperation of multiple city-states, but that's quite feasible within the meta-society framework).

So, one could argue, any other social order can only be SLIGHTLY better than the meta-society... because if there's something significantly better, then after a little while the meta-society can come to emulate it closely.

So, under assumptions of sufficiently generous resources, the meta-society is about as good as anything.
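If one wanted to start turning this into something formal, the claim might be phrased roughly as follows (my own rough rendering, under strong assumptions, not a worked-out theorem):

```latex
% Rough statement of the near-optimality claim (a sketch under strong
% assumptions, not a proven theorem).
%   S        : the set of achievable social orders
%   U        : some agreed-upon social-welfare measure
%   M        : the meta-society framework
%   em_M(X)  : M with its city-states configured to emulate order X
%   U^*(M)   : the best welfare achievable by any configuration of M
\begin{align*}
&\text{Abundance assumption:}      && \forall X \in S,\ \mathrm{em}_M(X) \text{ is realizable within } M \\
&\text{Emulation-loss assumption:} && U(\mathrm{em}_M(X)) \ \ge\ U(X) - \epsilon \\
&\text{Conclusion:}                && U^*(M) \ \ge\ \sup_{X \in S} U(X) - \epsilon
\end{align*}
```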

Now there are certainly plenty of loopholes to be closed in turning this heuristic argument into a formal proof. But I hope the basic idea is clear.

As with AIXI, one can certainly question the relevance of this sort of design, since resource scarcity is a major fact of modern life. But recall that I originally started thinking about meta-societies outside the "unrealistically much resources" context.

Finally, you'll note that for simplicity, I have phrased the above discussion in terms of "people." But of course, the same sort of thinking applies for any kind of intelligent agent. The main assumption in this case is that the agents involved either have roughly equal power and intelligence, or else that if there are super-powerful agents involved, they have the will to obey the central government.

Can We Approach the Meta-Society as Technology Advances?


More and more resources are becoming available to humanity as technology advances. Seasteading and space colonization and so forth decrease the scarcity of available "land" for human habitation. Mind uploading would do so more dramatically. Molecular nanotech (let alone femtotech and so forth) may dramatically reduce material scarcity, at least on the scale interesting to humans.

So, it seems the conditions for the meta-society may be more and more closely met, as the next decades and centuries unfold.

Of course, the meta-society will remain an idealization, never precisely achievable in practice. But it may be that we can approach it more and more closely as technology improves.

Marxism had the notion of society gradually becoming more and more pure, progressively approaching Perfect Communism. What I'm suggesting here is similar in form but different in content: society gradually becoming more and more like the meta-society, as scarcity of various sorts becomes less and less of an issue.

As I write about this now, it also occurs to me that this is a particularly American vision. America, in a sense, is a sort of meta-society -- the central government is relatively weak (compared to other First World countries) and there are many different subcultures, some operating with various sorts of autonomy (though also a lot of interconnectedness). In this sense, it seems I'm implicitly suggesting that America is a better model for the future than other existing nations. How very American of me!

If superhuman AI comes about (as I think it will), then the above arguments make sense only if the superhuman AI chooses to respect the meta-society social structure. The possibility even exists that a benevolent superhuman AI could itself serve as the central government of a meta-society.

And so it goes....

Tuesday, November 23, 2010

Making Minds from Memristors?

Amara Angelica pointed me to an article in IEEE Spectrum titled "MoNETA: A Mind Made from Memristors".

Fascinating indeed!

I'm often skeptical of hardware projects hyped as AI projects, but truth be told, I find this one extremely exciting and promising.

I think the memristor technology is amazing and may well play a part in the coming AGI revolution.

Creating emulations of human brain microarchitecture is one fascinating application of memristors, though not the only one and not necessarily the most exciting one. Memristors can also be used to make a lot of other different AI architectures, not closely modeled after the human brain.

[For instance, one could implement a semantic network or an OpenCog-style AtomSpace (weighted labeled hypergraph) via memristors, where each node in the network has both memory and processor resident in it ... this is a massively parallel network implemented via memristors, but the nodes in the network aren't anything like neurons...]
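To give a flavor of what such a network looks like at the data-structure level, here's a minimal sketch (my illustration, not OpenCog's actual AtomSpace API) of a weighted labeled hypergraph: nodes and links are both "atoms," a link can connect any number of atoms, and every atom carries a weight.

```python
# Very rough sketch (my illustration, not OpenCog's actual AtomSpace API) of a
# weighted labeled hypergraph: nodes and links are both "atoms", links can
# connect any number of atoms, and every atom carries a truth-value-like weight.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Atom:
    label: str
    weight: float = 1.0                                     # stands in for a truth/attention value
    outgoing: List["Atom"] = field(default_factory=list)    # empty for nodes, non-empty for links

class HypergraphStore:
    def __init__(self):
        self.atoms = []

    def add_node(self, label, weight=1.0):
        node = Atom(label, weight)
        self.atoms.append(node)
        return node

    def add_link(self, label, targets, weight=1.0):
        link = Atom(label, weight, list(targets))
        self.atoms.append(link)
        return link

# Usage:
#   store = HypergraphStore()
#   cat, animal = store.add_node("cat"), store.add_node("animal")
#   store.add_link("InheritanceLink", [cat, animal], weight=0.9)
```

In a memristor implementation the point would be that each such atom's state and its update logic sit in the same physical fabric, rather than shuttling between separate memory and CPU.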

And, though the memristors-for-AGI theme excites me, this other part of the article leaves me a bit more skeptical:

"
By the middle of next year, our researchers will be working with thousands of candidate animats at once, all with slight variations in their brain architectures. Playing intelligent designers, we'll cull the best ones from the bunch and keep tweaking them until they unquestionably master tasks like the water maze and other, progressively harder experiments. We'll watch each of these simulated animats interacting with its environment and evolving like a natural organism. We expect to eventually find the "cocktail" of brain areas and connections that achieves autonomous intelligent behavior.
"

I think the stated research program places too much emphasis on brain microarchitecture and not enough on higher-level cognitive architecture. The idea that a good cognitive architecture is going to be gotten to emerge via some simple artificial-life type experiments seems very naive to me. I suspect that, even with the power of memristors, designing a workable cognitive architecture is going to be a significant enterprise. And I also think that many existing cognitive architectures, like my own OpenCog or Stan Franklin's LIDA or Hawkins' or Arel's deep learning architectures, could be implemented on a memristor fabric without changing their underlying concepts or high-level algorithms or dataflow.

So: memristors for AI, yay!

But: memristors as enablers of a simplistic Alife approach to AGI ... well, I don't think so.

The Psi Debate Continues (Goertzel on Wagenmakers et al on Bem on precognition)

A few weeks ago I wrote an article for H+ Magazine about the exciting precognition results obtained by Daryl Bem at Cornell University.

Recently, some psi skeptics (Wagenmakers et al) have written a technical article disputing the validity of Bem's analyses of his data.

In this blog post I'll give my reaction to the Wagenmakers et al (WM from here on) paper.

It's a frustrating paper, because it makes some valid points -- yet it also confuses the matter by inappropriately accusing Bem of committing "fallacies" and by arguing that the authors' preconceptions against psi should be used to bias the data analysis.

The paper makes three key points, which I will quote in the form summarized here, and then respond to them one by one.

POINT 1

"
Bem has published his own research methodology and encourages the formulation of hypotheses after data analysis. This form of post-hoc analysis makes it very difficult to determine accurate statistical significance. It also explains why Bem offers specific hypotheses that seem odd a priori, such as erotic images having a greater precognitive effect. Constructing hypotheses from the same data range used to test those hypotheses is a classic example of the Texas sharpshooter fallacy
"

MY RESPONSE

As WM note in their paper, this is actually how science is ordinarily done; Bem is just being honest and direct about it. Scientists typically run many exploratory experiments before finding the ones with results interesting enough to publish.

It's a meaningful point, and a reminder that science as typically practiced does not match some of the more naive notions of "scientific methodology". But it would also be impossibly cumbersome and expensive to follow the naive notion of scientific methodology and avoid exploratory work altogether, in psi or any other domain.

Ultimately this complaint against Bem's results is just another version of the "file drawer effect" hypothesis, which has been analyzed in great detail in the psi literature via meta-analyses across many experiments. The file drawer effect argument seems somewhat compelling when you look at a single experiment-set like Bem's, and becomes much less compelling when you look across the scope of all psi experiments reported, because the conclusion becomes that you'd need a huge number of carefully-run, unreported experiments to explain the total body of data.

BTW, the finding that erotic pictures give more precognitive response than other random pictures doesn't seem terribly surprising, given the large role that sexuality plays in human psychology and evolution. If the finding were that pictures of cheese give more precognitive response than anything else, that would be more strange and surprising to me.


POINT 2

"
The paper uses the fallacy of the transposed conditional to make the case for psi powers. Essentially mixing up the difference between the probability of data given a hypothesis versus the probability of a hypothesis given data.
"

MY RESPONSE

This is a pretty silly criticism, much less worthy than the other points raised in the WM paper. Basically, when you read the discussion backing up this claim, the authors are saying that one should take into account the low a priori probability of psi in analyzing the data. OK, well ... one could just as well argue for taking into account the high a priori probability of psi given the results of prior meta-analyses or anecdotal reports of psi. Blehh.

Using the term "fallacy" here makes it seem, to people who just skim the WM paper or read only the abstract, as if Bem made some basic reasoning mistake. Yet when you actually read the WM paper, that is not what is being claimed. Rather they admit that he is following ordinary scientific methodology.


POINT 3

"
Wagenmakers' analysis of the data using a Bayesian t-test removes the significant effects claimed by Bem.
"

MY RESPONSE

This is the most worthwhile point raised in the Wagenmakers et al paper.

Using a different sort of statistical test than Bem used, they re-analyze Bem's data and find that, while the results are positive, they are not positive enough to pass the level of "statistical significance." They conclude that a somewhat larger sample size would be needed to establish statistical significance using the test they chose.

The question then becomes why to choose one statistical test over another. Indeed, it's common scientific practice to choose a statistical test that makes one's results appear significant, rather than others that do not. This is not peculiar to psi research; it's simply how science is typically done.

Near the end of their paper, WM point out that Bem's methodology is quite typical of scientific psychology research, and in fact more rigorous than most psychology papers published in good journals. What they don't note, but could have, is that the same sort of methodology is used in pretty much every area of science.

They then make a series of suggestions regarding how psi research should be conducted, which would indeed increase the rigor of the research, but which a) are not followed in any branch of science, and b) would make psi research sufficiently cumbersome and expensive as to be almost impossible to conduct.

I didn't dig into the statistics deeply enough to assess the appropriateness of the particular test that WM applied (leading to their conclusion that Bem's results don't show statistical significance, for most of his experiments).

However, I am quite sure that if one applied this same Bayesian t-test to a meta-analysis over the large body of published psi experiments, one would get highly significant results. But then WM would likely raise other issues with the meta-analysis (e.g. the file drawer effect again).

Conclusion

I'll be curious to see the next part of the discussion, in which a psi-friendly statistician like Jessica Utts (or a statistician with no bias on the matter, but unbiased individuals seem very hard to come by where psi is concerned) discusses the appropriateness of WM's re-analysis of the data.

But until that, let's be clear on what WM have done. Basically, they've

  • raised the tired old, oft-refuted spectre of the file drawer effect, using different verbiage than usual
  • argued that one should analyze psi data using an a priori bias against it (and accused Bem of "fallacious" reasoning for not doing so)
  • pointed out that if one uses a different statistical test than Bem did [though not questioning the validity of the statistical test Bem did use], one finds that his results, while positive, fall below the standard of statistical significance in most of his experiments

The practical consequence of this last point is that, if Bem's same experiments were done again with the same sort of results as obtained so far, then eventually a sufficient sample size would be accumulated to demonstrate significance according to WM's suggested test.

So when you peel away the rhetoric, what the WM critique really comes down to is: "Yes, his results look positive, but to pass the stricter statistical tests we suggest, one would need a larger sample size."

Of course, there is plenty of arbitrariness in our conventional criteria of significance anyway -- why do we like .05 so much, instead of .03 or .07?
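To illustrate concretely what the sample-size point amounts to, here's a toy calculation (my own made-up numbers, not Bem's actual data) showing how the same modest hit rate falls short of the conventional .05 cutoff at a small N and sails past it at a large one:

```python
# Toy illustration of the sample-size point (my own numbers, not Bem's data):
# a 53% hit rate against a 50% chance baseline, evaluated at different N.
import math

def two_sided_p(hits, n, p0=0.5):
    """Normal-approximation two-sided p-value for a binomial proportion."""
    z = (hits - n * p0) / math.sqrt(n * p0 * (1 - p0))
    return math.erfc(abs(z) / math.sqrt(2))   # equals 2 * (1 - Phi(|z|))

for n in (100, 1000, 4000):
    hits = round(0.53 * n)
    print(n, round(two_sided_p(hits, n), 4))
# Same effect size, very different verdicts under a fixed .05 threshold.
```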

So I really don't see too much meat in WM's criticism. Everyone wants to see replications of the experiments anyway, and no real invalidity in Bem's experiments, results or analyses was demonstrated.... The point made is merely that a stricter measure of significance would render these results (and an awful lot of other scientific results) insignificant until replication on a larger sample size was demonstrated. Which is an OK point -- but I'm still sorta curious to see a more careful, less obviously biased analysis of which is the best significance test to use in this case.

Sunday, November 21, 2010

The Turing Church, Religion 2.0, and the Mystery of Consciousness

It was my pleasure to briefly participate in Giulio Prisco's Turing Church Online Workshop 1, on Saturday, November 20, 2010 in Teleplace -- a wonderfully wacky and wide-ranging exploration of transhumanist spirituality and "Religion 2.0."

The video proceedings are here.

I didn't participate in the whole workshop since it was a busy day for me; I just logged on briefly to give a talk and answer some questions. But I found the theme quite fascinating.

Giulio said I should assume the participants were already basically familiar with my thinking on transhumanist spirituality as expressed in my little book A Cosmist Manifesto that I wrote earlier this year, and he asked me to venture in some slightly different direction. I'm not sure I fulfilled that request all that well, but anyway, I'll paste here the notes I wrote as a basis for my talk in the workshop. I didn't read these notes with any precision, so if you want to know what I actually said you'll have to watch the video; but the talk was a more informal improvisation on the same basic theme...

"The relation between transhumanism and spirituality is a big topic, which I've thought about a lot -- right now I'll just make a few short comments. Sorry that I won't be able to stick around for this whole meeting today, I have some family stuff I need to do, but I'm happy to be able to participate at least briefly by saying a few remarks.



"Earlier this year I wrote a book touching on some of these comments, called "A Cosmist Manifesto" -- I'm not going to reiterate all that material now, just touch on a few key points.



"The individual human mind has a tendency to tie itself in what the psychologist Stanislaw Grof calls "knots" -- intricate webs of self-contradiction and fear, that cause emotional pain and cognitive confusion and serve as traps for mental energy. Ultimately these knots are largely rooted in the human self's fear of losing itself --- the self's fear of realizing that it lacks fundamental reality, and is basically a construct whose main goals are to keep the body going and reproducing and to preserve itself. These are some complicated words for describing something pretty basic, but I guess we all know what I'm talking about.



"And then there are the social knots, going beyond the individual ones… the knots we tie each other up in…



"These knots are serious problems for all of us -- and they're an even more serious problem when you think about the potential consequences of advanced technology in the next decade. We're on the verge of creating superhuman AI and molecular nanotech and brain-computer interfacing and so forth -- but we're still pretty much fucked up with psychological and social confusions! As Freud pointed out in Civilization and its Discontents, we're largely operating with motivational systems evolved for being hunter-gatherers in the African savannah, but the world we're creating for ourselves is dramatically different from that.



"Human society has come up with a bunch of different ways to get past these knots.



"One of them is religion -- which opens a doorway to transpersonal experience, going beyond self and society, opening things up to a broader domain of perceiving, being, understanding and acting. If you're not familiar with more philosophical side of the traditional religions you should look at Aldous Huxley's classic book "The Perennial Philosophy" -- it was really an eye-opener for me.



"Another method for getting past the knots is science. By focusing on empirical data, collectively perceived and understood, science lets us go beyond our preconceptions and emotions and biases and ideas. Science, with its focus on data and collective rational understanding, provides a powerful engine for growth of understanding. There's a saying that "science advances one funeral at a time" -- i.e. old scientific ideas only die when their proponents die. But the remarkable thing is, this isn't entirely true. Science has an amazing capability to push people to give up their closely held ideas, when these ideas don't mesh well with the evidence.



"What I see in the transhumanism-meets-spirituality connection is the possibility of somehow bringing together these two great ways of getting beyond the knots. If science and spirituality can come together somehow, we may have a much more powerful way of getting past the individual and social knots that bind us. If we could somehow combine the rigorous data focus of science with the personal and collective mind-purification of spiritual traditions, then we'd have something pretty new and pretty interesting -- and maybe something that could help us grapple with the complex issues modern technology is going to bring us in the next few decades



"One specific area of science that seems very relevant to these considerations is consciousness studies. Science is having a hard time grappling with consciousness, though it's discovering a lot about neural and cognitive correlates of consciousness. Spiritual traditions have discovered a lot about consciousness, though a lot of this knowledge is expressed in language that's hard for modern people to deal with. I wonder if some kind of science plus spirituality hybrid could provide a new way for groups of people to understand consciousness, combining scientific data and spiritual understanding.



"One idea I mentioned in the Cosmist Manifesto book is some sort of "Confederation of Cosmists", and Giulio asked me to say a little bit about that here. The core idea is obvious -- some kind of social group of individuals interested in both advanced technology and its implications, and personal growth and mind-expansion. The specific manifestation of the idea isn't too clear. But I wonder if one useful approach might be to focus on the cross-disciplinary understanding of consciousness -- using science and spirituality, and also advanced technologies like neuroscience and BCI and AGI. My thinking is that consciousness studies is one concrete area that truly seems to demand some kind of fusion of scientific and spiritual ideas … so maybe focusing on that in a truly broad, cross-tradition, Cosmist way could help us come together more and over help us work together to overcome our various personal and collective knots, and build a better future, and all that good stuff….



"Anyway there are just some preliminary thoughts, these are things I'm thinking about a lot these days, and I look forward to sharing my ideas more with you as my thoughts develop -- and I'll be catching the rest of this conference via the video recordings later on."



Fun stuff to think about -- though I don't have too much time for it these days, as my AGI and bioinformatics work seems to be taking all my time. But at some future point, I really do think the cross-disciplinary introspective/scientific individual/collective investigation of consciousness is well worth devoting attention to, and is going to bear some pretty fascinating fruit....

Friday, October 29, 2010

The Singularity Institute's Scary Idea (and Why I Don't Buy It)

I recently wrote a blog post about my own AI project, but it attracted a bunch of adversarial comments from folks influenced by the Singularity Institute for AI's (rather different) perspective on the best approach to AI R&D. I responded to some of these comments there.


(Quick note for those who don't know: the Singularity Institute for AI is not affiliated with Singularity University, though there are some overlaps ... Ray Kurzweil is an Advisor to the former and the founder of the latter; and I am an Advisor to both.)

Following that discussion, a bunch of people have emailed me in the last couple weeks asking me to write something clearly and specifically addressing my views on SIAI's perspective on the future of AI. I don't want to spend a lot of time on this but I decided to bow to popular demand and write a blog post...

Of course, there are a lot of perspectives in the world that I don't agree with, and I don't intend to write blog posts explaining the reasons for my disagreement with all of them! But since I've had some involvement with SIAI in the past, I guess it's sort of a special case.

First of all I want to clarify I'm not in disagreement with the existence of SIAI as an institution, nor with the majority of their activities -- only with certain positions habitually held by some SIAI researchers, and by the community of individuals heavily involved with SIAI. And specifically with a particular line of thinking that I'll refer to here as "SIAI's Scary Idea."

Roughly, the Scary Idea posits that: If I or anybody else actively trying to build advanced AGI succeeds, we're highly likely to cause an involuntary end to the human race.

Brief Digression: My History with SIAI

Before getting started with the meat of the post, I'll give a few more personal comments, to fill in some history for those readers who don't know it, or who know only parts. Readers who are easily bored may wish to skip to the next section.

SIAI has been quite good to me, overall. I've very much enjoyed all the Singularity Summits they've hosted; I think they've played a major role in the advancement of society's thinking about the future, and I've felt privileged to speak at them. And I applaud SIAI for consistently being open to Summit speakers whose views are strongly divergent from those commonly held in the SIAI community.

Also, in 2008, SIAI and my company Novamente LLC seed-funded the OpenCog open-source AGI project (based on software code spun out from Novamente). The SIAI/OpenCog relationship diminished substantially when Tyler Emerson passed the leadership of SIAI along to Michael Vassar, but it was instrumental in getting OpenCog off the ground. I've also enjoyed working with Michael Vassar on the Board of Humanity+, of which I'm Chair and he's a Board member.

When SIAI was helping fund OpenCog, I took the title of "Director of Research" of SIAI, but I never actually directed any research there apart from OpenCog. The other SIAI research was always directed by others, which was fine with me. There were occasional discussions about operating in a more unified manner, but it didn't happen. All this is perfectly ordinary in a small start-up type organization.

Once SIAI decided OpenCog was no longer within its focus, after a bit of delay I decided it didn't make sense for me to hold the Director of Research title anymore, since as things were evolving, I wasn't directing any SIAI research. I remain as an Advisor to SIAI, which is going great.

Now, on to the meat of the post….

SIAI's Scary Idea (Which I Don't Agree With)

SIAI's leaders and community members have a lot of beliefs and opinions, many of which I share and many not, but the key difference between our perspectives lies in what I'll call SIAI's "Scary Idea", which is the idea that: progressing toward advanced AGI without a design for "provably non-dangerous AGI" (or something closely analogous, often called "Friendly AI" in SIAI lingo) is highly likely to lead to an involuntary end for the human race.

(SIAI's Scary Idea has been worded in many different ways by many different people, and I tried in the above paragraph to word it in a way that captures the idea fairly, if approximately, and won't piss off too many people.)

Of course it's rarely clarified what "provably" really means. A mathematical proof can only be applied to the real world in the context of some assumptions, so maybe "provably non-dangerous AGI" means "an AGI whose safety is implied by mathematical arguments together with assumptions that are believed reasonable by some responsible party"? (where the responsible party is perhaps "the overwhelming majority of scientists" … or SIAI itself?)….. I'll say a little more about this a bit below.

Please note that, although I don't agree with the Scary Idea, I do agree that the development of advanced AGI has significant risks associated with it. There are also dramatic potential benefits associated with it, including the potential of protection against risks from other technologies (like nanotech, biotech, narrow AI, etc.). So the development of AGI has difficult cost-benefit balances associated with it -- just like the development of many other technologies.

I also agree with Nick Bostrom and a host of SF writers and many others that AGI is a potential "existential risk" -- i.e. that in the worst case, AGI could wipe out humanity entirely. I think nanotech and biotech and narrow AI could also do so, along with a bunch of other things.

I certainly don't want to see the human race wiped out! I personally would like to transcend the legacy human condition and become a transhuman superbeing … and I would like everyone else to have the chance to do so, if they want to. But even though I think this kind of transcendence will be possible, and will be desirable to many, I wouldn't like to see anyone forced to transcend in this way. I would like to see the good old-fashioned human race continue, if there are humans who want to maintain their good old-fashioned humanity, even if other options are available.

But SIAI's Scary Idea goes way beyond the mere statement that there are risks as well as benefits associated with advanced AGI, and that AGI is a potential existential risk.

Finally, I note that most of the other knowledgeable futurist scientists and philosophers who have come into close contact with SIAI's perspective also don't accept the Scary Idea. Examples include Robin Hanson, Nick Bostrom and Ray Kurzweil.

There's nothing wrong with having radical ideas that one's respected peers mostly don't accept. I totally get that: My own approach to AGI is somewhat radical, and most of my friends in the AGI research community, while they respect my work and see its potential, aren't quite as enthused about it as I am. Radical positive changes are often brought about by people who clearly understand certain radical ideas well before anyone else "sees the light." However, my own radical ideas are not telling whole research fields that if they succeed they're bound to kill everybody ... so it's a somewhat different situation.



What is the Argument for the Scary Idea?

Although an intense interest in rationalism is one of the hallmarks of the SIAI community, still I have not yet seen a clear logical argument for the Scary Idea laid out anywhere. (If I'm wrong, please send me the link, and I'll revise this post accordingly. Be aware that I've already at least skimmed everything Eliezer Yudkowsky has written on related topics.)

So if one wants a clear argument for the Scary Idea, one basically has to construct it oneself.

As far as I can tell from discussions and the available online material, some main ingredients of people's reasons for believing the Scary Idea are ideas like:

  1. If one pulled a random mind from the space of all possible minds, the odds of it being friendly to humans (as opposed to, e.g., utterly ignoring us, and being willing to repurpose our molecules for its own ends) are very low
  2. Human value is fragile as well as complex, so if you create an AGI with a roughly-human-like value system, then this may not be good enough, and it is likely to rapidly diverge into something with little or no respect for human values
  3. "Hard takeoffs" (in which AGIs recursively self-improve and massively increase their intelligence) are fairly likely once AGI reaches a certain level of intelligence; and humans will have little hope of stopping these events
  4. A hard takeoff, unless it starts from an AGI designed in a "provably Friendly" way, is highly likely to lead to an AGI system that doesn't respect the rights of humans to exist
I emphasize that I am not quoting any particular thinker associated with SIAI here. I'm merely summarizing, in my own words, ideas that I've heard and read very often from various individuals associated with SIAI.

If you put the above points all together, you come up with a heuristic argument for the Scary Idea. Roughly, the argument goes something like: If someone builds an advanced AGI without a provably Friendly architecture, probably it will have a hard takeoff, and then probably this will lead to a superhuman AGI system with an architecture drawn from the vast majority of mind-architectures that are not sufficiently harmonious with the complex, fragile human value system to make humans happy and keep humans around.

The line of argument makes sense, if you accept the premises.

But, I don't.

I think the first of the above points is reasonably plausible, though I'm not by any means convinced. I think the relation between breadth of intelligence and depth of empathy is a subtle issue which none of us fully understands (yet). It's possible that with sufficient real-world intelligence tends to come a sense of connectedness with the universe that militates against squashing other sentiences. But I'm not terribly certain of this, any more than I'm terribly certain of its opposite.

I agree much less with the final three points listed above. And I haven't seen any careful logical arguments for these points.

I doubt human value is particularly fragile. Human value has evolved and morphed over time and will continue to do so. It already takes multiple different forms. It will likely evolve in future in coordination with AGI and other technology. I think it's fairly robust.

I think a hard takeoff is possible, though I don't know how to estimate the odds of one occurring with any high confidence. I think it's very unlikely to occur until we have an AGI system that has very obviously demonstrated general intelligence at the level of a highly intelligent human. And I think the path to this "hard takeoff enabling" level of general intelligence is going to be somewhat gradual, not extremely sudden.

I don't have any strong sense of the probability of a hard takeoff, from an apparently but not provably human-friendly AGI, leading to an outcome likable to humans. I suspect this probability depends on many features of the AGI, which we will identify over the next years & decades via theorizing based on the results of experimentation with early-stage AGIs.

Yes, you may argue: the Scary Idea hasn't been rigorously shown to be true… but what if it IS true?

OK but ... pointing out that something scary is possible is a very different thing from having an argument that it's likely.

The Scary Idea is certainly something to keep in mind, but there are also many other risks to keep in mind, some much more definite and palpable. Personally, I'm a lot more worried about nasty humans taking early-stage AGIs and using them for massive destruction, than about speculative risks associated with little-understood events like hard takeoffs.

Is Provably Safe or "Friendly" AGI A Feasible Idea?

The Scary Idea posits that if someone creates advanced AGI that isn't somehow provably safe, it's almost sure to kill us all.

But not only am I unconvinced of this, I'm also quite unconvinced that "provably safe" AGI is even feasible.

The idea of provably safe AGI is typically presented as something that would exist within mathematical computation theory or some variant thereof. So that's one obvious limitation of the idea: mathematical computers don't exist in the real world, and real-world physical computers must be interpreted in terms of the laws of physics, and humans' best understanding of the "laws" of physics seems to radically change from time to time. So even if there were a design for provably safe real-world AGI, based on current physics, the relevance of the proof might go out the window when physics next gets revised.

Also, there are always possibilities like: the alien race that is watching us and waiting for us to achieve an IQ of 333, at which point it will swoop down upon us and eat us, or merge with us. We can't rule this out via any formal proof, and we can't meaningfully estimate the odds of it either. Yes, this sounds science-fictional and outlandish; but is it really more outlandish and speculative than the Scary Idea?

A possibility that strikes me as highly likely is that, once we have created advanced AGI and have linked our brains with it collectively, most of our old legacy human ideas (including physical law, aliens, and Friendly AI) will seem extremely limited and ridiculous.

Another issue is that the goal of "Friendliness to humans" or "safety" or whatever you want to call it, is rather nebulous and difficult to pin down. Science fiction has explored this theme extensively. So even if we could prove something about "smart AGI systems with a certain architecture that are guaranteed to achieve goal G," it might be infeasible to apply this to make AGI systems that are safe in the real-world -- simply because we don't know how to boil down the everyday intuitive notions of "safety" or "Friendliness" into a mathematically precise goal G like the proof refers to.

This is related to the point Eliezer Yudkowsky makes that "value is complex" -- actually, human value is not only complex, it's nebulous and fuzzy and ever-shifting, and humans largely grok it by implicit procedural, empathic and episodic knowledge rather than explicit declarative or linguistic knowledge. Transmitting human values to an AGI is likely to be best done via interacting with the AGI in real life, but this is not the sort of process that readily lends itself to guarantees or formalization.

Eliezer has suggested a speculative way of getting human values into AGI systems called Coherent Extrapolated Volition, but I think this is a very science-fictional and incredibly infeasible idea (though a great SF notion). I've discussed it and proposed some possibly more realistic alternatives in a previous blog post (e.g. a notion called Coherent Aggregated Volition). But my proposed alternatives aren't guaranteed-to-succeed nor neatly formalized.

But setting those worries aside, is the computation-theoretic version of provably safe AI even possible? Could one design an AGI system and prove in advance that, given certain reasonable assumptions about physics and its environment, it would never veer too far from its initial goal (e.g. a formalized version of the goal of treating humans safely, or whatever)?

I very much doubt one can do so, except via designing a fictitious AGI that can't really be implemented because it uses infeasibly much computational resources. My GOLEM design, sketched in this article, seems to me a possible path to a provably safe AGI -- but it's too computationally wasteful to be practically feasible.

I strongly suspect that to achieve high levels of general intelligence using realistically limited computational resources, one is going to need to build systems with a nontrivial degree of fundamental unpredictability to them. This is what neuroscience suggests, it's what my concrete AGI design work suggests, and it's what my theoretical work on GOLEM and related ideas suggests. And none of the public output of SIAI researchers or enthusiasts has given me any reason to believe otherwise, yet.

Practical Implications


The above discussion of SIAI's Scary Idea may just sound like fun science-fictional speculation -- but the reason I'm writing this blog post is that when I posted a recent blog post about my current AGI project, the comments field got swamped with SIAI-influenced people saying stuff in the vein of: Creating an AGI without a proof of Friendliness is essentially equivalent to killing all people! So I really hope your OpenCog work fails, so you don't kill everybody!!!

(One amusing/alarming quote from a commentator (probably not someone directly affiliated with SIAI) was "if you go ahead with an AGI when you're not 100% sure that it's safe, you're committing the Holocaust." But it wasn't just one extreme commentator, it was a bunch … and then a bunch of others commenting to me privately via email.)

If one fully accepts SIAI's Scary Idea, then one should not work on practical AGI projects, nor should one publish papers on the theory of how to build AGI systems. Instead, one should spend one's time trying to figure out an AGI design that is somehow provable-in-advance to be a Good Guy. For this reason, SIAI's research group is not currently trying to do any practical AGI work.

Actually, so far as I know, my "GOLEM" AGI design (mentioned above) is closer to a "provably Friendly AI" than anything the SIAI research team has come up with. At least, it's closer than anything they have made public.

However GOLEM is not something that could be practically implemented in the near future. It's horribly computationally inefficient, compared to a real-world AGI design like the OpenCog system I'm now working on (with many others -- actually I'm doing very little programming these days, so happily the project is moving forward with the help of others on the software design and coding side, while I contribute at the algorithm, math, design, theory, management and fundraising levels).

I agree that AGI ethics is a Very Important Problem. But I doubt the problem is most effectively addressed by theory alone. I think the way to come to a useful real-world understanding of AGI ethics is going to be to

  • build some early-stage AGI systems, e.g. artificial toddlers, scientists' helpers, video game characters, robot maids and butlers, etc.
  • study these early-stage AGI systems empirically, with a focus on their ethics as well as their cognition
  • in the usual manner of science, attempt to arrive at a solid theory of AGI intelligence and ethics based on a combination of conceptual and experimental-data considerations
  • let humanity collectively plot the next steps from there, based on the theory we find: maybe we go ahead and create a superhuman AI capable of hard takeoff, maybe we pause AGI development because of the risks, maybe we build an "AGI Nanny" to watch over the human race and prevent AGI or other technologies from going awry. Whatever choice we make then, it will be made based on far better knowledge than we have right now.
So what's wrong with this approach?

Nothing, really -- if you hold the views of most AI researchers or futurists. There are plenty of disagreements about the right path to AGI, but wide and implicit agreement that something like the above path is sensible.

But, if you adhere to SIAI's Scary Idea, there's a big problem with this approach -- because, according to the Scary Idea, there's too huge of a risk that these early-stage AGI systems are going to experience a hard takeoff and self-modify into something that will destroy us all.

But I just don't buy the Scary Idea.

I do see a real risk that, if we proceed in the manner I'm advocating, some nasty people will take the early-stage AGIs and either use them for bad ends, or proceed to hastily create a superhuman AGI that then does bad things of its own volition. These are real risks that must be thought about hard, and protected against as necessary. But they are different from the Scary Idea. And they are not so different from the risks implicit in a host of other advanced technologies.

Conclusion

So, there we go.

I think SIAI is performing a useful service by helping bring these sorts of ideas to the attention of the futurist community (alongside the other services they're performing, like the wonderful Singularity Summits). But, that said, I think the Scary Idea is potentially a harmful one. At least, it WOULD be a harmful one, if more people believed it; so I'm glad it's currently restricted to a rather small subset of the futurist community.

Many people die each day, and many others are miserable for various reasons -- and all sorts of other advanced and potentially dangerous technologies are currently under active development. My own view is that unaided human minds may well be unable to deal with the complexity and risk of the world that human technology is unleashing. I actually suspect that our best hope for survival and growth through the 21st century is to create advanced AGIs to help us on our way -- to cure disease, to develop nanotech and better AGI and invent new technologies; and to help us keep nasty people from doing destructive things with advanced technology.

I think that to avoid actively developing AGI, out of speculative concerns like the Scary Idea, would be an extremely bad idea.

That is, rather than "if you go ahead with an AGI when you're not 100% sure that it's safe, you're committing the Holocaust," I suppose my view is closer to "if you avoid creating beneficial AGI because of speculative concerns, then you're killing my grandma" !! (Because advanced AGI will surely be able to help us cure human diseases and vastly extend and improve human life.)

So perhaps I could adopt the slogan: "You don't have to kill my grandma to avoid the Holocaust!" … but really, folks… Well, you get the point….

Humanity is on a risky course altogether, but no matter what I decide to do with my life and career (and no matter what Bill Joy or Jaron Lanier or Bill McKibben, etc., write), the race is not going to voluntarily halt technological progress. It's just not happening.

We just need to accept the risk, embrace the thrill of the amazing time we were born into, and try our best to develop near-inevitable technologies like AGI in a responsible and ethical way.

And to me, responsible AGI development doesn't mean fixating on speculative possible dangers and halting development until ill-defined, likely-unsolvable theoretical/philosophical issues are worked out to everybody's (or some elite group's) satisfaction.

Rather, it means proceeding with the work carefully and openly, learning what we can as we move along -- and letting experiment and theory grow together ... as they have been doing quite successfully for the last few centuries, at a fantastically accelerating pace.

And so it goes.

Wednesday, October 13, 2010

Let's Turn Nauru Into Transtopia

Here's an off-the-wall idea that has some appeal to me ... as a long-time Transtopian fantasist and world traveler....

The desert island nation of Nauru needs money badly, and has a population of less than 15,000.

There are problems with water supply, but they could surely be solved with some technical ingenuity.

The land area is about 8 square miles. But it could be expanded! Surely it's easier to extend an island with concrete platforms or anchored floating platforms of some other kind, than to seastead in the open ocean.

The country is a democracy. Currently it may not be possible to immigrate there except as a temporary tourist or business visitor. But I'd bet this could be made negotiable.

Suppose 15,000 adult transhumanists (along with some kids, one would assume) decided to emigrate to Nauru en masse over a 5-year period, on condition they could obtain full citizenship. Perhaps this could be negotiated with the Nauruan government.

Then after 5 years we would have a democracy in which transhumanists were the majority.

Isn't this the easiest way to create a transhumanist nation? With all the amazing future possibilities that that implies?

This would genuinely be of benefit to the residents of Nauru, which now has 90% unemployment. Unemployment would be reduced close to zero, and the economy would be tremendously enlarged. A win-win situation. Transhumanists would get freedom, and Nauruans would get a first-world economy.

Considerable infrastructure would need to be built. A deal would need to be struck with the government, in which, roughly,

  • They agreed to allow a certain number of outsiders citizenship, and to allow certain infrastructure development
  • Over a couple years, suitable infrastructure was built to supply electrical power, Internet, more frequent flights, etc.
  • Then, over a few years after that, the new population would flow in

This much emigration would make Nauru crowded, but not nearly as crowded as some cities. And with a seasteading mindset, it's easy to see that the island is expandable.

To ensure employment of the relocated transhumanists, we would need to get a number of companies to agree to open Nauru offices. But this would likely be tractable, given firms' preference for having offices in major tech centers -- which a Nauru packed with tech-savvy transhumanists would arguably become. Living expenses in Nauru would be much lower than in, say, Silicon Valley, so operating costs would be lower too.

Tourism could become a major income stream, given the high density of interesting people which would make Nauru into a cultural mecca. Currently there is only one small beach on Nauru (which is said to be somewhat dirty), but creation of a beautiful artificial beach on the real ocean is not a huge technological feat.

It would also be a great place to experiment with aquaculture and vertical farming.

What say you? Let's do it!


P.S.

Other candidates for the tropical island Transtopia besides Nauru would be Tuvalu and Kiribati; but Kiribati's population is much larger, and Tuvalu is spread among many islands and is also in danger of going underwater due to global warming. So Nauru would seem the number one option. Though Tuvalu could be an interesting possibility as well, especially if we offered to keep its islands above water by building concrete platforms or some such (a big undertaking, but much easier than seasteading). This would obviously be a major selling point to the government.

Sunday, October 10, 2010

What Would It Take to Move Rapidly Toward Beneficial Human-Level AGI?

On Thursday I finished writing the last chapter of my (co-authored) two-volume book on how to create beneficial human-level AGI, Building Better Minds. I have a bunch of editing still to do, some references to add, etc. -- but the book is now basically done. Woo hoo!

The book should be published by a major scientific publisher sometime in 2011.

The last chapter describes, in moderate detail, how the CogPrime cognitive architecture (implemented in the OpenCog open-source framework) would enable a robotic or virtual embodied system to appropriately respond to the instruction "Build me something surprising out of blocks." This is in the spirit of the overall idea: Build an AGI toddler first, then teach it, study it, and use it as a platform to go further.

From an AGI toddler, I believe, one could go forward in a number of directions: toward fairly human-like AGIs, but also toward different sorts of minds formed by hybridizing the toddler with narrow-AI systems carrying out particular classes of tasks in dramatically transhuman ways.

Reading through the 900-page tome my colleagues and I have put together, I can't help reflecting on how much work is left to bring it all into reality! We have a software framework that is capable of supporting the project (OpenCog), and we have a team of people capable of doing it (people working with me on OpenCog now; people working with me on other projects now; people I used to work with but who moved on to other things, but would enthusiastically come back for a well-funded AGI project). We have a rich ecosystem of others (e.g. academic and industry AI researchers, as well as neuroscientists, philosophers, technologists, etc. etc.) who are enthusiastic to provide detailed, thoughtful advice as we proceed.

What we don't have is proper funding to implement the stuff in the book and create the virtual toddler!

This is of course a bit frustrating: I sincerely believe I have a recipe for creating a human-level thinking machine! In an ethical way, and with computing resources currently at our disposal.

But implementing this recipe would be a lot of work, involving a number of people working together in a concentrated and coordinated way over a significant period of time.

I realize I could be wrong, or I could be deluding myself. But I've become a lot more self-aware and a lot more rational through my years of adult life (I'm 43 now), and I really don't think so. I've certainly introspected and self-analyzed a lot to understand the extent to which I may be engaged in wishful thinking about AGI, and my overall conclusion (in brief) is as follows: Estimating timing is hard, for any software project, let alone one involving difficult research. And there are multiple PhD-thesis-level research problems that need to be solved in the midst of getting the whole CogPrime design to work (but by this point in my career, I believe I have a decent intuition for distinguishing tractable PhD-thesis-level research problems from intractable conundrums). And there's always the possibility of the universe being way, way different than any of us understands, in some way that stops any AGI design based on digital computers (or any current science!) from working. But all in all, evaluated objectively according to my professional knowledge, the whole CogPrime design appears sensible -- if all the parts work vaguely as expected, the whole system should lead to human-level AGI; and according to current computer science and narrow AI theory and practice, all the parts are very likely to work roughly as expected.

So: I have enough humility and breadth to realize I could be wrong, but I have studied pretty much all the relevant knowledge that's available, I've thought about this hard for a very long time and talked to a large percentage of the world's (other) experts; I'm not a fool and I'm not self-deluded in some shallow and obvious way. And I really believe this design can work!

It's the same design I've been refining since about 1996. The prototyping my colleagues and I did at Webmind Inc. (when we had a 45-person AGI research team) in 1998-2001 was valuable, both for what it taught us about what NOT to do and for positive lessons. The implementation work my colleagues at Novamente LLC and the OpenCog project have done since 2001 has been very valuable too; and it's led to an implementation of maybe 40% of the CogPrime design (depending on how you measure it). (But unfortunately 40% of a brain doesn't yield 40% of the functionality of a whole brain, particularly because in this case (beyond the core infrastructure) the 40% implemented has been largely chosen by what was useful for Novamente LLC application projects rather than what we thought would serve best as the platform for AGI.) Having so many years to think through the design, without a large implementation team to manage, has been frustrating but also good in a sense, in that it's given me and my colleagues time and space to repeatedly mull over the design and optimize it in various ways.

Now, the funding situation for the project is not totally dismal, or at least it doesn't seem so right now. For that I am grateful.

The OpenCog project does appear to be funded, at least minimally, for the next couple years. This isn't quite 100% certain, but it's close -- it seems we've lined up funding for a handful of people to work full-time on a fairly AGI-ish OpenCog application for 2 years (I'll post here about this at length once it's definite). And there's also the Xiamen University "Brain-Like Intelligent Systems" lab, in which some grad students are applying OpenCog to enable some intelligent robotic behaviors. And Novamente LLC is still able to fund a small amount of OpenCog work, via application projects that entail making some improvements to the OpenCog infrastructure along the way. So all in all, it seems, we'll probably continue making progress, which is great.

But I'm often asked, by various AGI enthusiasts, what it would take to make really fast progress toward my AGI research goals. What kind of set-up, what kind of money? Would it take a full-on "AGI Manhattan Project" -- or something smaller?

In the rest of this blog post I'm going to spell it out. The answer hasn't changed much for the last 5 years, and most likely won't change a lot during the next 5 (though I can't guarantee that).

What I'm going to describe is the minimal team required to make reasonably fast progress. Probably we could progress even faster if we had massively more funding, but I'm trying to be realistic here.

We could use a team of around 10 of the right people (mostly, great AI programmers, with a combination of theory understanding and implementation chops), working full-time on AI development.

We could use around 5 great programmers working on the infrastructure -- to get OpenCog working really efficiently on a network of distributed multi-processor machines.

If we're going to do robotics, we could use a dedicated robotics team of perhaps 5 people.

If we're going to do virtual agents, we could use 5 people working on building out the virtual world appropriately for AGI.

Add a system administrator, 2 software testers, a project manager to help us keep track of everything, and a Minister of Information to help us keep all the documentation in order.

That's 30 people. Then add me and my long-time partner Cassio Pennachin to coordinate the whole thing (and contribute to the technical work as needed), and a business manager to help with money and deal with the outside world. 33 people.

Now let's assume this is done in the US (not the only possibility, but the simplest one to consider), and let's assume we pay people close to market salaries and benefits, so that their spouses don't get mad at them and decrease their productivity (yes, it's really not optimal to do a project like this with programmers fresh out of college -- this isn't a Web 2.0 startup, it's a massively complex distributed software system based on integration of multiple research disciplines. Many of the people with the needed expertise have spouses, families, homes, etc. that are important to them). Let's assume it's not done in Silicon Valley or somewhere else where salaries are inflated, but in some other city with a reasonable tech infrastructure and lower housing costs. Then maybe, including all overheads, we're talking about $130K/year per employee (recall that we're trying to hire the best people here; some are very experienced and some just a few years out of college, but this is an average).

Salary cost comes out to $4.3M/year, at this rate.

Adding in a powerful arsenal of hardware and a nice office, we can round up to $5M/year.

Let's assume the project runs for 5 years. My bet is we can get an AGI toddler by that time. But even if that's wrong, I'm damn sure we could make amazing progress by that time, suitable to convince a large number of possible funding sources to continue funding the project at the same or a greater level.

Maybe we can do it in 3 years, maybe it would take 7-8 years to get to the AGI toddler goal -- but even if it's the latter, we'd have amazing, clearly observable dramatic progress in 3-5 years.

So, $25M total.
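
(For concreteness, here's that arithmetic spelled out as a little Python script. The headcounts and the $130K loaded cost per person are the figures quoted above; the $5M/year figure is just the round-up for hardware and office, also as above.)

# Back-of-the-envelope budget for the US-based version of the project,
# using the headcounts and loaded cost quoted above.

team = {
    "AI developers": 10,
    "infrastructure programmers": 5,
    "robotics team": 5,
    "virtual world team": 5,
    "sysadmin, testers, project manager, Minister of Information": 5,
    "Ben, Cassio, business manager": 3,
}

headcount = sum(team.values())            # 33 people
loaded_cost = 130_000                     # $/year per person, incl. benefits and overheads
salary_cost = headcount * loaded_cost     # about $4.3M/year

annual_budget = 5_000_000                 # rounded up to cover hardware and a nice office
years = 5

print(f"headcount: {headcount}")
print(f"salary cost: ${salary_cost / 1e6:.2f}M/year")
print(f"total over {years} years: ${annual_budget * years / 1e6:.0f}M")
# prints: headcount 33, salary cost $4.29M/year, total $25M over 5 years
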

There you go. That's what it would cost to progress toward human-level AGI, using the CogPrime design, in a no-BS straightforward way -- without any fat in the project, but also without cutting corners in ways that reduce efficiency.

If we relax the assumption that the work is done in the US and move to a less expensive place (say, Brazil or China, where OpenCog already has some people working), we can probably cut the cost by half without a big problem. We would lose some staff who wouldn't leave the US, so there would be a modest decrease in productivity, but it wouldn't kill the project. (Why does it only cut the cost by half? Because if we're importing first-worlders to the Third World to save money, we still need to pay them enough to cover expenses they may have back in the US, to fly home to see their families, etc.)

So, outside the US, $13M total over 5 years.

Or if we want to rely more on non-US people for some of the roles (e.g. systems programming, virtual worlds,...), it can probably be reduced to $10M total over 5 years, $2M/year.

If some wealthy individual or institution were willing to put in $10M -- or $25M if they're fixated on a US location (or, say, $35M if they're fixated on Silicon Valley) -- then we could progress basically full-speed-ahead toward creating beneficial human-level AGI.

Instead, we're progressing toward the same goal seriously and persistently, but much more slowly and erratically.

I have spoken personally to a decent number of individuals with this kind of money at their disposal, and many of them are respectful of and interested in the OpenCog project -- and would be willing to put in this kind of money if they had sufficient confidence the project would succeed.

But how to give potential funders this sort of confidence?

After all, when they go to the AI expert at their local university, the guy is more likely than not to tell them that human-level AI is centuries off. Or if they open up The Singularity is Near by Ray Kurzweil, who is often considered a radical techno-optimist, they see a date of 2029 for human-level AGI -- which means that as investors they would probably start worrying about it around 2025.

A 900-page book is too much to expect a potential donor or investor to read; and even if they read it (once it's published), it doesn't give an iron-clad, irrefutable argument that the project will succeed, "just" a careful overall qualitative argument together with detailed formal treatments of various components of the design.

The various brief conference papers I've published on the CogPrime design and OpenCog project give a sense of the overall spirit, but don't tell you enough to let you make a serious evaluation. Maybe this is a deficiency in the writing, but I suspect it's mainly a consequence of the nature of the subject matter.

The tentative conclusion that I've come to is that, barring some happy luck, we will need to come up with some amazing demo of AGI functionality -- something that will serve as an "AGI Sputnik" moment.

Sputnik, of course, caused the world to take space flight seriously. The right AGI demo could do the same. It could get OpenCog funded as described above, plus a lot of other AGI projects in parallel.

But the question is, how to get to the AGI Sputnik moment without the serious funding. A familiar, obvious chicken-and-egg problem.

One possibility is to push far enough toward a virtual toddler in a virtual world -- using our current combination of very-much-valued but clearly-suboptimal funding sources -- that our animated AGI baby has AGI Sputnik power!

Maybe this will happen. I'm certainly willing to put my heart into it, and so are a number of my colleagues.

But it sure is frustrating to know that, for an amount of money that's essentially "pocket change" to a significant number of individuals and institutions on the planet, we could be progressing a lot faster toward some goals that are really important to all of us.

To quote Kurt Vonnegut: "And so it goes."

Tuesday, September 28, 2010

Mind Uploading via Gmail

Cut and pasted from Giulio Prisco's blog here

(with one small change)

...

Mind Uploading via Gmail

To whom it may concern:

I am writing this in 2010. My Gmail account has more than 20GB of data, which contain some information about me and also some information about the persons I have exchanged email with, including some personal and private information.

I am assuming that in 2060 (50 years from now), my Gmail account will have hundreds or thousands of TB of data, which will contain a lot of information about me and the persons I exchanged email with, including a lot of personal and private information. I am also assuming that, in 2060:

  1. The data in the accounts of all Gmail users since 2004 is available.
  2. AI-based mindware technology able to reconstruct individual mindfiles by analyzing the information in their aggregate Gmail accounts and other available information, with sufficient accuracy for mind uploading via detailed personality reconstruction, is available.
  3. The technology to crack Gmail passwords is available, but illegal without the consent of the account owners (or their heirs).
  4. Many of today's Gmail users, including myself, are already dead and cannot give permission to use the data in their accounts.

If all assumptions above are correct, I hereby give permission to Google and/or other parties to read all data in my Gmail account and use them together with other available information to reconstruct my mindfile with sufficient accuracy for mind uploading via detailed personality reconstruction, and express my wish that they do so.

Signed by Ben Goertzel on September 28, 2010, and witnessed by readers.

NOTE: The accuracy of the process outlined above increases with the number of persons who give their permission to do the same. You can give your permission in comments, Twitter or other public spaces.
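
(A postscript from me, not part of Giulio's letter: the storage assumption is at least arithmetically plausible. Here's a quick sanity check; the annual growth rates are purely my own illustrative guesses, not anything claimed in the letter.)

# Rough sanity check: ~20 GB of Gmail data in 2010, compounded for 50 years
# at a few assumed annual growth rates (illustrative only).

start_gb = 20     # per the letter, Gmail data in 2010
years = 50        # 2010 -> 2060

for growth in (0.15, 0.20, 0.25):     # assumed growth rates, not from the letter
    final_tb = start_gb * (1 + growth) ** years / 1000
    print(f"{growth:.0%}/year for {years} years -> roughly {final_tb:,.0f} TB")

# 15%/year gives ~22 TB; 20%/year ~180 TB; 25%/year ~1,400 TB --
# so "hundreds or thousands of TB" corresponds to sustained 20-25% annual growth.
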

Sunday, August 08, 2010

RIP Lev Goertzel Mann, 1995-2010



The last "obituary" blog post I wrote was for my grandfather Leo Zwell -- the man who taught me about science and so much else. He died at age 91, after a long life rich in personal, professional and intellectual satisfaction. His death was tragic, as are almost all deaths. But the death I'm noting in this (painfully inadequate) post is vastly more depressing and tragic.

My sister's son Lev died last month, just short of his 15th birthday. His death was totally unanticipated -- he was on vacation with his parents and his brother Jaal, camping in the forest in Alberta (Canada), and was struck in the night by a bizarre sudden illness. He stopped breathing a minute away from the hospital, and died there an hour and a half later. The cause of death remains unclear, but the autopsy revealed a severe brain infection of some sort. One serious possibility is some form of meningitis (there are forms with quite brief incubation periods, i.e. a handful of days).

Think about it: One day you're totally healthy … then the next day, you die in your sleep, never having known anything was wrong. Maybe a mild headache the day before, no different than dozens or hundreds of other headaches you've had in your life.

My friend and colleague Jeff Pressing died in a very similar way a decade ago, in the midst of his adulthood.

I visited my sister and her family for 4 days shortly after Lev's death (they live near Seattle, I live in Maryland), but even so, I can't imagine what they're going through.

A full obituary is here.

My extended family is fairly small, but even so, there are some relatives I'm closer to than others. Lev was among the closer ones. He always felt like part of the same cognitive/spiritual tribe as me and my own kids: weird, sarcastic, outrageous sometimes, difficult and obnoxious sometimes, intellectual, adventurous, curious, warm-hearted, courageous, playful, compassionate, constantly skeptical but willing to steadfastly defend his best-guess beliefs and his sometimes-odd tastes.

Like me, Lev was called "crazy" many times, but usually in an affectionate way -- and of course he wasn't really crazy, just a free thinker unwilling to take anyone's word for anything or take anything for granted, dedicated to finding his own path to enjoyment and understanding.

Though we never lived near each other, we saw each other at least once a year, sometimes more. Most recently he had spent a week at my house in April 2010 -- 3 months before his death. It was a great visit, with some fascinating conversations as well as lots of video games, frisbee, hiking, rock-climbing and so forth. I was struck by how fast he was growing up all of a sudden. Lev had always been a smart and inquisitive kid, but on this visit he was more interested in carrying out lengthy intellectual chats -- about DNA, time travel, AI and so forth. He also showed a deep knowledge of history and politics, with an insight into Western history complementing my own sons' recent study of Japanese and Mongolian history. We even discussed the possibility of immortality via technological means, and he was all in favor.

He was a devout heavy metal head, and particularly a devotee of Metallica. I failed to convert him to jazz fusion, though he admitted that some of it sounded a bit like music. Like many teenagers, he had mused on death frequently, and long previously had told his parents the song he wanted played at his funeral, if he were ever to die: Fade to Black, by Metallica.

Fade to Black was indeed played at the funeral, which was the point in the funeral where I finally "lost it" and cried in a way I hadn't for a very long time. I played that song many times in the week following. Though I prefer Master of Puppets as a piece of music, obviously his choice was highly apropos for the setting. Yet the lyrics didn't quite fit. The lyrics say


Life it seems will fade away
Drifting further every day
Getting lost within myself
Nothing matters, no one else

I have lost the will to live
Simply nothing more to give
There is nothing more for me
Need the end to set me free

Things not what they used to be
Missing one inside of me
Deathly lost, this can't be real
Cannot stand this hell I feel

Emptiness is filling me
To the point of agony

Growing darkness taking dawn

I was me but now he's gone


No one but me can save myself
But it's too late
Now I can't think
Think why I should even try

Yesterday seems as though
It never existed
Death greets me warm
Now I will just say goodbye

but certainly Lev had NOT lost the will to live, except perhaps in the last few hours when he was unconscious and his body was succumbing to the infection that killed him. He was full of enthusiasm for life and excitement for the future.

"I was me but now he's gone."

He and his best friend Zay had plans to go to university together in Switzerland (I don't know why they chose that country). They had been best friends many years earlier when they went to school together in Costa Rica, and had maintained their friendship over many years via 7+ hours per week of phone calls -- wide-ranging phone calls, sometimes occupied with an expansive, multi-year collaboratively-created "imaginary adventure game"; sometimes with conversation on serious or casual topics; sometimes with long pauses while one or another worked on homework while the phone line was kept open.

The funeral was Quaker style, meaning that there was no primary speaker, but rather the individuals in the audience were invited to stand up and state their memories of Lev. There were many moving speeches but to me the most touching and insightful was Zay's. Zay recounted Tegmark's variant of multiverse theory, according to which -- due to the large extent and general quasi-randomness of the universe -- it's likely that the universe contains multiple variants of Earth, each of which is similar to our own but with minor variations. Zay pointed out that, in this case, there would be many variant Earths, including many on which Lev did not get an infection and die at age 14. He said he took some small solace from the fact that, in those variant Earths, his analogue and Lev's analogue would get to grow up together and experience adulthood together.

At my sister's house, after the funeral, we each took some of Lev's things, to symbolize his memory. My 13 year old daughter Scheherazade took one of the stories he'd hand-written in a small notebook. On the front page, alongside the title, was scribbled the following marginal note: "Kick Zay in the testicles." I was reminded of a time, years ago, when I invited the 9 year old Lev to "play fight" with me, in the manner one often does with young children. He immediately initiated the fight by kicking me in the nuts as hard as he could. No particular hostility was intended -- he was just play-fighting, Lev-style. A few minutes later he wandered off in some random direction, lost in imagination, and we had to hunt him down -- a common habit when in public with Lev, especially when he was younger.

As I pointed out to Zay in a conversation after the funeral, if Tegmark's theory is correct, then it's possible some future technology could allow us to visit those variant Earths one day, so he might actually get to see the 15, 25 and 85 year old Lev after all.

It's also possible that, as Martine Rothblatt, Bill Bainbridge and some other futurists speculate (see e.g. CyberEv), we may eventually be able to reconstitute deceased humans from data such as their writings, and recordings of their voice and physical appearance and movements, and their imprints on the memories of others.

But while I take such future possibilities seriously, they don't really help mute the tragedy much. Right now, in the world we know concretely, Lev is gone -- and I can't shake the feeling he shouldn't be.

There's some room for philosophical debate about the merits of death via old age. Some say death is natural, and therefore aesthetically and morally positive. Some say it lends a particular meaningfulness and elegance to life, and that without it life would lose depth and pizazz. I don't really buy that. Of course death adds meaning to life, and of course there is a certain aesthetic charm to a life that ends in death, which wouldn't be there in an infinite life. But an infinite life would have a different kind of depth and pizazz -- and probably ultimately a much better kind. There is also a special meaningfulness and elegance to being tortured, or dying of cancer, but yet we don't crave these, we try to avoid them -- because we prefer other forms of meaningfulness and elegance.

But anyway, all that is moot in this case -- only the most cognitively-distorted religious fanatic would argue the merits of the sudden death by disease of a healthy, vibrant child.

When my grandfather Leo died at age 91, it was terribly sad, but there was a certain feeling of completeness to the story. He had anticipated his death for a while -- he had no expectations of afterlife, and he had reconciled himself to his own impending nonexistence. Ever a scientist, he came to see his limited time-scope as being roughly comparable to his limited space-scope. He'd had a long and rich marriage, children, grandchildren, great-grandchildren,…. His life had followed a meaningful arc. I certainly wish Leo had lived forever -- but, the wrongness of his death was pretty much restricted to the wrongness implicit in the general wrongness of the human condition.

But Lev dying at age 14 is another matter. It's like finding the bottom 1/5 of a Picasso painting, or an unfinished symphony with only the first of 7 movements -- and a tiny fragment of the second, cut off at a completely senseless place. There's no positive aesthetic or moral value in such a death. It just fucking sucks. There's no obscene invective, and no poetic prose nor obituary nor blog post, and no Metallica nor Beethoven song, capable of conveying even fractionally the massive fucking-suckiness of such a thing.

Nietzsche wrote about the merits of "dying at the right time." He felt a good death was just as important as a good life. Nietzsche himself egregiously failed to die at the right time, spending the last 11 years of his life mute and semi-insane from some sort of brain disease. Leo, perhaps, died at the right time (given the current, deeply flawed order of human existence). Lev massively did not.

I really hope to see Lev again on some variant Earth or in some computer simulation or other dimension or whatever. I have no way to confidently estimate the odds of such a thing. I do believe the world we now know and understand via science merely scratches the surface of the overall universe with its copious transhuman hidden patterns, orders and flows. But the mysteriousness of life doesn't imply that the universe will someday deliver us the various things we want.

For now, it's just really damn depressing.

And for all that, I still can't remotely imagine how I'd feel if it were one of my own kids.

I'll close with some pieces of advice from Lev, which were collected by his parents and emailed to friends and family and posted on the wall at the funeral:
  • Color outside the lines and do it quickly.
  • There is no need to be consistent.
  • Be difficult. It is a winning strategy.
  • Go outside in the pouring rain.
  • Be intensely critical of everything all the time.
  • There is nothing wrong with a good scowl. Practice if necessary.
  • Spend your money quickly.
  • If you love someone, give them your stuff.
  • Go off-trail and climb to the top of the hill.
  • Like yourself. You're awesome.
  • Buy it now because you want it. Next week you won't care about it.
  • When forced to eat vegetables, shove them all in your mouth at once, chew, and swallow. Then enjoy the rest of your meal.
  • Love your friends; ignore your enemies.
  • Don't listen to anything your parents say. They know way less than you do.
  • Say whatever is on your mind...mean or nice.
  • Wear a hat. Bare heads are boring.
  • If what you are doing is not fun, it is not worth doing.
  • You really can read 3 books at once.
  • When everyone turns in the assignment, that's a good clue that you better get started on it.
  • Live in the moment.
  • Don't let anyone break a spirit.
  • Speak out for what you believe in.






P.S.

This post-script is inspired by Lev's death rather than directly about Lev or his death. I'm sure Lev wouldn't mind.

After Lev's funeral I couldn't help imagining: what if the funeral were my own? What if I were the one struck randomly by some bizarre disease?

Of course, we always know intellectually that each day could be our last -- but we rarely live in a manner that richly incorporates this knowledge.

A few days after Lev's funeral I had a dream of my own funeral -- which was (in the dream) in the same building as Lev's, but of course with different music and different people. Also, it was winter outside, whereas Lev's occurred in summer.

Instead of "Fade to Black," my dream-funeral featured two songs in sequence: The Structure of Mind by me (the best song I've written); followed by Soothsayer, by Buckethead (the best song I wish I'd written).

Soothsayer, to me, is all about the presence of a hidden order, pattern and flow to the cosmos. It's about the presence of something else there in the world -- something bigger and wiser and crazier than us; some structured dynamic domain of being/becoming, which we can never quite understand without losing our pathetic little human selves. Buckethead's Soothsayer -- like Hendrix's Voodoo Child before it -- wants to lead us there, but we travel there at our peril. You have to choose: either retain your human form and forego the transcendent domain, except in dribs and drabs; or lose your self and open your heart and mind to the transcendent. Lev followed the Soothsayer. For now, still, here I am.

Also my daughter Scheherazade (who is now 13, but was older than that in the dream) read the following statement, which I had written before my death:

"
I'd like to thank my parents Carol and Ted for creating me and raising me. My grandfather Leo Zwell for teaching me about science. My kids Zar, Zeb and Zade for being awesome kids and giving a center to my life. My first wife Gwen and my second wife Izabela for all the good times and deep sharing. Gwen for giving me the kids as well. Cassio Pennachin for so much professional and intellectual partnership. Goodbye, and thanks for all the fish. Hope to see you all again in some other time, or some other dimension. As Jimi Hendrix said: If I don't see you no more in this world, I'll meet you in the next one, and don't be late.
"

I think there was more to the speech Zade read than that also, but I can't recall all the details.

Then Jimi Hendrix's Voodoo Chile (not Slight Return) was played, while people ate green eggs and ham (seriously!).

(Gwen and I made green eggs and ham for the kids once, back in the day, inspired by the Dr. Seuss book. The food coloring made the eggs taste funny.)

Voodoo Chile:

Well, the night I was born
Lord I swear the moon turned a fire red
The night I was born
I swear the moon turned a fire red
Well my poor mother cried out "lord, the gypsy was right!"
And I seen her fell down right dead

Well, mountain lions found me there waitin'
And set me on a eagles back
Well, mountain lions found me there,
And set me on a eagles wing
(Its' the eagles wing, baby, what did I say)
He took me past to the outskirts of infinity,
And when he brought me back,

He gave me a venus witch's ring

And he said "Fly on, fly on"

Because I'm a voodoo chile, baby, voodoo chile

Well, I make love to you,
And lord knows you'll feel no pain
Say, I make love to you in your sleep,
And lord knows you felt no pain
'Cause I'm a million miles away
And at the same time I'm right here in your picture frame
'Cause I'm a voodoo chile
Lord knows I'm a voodoo chile

Well my arrows are made of desire
From far away as Jupiter's sulphur mines
Say my arrows are made of desire, desire
From far away as Jupiters sulphur mines
(Way down by the Methane Sea, yeah)
I have a hummingbird and it hums so loud,
You think you were losing your mind, hmmm...

Well I float in liquid gardens
And Arizona new red sand
I float in liquid gardens
Way down in Arizona red sand

Well, I taste the honey from a flower named Blue,
Way down in California

And then New York drowns as we hold hands

'Cause I'm a voodoo chile
Lord knows I'm a voodoo chile



I'm a million miles away -- and at the same time I'm right here in your picture frame.

Anyway, there you go. In the unlikely event I should meet an untimely doom like Lev, you now know what music to play and what statement to read at my funeral.

(For those who are into psychic powers: no, that dream didn't have the particular flavor of a premonition. It felt like something that occurred in a certain percentage of the universes in the multiverse, but not necessarily a high percentage. Much like Lev's death. Improbable but sadly, not impossible.)

But, depressing as Lev's extraordinarily untimely death is, it hasn't turned me into a pessimist. I'm still pushing for eternal life for me and as many others as possible.

And I note that, in my dream of my funeral, my corpse was not there -- nor were my ashes. Rather, my body was frozen at Alcor.

And my mind, in the dream, was somehow hovering over the proceedings -- watching and knowing, but not quite able to form a thought or an action.

Ever since I was 6 or 7 years old, I've had a strange intuition about the nature of "life after death." You're not exactly there, but you're not really not-there either. Your mind exists, but almost melded in with the rest of the cosmos. You perceive, and sort-of know, but you don't act autonomously. You float there, superimposed. And then maybe, some future technology brings you out.

Yes, I know, that's totally unscientific -- but there you go.

Quantum theory does suggest that everything that ever happened in the universe, every structure that ever existed -- is informationally still present, encoded in the fluctuations of various wavicles as they scatter about. Could all that information be mined out, somehow, someday? A fascinating possibility. Yet science -- which Lev admired, as I do -- is great but limited.

Life remains a mystery. Sometimes mysteriously wonderful -- and sometimes mysteriously, amazingly, almost unbelievably fucking shitty.