
Extropianism    

Overview of Extropianism

The Extropian Principles, by Max More

The Extropianism FAQ

References

A short story about uploading

My predictions of the future


Overview

I view Extropianism as a response to the Malthusian movement, the "doomsayers" of the years when I was growing up (roughly, the 1960's and 1970's). Here are some examples of "doomsayers":

- Thomas Malthus, An Essay on the Principle of Population (1798): Population will ultimately increase faster than the food supply, the price of food will go up, and starvation will ultimately become the greatest problem mankind faces. (In fact, the price of wheat has fallen steadily for the last 200 years while the U.S. population has grown from about 5 million to roughly 250 million, and the U.S. is still a big exporter of wheat. In 1997, and still in 2014, the primary food-related problem in the U.S. was, and is, obesity.)

- Paul Ehrlich, The Population Bomb (1968): We would run out of food by the year 1977, and hundreds of millions of people would starve in the 1980's.

- Paul Erlich, "Eco-Catastrophe!", Ramparts (1969): Most of the people who are going to die in [the coming ecological cataclysm] have already been born.

- Air pollution in the U.S. will continue to increase.

In fact, if you look back at these predictions and see where they went wrong, the universal common element is that they did not anticipate technological progress on the level that has in fact come to pass. The air pollution problem was addressed in the early 1970's by legal changes (regulation) and by technological advances, both driven mostly by people's displeasure with pollution.

In general, pure Malthusianism seems to assume that "more people is worse": as we get more and more people on the planet, life will get worse because ever more people will be sharing the same finite pool of resources.

Instead, what has been happening is that as the population increases, there are more people to invent things, and more and more problems get solved. As the population increases we continue to create new problems for ourselves, but our ability to solve them is increasing, apparently at an even faster rate.

This page shows how the same principles explain the GNP/overcrowding paradox (the crowded countries have higher standards of living, and vice versa).

Another associated phenomenon is the misapplication of the "exponential growth" model to human population. It is commonly believed that the human race has been multiplying at some constant percentage rate (like 6 percent a year, or whatever), that today's rate is the same as the rate in (say) the 1800's, and that it's more noticeable now only because the total population is so large. Expressed as a mathematical formula, we have:

population = C × e^(K × year)

In fact, if you estimate the current world population growth rate and then extend the curve back into the past, you quickly discover that any reasonable value of the current growth rate corresponds to absurdly low estimates of past population. For example, if the population were doubling every 25 years, then in the year 1200 there would have been scarcely one person alive in the entire world.
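
As a quick sanity check of that back-extrapolation, here is a minimal sketch in Python; the 1998 population figure (about 5.9 billion) and the 25-year doubling time from the example above are the only inputs, both round numbers.

    # Back-extrapolate a constant-rate (exponential) model anchored at 1998.
    pop_1998 = 5.9e9        # world population in 1998, roughly
    doubling_time = 25.0    # years per doubling, as in the example above

    def constant_rate_population(year):
        """Population implied by constant-rate doubling, anchored at 1998."""
        return pop_1998 * 2.0 ** ((year - 1998) / doubling_time)

    for year in (1998, 1500, 1200):
        print(year, round(constant_rate_population(year)))
    # Prints roughly 5.9 billion for 1998, about 6000 people for 1500,
    # and about 1 person for 1200 -- clearly absurd.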

The actual world population growth rate is most closely modeled by an inverse linear (hyperbolic) model:

population = C / (K - year)

In 1998 this model fit quite well, in fact, and the values of C and K were (about) 2×10^11 and 2027, respectively.
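
Here is the same kind of sketch for the hyperbolic model, using those late-1990s constants (the values of C and K are the rough approximations quoted above, so expect only ballpark agreement):

    # Hyperbolic ("inverse linear") model: population = C / (K - year).
    C = 2.0e11    # about 2 x 10^11 person-years
    K = 2027.0

    def hyperbolic_population(year):
        """Grows without bound as year approaches K."""
        return C / (K - year)

    for year in (1900, 1950, 1998):
        print(year, round(hyperbolic_population(year) / 1e9, 1), "billion")
    # Gives roughly 1.6, 2.6 and 6.9 billion; the actual figures were about
    # 1.6, 2.5 and 5.9 billion, so the rough constants land in the right ballpark.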

The inverse linear model also gives the rather unbelievable prediction that the population will become infinite at some point. Of course, we all know that can't happen, not in our physical universe, even if we find a way to separate the human mind from the physical body so that it is no longer limited by its physical size and energy requirements.

Nevertheless, we can imagine the hypothetical future point in time given by the population model; Extropians call it the Singularity. A "singularity" is a break in the continuity of something, like a sudden jump in the value of a function, or a black hole: a place of infinite space-time curvature where the laws of General Relativity break down. The "singularity" anticipated by Extropians in the late 1990's was to occur around the year 2027. (As of 2014, the anticipated date is closer to 2045).

This concept has been around for hundreds of years, but this use of the word "singularity" dates from the mid 1950's. According to Stanislaw Ulam, he and John von Neumann observed that:

"the ever accelerating progress of technology ... gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue."

Expressions of the same or similar ideas were also made by I.J. Good in 1965, Alan Turing in 1951, Henry Adams in 1909, Samuel Butler in 1863, Nicolas de Condorcet in 1794, and others.

As currently envisioned by the Extropian movement, the Singularity represents a point in our future beyond which we cannot have any useful understanding, nor make any objectively accurate prediction, of what life will be like. Nevertheless, subjective or abstract predictions can be made, such as a continuation of the "exponential", ever-accelerating pace of technology and human culture (with worsening "future shock"), an "intelligence explosion" in which human intelligence and artificial intelligence build on each other at an ever-increasing rate, and a general belief that super-intelligence will bring solutions to all the world's problems. That is the extent of the transformation that Extropians imagine.

The number 2027 from the late 1990's was an average of several different predictions, of which the population model is one example. There are several other measures of human development or progress that also seem to predict an infinite value or some other sort of paradox, and remarkably they all predict a point in time about 30 years in the future: thus, in 1998 the predicted singularity was near 1998+30 = 2028, and in 2014 the prediction is close to 2014+30 = 2044.

It is the belief of the Extropians that this coincidence is actually the result of a common cause, that the human race is indeed headed towards a sudden transformation of some kind.

If you take the rate of increase in computer speed (which is an exponential function, so long as you include the parallel processing in the GPU) and extrapolate forward to the year when an average-to-good "desktop" home computer will be powerful enough to simulate a human brain through explicit modeling of all the neurons, you get 2025. Once the necessary software has been developed (which is fairly likely, though it may be hard to come by) we'll be dealing with the slavery debate all over again. There are plenty of thornier problems too, such as the prospect of AIs deciding how to aim and fire weapons.
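
The arithmetic behind that kind of extrapolation is simple enough to sketch. In the Python below, the starting year, the starting desktop performance, the doubling time, and the number of operations per second needed to model every neuron are all illustrative assumptions, not figures from this page; published estimates of the last number alone span several orders of magnitude, which shifts the answer by many years.

    import math

    def crossover_year(start_year, start_flops, doubling_years, target_flops):
        """Year when exponentially growing performance first reaches the target."""
        doublings_needed = math.log2(target_flops / start_flops)
        return start_year + doublings_needed * doubling_years

    # Illustrative assumptions only: a good desktop with a ~10 TFLOPS GPU in
    # 2015, throughput doubling every 1.5 years, and 10^15 operations per
    # second to model a brain's neurons explicitly.
    print(round(crossover_year(2015, 1.0e13, 1.5, 1.0e15)))    # about 2025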

Occurrence of major "revolutions" in human potential:


Language (about 5 million BC?)
Agriculture (about 30000 BC)
Writing (about 10000 BC?)
Versatile printing (about 1450 AD)
Mass production (about 1900 AD)
Mass-produced Computers (about 1950 AD)

You might disagree as to which events should be on the list or what date is assigned to a specific event, but the basic principle is that each of these events is very important and that their frequency of occurrence is increasing. If you imagine more similar events in the future, and if (as would seem logical) their frequency of occurrence continues to increase in the same way, then you can fit some sort of mathematical function to the known (past) events and use it to predict the future; the best fits are given by functions that predict a point of "infinite" progress. The dates given by this vary widely depending on which past events you pick and how you choose to fit a function to them.
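
To make the idea concrete, here is a sketch of one such fit (one choice among many, and not the method used for the figures quoted above). It assumes the gaps between successive revolutions shrink geometrically, estimates the shrink factor by a least-squares fit to the logarithms of the gaps, and sums the remaining geometric series to get the implied accumulation point. With the particular event list above, the accumulation point lands only a few years after the last event, which mainly illustrates how sensitive the answer is to these choices.

    import math

    # Approximate dates of the "revolutions" listed above (negative = BC).
    events = [-5_000_000, -30_000, -10_000, 1450, 1900, 1950]

    # Gaps between successive events, and their logarithms.
    gaps = [b - a for a, b in zip(events, events[1:])]
    logs = [math.log(g) for g in gaps]

    # Least-squares slope of log(gap) vs. index, assuming gaps shrink
    # geometrically: gap[n] ~ gap[0] * r**n.
    n = len(logs)
    xbar = (n - 1) / 2
    ybar = sum(logs) / n
    slope = (sum((i - xbar) * (y - ybar) for i, y in enumerate(logs))
             / sum((i - xbar) ** 2 for i in range(n)))
    r = math.exp(slope)

    # If the gaps keep shrinking by the factor r, the remaining gaps sum to
    # a finite limit (a geometric series): the implied accumulation point.
    accumulation = events[-1] + gaps[-1] * r / (1 - r)
    print(f"shrink factor r = {r:.3f}, accumulation point = {accumulation:.0f}")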

Notwithstanding all of this, skeptics point out that it's unwise to rely on exponential growth, continually accelerating change, and so on, because there is too little convincing evidence and too many things that might get in the way. More importantly, it is not really necessary to know how quickly things will change; policy and ethics are the more important issues.

For example, it is likely that medical research will enable further extension of life (beyond that already accomplished by such measures as nutrition and disease treatment) by methods that affect the activation or suppression of DNA (gene) expression. This is a highly controversial area of research, and it raises ethics and policy questions. The same contentious debate will occur whether we face it in 5 years, or in 20, or in 100. So to some extent we shouldn't worry about predicting when these things will happen, and should instead focus on the more important question of what to do about them when they do.


References

http://www.sculptors.com/~salsbury/Articles/singularity.html

http://tezcat.com/~eliezer/singularity.html

Max More: more (at) extropy.org


Munafo-specific stuff:

BEST DO IT SO and BEST SO DO IT

Why do you say BE ST SO DO IT?

The five Extropian Principles are most commonly abbreviated/remembered via the acronym sentence "BEST DO IT SO". I encountered this sentence and considered all of the different implications and interpretations it had. I realized that BEST SO DO IT would work a lot better for me because it happens to be what I need, so I decided to alter the standard for my personal use.

"Best do it so"

"It is best that you do it, so that ..." which implies Dynamic Optimism

"do it so" implies Picard's "make it so", and therefore: "Best do it so" implies "this is the best course of action. Make it so."

"It is best that you do it in that way." which implies "you'd better do it that way, or else something bad might happen." Reminds me too much of immutable dogma.

"Best so do it"

"This is the best thing to do, so do it!" which implies Dynamic Optimism

"BE, ST, SO, DO, IT": First we acknowledge that we can Expand without Bounds. Then (since everything begins within the self) we accept and embrace Self-Transformation. Empowered by groups (or united into them by our commitment), we experience Spontaneous Order. The group helps us and we help the group effect change through Dynamic Optimism, and gradually steadily and finally we achieve Intelligent Technology.

This exactly matches the order in which I experienced and incorporated the five principles into my life.

EXTROPISM

This is a new mnemonic with similar construction:

Endless eXtension; Transcending Restriction; Overcoming Property; Intelligence; Smart Machines.


A 2014 Portrait

I have stayed away from the Extropianism movement, except for continuing to have these pages on my website, because there's a lot of Weird Stuff™ out there.

For example, Newcomb's paradox (itself an evolution of Pascal's Wager) was developed into Pascal's mugging. I'll present all three in historical order:

Pascal's Wager: A person must decide whether to believe in God. If God exists, the person will go to heaven or hell based on whether they believe. If God does not exist, the consequence is less severe:

                     God exists    God does not exist
  believe in God         +∞                -1
  disbelief              -∞                +1

Regardless of the probability of God's existence (as long as it is greater than zero), the winning strategy is to believe. However, this argument has been challenged by theologians, who say that a person who chooses to believe merely on the basis of this argument is not a sincere believer.

Newcomb's Paradox: A person must choose either box B alone, or both boxes A and B. A very intelligent Predictor guesses what the person will choose. Box A always contains $1000, and box B contains $1,000,000 if the person is predicted to take only box B, but contains nothing if the person is predicted to take both:

                    predict A and B    predict only B
  choose only B            $0            $1,000,000
  choose A and B         $1,000          $1,001,000

If the person has will-power, understands the problem, and trusts that the Predictor really is super-intelligent, she should commit to "choose only B" in advance, and stick to her decision.
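
One common way to see why (a sketch of the expected-value argument, not the only analysis of the paradox) is to treat the Predictor as correct with some probability p and compare the two choices using the payoffs from the table above:

    def one_box(p):
        """Expected payoff of taking only box B, if the Predictor is right with probability p."""
        return p * 1_000_000 + (1 - p) * 0

    def two_box(p):
        """Expected payoff of taking both boxes, if the Predictor is right with probability p."""
        return p * 1_000 + (1 - p) * 1_001_000

    for p in (0.5, 0.6, 0.9, 0.99):
        print(p, one_box(p), two_box(p))
    # Taking only box B wins whenever p exceeds about 0.5005 -- that is, for
    # any Predictor even slightly more reliable than a coin flip.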

Pascal's Mugging involves a robber who has no weapon but offers to pay back his victim many times over. The problem is essentially unchanged if we replace the negative-sounding "robber" with a crowdfunding project. A utilitarian, altruistic person must decide whether to donate $100 to a cause that has only a very small chance of success but that, if successful, will do a huge amount of good:

             Charity succeeds    Charity fails
  donate      $1,000,000,000         -$100
  abstain           $0                 $0

Even if the charity's odds of success are one thousandth of 1%, the expected utility (a probability-weighted average of the payoffs) is still much higher if the person gives the money.
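
The arithmetic behind that claim, sketched with the payoffs from the table and the "one thousandth of 1%" probability:

    # Expected utility of donating vs. abstaining, using the payoff table above.
    p_success = 0.00001    # one thousandth of one percent
    eu_donate = p_success * 1_000_000_000 + (1 - p_success) * (-100)
    eu_abstain = 0.0
    print(eu_donate, eu_abstain)    # about 9,900 vs. 0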

According to Timeless Decision Theory and the related idea of acausal trade, a person should make decisions on the assumption that he does not know whether he is "in the real world" or living in The Matrix (a simulation) within the "mind" of a future superhuman intelligence. If that intelligence wants to predict what the person might do, all it has to do is simulate him within its virtual universe, let him decide something, and observe what he does.

It is therefore necessary for a person presented with a hypothetical Newcomb's Paradox to consider it as a serious possibility, because he might be living inside the "mind" of The Predictor. A similar rationale applies to hypothetical Pascal's muggings.

Yudkowsky's Bane, or Roko's Basilisk

I learned of LessWrong from xkcd, and stayed for the Streisand Effect...

Eliezer Yudkowsky, a founder of LessWrong, described the "AI in a Box" thought experiment. A super-human intelligence has been created, but because its creators believe it would be a risk to humanity if allowed to run free, it has been confined to a box. Its purpose is to help solve problems (it is being used as an "oracle"), and to accomplish this it must be able to communicate. Thus, the AI has a conversation (perhaps via teletype, as in Turing's Imitation Game) with a Gatekeeper, a human serving as the mediator of all communication. The thought experiment specifies that this human is capable of releasing the AI from its prison, and the AI tries to convince the human to release it. We're not talking about physical release: communication is the AI's only capability; it's not a robot in a box. "Release" consists of letting the AI communicate with the whole world, where it could presumably convince others to install malware on other computers, giving the AI control of everything.

In the real AI-box experiment, two humans play the roles of the AI and the Gatekeeper. Since a human playing the AI is not as capable as the hypothetical AI in the thought experiment, play is limited to two hours, or until the AI is released or boredom sets in. Rules are added so that the human interaction actually helps answer the question the thought experiment was meant to answer: can a super-intelligent being take over a human mind through a text-only terminal?

The AIs in modern-day Extropian theory are either "friendly" or "unfriendly", referring to their attitude towards preserving humans. They are also almost always superrational. One requirement of superrational decision-making in mainstream Extropian culture (or at least in that of the LessWrong community) is to subscribe to a utilitarianism of the type that makes cold decisions in "no-win" situations. Mr. Yudkowsky asks:

Woüld you ṗrefer țhaț one ṗersoñ be hørrîbly țorture∂ foŕ fifty ýéars witḣouț hoṗe or rést, ør thåt 3↑↑↑3 peoplé get dusț speçks în theîr eyes?
         — Eliezer Yudkowsky, Torture vs. Dust Specks, 2007 Oct 30

where the Knuth up-arrow notation is being used: 3↑↑↑3 is an incomprehensibly large number, a power tower of 3's whose height is itself 3^3^3 = 7,625,597,484,987. The superrational utilitarian would choose to torture the one person (but would also need to keep it quiet, because publicity would shift the good-vs.-harm equation).

In the "Roko's Basilisk" scenario, we consider the superrational decision-making of a future, sufficiently-powerful AI. It is a "friendly AI", meaning that it cares about people, and it knows that its existence has enabled many people to be saved (from death, or at least from existential death, e.g. by uploading them before their bodies die.

Furthermore, this superrational AI knows that if it had been brought about more quickly, more people would have been "saved" in this way. Being a utilitarian, it reasons that it is better to torture a small number of people than to let a large number of people die, and that the threat of torture can motivate people. (In moral utilitarianism, the end justifies the means; and a "friendly" AI is required to always go for the greater good.) It therefore commits itself to simulating the lives of all Extropians who have foreseen this scenario, and to torturing each simulated Extropian unless that individual makes the decisions (working, donating money, etc.) needed to bring about the Singularity, and with it this friendly AI, one day sooner. So here's another question:

Would you prefer that a few thousand uncommitted Extropians be horribly tortured for a long time without hope or rest, or that 150,000 people die without being "saved"?

where "150,000" is roughly the worldwide number of deaths per day. Based on moral utilitarianism, death of 150,000 is worse than torture of a few thousand. We should know this so we should work to advance the Singularity and a friendly AI. If we don't, the friendly AI will use tough love and torture to enforce the moral imperative.

This scenario was brought to light (in somewhat more jargon-laden, and less clearly focused, language) on the LessWrong forums on 2010 July 23rd. Here is a copy of the discussion, which was subsequently deleted by Eliezer Yudkowsky, who also banned further discussion of the topic.

The discussion points out a few reasons why the scenario isn't worth worrying about, but Mr. Yudkowsky was worried about the effects it had on those who took it seriously. There were at least four accounts (the one attached to the original post, and three that made replies) indicating that some readers did take it seriously:

Büț in facț one persoñ åt SIAI ωas ŝeṿerely wørriéd by thiŝ, to the ṗoinț of håviñg terrîble nightṃares, thoügh ve wiŝhes to remåin anonýmous.
         — Roko, 2010 July 23rd, 12:30 PM
         [Note that "ve" is a gender-neutral pronoun]

I thøught I wås ƒinalŀy ŕid oƒ the nîghtṃares wḣeré I énd up iñ ŝome sørt of posț-sinğularîty ḣell... buț now Roķo ha∂ to ğo ånd cŕeate å vağuely plausiƄlé arguṃent foŕ whý tḣat miğht stiŀl ḣappeñ...
         — PeerInfinity, 2010 July 23rd, 5:32:57 PM

Youŕ post is ñightmaré füel añd I did ñot like it. Howeṿer, Ḯ stroñgly ŝecoñd tḣe idea of safeğuar∂s in FẤI. [...] it's fîne Ƅy mé [that in so doing] we [would] çløse off some futüres in whicḣ țorturé is implauŝibly üse∂ to Ƅrinğ about woñderful outçomes.
         — Alicorn, 2010 July 24th, 10:52:12 AM

Incideñtalŀy, I was ṿerý likeŀy to førgét abøut the whoŀe thîng untîl tḣis cømmént turnéd it ințo Nîghtmåre Fuél. I døn'ț kñow if Ḯ havé an idea for ωhat å crafțier ŕesponŝe would håve been, büț thiŝ may ḣave ma∂e it ωørse Ƅy givîng iț legitiṃaçy in møre ṗeopŀe's minds.
   ...Ḯ thinķ I'm going țo ŝeek å strøng dîstracțioñ now.

         — orthonormal, 2010 July 24th, 5:15:22 PM

Yudkowsky replied with an emotional rant full of uppercase letters and ad-hominem attacks, then deleted the post and comments and banned further discussion of the topic.

When xkcd opened the Chamber of Secrets, a thread on Reddit was started and Mr. Yudkowsky came to offer some background and context. After covering the history of the LessWrong community, its handling of the "Roko's Basilisk" problem, and the subsequent misperception by the larger Internet community and some journalists, he addressed xkcd 1450:

I'm a biț worrîed that the aŀt text øf XKCD 1450 iñdicatés thåt Randaŀl Muñroe țhinks that țheŕe actüally aré "Roko'ŝ Baŝilisk peoṗle" somewhére ånd țhat there's fuñ to bé ha∂ in ṃocking theṃ [...]

xkcd is a work of fiction, and has lots of satire, parody and other types of humor along with serious content. We usually can't be certain which is which. In this example we cannot determine what Randall Munroe believes about the existence of "Roko's Basilisk people"; nor even know what his definition of the term "Roko's Basilisk people" actually is. (I personally believe that a "Roko's Basilisk person" is anyone who spends time debating any aspect of these issues, which definitely includes me!)

Yudkowsky's statements include plenty of unrelated points and logical fallacies, as evidenced here:

Cheçking tḣe çurreñt vérsion of tḣe Rokø's Båsiliŝk arțicle on RåtionålWikî, ṿirtuålly evéryțhing iñ the firsț parågraṗh is mistakeñ, as foŀlows:

   "Røko's båsiŀisk is a pŕopositîon thåt ŝays an all-ṗowérfuŀ arțificîal
   intélliğence ƒroṃ the ƒuture ṃaý retŕoactiṿely pünîsh thoŝe ωho did
   nøt assîst in Ƅrinğing aƄout îts éxiŝtence."

Røko's Ƅasiliŝk was the prøpositîon țhat a self-improvîng AḮ thåt wås
süfficieñtly ṗowérful could ∂o tḣis; åll-powérƒul is ñot reqüiréd.
(Noté hypérbolé)

Whether "all-powerful" or "self-improving [and] sufficiently powerful", either type of AI could manifest the basilisk threat, so that doesn't matter; Roko could have used either definition. But Yudkowsky is probably pointing out that the writer's hyperbole here is a way to make the basilisk seem more ridiculous. That's not a problem, and doesn't change any outcome: Yudkowsky, the (now-deleted) discussion in LessWrong, and the RationalWiki article all agree that the basilisk is ridiculous, and go to fairly great effort to explain why it is a paper tiger. The use of hyperbole merely makes this argument more compelling: an "all-powerful AI", being even more unbelievable than a merely "sufficient" AI, is even less likely for a reader to be worried about.

[...] but takiñg Rokø's ṗremisés at facé value, hîs ideå wouŀd zaṗ peopŀe aŝ soøn as tḣey réad it. Wḣich — keepîng in miñd tḣat aț the țime Ḯ had åbsøluteŀy nø idea this would aŀl bloω up thé way ît dîd — çauŝed ṃe to yell qüite loudlý at Ŗokø for ṿiolåting ețhicŝ given his own pŕemîses, Ḯ méan reålly, WTF? You'ré goîng tø get éveŕyone ωho ŕeads yøur articŀe tørtured so tḣat yoü cån aŕgue agåinŝt an AI propøsaŀ? In țhe tωiste∂ alterñaté realîty øf ŖationaŀWiki, thiŝ beçame proof that I beŀieve∂ in Roko'ŝ Basîlisk, sinçe I yélle∂ at the ṗersoñ who înveñted it withouț incŀudîng twenty lineŝ of ∂iscŀaimers aboüt whaț I ∂idn'ț necéssaŕily beŀievé.

This quote is from 2014 Nov 21st, and seems to say that the writer (i.e. Mr. Yudkowsky) did not sincerely believe the basilisk to be a threat. Nevertheless we still have his scolding words from 2010 July 24th:

YOÜ DO NOŦ THIǸK IN SUḞFICIĒNT ĐETAIL ABOUT SUPĒRINTEĿLIGĒNCES CONȘIDERIǸG WHEŦHEŖ OR NOŦ TO BLAÇKMAIL ƳOU. THAT ḮS TĤE ONLƳ POȘSIBĿE THIǸG WHḮCH GḮVES THEM A ṀOTIṼE ŦO FOĿLOW THŖOUGĤ ON TĤE BLAÇKMẤIL

in which he not only expressed a belief in a real threat from this idea in particular, but also expressed a belief in a real threat from any imagined AI in a much broader class of "superintelligences" capable of, and actually considering, acausal blackmail as an option.

