
Notable Properties of Specific Numbers    



This is 2 to the power of the reciprocal of the fine-structure constant, 137.0359..., using the CODATA 2014 value of the latter. It is a simple use of the popular fine-structure constant to produce a value close to the Dirac ratio 10^40. See also 3.377×10^38.

1.15868...×10^42 = 64! / (32!×8!^2×2!^4×2^4)

This is (a corrected value for) the number of possible chess positions, originally given by Shannon in the 1950 article "Programming a Computer for Playing Chess" (Phil. Mag. 41, 256-275). The formula is based on the idea that you can theoretically arrange all 32 pieces in any position whatsoever (giving 64!/32!) but that all pawns of a given colour are equivalent (8! for each colour), as is each pair of rooks (2!^2) and each pair of knights (another 2!^2); the bishops are not interchangeable but each has only 32 squares to choose from (2^4). However, this is inaccurate for a number of reasons. First and most important, a pawn cannot switch columns (files), or move past an opposing pawn in its file, unless it captures. The more captures take place, the more flexibility the pawns have, but each capture removes a piece from the board, which decreases the number of board positions. Also, the possibility of pawn promotion increases the number of combinations somewhat. A far better estimate is that by John Tromp.
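Shannon's counting argument can be reproduced with exact integer arithmetic; here is a minimal sketch in Python (the variable names are mine, not Shannon's):

```python
from math import factorial

# Place all 32 pieces anywhere (64!/32!), then divide out equivalent
# arrangements: 8! per colour for pawns, 2! per colour for the rook pair
# and for the knight pair (2!^4 total), and 2^4 because each of the 4
# bishops is confined to the 32 squares of one colour.
positions = factorial(64) // (
    factorial(32) * factorial(8)**2 * factorial(2)**4 * 2**4
)

print(positions)  # a 43-digit number beginning 115868...
```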

The number of possible chess games is much higher. See also 765 and 2.081681...×10^170.

20988936657440586486151264256610222593863921 = (2^148+1)/17 ≈ 2.098893665744×10^43

In July 1951 Ferrier found this 44-digit prime using a mechanical desk calculator. It became the largest known prime, breaking the record set by Lucas in 1876. This record did not stand long; it was broken by Miller and Wheeler in the same month. [34]

63976656348486725806862358322168575784124416 ≈ 6.397665...×10^43

This is 4472^12, and is "nearly equal" to 3987^12 + 4365^12: it is a "near miss" for Fermat's Last Theorem. The numbers appear in the Simpsons episode "The Wizard of Evergreen Terrace". See also 8712.


The value of the number called zài in Chinese. See also 10^4096.

393050634124102232869567034555427371542904832 ≈ 3.9305×10^44

This is 141×2^141+1, the smallest number of the form n×2^n+1 with n greater than 1 that is prime (n=1 gives the prime 3, but that trivial case is usually excluded). Cullen (the same one after whom the Cullen numbers are named) investigated numbers of this form in 1905.
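This claim can be checked by brute force. The sketch below uses a Miller-Rabin probable-prime test (a standard probabilistic method, not anything Cullen had available; at these sizes it is reliable for all practical purposes but not rigorous):

```python
def is_probable_prime(n: int) -> bool:
    """Miller-Rabin test with a fixed set of small prime bases."""
    if n < 2:
        return False
    bases = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37]
    for p in bases:
        if n % p == 0:
            return n == p
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for a in bases:
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = x * x % n
            if x == n - 1:
                break
        else:
            return False
    return True

# Smallest n > 1 making n*2^n + 1 prime (n = 1 gives the prime 3 trivially).
cullen = next(n for n in range(2, 1000) if is_probable_prime(n * 2**n + 1))
print(cullen)  # 141
```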

824792557184288824246737061810550733633916929 = 3×(7×3^92−1)/2 ≈ 8.247925...×10^44

This is a lower bound found by Milton Green for the value of BB(8), where BB(n) is the busy beaver function.

7.4011968415649×10^45 = 7!×3^6 × 24! × 24!/24^6 = 7401196841564901869874093974498574336000000000

(The 4x4x4 Rubik's cube)

The number of ways to arrange a 4×4×4 Rubik's Cube. The corner cubelets have the same number of combinations as the 2×2×2 cube (see 3674160). There are 24 edge pieces, which can be put in any of the 24! ≈ 6.2×10^23 permutations. There are 24 centre pieces — these would have 24! permutations, except for the fact that the four pieces of a given colour are indistinguishable from each other; so there are 24!/(4!^6) = 24!/24^6 combinations for those pieces.
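The whole product can be computed exactly (a quick sketch; it reproduces the 46-digit value given above):

```python
from math import factorial

corners = factorial(7) * 3**6        # same count as the 2x2x2 cube: 3674160
edges   = factorial(24)              # 24 distinguishable edge pieces
centres = factorial(24) // 24**6     # 24 centres, 4 indistinguishable per colour (4!^6 = 24^6)

total = corners * edges * centres
print(total)
```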

See also 3674160, 4.3252×10^19, 2.8287×10^74, 1.5715×10^116, and 1.9501×10^160.


Randall Munroe made this estimate of the number of "meaningfully different" 140-character Twitter messages in English, using Shannon's estimate [136] of roughly 1.1 bits of information per letter (Twitter has since increased the Tweet length limit to 280, so it would now be about 5.21×10^92).

See also 2.45995...×10^200, 10^800, and 4×10^2254.


An upper bound on the number of possible chess "diagrams" (a board configuration together with the facts of whose turn it is, who still has the option of castling, and any available en passant capture), computed by John Tromp. This estimate is better than that of Will Entriken and far better than mine.


The amount of energy (in joules, or kg·(m/s)^2) released in the "GW150914" event (the merger of two co-orbiting black holes) detected by the LIGO gravitational wave experiment on 2015 Sep 14. This is 3 times the mass of the Sun times the speed of light squared. This amount of energy was released in less than 1 second, with a peak power about 50 times as great as that of all of the stars in the observable universe combined.

2054221614063184107682218077003539824552559296000 = 2^9×3^5×5^3×7^2×11^2×13^2×17^2×19×23×29×31×37×41×43×47×53×59×61×67×71×73×79×83×89×97×101 ≈ 2.054×10^48

The smallest number that has at least 10^10 distinct divisors. See also 12, 840, 45360, 720720, 3603600, 245044800, 278914005382139703576000 and 4.57936×10^917.
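The divisor count follows from the factorization: a number p1^e1 × ... × pk^ek has (e1+1)×...×(ek+1) divisors. A quick check of the exponents above:

```python
# Exponents from the factorization: 2^9, 3^5, 5^3, then 7, 11, 13, 17 squared,
# then the 19 primes from 19 through 101, each to the first power.
exponents = [9, 5, 3, 2, 2, 2, 2] + [1] * 19

divisors = 1
for e in exponents:
    divisors *= e + 1

print(divisors)  # 10192158720, which is indeed at least 10^10
```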


An upper bound on the number of possible chess "diagrams" (a board configuration together with the facts of whose turn it is, who still has the option of castling, and any available en passant capture), computed by Will Entriken in 2006. This estimate is more carefully considered than mine; the estimate by John Tromp is better still.


(chess diagrams, by my estimate)

This is a simple upper bound on the number of possible chess diagrams ("positions" together with the knowledge of whose turn it is, for whom castling is still permitted, and where en passant might occur). It is computed in a similar manner to Shannon's estimate of 1.15868...×10^42. It allows between 2 and 32 pieces in play, with no more than 16 of one colour, including exactly one king of each colour, and up to 8 pawns of each colour (any of which might have been promoted to another piece). It is higher than Shannon's estimate because it allows pawn promotion, but is unrealistic because (among other reasons) one cannot promote all pawns without first capturing some other pieces.

The number of possible chess games is much higher. See also 765, 8.065817...×10^67 and 2.081681...×10^170.


The length of a (Julian) year in Planck units. This is also the length of a light year in Planck length units. This can be used to convert the universe's age and size to Planck units. (Neither the age nor the size is known to sufficient precision for the discrepancy between the Julian year and other years, such as the tropical year, to make any difference.)

See also 5878625373183.6.

808017424794512875886459904961710757005754368000000000 = 2^46 × 3^20 × 5^9 × 7^6 × 11^2 × 13^3 × 17 × 19 × 23 × 29 × 31 × 41 × 47 × 59 × 71 ≈ 8.08...×10^53

(the Monster group)

This is the "order" (number of elements) in the largest sporadic finite simple group, called the "Monster group" or the Fischer-Griess group.

(Some background: A "group" can be visualised as a set of transformations, e.g. rotations and reflections, that belong to an N-dimensional geometric structure such as a crystal lattice, or Rubik's Cube. A "simple" group has no nontrivial "normal subgroups", which are subsets that themselves form a group and are preserved by the group's own transformations; a "sporadic" group is one that does not fit into any of the infinite families (cyclic of prime order, alternating, and the groups of Lie type).)

See also 196883.

10^57 ± 20%

There are approximately 10^57 neutrons in a neutron star. Neutron stars (by current estimates) range anywhere from 1.35 to 2.1 times the mass of the Sun. The mass of the Sun is about 1.99×10^30 kg, and the number of neutrons in a kilogram of neutrons is about 1000×6.02×10^23, so the number of neutrons in a neutron star ranges between about 8.1×10^56 and 1.26×10^57. The upper and lower mass limits are a bit uncertain, so we can safely just call it "10^57 plus or minus 20 percent".

Since a neutron star is made up (almost) entirely of neutrons, and about half of these were protons that have combined with electrons via a nuclear interaction similar to electron capture, one could think of a neutron star as being the nucleus of element number N, where N is anywhere in this range near 10^57. Using systematic element names, we could whimsically say that a "nucleus" of

unniltripenthexhexoctuntriunquadunpentennbihexpenttrinilquadhextritrisepttriseptoctniloctbiunseptununoctpentbihexquadhexoctbibihexnilseptseptennocthexenntrioctquadquadnilnilunium

is a neutron star.


Another large number that appears in the Lotus sutra texts of Mahayana Buddhism, as the word A-so-gi (あそぎ). See also 10^11.


(age of the Universe in Planck units)

An approximate value for the age of the universe in Planck time units:

r = 13.72×10^9 × 365.25 × 24 × 3600 / 5.39×10^-44
  ≈ 8.03×10^60

For various reasons, this number is not equal to the "radius", nor is it exactly 1/3 the radius of the visible universe. However, for rough calculations of things like the current volume and space-time volume, and particularly for larger derived values like the number of alternate universes, it is more than adequate.

See also 10^40.


(radius of the visible Universe in Planck units)

An estimate of the radius of the "visible" universe in Planck length units. This is 46000000000 × 9.46×10^15 / 1.616252×10^-35. It is not simply the "radius" in light-years corresponding to the age of the universe in years, for reasons explained in the universe size entry. It also accounts for changes in the rate of the universe's expansion, and the amount of its "curvature", according to the Lambda-CDM model. See also 10^122.



Archimedes, in his writing Psammites (better known as The Sand Reckoner), estimated the size of the universe according to the heliocentric model of Aristarchus, and how many grains of sand would fit in it. He arrived at a value equivalent to one vigintillion, or 10^63. Even more impressive, he described a system of numbers extending as high as 10^(8×10^16). Curiously, the number of protons in those 10^63 grains of sand is very nearly equal to the number of protons in the visible universe (the Eddington Number), so Archimedes got the mass right even if he was a bit low regarding the volume.

The word vigintillion is one of the number-names that had to be extrapolated by others based on the names established by Chuquet, and one of the few that appear in almost every dictionary; see this discussion, and see also 10^33, 10^303, 10^3003 and 10^3000003.

Probably because it was the largest number-word in the dictionary, H. P. Lovecraft used this number in two of his stories, including the 1926 "The Call of Cthulhu". (See also octillion.)

(Personal: For a while during 3rd grade this was the largest number I knew and on a few occasions I wrote it in the sand during recess: 1,000,000,000...,000,000. (counting out 21 sets of zeros). A mean kid would follow and wipe it out.)

2000000000000000000000000002000000000000000000000002000000002293 ≈ 2×10^63

The "alphabetically last prime" found by Knuth and Miller's fall 1980 CS 204 class at Stanford [147]. With commas the number is 2,000,000,000,000,000,000,000,000,002,000,000,000,000,000,000,000,002,000,000,002,293. Its name in American-style English (without using the word "and" anywhere) is "two vigintillion two undecillion two trillion two thousand two hundred ninety three". This is alphabetically (i.e. lexicographically) last only if we assume that as-yet-unnamed larger numbers would not have any name that comes after "vigintillion". Now that the Conway-Wechsler naming system has been published [152], we have 2×10^963 which is "two vigintitrecentillion", so there would certainly be a new winner if this idiosyncratic competition were ever staged again.
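The structure of the numeral is easy to confirm from its name (a quick sketch; this does not re-verify the primality claim):

```python
# "two vigintillion two undecillion two trillion two thousand
#  two hundred ninety three"
n = 2 * 10**63 + 2 * 10**36 + 2 * 10**12 + 2 * 10**3 + 293

print(f"{n:,}")        # the comma-grouped numeral quoted in the entry
print(len(str(n)))     # 64 digits
```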

See also 8018018851 and 2.135987...×10^96.


In this Numberphile video, Tony Padilla first asks Brady Haran to estimate how large a number would need to be, such that if that number were chosen at random, it would be unequal to any number that any human has thought of:

Can you think of a number, that no-one, in the entire history of humanity, has ever thought of?

It is clear that they both agree that the domain of the problem is restricted to finite positive integers. To this first phrasing of the question Brady replies that a randomly selected integer of "nine or ten digits" should suffice. Padilla (correctly) replies that this would be too small, and re-states the problem (equivalently):

If you [...] pick a random number that is bigger than this value (showing Brady the number "1.76×10^67" which he has written down), then there's a ninety-nine percent chance that that number has never been thought of by anybody ever in the history of humanity.

Padilla then works through the calculation of this estimate. During this discussion it becomes clear that he is now addressing the question of how large a number would need to be, such that if that number were chosen at random, it would be greater than any number that any human has thought of (near the end though, it seems he is drifting back to the first question).

1.76×10^67 is much higher than needed for the original "unequal to" version of the question. It is also not a correct answer to the "greater than" version of the question.

To get a realistic estimate for the original question, most of Padilla's derivation can be followed: A typical person thinks of a number every 2 minutes of their waking life, and with an average lifetime of 73.2 years, they would think of 73.2×365.25×24×(2/3)×30 = 12833424 ≈ 1.28×10^7 numbers during their lifetime. About 117000000000 = 1.17×10^11 people have ever lived, so we get about 1.5×10^18 numbers that have ever been thought of. Many are repetitions, the distribution is uneven, and there will be more and larger gaps as we go up to larger numbers, but it is clear that if we pick random integers from a range about 100 times this size, i.e. up to 1.5×10^20, then there's less than a 1 in 100 chance that the randomly-selected integer has ever been thought of.
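The arithmetic of this estimate is easy to reproduce (a sketch; the inputs are the same round figures used above):

```python
years    = 73.2        # average human lifetime
waking   = 2 / 3       # fraction of each day spent awake
per_hour = 30          # one number thought of every 2 minutes

per_life = years * 365.25 * 24 * waking * per_hour
total    = per_life * 117e9     # about 1.17e11 people have ever lived

print(round(per_life))   # 12833424
print(f"{total:.1e}")    # about 1.5e18 numbers ever thought of
```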

Regarding the second and much stronger "greater than" formulation of the question, it is clear that 1.76×10^67 is nowhere near big enough, as plenty of people have thought of googol and googolplex, both of which are larger. Padilla himself mentions Graham's number, apparently without realising that the answer to his problem would need to be larger than that. (He goes off the rails in his estimate by proposing that the numbers people have thought of can be realistically modeled by a power-law distribution ignoring outliers, citing Dorogovtsev et al. [189] and extrapolating one of their trend lines until it is close enough to the asymptote.)

More to the point, the task of thinking of something larger than anything anyone else has thought of is equivalent to winning a "world champion largest number contest". This is a task for the likes of Agustín Rayo and others who can handle formal logic, set theory, propositional calculus, Gödel numbering, and the like. "Googologists" continually try to formulate larger numbers building on Rayo's work and other similar results; most are only trivially larger, and among the good candidates it is hard to know which are well-formulated and which of those is largest. At any given moment there is only one person who holds the record for having thought of the largest (finite) integer, and it is very unlikely to be me or you.

See also 3112066128.

8.065817...×10^67 = 52!

The number of ways to shuffle a deck of 52 distinct playing cards. See also 158753389900, 635013559600, 2.235197...×10^27 and 4.519364...×10^46.

1.0066961655...×10^68 = 20!×3^19 × 30!×2^29 / 4

(the "Megaminx" puzzle)

This is the number of ways to arrange the pieces on a Megaminx, a dodecahedral puzzle similar in concept to Rubik's Cube. The reasons for the number are similar to those for the 3x3x3 Rubik's cube except for the parity factor, which is the division by 4. On the Megaminx, every turn performs an even permutation on the corners and an even permutation on the edges. Therefore, the total permutation of all the corners is always even, and likewise for the edges.
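The product can be evaluated exactly (a quick sketch of the formula in the heading):

```python
from math import factorial

# 20 corners (20! positions, 3^19 orientations) and 30 edges
# (30! positions, 2^29 flips), divided by 4 for the parity
# constraints described above.
total = (factorial(20) * 3**19 * factorial(30) * 2**29) // 4
print(total)  # a 69-digit number beginning 100669...
```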

See also 9.1197×10^262, 1.7989×10^571, and 7.7263×10^992.

2.8287094227774×10^74 = 8!×3^7 × 12!×2^10 × 24! × 24!/24^6 × 24!/24^6 = 282870942277741856536180333107150328293127731985672134721536000000000000000

(5x5x5 Rubik's cube)

The number of ways to arrange a 5×5×5 Rubik's Cube. Rotation of the centre cubelets is ignored because it would be invisible. The 8 corners and the 12 central edge pieces together combine for the same number of combinations as the 3×3×3 Rubik's Cube (see 4.3252×10^19). There are another 24 edge pieces, which can be freely placed into any of 24! ≈ 6.2×10^23 permutations. There are 48 movable centre pieces, in two groups (the ones closer to the corners, and the ones closer to the edge-centres). Each of these two groups of 24 has 24!/24^6 arrangements for the same reason as the group of 24 centre pieces of the 4×4×4 cube (see 7.4012×10^45).

See also 3674160, 4.3252×10^19, 7.4012×10^45, 1.5715×10^116, and 1.9501×10^160.

670059168204585168371476438927421112933837297640990904154667968000000000000 ≈ 6.700591...×10^74

Starting with x=2, find the smallest number that is larger than x and has exactly x divisors. Call this larger number x and repeat. This produces the sequence: 2, 3, 4, 6, 12, 60, 5040, 293318625600, ... (Sloane's A36460) and these first 8 values of x are all highly composite numbers. The next term in that sequence is this number 6.700591...×10^74, which is not highly composite. Its factorisation is: 2^18×3^16×5^12×7^10×11^6×13^6×17^4×19^4×23^2×29^2×31^2×37^2×41×43×47×53×59×61, which has precisely 293318625600 factors because 293318625600 is 2×2×2×2×2×2×3×3×3×3×5×5×7×7×11×13×17×19: each of these factors is diminished by 1, the order is reversed, and then they are used as the exponents of 2^18×3^16×...×59×61. Continuing the process gives 2.215011...×10^1428, then 5.086520...×10^189682, then 1.478489...×10^422328499.
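The first several terms can be verified by brute force, and the divisor count of the large term follows from its exponents (a sketch; the brute-force search is only feasible for the small terms):

```python
def num_divisors(n: int) -> int:
    count, d = 0, 1
    while d * d <= n:
        if n % d == 0:
            count += 1 if d * d == n else 2
        d += 1
    return count

# Starting with x = 2, repeatedly find the smallest number bigger than x
# that has exactly x divisors.
seq, x = [], 2
for _ in range(6):
    y = x + 1
    while num_divisors(y) != x:
        y += 1
    seq.append(y)
    x = y
print(seq)  # [3, 4, 6, 12, 60, 5040]

# Divisor count of the big term, from its exponents:
exponents = [18, 16, 12, 10, 6, 6, 4, 4, 2, 2, 2, 2] + [1] * 6
ndiv = 1
for e in exponents:
    ndiv *= e + 1
print(ndiv)  # 293318625600
```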

In the OEIS this sequence A36460 is called an "erroneous version of A9287", but Hal Switkay pointed out that A36460 simply needs to be defined properly (as I have done here) to include the initial 2.

5.2106440156792×10^78 = 180×(2^127−1)^2+1

This is a prime, found by Miller and Wheeler in July 1951. This discovery has the distinction of being the first time the record for the largest known prime was set by electronic computer. It broke the record set by Ferrier and was soon broken by Robinson. [34]

15747724136275002577605653961181555468044717914527116709366231425076185631031296 = 136×2^256 ≈ 1.574772...×10^79

See 3.149544×10^79.

31495448272550005155211307922363110936089435829054233418732462850152371262062592 = 2×136×2^256 = 17×2^260 ≈ 3.149544...×10^79

The Eddington Number

According to Arthur Eddington in his book Mathematical Theory of Relativity (1923, London, Cambridge University Press), the number of charged particles in the universe is exactly 2×136×2^256. In a 1939 lecture he stated this as follows:

I believe there are 15 747 724 136 275 002 577 605 653 961 181 555 468 044 717 914 527 116 709 366 231 425 076 185 631 031 296 protons in the universe and the same number of electrons.
      (Quoted in Wikipedia, "Eddington number")

136×2^256 is 1.574772...×10^79, and twice this number is 3.149544...×10^79. That would be the number of charged particles, but it is also close to the number of massive particles, because there are many more protons and electrons than neutrons (most of the matter in the universe is hydrogen). Note also that photons have no mass, and other charged particles like the muon were not yet known.
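Both forms of the number, and the quoted digits, can be checked directly:

```python
eddington = 136 * 2**256

# 2 x 136 = 272 = 17 x 2^4, so twice the number is also 17 x 2^260.
assert 2 * eddington == 17 * 2**260

print(eddington)   # the 80-digit number quoted in Eddington's 1939 lecture is 2x this
```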

This number is notable for being the largest specific integer ever thought to have a unique and tangible relationship to the physical world. (All larger numbers in physics are estimates and approximations.)

Eddington was interested in showing that the various physical constants (the speed of light, the gravitational constant, the mass of the electron, etc.) were not accidental but were determined in some way that could be computed exactly. One of these constants was the fine-structure constant.

In 1923 the fine-structure constant was known poorly enough that one could surmise that it is exactly 1/136. Eddington computed the number of particles in the universe from other measurements and observations and then found a simple mathematical formula based on integers that gave the same value. (When the fine-structure constant was later found to be closer to 1/137, Eddington repeated his work to make it fit that value. This hurt Eddington's reputation as a scientist, and he was jokingly called "Arthur Adding-one" by detractors.)

Eddington was shown to be wrong on other points. Many other estimates of the number of particles in the universe have been computed, all in the range from 1078 to 1080. Here is an example (which is way too simplistic for cosmologists but shows that the Eddington number was fairly close):

r = radius of visible universe
  = age of universe × speed of light
  = speed of light / Hubble constant
  = 1.42×10^26 meters

volume of universe = 4/3 π r^3
  = 1.2×10^79 cubic meters

average density of universe
  = 3 hydrogen atoms per cubic meter
  (from old models that give the minimum mass of a "closed" universe)

1 hydrogen atom = 4 particles (proton + electron; a proton is 3 quarks)
number of particles = 3 × 4 × 1.2×10^79 ≈ 1.4×10^80

If you include the various massless particles (photons, gravitons, other gauge particles, perhaps neutrinos) and virtual particles, the estimates become much greater. The only estimate I have been able to find gives the density of neutrinos in the cosmic background radiation as being 10^7 per cubic meter, which gives a value of 1.2×10^86 neutrinos in the visible universe.

The Eddington number is approximately the square of Paul Dirac's 10^40; see also 3.377×10^38.

37249792307686396442294904767024517674249157948208717533254799550970595875237705 ≈ 3.724979...×10^79

The number of legal positions in Go played on a 13×13 board (a popular size for beginners learning the game), computed by John Tromp (with Gunnar Farnebäck and Michal Koucký) in 2005 [234]; see OEIS sequence A94777.

See also 1.039191...×10^38 and 2.081681...×10^170.

2350988701644575015937473074444491355637331113544175043017503412556834518909454345703125 = 5^(5^3) ≈ 2.3509887016443...×10^87

In high school I wrote a special program (I think it was on a TRS-80) to calculate and display exact values of large exponents. Then I created tables of powerlogs (which are integers of the form A^(A^B), where A and B are also integers), written by hand in a notebook. This is the largest of about 30 really big numbers in that table. See also 1.0621842147×10^4990856845.


(photons in the universe)

Rough estimate of the number of cosmic microwave background photons in the visible universe. This is based on a figure of 3×10^8 photons per cubic meter, which can be computed with Planck's law from the cosmic background radiation temperature of 2.725 K. (See also Olbers' paradox.)


The number of possible 280-character Tweets in English, using Claude Shannon's estimate of 1.1 bits of information per character; see 2.28×10^46 for more.

2135987035920910082395021706169552114602704522356652769947041607822219725780640550022962086936576 = 2^320 ≈ 2.135987...×10^96

The "alphabetically last power of two" found by Knuth and Miller's fall 1980 CS 204 class at Stanford [147]. Its name in American-style English (without using the word "and" anywhere) is "two untrigintillion one hundred thirty five trigintillion nine hundred eighty seven novemvigintillion thirty five octovigintillion nine hundred twenty septenvigintillion nine hundred ten sexvigintillion eighty two quinvigintillion three hundred ninety five quattuorvigintillion twenty one trevigintillion seven hundred six duovigintillion one hundred sixty nine unvigintillion five hundred fifty two vigintillion one hundred fourteen novemdecillion six hundred two octodecillion seven hundred four septendecillion five hundred twenty two sexdecillion three hundred fifty six quindecillion six hundred fifty two quattuordecillion seven hundred sixty nine tredecillion nine hundred forty seven duodecillion forty one undecillion six hundred seven decillion eight hundred twenty two nonillion two hundred nineteen octillion seven hundred twenty five septillion seven hundred eighty sextillion six hundred forty quintillion five hundred fifty quadrillion twenty two trillion nine hundred sixty two billion eighty six million nine hundred thirty six thousand five hundred seventy six". With commas the number is 2,135,987,035,920,910,082,395,021,706,169,552,114,602,704,522,356,652,769,947,041,607,822,219,725,780,640,550,022,962,086,936,576.

It is interesting to consider how this would have been determined. Of course, the first part of the number's name is determined by the first digit, and the next part depends on the number's length (in digits). Notice that 2^210 is 1.6455...×10^63, "one vigintillion ...", while 2^211 is 3.2910...×10^63, "three vigintillion ...", so there is no eligible power of two starting with "two vigintillion ...". The same thing happens with 2^220 and 2^221, preventing any "two unvigintillion ...". That is why the winning number is 2^320. The Conway-Wechsler naming was almost certainly not considered, as it had not yet been published; if it were used, at the very least 2^400 = 2.5822...×10^120, "two viginticentillion ...", would beat this one. More importantly, as Conway-Wechsler is infinitely extensible, there would be ever-larger candidates following the pattern of using "vigintitrecentillion", which is beaten by "vigintitrecentillivigintitrecentillion", which is beaten by "vigintitrecentillivigintitrecentillivigintitrecentillion", and so on.
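The observation about leading digits can be verified by scanning all powers of two up to a googol (a quick sketch):

```python
# For each digit-length, collect the leading digits of the powers of two
# of that length. A name "two vigintillion ..." would need a 64-digit
# power of two whose leading digit is 2.
leading = {}
p = 1
while p < 10**100:
    leading.setdefault(len(str(p)), []).append(str(p)[0])
    p *= 2

print(leading[64])       # ['1', '3', '6'] -- 2^210, 2^211, 2^212; none starts with 2
print(str(2**320)[:7])   # 2135987
```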

See also 8018018851 and 2.000...×10^63.


(fundamental particles in the universe)

Some of the larger estimates of the number of particles in the known visible universe are around this value, and result from including photons, neutrinos and other invisible particles, but not the dark matter or dark energy. The actual number of particles in the universe may be much larger — for example, it might be that most of the universe is beyond our event horizon (redshift horizon). See also 7×10^22 and 1.1×10^89.

See also 3×10^23.

7777777777777777777777777777777777777777777777777777777777777777777777777777777777777777777777573553 ≈ 7.777777777...×10^99

The largest 100-digit prime whose digits are all prime, reported by Charles Greathouse in the Mersenne forum. As another forum member pointed out, we can expect there to be plenty of "all-prime-digits" primes, because for any N the "density" of primes with N digits is 1/(N ln(10)), and the number of ways to combine the digits 2, 3, 5, 7 into an N-digit number is 4^N; so we expect there to be about 4^N/(N ln(10)) primes with N digits that are all 2, 3, 5 or 7. This expression grows quite quickly.
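The small cases can be counted directly (a sketch using trial division, practical only for a few digits; the density formula overshoots or undershoots a bit at these sizes):

```python
from itertools import product

def is_prime(n: int) -> bool:
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def count_all_prime_digit_primes(ndigits: int) -> int:
    """Count ndigits-digit primes whose digits are all in {2, 3, 5, 7}."""
    return sum(is_prime(int("".join(t)))
               for t in product("2357", repeat=ndigits))

print(count_all_prime_digit_primes(2))  # 4   (23, 37, 53, 73)
print(count_all_prime_digit_primes(3))  # 15
```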

See also 2357.


Most pocket calculators max out at 9.9999999...×10^99, which is just below a googol (10^100, see next entry). See also 9.9999999...×10^999 and the computer overflow values starting with 3.4028236692093×10^38.



Main article: Googol and Googolplex

10^100, which can be called "10 duotrigintillion", is better known by the name googol. It can also be expressed 10^(10^2), (10^10)^10, or as the result of the lower hyper4 operator applied to 10 and 3.

The dimensionless entropy of a black hole whose mass is about half that of our Milky Way galaxy (or more precisely, 3.087×10^11 solar masses) would be about 10^100. As with all such "dimensionless entropy" numbers, it is proportional to the surface area of the black hole's event horizon, which is 4π times the square of the radius of the black hole, and that radius is proportional to the mass; and this is all done in Planck units, so the entropy would be 4π(3.087×10^11 × 1.98892(25)×10^30 / 2.176470(51)×10^-8)^2, which works out to 10^100.
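The arithmetic can be reproduced from the figures given (solar mass and Planck mass in kg, as above):

```python
import math

mass_kg   = 3.087e11 * 1.98892e30   # 3.087e11 solar masses, in kg
m_planck  = 2.176470e-8             # Planck mass, in kg

# Entropy ~ 4*pi*(M/m_Planck)^2 in Planck units
entropy = 4 * math.pi * (mass_kg / m_planck) ** 2
print(f"{entropy:.3e}")   # very close to 1e100
```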

See also googolplex and footnote 27.

10341796308487334800992832804222885104773611498499997696000000000000000000000000000000000000000000000 ≈ 1.034×10^100

This is the gear ratio achieved in the construction shown in this video by Brick Experiment Channel, combining most of the types of LEGO gears in a long reduction chain for a combined ratio of over a googol to 1. In the order shown in the video, the gear ratios multiply out to: 24/8 × 40/8 × 40/8 × 40/8 × 60/1 × 12/1 × 168/1 × (1 + 140/8) × 141 × 20/12 × (40/8)^20 × 20/12 × (24/1)^20 × 56/16 × (36/1)^10 × (40/1)^18 × 15/9 × 56/1.
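The product can be checked with exact rational arithmetic (a sketch; the ratios are transcribed from the list above):

```python
from fractions import Fraction as F

ratios = [F(24, 8), F(40, 8), F(40, 8), F(40, 8), F(60, 1), F(12, 1),
          F(168, 1), 1 + F(140, 8), F(141, 1), F(20, 12), F(40, 8)**20,
          F(20, 12), F(24, 1)**20, F(56, 16), F(36, 1)**10, F(40, 1)**18,
          F(15, 9), F(56, 1)]

total = F(1)
for r in ratios:
    total *= r

print(f"{float(total):.3e}")   # a bit over 1e100
```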

1.000...×10^101 = 10^101 + 3

This is a prime of the form 10^N+x where x is very small; it has 102 digits, of which 100 are 0's.


See also 1.000...×10^4180.


An estimate of the number of subatomic particles that it would take to fill all the space in the universe. (From Straight Dope)

See also 8.72×10^184 and 10^10^115.

1.5715285840103×10^116 = 7! × 3^6 × 24!^2 × (24!/4!^6)^4 = 157152858401024063281013959519483771508510790313968742344694684829502629887168573442107637760000000000000000000000000

(6x6x6 Rubik's cube)

The number of ways to arrange a 6×6×6 Rubik's Cube. The corner cubelets have the same number of combinations as the 2×2×2 cube (see 3674160). There are two groups of 24 edge pieces, and 4 groups of 24 centre pieces. Within each of these groups of 24 there are 24! ≈ 6.2×10^23 arrangements. The centre piece groups each have a factor of 4!^6 fewer combinations because they come in 6 different colours (with 4 of each colour) and pieces of the same colour are indistinguishable from each other.

This is over 10 million billion googol. Notice that it is only a little less than the number of ways to play a game of chess, and far greater than the number of valid chessboard positions. When making a move, you have 3×6=18 choices of what to turn and 3 choices of how far to turn it, for a total of 54 choices. This means that in order to get a reasonably good coverage of all the possible combinations, you have to make at least 67 moves to scramble the cube (since 54^67 ≈ 10^116). By contrast, a normal 3×3×3 Rubik's Cube can be scrambled in about 15 moves.

See also 3674160, 4.3252×10^19, 7.4012×10^45, 2.8287×10^74, and 1.9501×10^160.


(Chess games by Shannon's estimate)

This is the Shannon number, Claude Shannon's estimate of the number of chess games from his original 1950 paper on computer chess. It is calculated by the approximation 1000^40, based on the idea that at each move by White there are about √1000 ≈ 32 choices, to which Black has about 32 responses, and realistic games typically have about 40 moves by each player. If the players aren't trying to win, however, a game can go much longer (see 5898, 8848, and 10^12500).

Numberphile discusses this in their video "How many chess games are possible?". See also 26830, 1.15×10^42 and 10^10^50.

1.75677288...×10^120 ≈ 1756772880709135843168526079081025059614484630149557651477156021733236798970168550600274887650082354207129600000000000000 = (24!×32!)/2 × 16!/2 × 2^23 × (3!)^31 × 3 × (4!/2)^15 × 4

This is the number of combinations of a 3×3×3×3 "Rubik's tesseract", treated as if it were coloured similarly to the familiar 3×3×3 cube, i.e. with each of the 27×8 small "faces" painted a solid colour from a set of eight colours, and only the positions of those colours considered significant. 8 of the faces cannot move; the other 208 can move and allow for various rotations of the 81 "hypercubelets". The calculation is by Eric Balandraud [179].
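Balandraud's product can be evaluated exactly (a quick check that the formula matches the digits given):

```python
from math import factorial as f

# (24! x 32!)/2 x 16!/2 x 2^23 x (3!)^31 x 3 x (4!/2)^15 x 4
total = (f(24) * f(32)) // 2 * (f(16) // 2) * 2**23 * 6**31 * 3 * 12**15 * 4
print(total)  # a 121-digit number beginning 175677288...
```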


An estimate of the amount of entropy in the universe, based on black hole entropy.

The amount of entropy within a black hole increases as matter and energy enter it. If this is absolute and irreversible (i.e. no Hawking radiation), this strictly increases and cannot decrease, and it increases at a rate that results in the total amount of entropy in the black hole being proportional to the surface area of the sphere called the event horizon:

S_BH = A / (4 l_P^2)

S_BH represents the (Bekenstein-Hawking) entropy of the black hole, A is the event horizon area (4πr^2) and l_P is the Planck length. (The formula is from here.)

The total amount of entropy in the universe can be approximated by imagining the universe is like a black hole: a spherical region of a certain size, from which matter and energy cannot escape. This is true for a universe that is closed due to gravity (when using this simplification one needs to ignore dark energy which is known to be accelerating the expansion of the universe, and therefore means that the universe is not like a black hole with a fixed-size event horizon).

Using Fermi estimation, the universe entropy is therefore the square of its radius in Planck units, or about 10^122. Tony Padilla works it out in this video from Numberphile. As he points out, this is the limit on the amount of information or "data" of any kind that can exist in the universe, unless there is more going on that we don't know about, due to unknowns regarding vacuum energy, the possibility that the universe extends beyond our own causality event horizon, etc.
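The Fermi estimate can be reproduced from the figures in the entry for the radius of the visible universe (a sketch; the input values are approximations):

```python
import math

metres_per_ly = 9.46e15
planck_length = 1.616252e-35
radius = 46e9 * metres_per_ly / planck_length   # radius in Planck units, ~2.7e61

entropy = radius ** 2    # Fermi estimate: S ~ r^2 in Planck units
print(f"{entropy:.1e}")  # around 10^122
```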

35^80 ≈ 3.35×10^123

This is the version of the Shannon number (the estimated game-tree size of chess, from a typical branching factor of 35 over a game of about 80 half-moves) mentioned in a 2016 report on Google DeepMind's AlphaGo, the Go-playing program that beat a human champion that year. See also 250^150.


Another example of innumeracy, this time in the opposite direction from the Rubik's cube example. On the TV program "Voyage to Pandora", and also in a newsletter from NCSM (an organisation that, ironically, purports to provide "Leadership in Mathematics Education"), I found two slightly-different wordings of the following passage:

A flight to Alpha Centauri (closest star to Earth, other than the sun) on a regular space shuttle would take 900 years and would require a mass of fuel greater than the mass found in the visible universe, 10^137 kilograms. -- NCSM

To see how wrong this is, we can Do the Math™. Assuming that the fuel is being burned at a constant rate over the 900 years, the burn rate would be:

10^137 kg / 900 years = 4.37×10^128 kg every 124 seconds

I chose 124 seconds for the final figure because this is the period of time during which the actual Space Shuttle solid rocket boosters fire. If we assume that all of the fuel in the Shuttle, tank and boosters is used during this time, and knowing that its total launch mass is 2.04×106 kg, we can make a comparison:

4.37×10^128 / 2.04×10^6 = 2.14×10^122

This means that the NCSM article is hypothesizing that their "regular space shuttle" would be burning fuel over a billion trillion googol times as fast as the real Shuttle did!

Consider how big the spaceship would need to be. Clearly it would be nearly all fuel. If the 10^137 kg of fuel has a density like that of ordinary rocket fuel, then it would have a volume of 10^137 liters, or 10^134 cubic meters. A spherical fuel tank would have a radius of 2.88×10^44 meters, which is about a decillion times the radius of the inner solar system (see astronomical unit) and 7×10^27 times the distance of Alpha Centauri! Clearly, fitting all this fuel mass into a spaceship small enough to actually make the trip from Earth to Alpha Centauri would require a density so great that the entire thing would collapse into a black hole.
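The arithmetic in the last two paragraphs is easy to check; this sketch uses assumed round values for the astronomical unit and the distance to Alpha Centauri:

```python
import math

fuel_kg    = 1e137
seconds    = 900 * 365.25 * 24 * 3600            # 900 years in seconds
burn_124s  = fuel_kg / seconds * 124             # kg burned per 124 s, ~4.37e128
shuttle_kg = 2.04e6                              # Shuttle launch mass
ratio      = burn_124s / shuttle_kg              # ~2.14e122 times the real Shuttle

volume_m3 = 1e137 / 1000                         # 1e137 litres at ~1 kg/L
radius_m  = (3 * volume_m3 / (4 * math.pi)) ** (1 / 3)   # spherical tank, ~2.9e44 m

au_m        = 1.496e11                           # astronomical unit, metres (assumed)
alpha_cen_m = 4.37 * 9.461e15                    # ~4.37 light-years in metres (assumed)
print(radius_m / au_m, radius_m / alpha_cen_m)   # ~2e33 and ~7e27
```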

The TV program "Voyage to Pandora" also gives this factoid and credits it to a NASA study. Looking at that source, and knowing a bit about how multi-stage rockets work, the number makes a bit more sense. The number is an estimate of the weight of a Tsiolkovsky-style multistage rocket necessary to get to a speed of about 3 million miles per hour, which would be enough to get to Alpha Centauri in about 900 years. The additional assumptions are that: each stage of the multistage rocket weighs (say) twice as much as its payload (which comprises all of the stages that come after it); each stage is designed for a burn time of 500 seconds at an acceleration rate that can be endured by human passengers. At an acceleration rate of about 20 miles per hour each second, you need about 40 hours to reach a speed of 3 million miles per hour. Since each stage only lasts about 500 seconds, the multistage rocket ends up needing to have about 290 stages. If (as I stipulated before) each stage had about twice the mass of the stages above it, then the total mass of the rocket "at launch" would be about 3290 times the mass of its ultimate payload. That's where the unbelievable mass figure came from, and the reason it is so unbelievable is because the quotes (in the TV program and by NCSM) do not mention that the craft is not actually the size of "a regular space shuttle" but is a Tsiolkovsky-style exponentially-staged design with an incredible 290 stages.


In India's ancient writings there are many references to large numbers with names; some are hard to attach to a specific value because of multiple conflicting or ambiguous uses. One of the larger numbers given a name in India is asankhyeya, commonly said to be 10^140. In Knuth's -yllion naming system, 10^140 is one myriad myllion quintyllion; in the more mainstream Conway-Wechsler system, it is one hundred quinquadragintillion. See also 10^421 and 10^(3.7218×10^37).

6.8647976601306×10^156 = 2^521 - 1

This is the 13th Mersenne prime and the first to be found by electronic computer. It was discovered in 1952 by Robinson and breaks the record set by Lucas in 1876, although that record was also broken by the non-Mersenne primes (2^148+1)/17 and 180×(2^127-1)^2+1, which were found the year before. 34

1.95005511837...×10^160 = 8!×3^7 × 12!×2^10 × 24!^2 × (24!/24^6)^6

(7×7×7 Rubik's cube)

The number of ways to arrange a 7×7×7 Rubik's Cube. Rotation of the centre cubelets is ignored because it would be invisible. The 8 corners and the 12 central edge pieces together combine for the same number of combinations as the 3×3×3 Rubik's Cube (see 4.3252×10^19). There are two additional sets of edge pieces with 24 in each set, which can be freely placed into any of 24! ≈ 6.2×10^23 permutations. There are 144 movable centre pieces, in six groups (according to where they are in relation to the corners and the edge-centres). Each of these six groups of 24 has 24!/24^6 arrangements, for the same reason as the group of 24 centre pieces of the 4×4×4 cube (see 7.4012×10^45).

The exact number of combinations is 19500551183­7313078353­2912675401­9748794904­9926920434­3456715213­2912323232­7061354691­8006527871­2755853360­6823285517­1913731129­9993600000­0000000000­0000000000­0000000000.
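The formula can be evaluated exactly with big-integer arithmetic (the division 24!/24^6 is exact, so integer division is safe):

```python
from math import factorial as f

# 7×7×7 cube: corners × central edges × two wing sets × six centre groups
n = f(8) * 3**7 * f(12) * 2**10 * f(24)**2 * (f(24) // 24**6)**6
print(len(str(n)))    # 161 digits, i.e. about 1.95e160
```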

See also 3674160, 4.3252×10^19, 7.4012×10^45, 2.8287×10^74, 1.5715×10^116, 1.417×10^277, and 6.690926087×10^1054.


The number of legal positions in Go, computed by John Tromp (with Gunnar Farnebäck and Michal Koucký) in 2016 [234]; see OEIS sequence A094777. The exact number is 20816819938­1979984699­4786333448­6277028652­2453884530­5484256394­5682092741­9612738015­3785256484­5169851964­3907259916­0156281285­4608988831­4427129715­3193175577­3662039724­7064840935.

There are 3^(19×19) = 3^361 ≈ 1.74×10^172 ways to place white and black stones on the board, and it is easy to test any given pattern to see if it is a legal Go position. Random sampling and heuristics had been used some time earlier to compute the estimates 2.089×10170 and 4.63×10170. Finding the exact value takes a lot more work.
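A quick check of the sizes involved, using the exact count above rounded to 7 digits:

```python
total = 3 ** (19 * 19)             # 3^361, about 1.74e172 raw placements
legal = 2.081681e170               # Tromp's exact count, rounded
fraction = legal / float(total)    # about 1.2% of placements are legal
print(fraction)
```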

See also 1.039191...×10^38, 1.15×10^42, and 3.724979...×10^79.


An estimate of the number of legal positions in Go, computed by Achim Flammenkamp and posted to the USENET newsgroup on 1992 Sep 16:

I have made with Knuth's "heuristic sammple method", Math. of Comput. 1975, a new approch to get a more precise value for the number of go configurations. For a 19x19 board my best value is: 0.01197*3^361 with a varianz < 0.00001*3^361

The value 0.01197(1)×3^361 is equivalent to 2.0838(17)×10^170. This improved on the 1972 estimate of "10^170" found in item 96 of HAKMEM72.


An estimate of the number of possible positions in Go, played on a 19×19 board. According to Wolfram MathWorld, this estimate came from "Beeler et al. 1972, Flammenkamp" but their citations are wrong. Beeler 1972 is a reference to item 96 of HAKMEM72, which indeed contains an estimate of "10^170". The "Flammenkamp" reference points to this page, which as far as I know has never included any estimate of the number of legal Go positions, but it is most probably meant to refer to the value posted to USENET by Achim Flammenkamp in 1992. I am presently unsure of the origin of this "4.63..." estimate.

See also 1.15×10^42, 10^120, and 2.081681...×10^170.

1.0130653244...×10^177 = 2^588

The number of years in the longest time-period in the cosmology of Jainism, a religion and philosophy from India in the 6th century B.C. (From an article by J J O'Connor and E F Robertson)


(volume of the visible Universe in Planck units)

The current volume of the (observable) universe in Planck units. See also 8.03×10^60, 2.75×10^61, 1.75×10^245, and 5.1843×10^22652507173.


This is 8400000^28; see 5.732470...×10^207.


Brady Haran and the good folks over at Numberphile are always coming up with new challenges. One day they opened a tin of "Numberetti", a pasta product in which the bits of pasta are shaped like the digits 0 through 9. On the can is the enticing challenge, "What's the biggest number you can make?", and inside were the following 195 digestible digits:


As Landon Curt Noll's "English name of a number" site will readily tell you, this number in words is:

nine hundred ninety nine tresexagintillion,
nine hundred ninety nine duosexagintillion,
... (51 lines omitted) ...
one hundred eleven decillion,
one hundred nonillion

You can see Numberphile's video all about this here: spaghetti numbers.

As the video mentions, you can make a far greater number by making the digits into a big tower of exponents, like my 2345... example at 6pt1.86×10^3148880079. By the power-tower method, you'd be able to make the monster 161pt1017 (which has 222... at the bottom and ...1010100000000000000000 at the top). This is "a bit" shy of Steinhaus's "Mega".


This is 27^140, the number of possible 140-character Twitter messages when you're allowed just 26 letters and a blank space. Most, however, are meaningless garbage, and 2.28×10^46 is a much more relevant estimate. See also 10^800.


Occurs 6 times in Pascal's triangle: 714!/(272!×442!) = 713!/(273!×440!) ≈ 3.537835...×10^204. It is a similar situation to 3003 and 61218182743304701891431482520; in this case the relevant Fibonacci products are 21×34 = 714, 13×21 = 273, and 21×21 = 441.
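The equality of the two binomial coefficients is easy to confirm directly (math.comb needs Python 3.8 or later):

```python
from math import comb

a = comb(714, 272)   # = 714! / (272! × 442!)
b = comb(713, 273)   # = 713! / (273! × 440!)
assert a == b        # the same ~3.54e204 value appears in two rows
```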

5.7324701932...×10^207 = 75600000000000 × 8400000^28

In Jain measurement of time, a purvi is 2^2×3^3×7×10^11 = 7.56×10^13 years, and a shirsha prahelika is 8400000^28 purvis, which works out to 5.7324701932...×10^207 years.
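Both units can be computed exactly with integer arithmetic:

```python
purvi = 2**2 * 3**3 * 7 * 10**11            # 75600000000000 years
shirsha_prahelika = purvi * 8400000**28     # about 5.73e207 years
print(len(str(shirsha_prahelika)))          # 208 digits
```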

1.267650600228...×10^230 = 200^100

The value of a number given the name "googoc" in Michael Halm's pioneering article on "Googology". For a lot more, see my section on Michael Halm's googologisms which is part of a larger and more general discussion of such inventions.


(space-time volume of the visible Universe in Planck units)

The four-dimensional volume (in space + time) of the known universe using the formula for the volume of a hypercone (with spherical cross-section) and the universe's age, expressed in Planck units. The hypercone has a 4-dimensional volume given by

V = (1/4) h × (4/3) π r³

where h is the height of the hypercone and r is the radius of the sphere that forms the hypercone's base. h is the age of the universe in Planck time units and r is its current radius in Planck length units. Due to complexities of relativity and the way the universe expands, h and r are not the same. This gives 1.75×10^245 for the 4-D volume.
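Plugging in the cross-referenced values (age about 8.03×10^60 Planck times, radius about 2.75×10^61 Planck lengths) reproduces the figure:

```python
import math

h = 8.03e60                             # age, in Planck time units
r = 2.75e61                             # radius, in Planck length units
V = (h / 4) * (4 / 3) * math.pi * r**3  # 4-D hypercone volume
print(f"{V:.3g}")                       # about 1.75e+245
```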

Real models of the universe used by astrophysicists and cosmologists are much more complex and do not admit such a simple calculation of volume, but most models would arrive at a figure close to this one.

See also 1.41×10^408.

9.1197528826...×10^262 = 20!×3^19 × 30!×2^29 × 60!/(5!^12) × 60!/(5!^12) × 60!/2^5

(the "Gigaminx" puzzle)

This is the number of ways to arrange the pieces on a Gigaminx, a dodecahedron-shaped combinatorial puzzle with 230 movable pieces. The Gigaminx is to the Megaminx as the 5×5×5 cube is to the normal 3×3×3 Rubik's Cube. This video shows one in action, and here is another.
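The formula in the heading evaluates exactly with big integers (both divisions are exact):

```python
from math import factorial as f

# Gigaminx: corners × edges × two 60-piece centre sets × outer edge set
n = (f(20) * 3**19) * (f(30) * 2**29) * (f(60) // f(5)**12)**2 * (f(60) // 2**5)
print(len(str(n)))    # 263 digits, i.e. about 9.12e262
```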

See also 1.0067×10^68, 1.417×10^277, 1.7989×10^571, and 7.7263×10^992.

1.417039239...×10^277 = 24!^3 × (24!/24^6)^((7^2-1)/4) × 12!×2^11 × 8!×3^7 / 2

This is the number of ways to arrange the pieces on a 9×9×9 Rubik's cube.

Rotation of the centre cubelets is ignored because it would be invisible. The 8 corners and the 12 central edge pieces together combine for the same number of combinations as the 3×3×3 Rubik's Cube (see 4.3252×10^19). There are three additional sets of edge pieces with 24 in each set, which can be freely placed into any of 24! ≈ 6.2×10^23 permutations. There are 6×(7^2-1) movable centre pieces, in 12 groups (according to where they are in relation to the corners and the edge-centres). Each of these 12 groups of 24 has 24!/24^6 arrangements for the same reason as the group of 24 centre pieces of the 4×4×4 cube (see 7.4012×10^45).

The exact number of combinations is 14170392­3905426129­1524639391­6889970752­7329463845­1483058927­6833655387­4446676098­2106803407­9045039617­2166350752­1976501256­6330942990­3025179039­7178769978­3519265329­2880486030­8313486157­3075573092­2240824168­6601088248­6829056000­0000000000­0000000000­0000000000­0000000000­0000000000­0000000000.
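As with the 7×7×7 cube, the formula can be evaluated exactly (the exponent (7^2-1)/4 works out to 12 centre groups):

```python
from math import factorial as f

# 9×9×9 cube: three wing sets, twelve centre groups, plus the 3×3×3 core
n = f(24)**3 * (f(24) // 24**6)**((7**2 - 1) // 4) * f(12) * 2**11 * f(8) * 3**7 // 2
print(len(str(n)))    # 278 digits, i.e. about 1.417e277
```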

See also 1.9501×10^160, 9.1197×10^262, 6.690926087×10^1054, and 1.8696×10^4099.


(base of Hypercalc level-index representation)

This is the base of the level-index representation used by Hypercalc. I chose it because it is close to the limit of the IEEE 754 double type. The bit of extra space between 10^300 and the double limit (about 1.8×10^308) makes addition and roundoff algorithms much simpler.

It is also the basis of my ASCII-lexicographic ordering of arbitrarily high positive numbers, used internally by the source file of this web page. Here is an illustration of the system through examples:

ASCII representation    Value
0    0
0_0000000000000234    0.0000000000000234 = 2.34×10^-14    (All 0's are shown; negative exponents in scientific notation are not supported. A big deficiency in the system, but it hasn't impacted me mainly because small numbers have never interested me much.)
0_1    0.1
9_34    9.34
a10_2    10.2    (Numbers with 2, 3, 4, or 5 digits use the letters a through d)
b256    256
c1    1000    (Trailing 0's can be left off.)
c1000    1000
c1729    1729
d19683    19683
e005_1    100000 = 1×10^5    (Numbers from 6 to 300 digits use an e prefix, a 3-digit exponent and a mantissa)
e005_100    100000    (Extra 0's can be left off at the end provided it doesn't result in a different value from that which is intended.)
e005_1000007    100000.7
e005_13    130000    (Again, extra 0's left off)
e023_602    6.02×10^23
e100_1    10^100
e299_9    9×10^299
p1_b300_0    1×10^300    (Called "1 P.T. 300", which means "One Power of Ten, 300". The "300_0" part is the logarithm of the number being represented.)
p1_c2345_6    4×10^2345 ≈ 10^2345.6 = "1 P.T. 2345.6"    (The ".6" is approximately the logarithm to base 10 of 4. 10 to the power of 2345.6 is 4 times 10 to the power of 2345.)
p1_e009_345    10^(3.45×10^9) = "1 P.T. 3.45×10^9"
p1_e299_999    10^(9.99×10^299) = "1 P.T. 9.99×10^299"
p2_b300_0    10^(10^300)    (Called "2 P.T. 300" because it is "two powers of ten" with 300 above the two 10's)
pa10_c2345_6    10^(10^(...^(10^2345.6)))    (with a total of ten 10's. This is "10 P.T. 2345.6", that is, ten powers of 10 with 2345.6 at the top.)
pa99_c2345_6    99 P.T. 2345.6
pb100    100 P.T. 1    (100 powers of 10 with a 1 at the top. Since 10^1=10, the 1 can be ignored so it's just 100 powers of 10, and the "_1" can be left off.)
pb100_1    100 P.T. 1    (100 powers of 10 with a 1 at the top. Since 10^1=10, the 1 can be ignored so it's just 100 powers of 10.)
pb100_b234_5    100 P.T. 234.5
pb100_c1000    100 P.T. 1000
pb101_3    101 P.T. 3    (101 powers of 10 with a 3 at the top. Since 10^3=1000, this is the same as 100 P.T. 1000. This is the only potential ordering problem in the system. You have to be consistent, and not use "pb100_c1000" and "pb101_3" in the same collection of data.)
pb256_b619_299    256 P.T. 619.299    (255 powers of 10 with 1.99237...×10^619 at the top. This is the large number called mega by Hugo Steinhaus and Leo Moser)
pd99999_c2345_6    99999 P.T. 2345.6
pe005_100    100000 P.T. 1    (Again, in this case, extra 0's can be left off at the end provided it doesn't result in a different value from that which is intended)
pe005_100000    100000 P.T. 1
pe005_100000_2    100000 P.T. 2    (However, if the exponent at the top of the power-tower is anything other than 1 (in this case a "2"), then all the 0's in 100000 need to be shown.)
pe005_100000_c2345_6    100000 P.T. 2345.6
pe010_10000000000_2    10000000000 P.T. 2 = (10^10) P.T. 2    (Again, all 0's need to be shown in this case because there is a precise exponent at the top.)
pe299_9999999    (9.999999×10^299) P.T. 1
pp1_b300_0    (10^300) P.T. 1    (A power tower of 10's, of height 10^300.)
pp1_e006_1    (10^1000000) P.T. 1    (A power tower of 10's, of height 10^1000000 = 10^(10^6).)
pp2_b300_0    (10^(10^300)) P.T. 1    (A power tower of 10's, of height 10^(10^300).)
ppb27_1    (27 P.T. 1) P.T. 1    (A power tower of 10's, of height X, where X is a power tower of 10's of height 27.)
. . . (Contact me if you really need to go higher, but note you can't get much further before definitive ordering is nonobvious)

These ASCII strings can be used as labels, variables or function names in popular programming languages (including C, Perl and PHP) and are valid search keywords (for example, a Google search for "e023_602" finds this page).
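As a sketch of how the ordering works, here is a small Python encoder for the middle of the range. The function name is mine, it covers only 1 ≤ x < 10^300, and the "0_" fraction range and the "p" power-tower range are left out:

```python
from decimal import Decimal

def lex_encode(x):
    """Encode x (1 <= x < 10^300) per the table above; a sketch only."""
    d = Decimal(str(x)).normalize()
    _sign, digits, exp = d.as_tuple()
    mant = "".join(map(str, digits))    # significant digits, trailing 0's dropped
    intlen = len(mant) + exp            # count of digits before the decimal point
    if intlen == 1:                     # 1 <= x < 10, e.g. "9_34"
        return mant if len(mant) == 1 else mant[0] + "_" + mant[1:]
    if 2 <= intlen <= 5:                # 2..5 digits get prefix a..d
        head, tail = mant[:intlen], mant[intlen:]
        return "abcd"[intlen - 2] + head + ("_" + tail if tail else "")
    if 6 <= intlen <= 300:              # 6..300 digits: "e", 3-digit exponent, mantissa
        return "e" + format(intlen - 1, "03d") + "_" + mant
    raise ValueError("out of range for this sketch")

# Lexicographic order of the encodings matches numeric order:
nums = [1.5, 2, 9.34, 10.2, 256, 1000, 1729, 19683, 100000.7, 6.02e23, 9e299]
assert [lex_encode(v) for v in sorted(nums)] == sorted(lex_encode(v) for v in nums)
```

Note how "c1" for 1000 still sorts correctly against "c1729": the magnitude prefix pins the digit count, so dropping trailing zeros never breaks the ordering.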



Robert Munafo's home pages on AWS    © 1996-2024 Robert P. Munafo.    about    contact
This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License. Details here.