# Answers to Questions on Stack Exchange Sites

Contents

How does the hashlife alg go on forever in Golly?

How to parse the Manifest.mbdb file in an iOS 4.0 iTunes Backup

Logical consequence of Euclid's theorem

How to compute antilogarithmic and superlogarithmic spaced values?

Polynomial-time algorithm to compare numbers in Conway chained arrow notation

What happens when numbers become large... really large?

### Why This Page Exists

On the Philosophy Stack Exchange, one is allowed to make "comments" only after earning a sufficient number of "reputation" points.

Earning points is accomplished only by providing answers to existing questions, or by asking new questions (and in either case, your contribution must be considered good enough by other readers for them to click the little "like" button).

Either activity (answering or asking) is equally available to newcomers, though one quickly finds that all the good questions and answers have already been given. It is tempting to correct the errors in existing answers, or at least mark an existing answer as flawed, or simply to add "a little improvement" to an existing question or answer, but none of those actions is allowed. "Small" contributions are comments, and you need reputation to make those. To get reputation, you need to start with "big" contributions: new questions, or full answers.

One may easily find the guidelines for questions, which include the following as the second criterion (right after "on topic for this site"):

You should only ask practical, answerable questions based on actual problems that you face.

Now read this again, and think about it for a moment.

Here we have a Philosophy discussion forum in which an essential requirement for asking a question is that the question must be practical and answerable, and involve an actual problem that you face.

Consider the definition of "philosophy" (from the New Oxford American Dictionary on my MacBook Pro):

the study of the fundamental nature of knowledge, reality, and existence, especially when considered as an academic discipline.

By its nature, "philosophy" is almost the exact opposite of the criteria given by the forum.

When I read the Philosophy Stack Exchange guidelines, I laughed at the irony and hypocrisy, realizing that the whole website was a ridiculous exercise. Rather than fighting for the chance to post an answer before others do, hoping for coveted reputation points that never come, I decided to put my comments here on my own website, where I don't need to worry about my poor answers lowering the value of the stack exchange.

### The original stackoverflow

### Parse Command Line Arguments

Question by verhogen, 2009 May 14:

What is the best way of parsing command-line arguments in C++ if the program is specified to be run like this:

prog [-abc] [input [output]]

Is there a library in STL to do this?

. . .

My answer, 2011 Sep 23 (Note that I ignored the request for a library solution, and instead provided a self-contained or "roll your own" solution):

```cpp
/* Based on Oliver Nina's version, handles several error cases */
#include <iostream>

int main(int argc, char * * argv)
{
  char * a_string;
  int an_integer;
  float a_float;

  for (int i = 1; i < argc; i++) {
    if (i+1 < argc) {
      if (strcmp(argv[i], "-s") == 0) {
        i++;
        a_string = argv[i];
        std::cout << "string: " << a_string << "\n";
      } else if (strcmp(argv[i], "-i") == 0) {
        i++;
        an_integer = atoi(argv[i]);
        std::cout << "integer: " << an_integer << "\n";
      } else if (strcmp(argv[i], "-f") == 0) {
        i++;
        a_float = atof(argv[i]);
        std::cout << "a_float: " << a_float << "\n";
      }
    } else if (argv[i][0] == '-') {
      std::cerr << "Unregognized argument or missing parameter after '"
                << argv[i] << "'\n";
      exit(-1);
    } else {
      std::cerr << "Unrecognized argument: " << argv[i] << "'\n";
      exit(-1);
    }
  }
}
```

Comment by phresnel, 2012 Apr 10:

-1 for for posting a non-compilable example without proper formatting and preferring exit over just return, which additionally does include too much and at the same time includes not enough.

phresnel did not tell how to fix these problems, so I made my best guess. I also selected the "delete" link, indicating that I think my answer should be removed (because it doesn't actually answer the question, which asked for a library, not self-contained code).

. . .

Answer by Keith Thompson, 2011 Aug 14:

```cpp
for (int i = 1; i < argc; i++) {
  if (strcmp(argv[i], "-i") == 0) {
    filename = argv[i+1];
    printf("filename: %s", filename);
  } else if (strcmp(argv[i], "-c") == 0) {
    convergence = atoi(argv[i + 1]);
    printf("\nconvergence: %d", convergence);
  } else if (strcmp(argv[i], "-a") == 0) {
    accuracy = atoi(argv[i + 1]);
    printf("\naccuracy:%d", accuracy);
  } else if (strcmp(argv[i], "-t") == 0) {
    targetBitRate = atof(argv[i + 1]);
    printf("\ntargetBitRate:%f", targetBitRate);
  } else if (strcmp(argv[i], "-f") == 0) {
    frameRate = atoi(argv[i + 1]);
    printf("\nframeRate:%d", frameRate);
  }
}
```

(This was later edited for formatting)

Comment by Otto Allmendinger, 2011 Dec 26:

-1: this fetches array elements without bound checks

Comment by phresnel, 2012 Apr 10:

-1: because of no bounds checking

My comment, 2012 July 2:

Otto Allmendinger and phresnel probably did not realize that argc is defined to be the array bounds of argv. See the C language specification definition of main() (which is inherited by C++).

Keith Thompson corrected my comment within the hour:

@RobertMunafo: The references to argv[i+1] can easily go outside the bounds of the argv array. Think about running the program with "-i" as the last argument.

... as did Marc Mutz :

@RobertMunafo: correct, argv[argc] is required to be 0. Assigning that to filename and passing it to printf() will crash at runtime, though. And even if we don't fall off the end, the code doesn't sufficiently advance i after it parses an argument. It will try to parse the argument value as an option name next time around the loop.

It was at this point that I finally realized that my original answer was inappropriate because the question asks for a library in STL that will parse the command-line arguments. I then -1'd the Keith Thompson answer and added this comment to explain my action:

-1 for being a "roll-your-own" answer, whereas the question asks specifically for a "library in STL" that will do the task. Also, as noted by Keith Thompson and mmutz, there are some bugs in bounds checking.

This thread is what led me to create this webpage: I realized that most of what I want to contribute is either an off-topic answer, a question about an answer, or a comment on someone else's answer (the questions and comments are usually not allowed, because you have to have lots of "reputation points").

### How does the hashlife alg go on forever in Golly?

Question by Naximus, 2009 Dec 21:

How does the hashlife alg go on forever in Golly?

"In hashlife the field is typically treated as a theoretically infinite grid, with the pattern in question centered near the origin. A quadtree is used to represent the field. Given a square of 22k cells, 2k on a side, at the kth level of the tree, the hash table stores the 2k-1-by-2k-1 square of cells in the center, 2k-2 generations in the future. For example, for a 4x4 square it stores the 2x2 center, 1 generation forward; and for an 8x8 square it stores the 4x4 center, 2 generations forward."

So given a 8x8 initial configuration we get a 4x4 square 1 generation forward centered w.r.t. the 8x8 square and a 2x2 square 2 generations forward (1 generation forward w.r.t the 4x4 square) centered w.r.t the 8x8 square. With every new generation our view of the grid reduces, in-turn we get the next state of the automata. We canot go any further after getting the inner most 2x2 square 2k-2 generations forward.

So how does the hashlife in Golly go on forever? Also its view of the field never seems to reduce. It seems to show the state of the whole automata after 2k-2 generations. More so given a starting configuration which expands with time, the view of the algorithm seems to increase. The view of the grid zooms out to show the expanding automata?

. . .

My answer, 2011 Oct 21:

To be clear (because your ^ symbols were missing), you are asking:

Given a square of 2^(2k) cells, 2^k on a side, at the kth level of the tree, the hash table stores the 2^(k-1)-by-2^(k-1) square of cells in the center, 2^(k-2) generations in the future. [...]

So given a 8x8 initial configuration [...] With every new generation our view of the grid reduces, in-turn we get the next state of the automata. We canot go any further after getting the inner most 2x2 square 2^k-2 generations forward.

So how does the hashlife in Golly go on forever? Also its view of the field never seems to reduce.

Instead of starting with your 8×8 pattern, imagine instead that you start with a bigger pattern that happens to contain your 8×8 pattern inside it. For example, you could start with a 16×16 pattern that has your 8×8 pattern in the center, and a 4-row margin of blank cells all around the edges. Such a pattern is easy to construct, by assembling blank 4×4 nodes with the 4×4 subnodes of your 8×8 start pattern.

Given such a 16×16 pattern, the HashLife algorithm can give you an 8×8 answer, 4 generations in the future.

You want more? Okay, start with a 32×32 pattern that contains mostly blank space, with the 8×8 pattern in the center. With this you can get a 16×16 answer that is 8 generations into the future.

What if your pattern contains moving objects that move fast enough that they go outside that 16×16 area after 8 generations? Simple — start with a 64×64 start pattern, but instead of trying to run it for a whole 16 generations, just run it for 8 generations.

In general, all cases of arbitrarily large, possibly expanding patterns, over arbitrarily long periods of time, can be handled (and in fact are handled in Golly) by adding as much blank space as needed around the outside of the pattern.
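To make the construction concrete, here is a sketch of the blank-padding step described above (the function name is mine, and the cell-array representation is only illustrative; Golly itself does this by assembling quadtree nodes):

```python
def pad(pattern):
    """Center a square pattern in a blank grid twice its size."""
    n = len(pattern)
    m = n // 2                                    # blank margin on each side
    top = [[0] * (2 * n) for _ in range(m)]
    bottom = [[0] * (2 * n) for _ in range(m)]
    middle = [[0] * m + row + [0] * m for row in pattern]
    return top + middle + bottom

start = [[1] * 8 for _ in range(8)]   # any 8x8 start pattern
grown = pad(start)                    # 16x16: enough to step 4 generations
bigger = pad(grown)                   # 32x32: enough for 8, and so on
```

Each application of `pad` doubles the universe, so the "view never reduces": HashLife just keeps enough blank margin around the live cells that the shrinking result square always covers the region of interest.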

### How to parse the Manifest.mbdb file in an iOS 4.0 iTunes Backup

Question by Padraig, 2010 Jun 21:

How to parse the Manifest.mbdb file in an iOS 4.0 iTunes Backup

In iOS 4.0 Apple has redesigned the backup process.

iTunes used to store a list of filenames associated with backup files in the Manifest.plist file, but in iOS 4.0 it has moved this information to a Manifest.mbdb

You can see an example of this file by making a backup with your iOS 4.0 devices and looking in your ~/Library/Application Support/MobileSync/Backup folder (Look inside the subfolders with the most recent date)

Here's a screenshot of what the file looks like in a text editor:

*Manifest.mbdb, full of semi-readable "garbage"*

How do I parse this into a Cocoa application so that I can update my (free) iPhone Backup Extractor app (http://supercrazyawesome.com) for iOS 4.0?

Answer by galloglass, 2010 Jun 28:

Thank you, user374559 and reneD — that code and description is very helpful.

My stab at some Python to parse and print out the information in a Unix ls-l like format:

```python
#!/usr/bin/env python
import sys

def getint(data, offset, intsize):
    """Retrieve an integer (big-endian) and new offset from the current offset"""
    value = 0
    while intsize > 0:
        value = (value<<8) + ord(data[offset])
        offset = offset + 1
        intsize = intsize - 1
    return value, offset

def getstring(data, offset):
    """Retrieve a string and new offset from the current offset into the data"""
    if data[offset] == chr(0xFF) and data[offset+1] == chr(0xFF):
        return '', offset+2  # Blank string
    length, offset = getint(data, offset, 2)  # 2-byte length
    value = data[offset:offset+length]
    return value, (offset + length)

def process_mbdb_file(filename):
    mbdb = {}  # Map offset of info in this file => file info
    data = open(filename).read()
    if data[0:4] != "mbdb":
        raise Exception("This does not look like an MBDB file")
    offset = 4
    offset = offset + 2  # value x05 x00, not sure what this is
    while offset < len(data):
        fileinfo = {}
        fileinfo['start_offset'] = offset
        fileinfo['domain'], offset = getstring(data, offset)
        fileinfo['filename'], offset = getstring(data, offset)
        fileinfo['linktarget'], offset = getstring(data, offset)
        fileinfo['datahash'], offset = getstring(data, offset)
        fileinfo['unknown1'], offset = getstring(data, offset)
        fileinfo['mode'], offset = getint(data, offset, 2)
        fileinfo['unknown2'], offset = getint(data, offset, 4)
        fileinfo['unknown3'], offset = getint(data, offset, 4)
        fileinfo['userid'], offset = getint(data, offset, 4)
        fileinfo['groupid'], offset = getint(data, offset, 4)
        fileinfo['mtime'], offset = getint(data, offset, 4)
        fileinfo['atime'], offset = getint(data, offset, 4)
        fileinfo['ctime'], offset = getint(data, offset, 4)
        fileinfo['filelen'], offset = getint(data, offset, 8)
        fileinfo['flag'], offset = getint(data, offset, 1)
        fileinfo['numprops'], offset = getint(data, offset, 1)
        fileinfo['properties'] = {}
        for ii in range(fileinfo['numprops']):
            propname, offset = getstring(data, offset)
            propval, offset = getstring(data, offset)
            fileinfo['properties'][propname] = propval
        mbdb[fileinfo['start_offset']] = fileinfo
    return mbdb

def process_mbdx_file(filename):
    mbdx = {}  # Map offset of info in the MBDB file => fileID string
    data = open(filename).read()
    if data[0:4] != "mbdx":
        raise Exception("This does not look like an MBDX file")
    offset = 4
    offset = offset + 2  # value 0x02 0x00, not sure what this is
    filecount, offset = getint(data, offset, 4)  # 4-byte count of records
    while offset < len(data):
        # 26 byte record, made up of ...
        fileID = data[offset:offset+20]  # 20 bytes of fileID
        fileID_string = ''.join(['%02x' % ord(b) for b in fileID])
        offset = offset + 20
        mbdb_offset, offset = getint(data, offset, 4)  # 4-byte offset field
        mbdb_offset = mbdb_offset + 6  # Add 6 to get past prolog
        mode, offset = getint(data, offset, 2)  # 2-byte mode field
        mbdx[mbdb_offset] = fileID_string
    return mbdx

def modestr(val):
    def mode(val):
        if (val & 0x4): r = 'r'
        else: r = '-'
        if (val & 0x2): w = 'w'
        else: w = '-'
        if (val & 0x1): x = 'x'
        else: x = '-'
        return r+w+x
    return mode(val>>6) + mode((val>>3)) + mode(val)

def fileinfo_str(f, verbose=False):
    if not verbose:
        return "(%s)%s::%s" % (f['fileID'], f['domain'], f['filename'])
    if (f['mode'] & 0xE000) == 0xA000: type = 'l'  # symlink
    elif (f['mode'] & 0xE000) == 0x8000: type = '-'  # file
    elif (f['mode'] & 0xE000) == 0x4000: type = 'd'  # dir
    else:
        print >> sys.stderr, "Unknown file type %04x for %s" % (f['mode'], fileinfo_str(f, False))
        type = '?'  # unknown
    info = ("%s%s %08x %08x %7d %10d %10d %10d (%s)%s::%s" %
            (type, modestr(f['mode']&0x0FFF), f['userid'], f['groupid'], f['filelen'],
             f['mtime'], f['atime'], f['ctime'], f['fileID'], f['domain'], f['filename']))
    if type == 'l': info = info + ' -> ' + f['linktarget']  # symlink destination
    for name, value in f['properties'].items():  # extra properties
        info = info + ' ' + name + '=' + repr(value)
    return info

verbose = True
if __name__ == '__main__':
    mbdb = process_mbdb_file("Manifest.mbdb")
    mbdx = process_mbdx_file("Manifest.mbdx")
    for offset, fileinfo in mbdb.items():
        if offset in mbdx:
            fileinfo['fileID'] = mbdx[offset]
        else:
            fileinfo['fileID'] = "<nofileID>"
            print >> sys.stderr, "No fileID found for %s" % fileinfo_str(fileinfo)
        print fileinfo_str(fileinfo, verbose)
```

My answer, 2011 Nov 12:

In iOS 5, the Manifest.mbdx file was eliminated. For the purpose of this article, it was redundant anyway, because the domain and path are in Manifest.mbdb and the ID hash can be generated with SHA1.
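That fileID is easy to compute on its own. A minimal sketch (the helper name is mine; note that Python 3 requires encoding the string to bytes, which the Python 2 script below does implicitly):

```python
import hashlib

def backup_file_id(domain, path):
    """fileID used as the backup filename: SHA-1 of "domain-path"."""
    return hashlib.sha1(('%s-%s' % (domain, path)).encode('utf-8')).hexdigest()

# For example, the SMS database in a backup is stored under this name:
print(backup_file_id('HomeDomain', 'Library/SMS/sms.db'))
```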

Here is my update of galloglass's code so it works with backups of iOS 5 devices. The only changes are elimination of process_mbdx_file() and addition of a few lines in process_mbdb_file().

Tested with backups of an iPhone 4S and an iPad 1, both with plenty of apps and files.

```python
#!/usr/bin/env python
import sys
import hashlib

mbdx = {}

def getint(data, offset, intsize):
    """Retrieve an integer (big-endian) and new offset from the current offset"""
    value = 0
    while intsize > 0:
        value = (value<<8) + ord(data[offset])
        offset = offset + 1
        intsize = intsize - 1
    return value, offset

def getstring(data, offset):
    """Retrieve a string and new offset from the current offset into the data"""
    if data[offset] == chr(0xFF) and data[offset+1] == chr(0xFF):
        return '', offset+2  # Blank string
    length, offset = getint(data, offset, 2)  # 2-byte length
    value = data[offset:offset+length]
    return value, (offset + length)

def process_mbdb_file(filename):
    mbdb = {}  # Map offset of info in this file => file info
    data = open(filename).read()
    if data[0:4] != "mbdb":
        raise Exception("This does not look like an MBDB file")
    offset = 4
    offset = offset + 2  # value x05 x00, not sure what this is
    while offset < len(data):
        fileinfo = {}
        fileinfo['start_offset'] = offset
        fileinfo['domain'], offset = getstring(data, offset)
        fileinfo['filename'], offset = getstring(data, offset)
        fileinfo['linktarget'], offset = getstring(data, offset)
        fileinfo['datahash'], offset = getstring(data, offset)
        fileinfo['unknown1'], offset = getstring(data, offset)
        fileinfo['mode'], offset = getint(data, offset, 2)
        fileinfo['unknown2'], offset = getint(data, offset, 4)
        fileinfo['unknown3'], offset = getint(data, offset, 4)
        fileinfo['userid'], offset = getint(data, offset, 4)
        fileinfo['groupid'], offset = getint(data, offset, 4)
        fileinfo['mtime'], offset = getint(data, offset, 4)
        fileinfo['atime'], offset = getint(data, offset, 4)
        fileinfo['ctime'], offset = getint(data, offset, 4)
        fileinfo['filelen'], offset = getint(data, offset, 8)
        fileinfo['flag'], offset = getint(data, offset, 1)
        fileinfo['numprops'], offset = getint(data, offset, 1)
        fileinfo['properties'] = {}
        for ii in range(fileinfo['numprops']):
            propname, offset = getstring(data, offset)
            propval, offset = getstring(data, offset)
            fileinfo['properties'][propname] = propval
        mbdb[fileinfo['start_offset']] = fileinfo
        fullpath = fileinfo['domain'] + '-' + fileinfo['filename']
        id = hashlib.sha1(fullpath)
        mbdx[fileinfo['start_offset']] = id.hexdigest()
    return mbdb

def modestr(val):
    def mode(val):
        if (val & 0x4): r = 'r'
        else: r = '-'
        if (val & 0x2): w = 'w'
        else: w = '-'
        if (val & 0x1): x = 'x'
        else: x = '-'
        return r+w+x
    return mode(val>>6) + mode((val>>3)) + mode(val)

def fileinfo_str(f, verbose=False):
    if not verbose:
        return "(%s)%s::%s" % (f['fileID'], f['domain'], f['filename'])
    if (f['mode'] & 0xE000) == 0xA000: type = 'l'  # symlink
    elif (f['mode'] & 0xE000) == 0x8000: type = '-'  # file
    elif (f['mode'] & 0xE000) == 0x4000: type = 'd'  # dir
    else:
        print >> sys.stderr, "Unknown file type %04x for %s" % (f['mode'], fileinfo_str(f, False))
        type = '?'  # unknown
    info = ("%s%s %08x %08x %7d %10d %10d %10d (%s)%s::%s" %
            (type, modestr(f['mode']&0x0FFF), f['userid'], f['groupid'], f['filelen'],
             f['mtime'], f['atime'], f['ctime'], f['fileID'], f['domain'], f['filename']))
    if type == 'l': info = info + ' -> ' + f['linktarget']  # symlink destination
    for name, value in f['properties'].items():  # extra properties
        info = info + ' ' + name + '=' + repr(value)
    return info

verbose = True
if __name__ == '__main__':
    mbdb = process_mbdb_file("Manifest.mbdb")
    for offset, fileinfo in mbdb.items():
        if offset in mbdx:
            fileinfo['fileID'] = mbdx[offset]
        else:
            fileinfo['fileID'] = "<nofileID>"
            print >> sys.stderr, "No fileID found for %s" % fileinfo_str(fileinfo)
        print fileinfo_str(fileinfo, verbose)
```

### Visual C# 2010 MSVCR100.dll missing when opening a project

Question by user245015, 2010 Jan 6:

Visual C# 2010 MSVCR100.dll missing when opening a project

I installed Visual C# 2010 Beta 2 and I get this error every time I open a project:

'This application has failed to start because MSVCR100.dll was not found. Reinstalling the application may fix the problem'

I uninstalled every VC runtime, .NET framework, C# on this computer. Then reinstalled Visual C# 2010 and the install went smoothly. Then I ran Microsoft Update. Still the same problem when I open a project. The project was created with VC# 2008.

I'm running Windows 7 64-bit.

Any idea how to fix this? I could only find people with the same problem while trying to Uninstall VS2010 and use a previous version.

Comment by Pada, 2010 Mar 12:

After installing Visual Studio Express 2010 Beta 2 I also got the missing msvcr100.dll. I never got this issue in Windows XP x86, it only happened to me in Win7 x64.

My solution was simple: Place/Copy the DLL in the System32 folder. I found a local copy of the DLL in the SysWOW64 folder. Hence the following command:

copy %windir%\SysWOW64\msvcr100.dll %windir%\System32\msvcr100.dll

My answer, 2011 Dec 21:

I was curious as to the reason for not putting MSVCR100.dll into System32 or SysWOW64. The MSDN article [1] does not say this outright, and no-one else here has explained it, but the reason seems pretty clear:

Applications built with different versions of Visual C++ or C# expect to find different versions of MSVCR*.dll. If you put MSVCR100.dll into a system folder, you might fix your application, but you stand a good chance of breaking other apps, either now or later (whenever you next run them), and you probably won't know it's going to happen until that other app reports some strange error after loading what it thinks is a broken library.

The user can fix this by copying the proper version of the library into the "Program Files/Application-Name" folder for the app in question (which probably contains the app's main EXE, and maybe others like Uninstall.exe). The developer can fix it by adding the library to the project in the directory where the EXE will be created (and arrange for it to be installed alongside the EXE by whatever installer you create); the developer can also avoid using the DLL entirely by statically linking (which makes the EXE bigger, but won't try to load the DLL), details at [2].

It was not clear from the question which application is trying to load the DLL: Visual C# 2010 itself, or the application being created by means of the MSVC project.

(The basics: MSVCR100.dll is the C runtime library (abbreviated CRT) for Visual C++; the name unpacks as MicroSoft Visual C Runtime, version 10.0, packaged as a dynamic-link library and shipped in the "Visual C++ Redistributable" installers. To reduce the size of the application's .exe, the library can be dynamically loaded, in which case the function(s) you need are loaded into memory when your app calls them. There is potential for conflicts both in the name of the DLL file itself, and in the (C++-mangled) names of the individual functions and their parameters. A particular DLL file only belongs in the system path if every app that might try to load it will get the functions it expects.)

Please correct anything I've gotten wrong. I am a beginner in the world of cross-platform development.

[1] support.microsoft.com/kb/326922

[2] www.rhyous.com/2010/09/16/avoiding-the-msvcr100-dll-or-msvcr100d-dll/

### What is the optimal algorithm for the game, 2048?

Question by nitish712, 2014 Mar 12:

What is the optimal algorithm for the game, 2048?

I have recently stumbled upon the game 2048. You merge similar tiles by moving them in any of the four directions to make "bigger" tiles. After each move, a new tile appears at a random empty position with a value of either 2 or 4. The game terminates when all the boxes are filled and there are no moves that can merge tiles, or you create a tile with a value of 2048.

Now, I need to follow a well-defined strategy to reach the goal. So, I thought of writing a program for it.

My current algorithm:

```
while (!game_over) {
    for each possible move:
        count no. of merges for 2-tiles and 4-tiles
    choose the move with the largest number of merges
}
```

What I am doing is: at any point, I try to merge the tiles with values 2 and 4, that is, I try to keep as few 2 and 4 tiles as possible. If I try it this way, all the other tiles get merged automatically and the strategy seems good.

But, when I actually use this algorithm, I only get around 4000 points before the game terminates. The maximum AFAIK is slightly more than 20,000 points, which is way larger than my current score. Is there a better algorithm than the above?
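The pseudocode above can be made runnable. A minimal sketch of the greedy criterion (function names are mine; only the left move is shown, since the other three directions work the same way on the reversed or transposed grid):

```python
def merges_in_row(row):
    """Count 2- and 4-tile merges when this row is pushed left."""
    tiles = [v for v in row if v != 0]      # slide tiles together
    merges, i = 0, 0
    while i < len(tiles) - 1:
        if tiles[i] == tiles[i + 1] and tiles[i] in (2, 4):
            merges += 1
            i += 2                          # a merged pair is consumed
        else:
            i += 1
    return merges

def score_left_move(grid):
    """The greedy criterion: total small-tile merges this move would make."""
    return sum(merges_in_row(row) for row in grid)

grid = [[2, 0, 2, 4],
        [4, 4, 0, 0],
        [2, 8, 8, 2],
        [0, 0, 0, 2]]
```

On this sample grid the left move scores 2: one 2+2 merge in the first row and one 4+4 merge in the second (the 8+8 pair is ignored, since the strategy only counts 2s and 4s).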

Comment by cegprakash, 2014 Mar 12:

This might help! ov3y.github.io/2048-AI

Comment by 500 - Internal Server Error, 2014 Mar 12:

This is how they should have made the Windows 8 home screen ;) — Interesting game. The guy who wrote an AI solver claims that he uses alpha-beta searching with adaptive deepening. I wonder how you can apply alpha-beta (or its un-optimized ancestor, minimax) to a non-adversarial game like this one.

Comment by Charles, 2014 Mar 14:

@500-InternalServerError: If I were to implement an AI with alpha-beta game tree pruning, it would be assuming that the new blocks are adversarially placed. It's a worst-case assumption, but might be useful.

Comment by ratchet freak, 2014 Mar 19:

now on xkcd

Answer by ovolve (Matt Overlan), 2014 Mar 13:

I'm the author of the AI program that others have mentioned in this thread. You can view the AI in action or read the source.

Currently, the program achieves about a 90% win rate running in javascript in the browser on my laptop given about 100 milliseconds of thinking time per move, so while not perfect (yet!) it performs pretty well.

Since the game is a discrete state space, perfect information, turn-based game like chess and checkers, I used the same methods that have been proven to work on those games, namely minimax search with alpha-beta pruning. Since there is already a lot of info on that algorithm out there, I'll just talk about the two main heuristics that I use in the static evaluation function and which formalize many of the intuitions that other people have expressed here.

Monotonicity

This heuristic tries to ensure that the values of the tiles are all either increasing or decreasing along both the left/right and up/down directions. This heuristic alone captures the intuition that many others have mentioned, that higher valued tiles should be clustered in a corner. It will typically prevent smaller valued tiles from getting orphaned and will keep the board very organized, with smaller tiles cascading in and filling up into the larger tiles.
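A minimal sketch of a monotonicity score (my own simplification: it just counts monotone rows and columns, whereas ovolve's evaluation sums graded penalties over the log2 values of the tiles):

```python
def is_monotone(line):
    """True if the line never increases or never decreases."""
    pairs = list(zip(line, line[1:]))
    return all(a <= b for a, b in pairs) or all(a >= b for a, b in pairs)

def monotonicity(grid):
    """Number of monotone lines, out of 4 rows + 4 columns."""
    return sum(is_monotone(line) for line in list(grid) + list(zip(*grid)))

perfect = [[8, 32, 64, 512],
           [4,  8, 16, 256],
           [2,  4,  8,  32],
           [0,  0,  4,   8]]   # the grid from the screenshot, '.' as 0
```

The perfectly monotonic grid scores the maximum of 8; any orphaned small tile breaks at least one line and lowers the score.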

Here's a screenshot of a perfectly monotonic grid. I obtained this by running the algorithm with the eval function set to disregard the other heuristics and only consider monotonicity.

```
  8  32  64 512
  4   8  16 256
  2   4   8  32
  .   .   4   8
```

Smoothness

The above heuristic alone tends to create structures in which adjacent tiles are decreasing in value, but of course in order to merge, adjacent tiles need to be the same value. Therefore, the smoothness heuristic just measures the value difference between neighboring tiles, trying to minimize this count.
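The same idea as a sketch (ovolve's implementation measures differences between the log2 values of neighboring tiles; raw values are used here to keep it short):

```python
def smoothness(grid):
    """Sum of absolute differences between adjacent tiles; 0 is smoothest."""
    n = len(grid)
    penalty = 0
    for r in range(n):
        for c in range(n):
            if c + 1 < n:                               # horizontal neighbor
                penalty += abs(grid[r][c] - grid[r][c + 1])
            if r + 1 < n:                               # vertical neighbor
                penalty += abs(grid[r][c] - grid[r + 1][c])
    return penalty
```

The all-1024 grid shown below scores 0, the best possible value.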

A commenter on Hacker News gave an interesting formalization of this idea in terms of graph theory.

Here's a screenshot of a perfectly smooth grid, courtesy of this excellent parody fork.

```
1024 1024 1024 1024
1024 1024 1024 1024
1024 1024 1024 1024
1024 1024 1024 1024
```

Free Tiles

And finally, there is a penalty for having too few free tiles, since options can quickly run out when the game board gets too cramped.
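Counting free tiles is one line; a sketch of how the penalty might enter the evaluation (the weight is a made-up illustration, not ovolve's actual coefficient):

```python
def free_tiles(grid):
    """Number of empty cells (0 = no options left)."""
    return sum(row.count(0) for row in grid)

def free_tile_penalty(grid, weight=2.7):
    # fewer empty cells => larger penalty (weight is illustrative only)
    return -weight * (16 - free_tiles(grid))
```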

And that's it! Searching through the game space while optimizing these criteria yields remarkably good performance. One advantage to using a generalized approach like this rather than an explicitly coded move strategy is that the algorithm can often find interesting and unexpected solutions. If you watch it run, it will often make surprising but effective moves, like suddenly switching which wall or corner it's building up against.

Edit:

Here's a demonstration of the power of this approach. I uncapped the tile values (so it kept going after reaching 2048) and here is the best result after eight trials.

```
2048    Score 73804

  16 2048 4096    4
  32 1024   32    2
   .    4   16    4
   2    .    .    8
```

Yes, that's a 4096 alongside a 2048. =) That means it achieved the elusive 2048 tile three times on the same board.

Comment by FryGuy, 2014 Mar 16:

Even though the AI is randomly placing the tiles, the goal is not to lose. Getting unlucky is the same thing as the opponent choosing the worst move for you. The "min" part means that you try to play conservatively so that there are no awful moves that you could get unlucky.

Comment by SztupY, 2014 Mar 17:

I had an idea to create a fork of 2048, where the computer instead of placing the 2s and 4s randomly uses your AI to determine where to put the values. The result: sheer impossibleness. Can be tried out here: sztupy.github.io/2048-Hard

### Logical consequence of Euclid's theorem

Question by Mahmud, 2013 Aug 14:

Logical consequence of Euclid's theorem

Are there any far reaching non-trivial consequences of Euclid's infinitude of primes where theorems make use of it? Wikipedia does not have the list of applications of this theorem, rather modern proofs of this theorem.

. . .

[A few comments to the effect that a large supply of primes is useful to cryptography, however this does not address the distinction between "a huge but still finite number of primes" and "an infinite number of primes", which is the point of the question]

Comment by André Nicolas, 2013 Aug 1:

I doubt there would be a problem with any application if there were
primes with about the right distribution up to say 10^{10000}, and then
no more.

Comment by Alecos Papadopoulos, 2013 Aug 7 :

Have you thought that if primes were not infinite, then natural numbers would not be infinite (as a consequence of the fundamental theorem of arithmetic)? Can we imagine how far-reaching would be to have a mathematical system where the natural numbers would be finite? Or, can we imagine how different the mathematical system would be if natural numbers were infinite but they were not uniquely factorized into a product of primes (since primes would be finite)? I believe this surpasses our imagination — and this is a good intuitive definition of "far-reaching".

Comment by Mahmud, 2013 Aug 7 :

@AlecosPapadopoulos I actually did think in a similar manner. Intuitively, if there is an endless supply of prime numbers, then one can always find a prime number which cannot be compressed and written given all the particles in the universe. Am I off? So a better question would have been: is there a Kolmogorov complexity of prime numbers?

Comment by mick, 2013 Aug 9 :

"application" is not a math word !

. . .

The author of the question asked for my opinion through email, here are my comments (2013 Aug 14) :

- The "stack exchange" websites are an oligarchy. A privileged few can make comments, the rest of us need to wait until someone gives us "reputation points". The only way to get points is to come up with relevant, meaningful questions that have not yet been asked — and hope the question isn't removed or closed by an oligarch for one of the many reasons they have found for disliking questions.

- The word "application" is indeed a maths word. Anyone who works in an "applied mathematics" department at university can explain why.

- The rest of the conversation is silly (and yet, it is not a silly question). There are an infinite number of primes, and that phenomenon is linked (directly or indirectly) to all the other facts/phenomena in mathematics. Essentially, every theorem is a consequence of every other. If you try to use formal maths techniques to show what would result if a given statement were false, you've set a contradiction as a premise, and pretty soon the whole rest of mathematics falls apart.

### How to compute antilogarithmic and superlogarithmic spaced values?

Maths Stack Exchange member Tommaso is also a reader of my webpages (perhaps the tetration page caught his attention) and asked me to comment on his question:

Question by Tommaso, 2015 Jan 5:

How to compute antilogarithmic and superlogarithmic spaced values?

Let's suppose I have a range, e.g. [100,900]. I want to compute 8
logarithmic spaced values *x_{i}=100, ..., 900. I use the following
formula:

x_{1}=log(S) + [(i−1)log(S/L)] / (n−1)

In the formula, S is the lower extreme of the range (here S=100);
L is the higher extreme of the range (here L=900); n is the number
of spaced values (here n=8); i is the index of the i^{th} spaced value.

So in my example, the 5^{th} value is x_{5}=350.9, while the 6^{th}
value is x_{6}=480.4.

How to compute antilogarithmic and superlogarithmic spaced values, in the same range, with the same n=8?

The solution can be found in this article [1] (table 1), which uses these scales to compute the spaced stimuli used in their experiment.

(keywords: algebra-precalculus metric-spaces logarithms)

[1] Gordon Brown et al., Identification and Bisection of Temporal Durations and Tone Frequencies: Common Models for Temporal and Nontemporal Stimuli. Journal of Experimental Psychology: Human Perception and Performance 31(5) pp. 919–938 (2005) DOI: 10.1037/0096-1523.31.5.919 (PDF at www.psych.qub.ac.uk/Publications/McCormack/McCormackJEPHPP2005.pdf)

. . .

My comments:

First of all, I'll point out that I think your formula should be:

log(x_{i}) = log(S) + [(i−1)log(L/S)] / (n−1)

(in which I have changed "x_{1}" on the left to "log(x_{i})", and
inverted the ratio "S/L" to "L/S" so that the numbers grow
from S to L rather than shrinking from S to S^{2}/L.)
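
As a quick check of the corrected formula, here is a minimal Python sketch (my own, not from the question or the paper; the name log_spaced is mine) that reproduces the values quoted in the question, x_{5} ≈ 350.98 and x_{6} ≈ 480.42:

```python
def log_spaced(S, L, n):
    # log(x_i) = log(S) + (i-1) log(L/S) / (n-1)  for i = 1 .. n,
    # which is equivalent to  x_i = S * (L/S) ** ((i-1)/(n-1))
    return [S * (L / S) ** (i / (n - 1)) for i in range(n)]

vals = log_spaced(100, 900, 8)
print([round(v, 1) for v in vals])
# the 5th and 6th values come out near Tommaso's 350.9 and 480.4
```

(Tommaso's 350.9 appears to be a truncation rather than a rounding of 350.977...)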

The term superlogarithmic can mean lots of things, ranging from a precise closed-form mathematical formula to a broad category-label for anything "beyond logarithmic". The paper [1] is not particularly clear about what their closed-form expressions are, merely stating:

The topmost distribution contains negatively skewed stimuli, the second illustrates arithmetically spaced stimuli, the third illustrates logarithmically spaced stimuli, and the fourth illustrates more positively skewed stimuli. We refer to the first distribution as antilogarithmic spacing because the distribution is as negatively skewed relative to arithmetic spacing as a logarithmic distribution is positively skewed. The most positively skewed distribution is dubbed superlogarithmic distribution because it is as positively skewed relative to a logarithmic distribution as an arithmetic distribution is negatively skewed.

However the table you refer to gives some three-digit numbers, ranging from 100 to 900 with six more intermediate values just as you suggest in your question. (There is a second set of example values ranging from 333 to 666). I'll reproduce the 100-to-900 part of the table here:

Antilogarithmic  Arithmetic  Logarithmic  Superlogarithmic   (the "Large ratio" set)
      100           100          100            100
      343           214          137            114
      520           329          187            134
      649           443          256            162
      744           557          351            203
      813           672          480            274
      863           786          658            420
      900           900          900            900

In the "arithmetic" column, of course, the first differences are roughly 800/7 ~ 114.286. We can express these numbers in a closed form, similar to the (as-corrected) formula you gave, just removing "log" from three places and changing L/S to L-S:

x_{i} = S + [(i−1)(L-S)] / (n−1)

The "logarithmic" values are increasing in a way more commonly called "geometric" or "exponential" growth. As you suggest with your formula we can take logarithms of the numbers, then take the first differences of those, and get the same value each time. I'll use logarithm to base e:

value   ln(value)   first differences
 100      4.605          0.3148
 137      4.920          0.3111
 187      5.231          0.3141
 256      5.545          0.3156
 351      5.861          0.3130
 480      6.174          0.3154
 658      6.489          0.3132
 900      6.802

Let's take care of the "antilogarithmic" numbers now. I looked at the last two (863 and 900) and noticed the difference is 37, which is the same as the difference between the first two "logarithmic" numbers (100 and 137). So let's try adding pairs like this:

Antilogarithmic   Logarithmic   Sum (antilog. + logarithmic)
(reverse order)
      900             100          900 + 100 = 1000
      863             137          863 + 137 = 1000
      813             187          813 + 187 = 1000
      744             256          744 + 256 = 1000
      649             351          649 + 351 = 1000
      520             480          520 + 480 = 1000
      343             658          343 + 658 = 1001
      100             900          100 + 900 = 1000

If we use "Antilog(n+1-i)" and "Log(i)" to refer to the two sequences of numbers that add up to 1000, then the relationship between them is

Antilog(n+1-i) + Log(i) = S + L

noting that the i index is reversed for the Antilog sequence. So the formula they used for the "antilogarithmic" sequence is simply:

log(S+L-x_{n+1-i}) = log(S) + [(i−1)log(L/S)] / (n−1)
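
This mirror-image construction is easy to sketch in Python (my own sketch, using the same 100-to-900 range; log_spaced is my helper name):

```python
def log_spaced(S, L, n):
    # logarithmically spaced values: x_i = S * (L/S) ** ((i-1)/(n-1))
    return [S * (L / S) ** (i / (n - 1)) for i in range(n)]

S, L, n = 100, 900, 8
logs = log_spaced(S, L, n)
# antilogarithmic values: reflect the logarithmic sequence about the
# midpoint of the range, so that Antilog(n+1-i) + Log(i) = S + L
antilogs = [S + L - x for x in reversed(logs)]
print([round(a) for a in antilogs])
```

The rounded values come out as 100, 342, 520, 649, 744, 813, 863, 900; the paper's second entry (343) differs by one, presumably because they rounded at a different stage.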

With the "superlogarithmic" numbers we might try the
first-differences-of-logarithms approach, expecting to find that the
logarithms have something like an x^{2}-type increasing pattern, or
that the logarithms are themselves increasing "geometrically". So we
could take first differences and successive ratios of the logarithms
and look for a pattern.

Unfortunately, the first differences do not seem to be growing in a linear, quadratic, or geometric way, nor do the successive ratios.

Let's use another popular heuristic: look at the numbers (100, 114, 134, ...) and ask "How long does it take for it to double?". You can see that it takes 4 steps to go from 100 to 203, then 2 steps to go to 420, then just 1 step to get to 900. So the time it takes to double is half as long each time. That does seem "superlogarithmic", doesn't it? The values grow exponentially, and the time taken for each exponential step shrinks exponentially.

But actually there is nothing exponential or logarithmic about this sequence. It is a very common type of sequence, appearing all over the place in math, science, and engineering, and it was described at least as far back in history as polynomial and geometric growth rates were. The simplest example of this type of growth is seen in the sequence:

^{1}/_{8}, ^{1}/_{7}, ^{1}/_{6}, ^{1}/_{5}, ^{1}/_{4}, ^{1}/_{3},
^{1}/_{2}, ^{1}/_{1}

It takes 4 steps to double from ^{1}/_{8} to ^{1}/_{4}, then 2 steps
to double again to ^{1}/_{2}, and just 1 step to double to 1. This is
called a harmonic progression,
and is simply a sequence of values whose reciprocals are in arithmetic
progression. (It is called "harmonic" because of its relationship to
music, particularly an overtone sequence. Overtones are the notes that
can be played on a trumpet with all valves open: all wavelengths of
the notes you can play are 1/N times the fundamental wavelength
which is the wavelength of the instrument's lowest note.)
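
The doubling behavior of a harmonic progression is easy to verify numerically; this little sketch (my own) checks the 4-step, 2-step, 1-step doubling pattern in the sequence above:

```python
seq = [1 / k for k in range(8, 0, -1)]   # 1/8, 1/7, ..., 1/2, 1/1
# doubling takes 4 steps (1/8 -> 1/4), then 2 steps (1/4 -> 1/2),
# then 1 step (1/2 -> 1): the doubling time halves each time
print(seq[4] / seq[0], seq[6] / seq[4], seq[7] / seq[6])
```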

Now we have a strong lead for a possible explanation for the "superlogarithmic" numbers. If we take the reciprocals, then the first differences, we should get roughly a constant:

value   1/value   first differences
 100    0.01000       -0.001228
 114    0.00877       -0.001309
 134    0.00746       -0.001290
 162    0.00617       -0.001247
 203    0.00493       -0.001276
 274    0.00365       -0.001269
 420    0.00238       -0.001270
 900    0.00111

The first differences of reciprocals look pretty close to being constant, so let's make a harmonic form of the above closed-form expressions:

1/x_{i} = 1/S + [(i−1)(1/L-1/S)] / (n−1)

and use it on the known S and L values. We get:

"superlogarithmic" (reciprocal arithmetic) values:

       (with S=100, L=900)       (with S=333, L=666)
  i    in paper  our formula     in paper  our formula
  1      100       100             333       333
  2      114       114.545...     359       358.615...
  3      134       134.042...     389       388.5
  4      162       161.538...     424       423.818...
  5      203       203.225...     466       466.2
  6      274       273.913...     518       518
  7      420       420             583       582.75
  8      900       900             666       666

This matches the desired values quite closely. It seems the authors of the paper erroneously rounded "114.54545..." down rather than up. I also double-checked the formula by using S=333, L=666 to reproduce the "Small ratio" numbers from table 1 of the paper.
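
Here is a sketch of that check in Python (my own code; the name harmonic_spaced is mine), applying the reciprocal-arithmetic formula to both ranges:

```python
def harmonic_spaced(S, L, n):
    # 1/x_i = 1/S + (i-1)(1/L - 1/S)/(n-1): the reciprocals are in
    # arithmetic progression, so the x_i form a harmonic progression
    return [1 / (1 / S + i * (1 / L - 1 / S) / (n - 1)) for i in range(n)]

large = harmonic_spaced(100, 900, 8)   # the "Large ratio" range
small = harmonic_spaced(333, 666, 8)   # the "Small ratio" range
print([round(x, 3) for x in large])
print([round(x, 3) for x in small])
```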

### mathoverflow

### Polynomial-time algorithm to compare numbers in Conway chained arrow notation

Question by khaaan, 2013 Jan 21:

Polynomial-time algorithm to compare numbers in Conway chained arrow notation

(This is revision 2 of the question; revision 1 is below)

I am looking for a polynomial-time algorithm which, given a character string containing two numbers in Conway chained arrow notation, indicates whether the first number is less than, greater than, or equal to the second.

Suppose the function is called C. Then

C("1→2→3→4 ? 4→3→2") should yield "<",
since 1→2→3→4 = 1 < 4^{256} = 4→3→2,

C("2→4→3 ? 4→3→2") should yield ">",
since 2→4→3 = 2^{2^{...^{2}}} (a tower of exponents of height 65536)
> 4^{256} = 4→3→2, and

C("2→2→3 ? 2→2→2→6") should yield "=", since 2→2→3 = 4 = 2→2→2→6.

Does such a thing exist?
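
The "=" examples in the question can be checked directly. Here is a minimal (and entirely impractical) Python evaluator for Conway chains, my own sketch based on the standard reduction rules; it only terminates for tiny examples like these, since most chains are astronomically large:

```python
def conway(chain):
    # Evaluate a Conway chain given as a list of positive integers.
    # WARNING: only feasible for tiny chains; e.g. 2->4->3 is already a
    # power tower of height 65536 and should NOT be passed in.
    c = list(chain)
    if len(c) == 1:
        return c[0]                        # p = p
    if len(c) == 2:
        return c[0] ** c[1]                # p -> q  =  p^q
    if c[-1] == 1:
        return conway(c[:-1])              # X -> 1  =  X
    if c[-2] == 1:
        return conway(c[:-2])              # X -> 1 -> q  =  X
    # X -> p -> q  =  X -> (X -> (p-1) -> q) -> (q-1)
    inner = conway(c[:-2] + [c[-2] - 1, c[-1]])
    return conway(c[:-2] + [inner, c[-1] - 1])

print(conway([2, 2, 3]), conway([2, 2, 2, 6]))   # both are 4
print(conway([1, 2, 3, 4]))                      # 1
print(conway([4, 3, 2]) == 4 ** 256)             # True
```

This evaluates the notation but gets nowhere near the polynomial-time comparison the question asks for, which would have to compare chains without evaluating them.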

. . .

Answer by joro, 2013 Jan 21:

Not sure this will do the job, but try Robert Munafo's hypercalc; the motto is:

Go ahead — just TRY to make me overflow!

Hypercalc can work with quite big numbers; here is a sample session:

C1 = scale=100
C1 = 27^(86!) - (27^86)!
R1 = 10 ^ (3.467778644301262713584883219782046054843086208195414740688065133320263642461739090290922141022702407 x 10 ^ 130)

Home page: http://mrob.com/pub/perl/hypercalc.html#versions

Download: http://mrob.com/pub/perl/index.html

Online javascript version: http://www.ylmass.edu.hk/~mathsclub/HyperCalc/HyperCalc.html

I love hypercalc, but it doesn't support Conway notation. – khaaan 2013 Jan 21

Robert Munafo writes that he knows Hypercalc cannot compute an ordering (or even a partial ordering) for chained-arrow notation, but he would love to learn of an algorithm that does. – Charles 2013 Mar 5

. . .

My Answer (which wasn't an "answer", but just a comment, so instead of posting I sent it as a private message to Charles) :

I am the author of Hypercalc and I love Conway's chained-arrow notation. Most numbers expressible in that notation are well beyond the abilities of Hypercalc, and I've never seen a good way to address it. But I, too, would love to have an algorithm that could put a well-ordering on Conway arrow-chains, as the OP suggests.

. . .

(Revision #1 of khaaan's question, earlier on 2013 Jan 21:)

I am looking for a polynomial-time algorithm which, given a character string containing two numbers in Conway chained arrow notation, indicates whether the first number is less than, greater than, or equal to the second.

Suppose the function is called C. Then

C("2→2→4 ? 4→3→2") should yield "<", since 2→2→4 < 4→3→2,

C("2→4→3 ? 4→3→2") should yield ">", since 2→4→3 > 4→3→2, and

C("2→2→3 ? 2→2→2→6") should yield "=", since 2→2→3 = 2→2→2→6.

Does such a thing exist?

### Philosophy StackExchange

### What happens when numbers become large... really large?

Question by Mahmud, 2012 July 2:

What happens when numbers become large... really large?

Thought Experiment: Suppose I type a single digit '1' and then I die with my thumb 'forever' locked on '0'. Is it possible that when the number

1000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000...

keeps increasing, some interesting change happens to our understanding and philosophy of the number system?

Background: This is related to my earlier question on Kunen inconsistency in Math.SE. However, I am still having trouble grasping the behavior of large numbers. A definitive website here on large numbers ends the discussion with the following:

Because different systems are useful for different things and none can generate all useful results (due to incompleteness as demonstrated by Gödel) we end up with several different 'right answers' to the question. None is the 'best' answer, but some are more popular than others. (The term transfinite itself is a result of this — it was Cantor's effort to avoid using the term infinite for certain quantities that were definitely not finite, but did not share all the properties of what he considered truly infinite. In most axiomatic systems, there is no difference.)

One of the consequences is that, in the discussion to follow, it is often difficult or even meaningless to compare the various definitions of infinities to each other, trying to determine which is larger. However, within any one number theory system the infinities can usually be put into a clear order.

Georg Cantor developed two different systems of infinities, called ordinal and cardinal, out of his work in set theory during the 1870's and 1880's. His work still suffices for most purposes (much as Newton's physics is sufficient for most engineers).

I forget the title, but I was reading a paper by Douglas Hofstadter on large numbers, but again the argument veered towards philosophical interpretation.

Question: How to understand the behavior of large numbers? My motivation is from perspective of Poincare recurrence theorem a la Don Page's alternate universe count, or Skewes' number. Does logic as we know it 'break down'?

EDIT:

The paper I was referring to by Douglas Hofstadter is On Number Numbness (1982).

. . .

Answer by Michael Dorfman:

Your thought experiment has a simple answer: It's not a number until you take your thumb off the keyboard. Up until then it is just a string of digits. The placement of the "1" (and thus its meaning) cannot be interpreted until then.

Notice that this is different from the case where you begin by typing a decimal point; in that case, the ongoing series of digits serves as an approximation of the intended number, because each digit stays in its proper place.

My comment:

It is possible to define "number" in many ways: consider for example John Horton Conway's "Surreal numbers".

In this particular case, the original question proposes a definition of "number" as a quantity whose value varies with time: what one might call a function with one argument (the amount of time one has held down the 0 key on the keyboard) and a scalar value (namely, 10 to the power of the number of "0" digits that have been typed).

Given the broadening of the concept of "number" that we see in the likes of Conway's surreal numbers, and Cantor's Transfinite numbers and so on, I do not believe one should exclude the possibility of "numbers" that grow with the passage of time.

. . .

I am going to state a more precise form of this question after corresponding off-line with the OP. I hope this still captures the intent of the question:

In a perfect world, where I will never grow old or go hungry, I am watching a giant computer display screen with enough space to show trillions, or quadrillions, or even centillions of characters or digits.

I have set up the computer to show "1" for a moment, then "10" for a moment, and then "100", and then "1000", and so on. Every moment (perhaps once per second) another 0 appears. Every time a 0 appears, there is a new number being shown.

Can I watch this "forever", and perceive a new number every time a 0 is added? Or is there a limit to my ability to perceive, comprehend, or remember what I am seeing? To what extent does this limit how we as humans can understand numbers and number systems?

I believe that there is a limit to the ability of human beings to perceive, comprehend what they are seeing, and remember what they have seen.

Every time a new "0" appears, it is clearly different from what was there a moment ago. I also know that every number I am seeing is different from each of the numbers I saw before. But as time goes on, I will repeatedly experience the feeling of "what I am seeing is very large, and I have been watching a very very long time". That feeling will be more and more common as time goes on, and eventually I'll be in exactly the same mental state that I was in at some earlier time.

Suppose I try to keep count of how many zeros there are. I can train myself to remember lots of facts, things that can be written out in letters and words.

The mind can remember a lot of information. Perhaps you have enough
space in your mind that, if it were all written out, it would take a
billion = 10^{9} letters. That means you can have about 26^{10^{9}}
distinct mental states, because there are that many different
combinations of a billion letters with a 26-letter alphabet.

With my mental capacity of 26^{10^{9}}, I am "counting" the 0's as
they get displayed, and I keep track of it with my mental state. When
there are 876 zeros, I have the number "876" in my mind. [876 is close
to 10^{3}, so] there are about 10^{3} zeros on the huge computer
screen, and 3 digits in my mind. [When there are 8363742824 zeros on
the screen, I'll have 10 digits in my mind, and I can keep going until
my mind's capacity is reached.] Since I can hold "about a billion
letters" in my mind, that means I can "count" the 0's until there are
about 26^{10^{9}} zeros on the screen. Then, because of the limited
capacity of my mind, I must lose count. Beyond that, any perception of
exactly how big the number is must be subjective. I will eventually
have the same "really big" mental state twice. The largest number I
can comprehend, without being confused that it was some other number,
is less than 10^{26^{10^{9}}}.

This is like the "Poincare Recurrence Theorem" that the OP linked to, applied to minds. It is one of the natural limits that affect how well we can think about large numbers. I merely use it as a metaphor: if a field is of limited finite size, and you walk around in the field indefinitely, you will eventually step on a spot where you have stepped before.

In our off-line discussion, the OP suggested that we can get larger
numbers by programming the computer to display 2, then 2^{2}, then
(2^{2})^{2}, then ((2^{2})^{2})^{2}, or (using words) it could display
"zwei", then "zweizenzic", then "zweizenzizenzic", and so on (these
are archaic words meaning "squared", "square of squared", and so on,
see Zenzizenzizenzic). The screen fills up with 2's, or with the
letters "zenzi". Once again, there will come a point where I am no
longer able to see anything changing, or perhaps I'll see it changing
but my state of mind will eventually wander back to a point where it
was at some time earlier. I know it is getting bigger every moment,
but even that state of knowledge will eventually recur in exactly the
same form.

We can do the same thing with any mathematical notation, like g(1),
g(g(1)), g(g(g(1))), ... where g(N) is the "g_{N}" function
described in the Wikipedia Graham's number article. This time
instead of squaring each time, the numbers are getting bigger in a
much faster way. Perhaps I have trained myself to understand what this
means. If so, I could then watch the computer screen display "g(1)",
then "g(g(1))", then "g(g(g(1)))", and so on... but again, eventually
my mind would reach its "Poincare recurrence".

No amount of effort using more sophisticated or elaborate notation, or methods of abstraction and understanding, will overcome the finite limit of the human mind to perceive, comprehend, and remember.

This is all very similar to what my large numbers "superclasses" discussion is all about, in particular "Superclass 6".

*EDIT*: I added a simple analogy for the "Poincare" reference, and pointed out that the mathematical Poincare theorem is not relevant. This is about the concept of re-visiting the same spot in a finite space.

Added the "To what extent..." bit at the end of the restatement to try to encompass more of the original question.

Footnotes

Using Hypercalc we can express 10^{26^{10^{9}}} in the
form 10^{10^{x}}, which is about
10^{10^{1414973348}}. This is a lot more than
a googolplex.
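
The arithmetic behind this footnote is easy to verify with a short sketch: since 26^(10^{9}) = 10^(10^{9} x log10(26)), the inner exponent is 10^{9} x 1.41497... ≈ 1414973348.

```python
import math

# 26**(10**9) = 10**(10**9 * log10(26)); compute that inner exponent
exponent = 10 ** 9 * math.log10(26)
print(round(exponent))   # 1414973348, matching the footnote
```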

License: All contributions to the StackExchange discussion websites are licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported (CC BY-SA 3.0) license. This is similar to, but not quite the same, as the license for most of mrob.com.

This page was last updated on 2015 Jan 07. s.11