RIES - Find Algebraic Equations, Given Their Solution
Details and Surprises
This section goes into some more detail about how ries works and gives some examples of ries results that might seem surprising.
A Detailed Example of the RIES Algorithm
For an illustrated introduction to the ries algorithm, read the algorithm section above.
To perform its bidirectional search, ries builds up a list of expressions, which is kept sorted in numerical order. These expressions include constant expressions like "1+√2" and "variable" expressions like "x^2+1". If two different expressions E1 and E2 happen to have the same value, then the equation E1=E2 might be a solution. That hardly ever happens, but ries also looks for cases where E1 and E2 have close to the same value. In such cases, the solution to the equation E1=E2 will be a number close to the user's specified target.
Let us suppose that ries has been given the target number t=2.5063. The first solution it finds is "2x=5":
    ~# ries -p 2.5063
       Your target value: T = 2.5063                    mrob.com/ries

          2 x = 5                for x = T - 0.0063       {49}
          8 x = e^3              for x = T + 0.00439212   {66}
          x^2 = 2 pi             for x = T + 0.000328275  {55}
       . . .

Here is how ries gets to that solution.
Initially, the list of expressions is empty:
{ }
ries starts with expressions involving t. The simplest is just "t", so this is added to the list:
{ t=2.5063 }
Expressions containing t are "variable expressions", while those not containing t are "constant expressions". ries tries to keep the list "balanced", in the sense that it contains roughly equal numbers of both types of expression.
So, ries finds the simplest constant expression, which is "1", and adds it to the list:
{ 1=1.0000, t=2.5063 }
Then ries goes back to expressions in t, and finds two more. The next-most-complicated expressions involving t are 1/t and -t. After adding these, the list has 4 elements:
{ -t=-2.5063, 1/t=0.39899, 1=1.0000, t=2.5063 }
With 3 variable expressions and only one constant, the list is pretty out-of-balance, so we need some constant expressions. ries adds 2 and π:
{ -t=-2.5063, 1/t=0.39899, 1=1.0000, 2=2.0000, t=2.5063, π=3.1415 }
Then it adds the variable expressions t^2 and √t:
{ -t=-2.5063, 1/t=0.39899, 1=1.0000, √t=1.5831, 2=2.0000, t=2.5063, π=3.1415, t^2=6.2815 }
And then ries adds 3, e, and 4:
{ -t=-2.5063, 1/t=0.39899, 1=1.0000, √t=1.5831, 2=2.0000, t=2.5063, e=2.7182, 3=3.0000, π=3.1415, 4=4.0000, t^2=6.2815 }
At this point we have 5 variable expressions (-t, 1/t, √t, t, and t^2) and 6 constants, for a total of 11 expressions in the list. The closest match between a variable expression and a neighbouring constant is between t=2.5063 and e=2.7182, but this isn't very close: they differ by 0.2119, which is an "error" of almost 10%.
The next two expressions in t to get added are ln(t) and e^t; then ries adds the constants 5 and -1:
{ -t=-2.5063, -1=-1.0000, 1/t=0.39899, ln(t)=0.91880, 1=1.0000, √t=1.5831, 2=2.0000, t=2.5063, e=2.7182, 3=3.0000, π=3.1415, 4=4.0000, 5=5.0000, t^2=6.2815, e^t=12.259 }
Then t+1, 1/-t, Φ, 6 and 7 are added:
{ -t=-2.5063, -1=-1.0000, 1/-t=-0.39899, 1/t=0.39899, ln(t)=0.91880, 1=1.0000, √t=1.5831, Φ=1.6180, 2=2.0000, t=2.5063, e=2.7182, 3=3.0000, π=3.1415, t+1=3.5063, 4=4.0000, 5=5.0000, 6=6.0000, t^2=6.2815, 7=7.0000, e^t=12.259 }
At this point there are 20 items in the list, and the closest match is between √t and Φ, which differ by 0.0349, or about 2% of their magnitude. This is not close enough to be reported as a match by ries, but we're getting closer.
ries next adds several expressions in t, all of which are considered "equally complex": t-1, -t^2, 1/t^2, -√t, and 1/√t:
{ -t^2=-6.2815, -t=-2.5063, -√t=-1.5831, -1=-1.0000, 1/-t=-0.39899, 1/t^2=0.15919, 1/t=0.39899, 1/√t=0.63166, ln(t)=0.91880, 1=1.0000, t-1=1.5063, √t=1.5831, Φ=1.6180, 2=2.0000, t=2.5063, e=2.7182, 3=3.0000, π=3.1415, t+1=3.5063, 4=4.0000, 5=5.0000, 6=6.0000, t^2=6.2815, 7=7.0000, e^t=12.259 }
Then the constants 8, 9, -2 and 1/2:
{ -t^2=-6.2815, -t=-2.5063, -2=-2.0000, -√t=-1.5831, -1=-1.0000, 1/-t=-0.39899, 1/t^2=0.15919, 1/t=0.39899, 1/2=0.50000, 1/√t=0.63166, ln(t)=0.91880, 1=1.0000, t-1=1.5063, √t=1.5831, Φ=1.6180, 2=2.0000, t=2.5063, e=2.7182, 3=3.0000, π=3.1415, t+1=3.5063, 4=4.0000, 5=5.0000, 6=6.0000, t^2=6.2815, 7=7.0000, 8=8.0000, 9=9.0000, e^t=12.259 }
And then come t+2 and 2t:
{ -t^2=-6.2815, -t=-2.5063, -2=-2.0000, -√t=-1.5831, -1=-1.0000, 1/-t=-0.39899, 1/t^2=0.15919, 1/t=0.39899, 1/2=0.50000, 1/√t=0.63166, ln(t)=0.91880, 1=1.0000, t-1=1.5063, √t=1.5831, Φ=1.6180, 2=2.0000, t=2.5063, e=2.7182, 3=3.0000, π=3.1415, t+1=3.5063, 4=4.0000, t+2=4.5063, 5=5.0000, 2t=5.0126, 6=6.0000, t^2=6.2815, 7=7.0000, 8=8.0000, 9=9.0000, e^t=12.259 }
When 2t is added, 5 has already been in the list for a while. 2t=5.0126 and 5 differ by only 0.0126, which is closer than any pair found so far: about half of one percent of the size of t, and even less when compared to the values of the expressions themselves.
Given the possible match "2t=5", ries then solves the equation 2x=5 for x using Newton's method, starting from the initial approximation x0 = 2.5063; this gives the solution x=2.5. The difference between this root and the target value t is 2.5063-2.5=0.0063, which is about 1/4 of one percent. This is good enough to be reported as the first match (the first match must be closer than 1%).
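The arithmetic of that Newton step is easy to reproduce; here is a minimal Python sketch (my own illustration, not ries's actual solver, which handles general two-sided equations) applied to f(x) = 2x - 5 with the target value as the starting guess:

    # Minimal Newton's-method sketch: refine a root of f(x) = 0,
    # starting from an initial guess x0.
    def newton(f, fprime, x0, steps=20, tol=1e-15):
        x = x0
        for _ in range(steps):
            step = f(x) / fprime(x)
            x -= step
            if abs(step) < tol:
                break
        return x

    root = newton(lambda x: 2*x - 5, lambda x: 2.0, x0=2.5063)
    print(root, 2.5063 - root)   # 2.5 and 0.0063 (up to floating-point noise)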
This match was found after putting a total of 31 expressions into the list. Already the efficiency of this approach is clear: the 31 values span a range of about 18.5 (from -6.2815 to 12.259), for an average spacing of roughly (12.259+6.2815)/31 ≈ 0.6, yet ries found an equation that matches within 0.006, about 100 times closer than that average spacing.
Now ries "raises the bar": the existing solution 2x=5 is the standard to beat. It continues inserting more expressions until it finds pairs of expressions that give even closer matches. It finds:
8t=e^3 (off by 0.00439) when the list has 168 expressions,
t^2=2π (off by 0.000328) when the list has 179 expressions,
t^t=1+9 (off by 0.000115) when the list has 241 expressions,
and so on.
This search technique is extremely efficient. After the step that discovers the t^t=1+9 solution, ries's list contains 119 expressions involving t and 127 constant expressions. These could be combined in 119×127=15113 different ways to make equations, but ries does not actually have to check all 15113 equations to find the closest matches. Keeping the expressions sorted in numerical order allows ries to simply look at the neighbouring values in the list every time it adds something new. For example, when it added 2t=5.0126, the closest neighbours in the list were the constant expressions 5 and 6. If there is going to be a solution involving 2t, the other side of the equation will be either 5 or 6; anything further away would be a poorer match.
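As a rough sketch of the "insert and check the neighbours" idea (my own illustration in Python, not ries's actual data structure), keeping the values sorted means each new expression only has to be compared against the two values next to it:

    import bisect

    exprs = []   # list of (value, name) pairs, kept sorted by value

    def insert_and_match(value, name):
        """Insert an expression; return the closest already-present neighbour."""
        i = bisect.bisect_left(exprs, (value, name))
        neighbours = exprs[max(0, i - 1):i + 1]   # predecessor and successor only
        bisect.insort(exprs, (value, name))
        if neighbours:
            return min(neighbours, key=lambda item: abs(item[0] - value))
        return None

    for v, n in [(5.0, "5"), (6.0, "6"), (2 * 2.5063, "2t")]:
        print(n, "->", insert_and_match(v, n))
    # inserting 2t = 5.0126 reports (5.0, '5') as its nearest neighbour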
There is a lot that I haven't described here, like how ries avoids working with redundant expressions like 1-1 or x/x. Some of this is described in the ries manual and most of the rest is fairly thoroughly covered by comments in the ries source code.
Effects of Changing the Symbol Set
The first few approximations to π from the convergents of its continued fraction are:
3 + 1/7 = 22/7 = 3.142857142... = π + 0.00126448...
3 + 1/(7 + 1/15) = 333/106 = 3.141509433... = π - 8.32196...×10^-5
3 + 1/(7 + 1/(15 + 1/1)) = 355/113 = 3.141592920... = π + 2.66764...×10^-7
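These values are easy to double-check; for example, a quick Python check of the arithmetic above:

    from fractions import Fraction
    from math import pi

    # The three convergents quoted above, with their errors.
    for frac in (Fraction(22, 7), Fraction(333, 106), Fraction(355, 113)):
        print(frac, float(frac), float(frac) - pi)
    # errors: +1.26449e-3, -8.32196e-5, +2.66764e-7, matching the values above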
To find rational approximations of a number with ries we can use the -S option to restrict the set of symbols that it uses. The -Ox option must also be used to avoid a solution like x*x*x = 5*6+1.
A straightforward approach is -S123456789+-*/, and many variants are possible. The symbols n (negate) and r (reciprocal) could be added, or some digits and symbols could be removed. In principle, any rational x can be represented by an equation Nx=D where N and D are integers, and since every positive integer can be built by repeatedly adding 1, the symbol set can be restricted to just the 3 symbols in -S1+*:
    ries 3.141592653589 '-S1+*' -Ox -l4 --explicit-multiply
       Your target value: T = 3.141592653589            mrob.com/ries

       (1+1+1)*(1+1)*x = (1+1+1)*(1+1+1)*(1+1)+1           for x = T + 0.025074    {207}
       ((1+1+1)*(1+1)+1)*x = ((1+1+1+1+1)*(1+1)+1)*(1+1)   for x = T + 0.00126449  {235}

                   (for more results, use the option '-l5')

The two results are equivalent to x=19/6 and x=22/7.
Let's compare several ries commands that give rational approximations to π. All are run to level 4 (using -l4) and equivalent solutions are shown side by side, along with the solution of the equation as a reduced fraction.
[Table: side-by-side results of several level-4 ries runs with different symbol sets, with each solution also shown as a reduced fraction.]
Comparing the first two columns, we see that there are far more results when the full set of digits can be used. Also, ries reaches the 355/113 solution in a level 4 search, whereas with -S123+/ it requires a level 6 search to find that solution. This kind of makes sense because more digits offer more possibilities.
If adding more digits increases the number of results, adding more operations seems to do the opposite. Looking at columns 3 and 4, we see fewer results when all four operations +-*/ are used, and even fewer when using negate and reciprocal as well. Each time there are also some new approximations that were not seen before.
In the fraction column, the fractions are shown in bold if they are a "best rational approximation" (are closer to π than any other fraction with a smaller denominator) and shown in black if they are convergents of the continued fraction for π. (Contrary to common belief, the convergents of the continued fraction do not include all of the best rational approximations).
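The parenthetical claim is easy to verify by brute force. Here is a small Python sketch (my own illustration, not something ries does) that lists every best rational approximation to π with denominator up to 113:

    from fractions import Fraction
    from math import pi

    # For each denominator q take the closest numerator p, and keep p/q only
    # if it is closer to pi than every fraction with a smaller denominator.
    best, err = [], float("inf")
    for q in range(1, 114):
        p = round(pi * q)
        if abs(pi - p / q) < err:
            err = abs(pi - p / q)
            best.append(Fraction(p, q))
    print(", ".join(str(f) for f in best))
    # 3, 13/4, 16/5, 19/6, 22/7, 179/57, 201/64, 223/71, 245/78, 267/85,
    # 289/92, 311/99, 333/106, 355/113 -- of these, only 3, 22/7, 333/106,
    # and 355/113 are convergents of the continued fraction.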
Links
Here I describe other resources on the Internet that solve the same problem, or problems related to those solved by RIES.
Search Engines
If you have, say, 10 or more digits of a number it doesn't hurt to try your favourite search engine. For example, consider 0.77323954473516268. RIES will tell you this is 4/π-1/2, but if you didn't have RIES you could just put it into Google, and remove digits one at a time from the end until you get something. You'll soon learn that it is the answer to the "infinite resistor network" problem in xkcd 356.
When doing this sort of search, it also helps to try changing the last digit a bit to see if roundoff occurred, or if someone else rounded it the other way. For example, as of this writing I get a relevant answer for 0.773239545 but not for 0.773239544. (After the search engines re-index this page, that will change :-)
Normal Equation Solvers
If you want to check ries's answers, or get a more precise answer for one of its solutions, you need a normal equation solver rather than a ries-like "inverse" equation solver.
Normal Equation Solver : QuickMath
QuickMath will solve unbalanced equations. For example, ries with the target number 6174 gives the following among its answers:
ries -i 6174 gives x/7^2 = 5^3+1
ries -r 6174 gives x/(7*9) = 7*2*7
ries -a 6174 gives x/(2*7) = (3*7)^2 (and x/7^4 = 4/7+2)
and an older version of ries gave √(x/(3*7)) = 7√6. These are all equivalent, but you can make sure using QuickMath. All give the solution x=6174.
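For a quick numerical (rather than symbolic) check, the equivalence is also easy to confirm; for example, in Python:

    from math import isclose, sqrt

    # Check that x = 6174 satisfies every form listed above.
    x = 6174
    assert x / 7**2 == 5**3 + 1                     # 126 = 126
    assert x / (7 * 9) == 7 * 2 * 7                 # 98 = 98
    assert x / (2 * 7) == (3 * 7)**2                # 441 = 441
    assert isclose(x / 7**4, 4/7 + 2)               # 2.571428...
    assert isclose(sqrt(x / (3 * 7)), 7 * sqrt(6))  # 17.146428...
    print("all forms agree at x = 6174")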
Normal Equation Solver : Wolfram Alpha
Of course, Wolfram Alpha can solve equations and give high-precision values. For example, give it "x^2+e = 9" and it will tell you the equation has two solutions, x = ±√(9-e). Give it "sqrt(9-e)" and it will give you the value to 50 or more digits.
MESearch
The "mathematical expression searcher" MESearch, formerly at www.xuru.org/mesearch/MESearch.asp, was created by "Xuru" (Jon Zurutuza Salsamendi, who formerly had a website at www.xuru.org/Index.asp). It implements a bidirectional search algorithm quite different from that of RIES, but achieving similar results. Unlike RIES it comes with many more functions and constants built-in. It also runs in Java, so it is not necessary to compile source code to use it, and it has a friendly interactive user interface. Unfortunately, most Java programs were designed to run in browsers, and browsers stopped supporting Java several years ago.
Inverse Symbolic Calculator
More serious mathematical users might prefer the Inverse Symbolic Calculator. (It is a bit lacking in documentation, but this hint will help: It strives to match however many digits you give. So if you get too many matches, supply one or two more digits.)
ISC with fsolve Input
The older Inverse Symbolic Calculator accepts Maple expressions so long as they evaluate to a number. For a simple example, the expression "Re((2+3*I)^(3/7))" is approximately 1.581184851...; and here is an ISC lookup with that expression.
You can use ISC to solve an equation and look up the result in the ISC database. These examples use one form of the fsolve function with an explicit range to select which root is wanted. Put either of these into the ISC input box:
fsolve(x->(sqrt(x/(3*7))-7*sqrt(6)),6173...6175);
equation based on "ries 6174" solution "sqrt(x/3*7) = 7*sqrt(6)"
fsolve(x->(x*x+E-9),2...3);
equation based on "ries 2.5063" solution "x^2+e = 9"
ISC Glossary
I have compiled a small glossary of the constants and functions that appear in ISC results on my ISC quibbles page.
Maple's "identify" Command
If you have access to the (non-free) symbolic maths program Maple, you can use its identify command, or "identify()" function in Maple programs, for example:
    > identify(3.142857143)
                               22/7                          (1)
    > identify(3.146264370)
                               √2 + √3                       (2)
    > identify(3.141701399)
                               arcsinh(231/20)               (3)
    > identify(3.141592654)
                               π                             (4)
    > identify(2.596684952)
                               3 + π − 2√π                   (5)
    > identify(0.7692389013, all=false, FuncPoly=true)
                               cos(ln(2))                    (6)
Integer Relations (aka "PSLQ" or "LLL") Algorithms
With just two real numbers, like π and e, one can use continued fractions to find approximations to their ratio. For example, π/e = 1.1557273... ≈ 141/122 = 1.1557377... Treating π as the unknown and e as a fundamental constant, and solving for π we get π ≈ 141e/122. 141/122 is a member of a sequence of approximations: 1/1, 7/6, 15/13, 37/32, ... that can be found using the "continued fractions algorithm" which is more generally referred to as the Euclidean algorithm.
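Here is a short Python sketch of that computation for π/e (my own illustration of the recurrence, not part of ries or PARI/GP):

    from math import pi, e

    # Continued-fraction convergents p/q of pi/e via the standard recurrence
    # p_k = a_k*p_(k-1) + p_(k-2),  q_k = a_k*q_(k-1) + q_(k-2).
    x = pi / e
    p, q = (1, 0), (0, 1)
    for _ in range(6):
        a = int(x)
        p = (a * p[0] + p[1], p[0])
        q = (a * q[0] + q[1], q[0])
        print(f"{p[0]}/{q[0]} = {p[0] / q[0]:.7f}")
        x = 1 / (x - a)
    # 1/1, 7/6, 15/13, 37/32, 52/45, 141/122; the last gives
    # pi =~ 141*e/122 = 3.1416208...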
Integer relation algorithms are an extension of the Euclidean algorithm to more than two real numbers. That link describes the history of their development, starting with the first by Ferguson and Forcade in 1979.
In PARI/GP you can use the lindep (linear dependency) function:
    ? e = exp(1)
    %1 = 2.718281828459045235360287471
    ? phi = (1+sqrt(5))/2
    %2 = 1.618033988749894848204586834
    ? pi = Pi
    %3 = 3.141592653589793238462643383
    ? lindep([e, phi, Pi, 1], 6)
    %4 = [-8, -2, 14, -19]~
    ? lindep([e, phi, Pi, 1], 20)
    %5 = [16748, -45483, 2961, 18765]~

The first parameter should be a vector of real numbers, treated as independent factors. The answer is a vector of integers [-8, -2, 14, -19] such that its dot product with your supplied vector will come within 6 digits (as chosen by the second parameter to lindep) of zero. In other words, PARI/GP has figured out that -8e - 2Φ + 14π - 19 = 0 + O(10^-6). In fact it is about -2.55×10^-5.
If we solve for e we get an approximation e ≈ -Φ/4 + 7π/4 - 19/8 = 2.7182786... In the second example we get a 20-digit approximation; again solving for e we get the approximation e ≈ 45483Φ/16748 - 2961π/16748 - 18765/16748 = 2.7182818284590452353595...
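These relations are easy to check numerically; for example, in Python:

    from math import e, pi, sqrt

    # Check the first integer relation found by lindep, and the resulting
    # approximation for e.
    phi = (1 + sqrt(5)) / 2
    print(-8*e - 2*phi + 14*pi - 19)    # about -2.55e-05
    print(-phi/4 + 7*pi/4 - 19/8)       # 2.7182786...  (e is 2.7182818...)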
lindep can also be used to find "multiplicative" solutions, using logarithms. Noting that 1 is the natural log of e:
    ? lindep([1,log(phi),log(pi)],5)
    %6 = [-7, -14, 12]~

which tells us that -7 ln(e) - 14 ln(Φ) + 12 ln(π) is approximately zero, or equivalently e^-7 Φ^-14 π^12 is approximately 1. Solving, we get e ≈ π^(12/7) / Φ^2 = 2.71820147...
Or, looking for a linear equation in which terms like "3πΦ" or "π^2" are allowed:
    ? lindep([e, pi, phi, pi*phi, 1], 5)
    %7 = [5, 2, -6, -2, 0]~
    ? lindep([e, pi, phi, pi*pi, pi*phi, 1], 5)
    %8 = [3, -2, 1, -2, 3, 1]~

The first tells us that 5e + 2π - 6Φ - 2πΦ is near zero, and the second tells us that 3e - 2π + Φ - 2π^2 + 3πΦ + 1 is nearly zero.
Maple has a function called "PSLQ()" that does something similar. At one point in time, Mathematica had a function called "FindIntegerNullVector" to do this, but it is not available in Wolfram|Alpha. However, Wolfram|Alpha will try to find approximations for any real number you give it, using a variety of methods. For example, putting in 2.50618414558876926 I got the rational approximation 426654027/170240494, and a few "possible closed forms" including 44945393π/56340679 and "root of 4591x^3-9737x^2-3369x-2667".
Of course, if you just want to approximate Φ using π, you can't beat (√(5π²)+π)/(2π). When using ries to generate approximations, you sometimes must exclude a lot more than the thing you're approximating. My Pi Day 2013 examples show some techniques.
Vaguely-Related Resources
Online Encyclopedia of Integer Sequences
The OEIS is a wonderful resource, and although it focuses mainly on integers, OEIS also includes some decimal expansions, such as A1622, the digits of phi. There are even some whimsical ones such as Jenny's constant, A182369.
Inverse Graphing Calculator
This list would not be complete without mentioning the Inverse Graphing Calculator, formerly at www.xamuel.com/inverse-graphing-calculator.php?phrase=IGC. You gave it a set of letters and it produced an equation in x and y, such that the locus of points {x, y} for which the equation is true is a graphical representation of those letters. It worked on the principle that f()g()=0 is true whenever either of the functions f() or g() is zero, combined with the fact that any line segment can be expressed in the form f(x,y)=0 for functions f that are built up out of lots of squares and square roots.
Now that it's gone, I guess I should explain how it worked:
- √(x*x) is the absolute value function (because I guess you can assume that it's always the positive square root, not the equally valid negative square root). In IGC output this was always abbreviated simply "|x|".
- √(x*x) + √((x-1)*(x-1)) - 1 is zero only when x is in the range from 0 to 1. Thus, if you set that to zero you get an equation that is true only for x in that range, and a "1-dimensional plot" of that (perhaps called a "number-line plot" if you're old enough to have had such things in school) is a line segment. On an x-y plot, it's the region of the plane where x is in that range, and y can be anything: a vertical stripe of width 1.
- If two functions that are always non-negative are added together, the result is zero only when both functions are zero. This can be used to turn the one-dimensional function into a constraint on a two-dimensional relation. For example, (y-x) is zero only on the diagonal line satisfying x=y. We need that to be always non-negative, so square it: (y-x)*(y-x). Now take this and add it to the previous example: (y-x)*(y-x) + √(x*x) + √((x-1)*(x-1)) - 1, which is zero only when x=y and x is in the range [0..1]. That's a line segment (see the sketch after this list).
- Substitute a linear function for every "x" and another linear function for "y" to move the line segment somewhere.
- If two functions are multiplied together, the result is zero whenever either one is zero. This can be used to get two different line segments to appear in the plot.
- Of course, a unit circle is all points (x,y) such that (x*x+y*y-1) is zero, and again square this to make a non-negative version: (x*x+y*y-1)*(x*x+y*y-1) is a building block which can be combined with other things in the above ways.
- Finally, I think the IGC did a lot of expanding, regrouping, combining terms, simplification, etc.
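To make those building blocks concrete, here is a small Python sketch (my own illustration; the IGC itself produced one huge equation, not code):

    from math import sqrt

    # Zero exactly on the line segment y = x, 0 <= x <= 1, built only from
    # squares and square roots.
    def segment(x, y):
        return (y - x) * (y - x) + sqrt(x * x) + sqrt((x - 1) * (x - 1)) - 1

    # Zero exactly on the unit circle.
    def circle(x, y):
        return (x*x + y*y - 1) * (x*x + y*y - 1)

    # A product is zero wherever either factor is zero, so this is zero on
    # the union of the segment and the circle.
    def both(x, y):
        return segment(x, y) * circle(x, y)

    print(segment(0.5, 0.5))   # 0.0  (on the segment)
    print(segment(2.0, 2.0))   # 2.0  (x is outside [0, 1])
    print(both(1.0, 0.0))      # 0.0  (on the circle)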
See Also
If you like ries, you might also be amused by my web pages on numbers and large numbers. There is also Hypercalc (the calculator that doesn't overflow) and this program which computes the Riemann zeta function and turns the real and imaginary components into left and right audio waveforms, so you can listen to the Zeta function on your stereo (it actually sounds pretty good :-)
Since it's sort of related, I'll also link to my discussion of algebraic manipulation.
This page is meant to counteract the forces of Munafo's Laws of Mathematics. If you see room for improvement, let me know!