  ries 1.047246 -NSC

    Your target value: T = 1.047246

              3 x = pi        for X = T - 4.84488e-05
         x^5 = 3 root-of 2    for X = T + 4.81228e-05
      1/ln(sqrt(x)) = 4^e     for X = T + 1.77179e-05
    ...

We see that 1.047246 is an equally good solution for the following:

  x = pi / 3          (too high by 0.00005)
  x = 15th root of 2  (too low by 0.00005)

but if you express the solutions as the #ries# output does:

  3 x = pi            (3.14172, too high by 0.00013)
  x^5 = cuberoot(2)   (1.25992, too low by 0.00029)

suddenly it looks like the 3 x = pi solution is more than twice as good.
#ries# notices this and compensates for it, regardless of the form in
which the equation is actually found. If you run RIES on the value
1.047246 it will present both solutions, in the following form:

  X * 3 = pi
  X ^ 5 = 3 v 2

The philosophy adopted by RIES is that the "true" form for evaluating
the closeness of a match is the form in which there is just one X, and
nothing else, on the left-hand side of the equation, and just numbers
and symbols, but no X's, on the right-hand side. (Let's call this the
"normalized form".) Put all equations into normalized form, calculate
the value of the right-hand side, and look at the difference between X
and this value to determine how good the match is.

But now go back to our 2.5063 example above -- one of its solutions was:

  X^X = 10

This equation *cannot* be reduced to something with just one X on the
left-hand side -- there is no "inverse-of-X-to-the-X" function fn[1].
What does RIES do?

fnd[1] Actually, there is, but it uses an obscure function called the
"Lambert W function", which is the inverse of *y*=*xe^{x}*. More
details are [here|+numbers:xxy_root2] if you are interested.

RIES calculates derivatives. Using the value 2.5063 for X and
calculating the derivative of X^X at this value of X, you get (about)
19. A derivative of 19 means that any small change in X will cause a
19-times-bigger change in X^X.
That's important, because it allows us to compare the closeness of the
match on "equal footing" with other, normalized equations like
X = sqrt(2 pi).

Derivatives work so well, in fact, that RIES does not even have to
bother solving its equations for x. Even if it were easy to do this
(which it is not), leaving the equations unsolved is still an important
speed improvement. Part of the reason RIES is so fast is that it
generates left-hand sides and right-hand sides separately (like a
wl[Bidirectional_search] in graph theory) and tries all the combinations
to find possible solutions. It can do this a lot more quickly because a
half-equation is smaller than a full equation, and therefore there are
fewer possibilities to check. Furthermore, derivatives allow RIES to
quickly and easily check a possible equation to discover the value that
X would have to be in order for both sides to match exactly (it is
essentially performing one step of Newton's method).

Examples of LHS and RHS expressions for the test case 2.5063. These are
shown in groups that correspond to the pairs that would actually
generate matches in a search:

  expression    value      deriv.
  5/2           2.5        0
  hyprt(10)     2.506184   0
  x             2.5063     1.0

  sqrt(25)      5          0
  2 x           5.0126     2.0

  x^2           6.281540   5.01
  2 pi          6.283185   0

  x^2+e         8.999822   5.01
  9             9          0

  9+1           10         0
  x^x           10.00222   19.19

The derivative is based on the concept of an imagined error-bar in x
that is assumed to be small enough that all reported matches are
relevant, but which is not zero. Thus, it is in units of the
infinitesimal quantity "epsilon".

Let us now imagine that there is a constant "g" equal to
(pi-1) * 2.5063 ~= 5.367473. Then we would have the following grouping:

  expression    value      deriv.
  pi^2-2        7.869604   0
  x+g           7.873774   1.0
  pi x          7.873774   3.14

The match "pi x = pi^2-2" constitutes a closer match than
"x+g = pi^2-2", because in the former case x would only need to be
decreased by about 1/pi as much to make it an exact match.
So, the closeness of a match is measured as |LHS-RHS|/derivative, where
smaller is better.

To enable actual error bars to be provided with data, a value of
"epsilon" must be adopted for use with notionally precise supplied
values. This can be gleaned from the number of supplied digits in the
input, or the precision of the floating-point format can be used. In
the latter case we would take the data value divided by 2 to the power
of the number of digits in the mantissa. */

/* Future enhancements: See "UNFINISHED WORK" section above */

/* REVISION HISTORY
 20000207 Begin (it does not do much except parse the parameters)
 20000208 Add parsing of level adjust and more design comments
 20000209 Write code to generate forms. "Discover" the Motzkin numbers.
 20000210 Optimize gf.1() by making it compute the stack depth as it
          goes along rather than repeating the whole stack history on
          each test. This improves the time to generate all forms of
          length <=19 from 188 seconds down to 37.9 seconds. Add
          tracking of min and max weights, and start writing the
          expression generator.
 20000215 It now generates forms within min and max possible complexity
          limits, to avoid generating expressions from forms that cannot
          possibly fall within the current complexity limit. Add a
          little optimization; speeds up a deep search from 53 seconds
          to 37 seconds.
 20000217 It generates expressions, but doesn't prune for complexity or
          evaluate. For the initial groups of 5, the numbers of
          expressions evaluated are: {0, 12, 12, 72, 1368, 1368, 17928,
          345672, 5875272, ...}
 20000217 Prune for complexity limits; each expression is now generated
          exactly once. The numbers are down to: {0, 1, 3, 18, 69, 182,
          1046, 5358, 27123, ...}. Still need to prune foolishness like
          "11+" and "nn".
 20000217 Prune almost all obvious trivial patterns. The numbers are now
          down to {0,1,3,13,43,122,486,2186,9775,...}. Then prune [JK+]
          and [JK-] for small integers, [jK*] and [jK+] for any j