Robert P. Munafo, 2023 Jun 21.
"Perturbation methods" are a subclass of the orbit-based optimizations that work using the principles and methods of perturbation theory. They have revolutionised the art of deep zooming through the use of Newton-Raphson zooming and related techniques to locate embedded Julia sets of far higher orders than were previously possible.
In general, they avoid iterating every point in a pixel grid by performing the full iteration calculation on one or more reference points, then using an approximation model to infer values for the rest of the points to an acceptable level of precision. Reference points are also called "key" points.
Perturbation methods are fairly well known in science and engineering. They have been rediscovered and independently developed multiple times for fractal-related calculation.
Area of Greatest Usefulness
For the Mandelbrot set, perturbation methods are most useful for any images that require a very high number of iterations and for which the lowest-period island is of comparatively high period. Such images sometimes occur at low magnification levels — particularly in the filament structure near generalised Feigenbaum points, or any place that has enough levels of compound tuning to ensure that island periods are large. In these cases, perturbation methods can be carried out in "uniform precision", where all calculations are done in a "native" (built-in and fast) format, e.g. double-precision floating point.
However, perturbation methods are most commonly used for very highly magnified views, notably including exploration of high-order embedded Julia sets. Here the reference points are iterated using extended precision, at the full precision needed for the magnification in question. Most other calculations (derivatives, scaled deltas, and so on) can be done in the native floating-point format. Mixed-precision perturbation techniques have been used for multibrot sets and many other recurrence relations.
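The mixed-precision idea can be sketched as follows (a minimal illustration, not code from any particular fractal program; the function name and the use of Python's decimal module are my own choices): the reference orbit is iterated at extended precision, but each iterate is stored rounded to the native double-precision format, ready for use by the fast per-pixel calculations.

```python
# Sketch: iterate a reference point for z -> z^2 + c at extended
# precision, storing native-precision copies of the iterates.
from decimal import Decimal, getcontext

def reference_orbit(c_re, c_im, max_iter, digits=50):
    """Iterate z -> z^2 + c at `digits` decimal digits of precision,
    returning the orbit rounded to native complex floats."""
    getcontext().prec = digits
    zr, zi = Decimal(0), Decimal(0)
    cr, ci = Decimal(c_re), Decimal(c_im)
    orbit = []
    for _ in range(max_iter):
        # complex square-and-add, done on Decimal components
        zr, zi = zr * zr - zi * zi + cr, 2 * zr * zi + ci
        orbit.append(complex(float(zr), float(zi)))  # native-precision copy
        if zr * zr + zi * zi > 4:  # escape test
            break
    return orbit
```

At deep zoom levels the reference would be carried at hundreds of digits; 50 digits here merely stands in for "more precision than native floating point".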
Perturbation methods can be classified by the method of inference, and then by the number of reference points.
Models can be linear, quadratic, cubic, etc. Model coefficients usually involve derivatives (of first or higher order), possibly scaled (for example, the derivative of the recurrence relation multiplied by the initial distance between two reference points).
Most interpolation methods are patch iteration methods; see that article for details. The number of reference points will depend on the type of model.
Extrapolation methods include cubic deltas and bivariate linear. These can use just a single reference point, but more often they create reference points when needed and/or calculate new reference point values during iteration, depending on the presence of iteration periodicity (islands) in the area being viewed.
The first to be published was the Synchronous-Orbit algorithm, which has its own article. It is a patch iteration method with four key points and linear interpolation. It was developed in 1992 by Steven Stoft and others contributing to the program Fractal Witchcraft, with details published in Amygdala; I learned about it on Sep 19th of that year.
Four reference points are iterated, generally the corners of the view rectangle of the image to be generated. The iterates for all other pixels can be interpolated between those four until that is no longer practical, either because one of the reference points escapes or (more generally) because the four reference points cease to fit a simple linear model. When that happens, the image can be subdivided into smaller rectangles, or individual pixels can be iterated the rest of the way at high precision, starting from the last "good" interpolated high-precision iterate. Using linear interpolation in two dimensions (see two-dimensional interpolation for details), a linear model works for a rectangle, parallelogram, trapezoid (keystone), or other quadrilateral, any of which allows rotation and scaling.
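As a toy illustration of the interpolation idea (function names are mine; a real implementation would also track escape and the validity of the linear model as described above), four corner orbits can be iterated exactly and an interior pixel estimated by two-dimensional linear interpolation:

```python
# Sketch: synchronous-orbit-style bilinear interpolation between
# four exactly iterated corner points of a small view rectangle.
def iterate(c, n):
    """Iterate z -> z^2 + c exactly, n times, from z = 0."""
    z = 0j
    for _ in range(n):
        z = z * z + c
    return z

def bilinear(z00, z10, z01, z11, u, v):
    """Interpolate between four corner values; (u, v) in [0,1]^2."""
    top = z00 * (1 - u) + z10 * u
    bot = z01 * (1 - u) + z11 * u
    return top * (1 - v) + bot * v

# For a small, slowly varying patch the interpolated iterate stays
# very close to the exact one:
corners = [0.01 + 0.01j, 0.02 + 0.01j, 0.01 + 0.02j, 0.02 + 0.02j]
n = 10
z = [iterate(c, n) for c in corners]
mid_est = bilinear(*z, 0.5, 0.5)           # interpolated centre pixel
mid_true = iterate(0.015 + 0.015j, n)      # exact centre pixel
assert abs(mid_est - mid_true) < 1e-6
```

When the patch spans more structure (near an island, or once an orbit escapes), the linear model fails and the rectangle must be subdivided, exactly as the article describes.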
One notable and recent example is the cubic deltas method by K. I. Martin, first published in 2013. A single reference point x is iterated at full precision, with its iterates called xn. Any other desired point y can be iterated by first noting its difference from x, i.e. ∂=y-x. Each iterate yn differs from xn by an iterated delta ∂n. The ∂n values are approximated by third-order polynomials in ∂, with coefficients derived from the xn. See the cubic deltas article for more details.
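The exact recurrence underlying the iterated deltas is easy to state for the Mandelbrot iteration z → z²+c: if xn is the reference orbit for parameter x and ∂ = y−x, then ∂n+1 = 2·xn·∂n + ∂n² + ∂, with yn = xn + ∂n, and this holds with no approximation at all. The cubic model approximates ∂n as a polynomial in ∂ so that even this cheap recurrence need not be run per pixel. A sketch (variable names are mine) verifying the exact recurrence:

```python
# Sketch: exact delta recurrence for z -> z^2 + c perturbation.
def orbit(c, n):
    """Return [z_1, ..., z_n] for z -> z^2 + c from z_0 = 0."""
    z, out = 0j, []
    for _ in range(n):
        z = z * z + c
        out.append(z)
    return out

def delta_orbit(ref_orbit, d0, n):
    """Return [d_1, ..., d_n] where d_i = y_i - x_i, given the
    reference orbit of x and the parameter difference d0 = y - x."""
    e, out, X = 0j, [], 0j              # X_0 = 0, d_0 = 0
    for i in range(n):
        e = 2 * X * e + e * e + d0      # d_{i+1} = 2*x_i*d_i + d_i^2 + d0
        out.append(e)
        X = ref_orbit[i]                # advance to x_{i+1}
    return out
```

In a real deep zoom the reference orbit is computed at extended precision while the deltas, being tiny, fit comfortably in native floating point; that is the whole point of the method.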
Most recently, the best deep-zoom results have been produced with bivariate linear approximation.
revisions: 20230617 first version; 20230618 organise the article partly as a taxonomy; 20230619 best use at lower precisions is near Feigenbaum points, not tips; 20230620 mention applications; 20230621 bivariate linear
From the Mandelbrot Set Glossary and Encyclopedia, by Robert Munafo, (c) 1987-2023.
This page was written in the "embarrassingly readable" markup language RHTF, and was last updated on 2023 Jul 03. s.27