Friday, August 3, 2012

alternative relativistic automata

The most general expression of my previous post on relativistic automata can be summarised as follows.
Use any automata rule which updates a cell based on its nearest neighbours, but subject it to this condition:

 For any rule acting on a small volume of space, represented by a 4x5 matrix which defines a position and a 4-parallelotope (4d generalisation of a parallelogram) volume, the same rule must apply to every possible 4x5 matrix, acting on the equivalently transformed cells.

The resulting automaton has no specific reference frame, so it is relativistic by design. The direction of the line of time will vary depending on the frame of the observer, and the arrow of time is set by which side has the higher entropy. A simpler way to say this is:

 If the automata turns image A into image B then it would also turn the image of a transformed (sheared, scaled, rotated) image A into an equally transformed image B.

This idea can be partially achieved with regular grid automata, as discussed in that post. However, regular automata may not allow arbitrary rotations and scales, possibly just 90-degree rotations and scaling by powers of two.

So a different form of automata may be better in this role.

stochastic automata:


1. place an arbitrary number of 'cells' down randomly anywhere in a 4d space, and set each to either on or off.

2. for each cell, pick n other cells at random to be its neighbours or inputs.

3. pick an arbitrary rule to choose the cell's bit based on its inputs. We can then trial many rules and see what behaviours are common.

4. the automata will then evolve simply by iterating all the cells repeatedly.
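The four steps above can be sketched as follows; the cell count, the neighbour count and the rule itself are all arbitrary placeholder choices:

```python
# Minimal sketch of the stochastic automaton: random cells, random wiring,
# an arbitrary rule, then repeated iteration.
import numpy as np

rng = np.random.default_rng(0)

num_cells = 1000   # arbitrary number of cells
num_inputs = 8     # n neighbours (inputs) per cell

# 1. random positions in 4d space, each cell set to on or off
positions = rng.uniform(-1.0, 1.0, size=(num_cells, 4))
bits = rng.integers(0, 2, size=num_cells)

# 2. each cell picks n other cells at random as its inputs
inputs = rng.integers(0, num_cells, size=(num_cells, num_inputs))

# 3. an arbitrary rule: here, set the bit when more than half the inputs are set
def rule(input_bits):
    return (input_bits.sum(axis=1) > num_inputs // 2).astype(int)

# 4. evolve by iterating all the cells repeatedly
for step in range(10):
    bits = rule(bits[inputs])
```

The positions play no role yet; they only matter once a scheme (like those below) ties the wiring to geometry.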

Interestingly, this basic scheme is all the functionality that is required; the remainder comes down to reinterpreting this data in a way that fits with how we interpret the real world... Here is a first design, let's call it..


Eigenvector Scheme:

In this design the rule is a function of the number of set input bits, not the individual bits, so it is like Conway's Life in that sense:

 a. place each cell in 20 dimensional space, of which 4 dimensions are the usual space-time.
 b. move each cell to the centre of its inputs, in 20d space
 c. the new position of the cell requires re-scaling, stretching and shearing the distribution of inputs so that the 20d cell position is an accurate covariance matrix of its inputs.
 d. repeat for all cells and iterate until convergence.


The expectation is that the cells will order themselves into a network where each cell's neighbours represent nearby points in 20d space, i.e. volumes attach to volumes, rather than points attaching to points as in normal automata.
We can then decide on some simple rule, such as: set the cell's bit when more than 50% of its inputs are set. The behaviour can be considered as approaching some ideal as the number of cells and the number of inputs per cell increase towards infinity.
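A minimal sketch of steps (a), (b) and (d), with illustrative sizes; step (c), rescaling so each cell's 20d position is a valid covariance of its inputs, is the hard part and is omitted here:

```python
# Sketch of the eigenvector scheme's positional iteration: each cell moves
# to the centre of its inputs in 20d space, repeated until convergence.
import numpy as np

rng = np.random.default_rng(1)
num_cells, num_inputs, dims = 500, 8, 20   # arbitrary sizes

pos = rng.normal(size=(num_cells, dims))   # 4 space-time + 16 extra dims
inputs = rng.integers(0, num_cells, size=(num_cells, num_inputs))

for step in range(200):
    new_pos = pos[inputs].mean(axis=1)     # centre of each cell's inputs
    shift = np.abs(new_pos - pos).max()
    pos = new_pos
    if shift < 1e-9:                       # iterate until convergence
        break
```

On a fixed random graph this plain averaging collapses towards a single point, which is why step (c)'s covariance constraint is needed to give the positions structure.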

The main difficulty with this might be finding the least-work transform that changes the inputs into the required covariance distribution. We might want to say that a covariance is defined only by its eigenvectors and eigenvalues, but that means different points in 20d space have identical meaning. A solution might be to constrain cells to lie only on the eigenvector/eigenvalue manifold of 20d space.

Reconnect Scheme:
Instead of moving points around in 20d space, we could alternatively try re-connecting the graph into one that satisfies the rule at the top of this post.

 a. place each cell in a 20 dimensional space, representing the 4x5 skew shape for the cell connections.
 b. for each cell, find the closest neighbours in the 16 'covariance' dimensions, 2 for each dimension.
 c. move the cell to the centre of these neighbours.
 d. for each cell, find the closest along-axis inputs for its required matrix shape, 2 for each dimension, so the eight unique inputs closest to where they should be.
 e. move the cell to the centre of this shape and move the inputs to exactly match that required.
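Steps (b) and (c) can be sketched as below, with arbitrary sizes; the along-axis input matching of steps (d) and (e) is omitted:

```python
# Sketch of the reconnect scheme: rewire each cell to its nearest neighbours
# in the 16 'covariance' dimensions, then move it to their centre.
import numpy as np

rng = np.random.default_rng(2)
num_cells, k = 300, 32                    # 2 neighbours per covariance dim

pos = rng.normal(size=(num_cells, 20))    # 4 space-time + 16 covariance dims
cov = pos[:, 4:]                          # the 16 covariance dimensions

# pairwise distances in covariance space
d = np.linalg.norm(cov[:, None, :] - cov[None, :, :], axis=2)
np.fill_diagonal(d, np.inf)               # a cell is not its own neighbour
neighbours = np.argsort(d, axis=1)[:, :k] # the k closest cells per cell

pos = pos[neighbours].mean(axis=1)        # move each cell to their centre
```

Because the wiring is recomputed from scratch each pass, the neighbour sets can change discontinuously between iterations, which is exactly the non-smoothness concern below.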

The rule then acts on 40 individual inputs per cell, allowing more rules than the above scheme.
Unlike the previous scheme, this one doesn't have ambiguity with several covariances having the same meaning because we are dealing directly with a grid-like scheme. The problem is that reconnecting inputs creates a non-smooth iteration which could jump around and potentially never converge.

Grid-annealing Scheme:

 a. at equidistant points in 16 dimensional space, place and connect points in a grid which is shaped by the covariance matrix at that 16d point. This results in a 20d set of cells with 8 inputs each.
 b. for each cell, find the 32 unique nearest neighbours in 20d and make them inputs
 c. the 40 inputs all have a preferred 4d position, iterate shifting the cell positions to descend towards an approximation.

This scheme is simple to implement and will definitely converge, but the grid-like connections probably make it less than optimal.
It seems sensible that not only the 4d spacing but also the 16d spacing should be affected by the covariance, so that matrix variations sit closer together on small grids. But this gets harder to picture when we consider the stretched and rotated spacing of a general 4x4 matrix.

Convolution automata:

Although the idea of relativistic automata is simply stated (top of post), it is hard to find an algorithm that will implement it. Grid-based methods seem too constrained: you can't rotate by a random angle and get the same results, for example. Stochastic methods may get around this by breaking the rigid grid symmetry, but there is no clear best approach as yet.
Moreover, the idea of expanding the Minkowski symmetries (rotation, boost, translation, scale) to full 4x5 transforms is speculative. It relies on some of those matrices naturally being somehow not seen by the observer.

A different approach is to expand convolution, so move from bit fields to real numbers.
Convolutions usually convolve over space, but here we make them convolve over volume too:

1. take a 4d grid of real values, and a convolution kernel, for example a 3x3x3x3 grid of real coefficients.
2. convolve the grid (i.e. apply the filter at every position on the grid); this can be considered a convolution for a kernel scale matrix of diagonal (1,1,1,1).
3. repeat the convolution for a scale matrix of diagonal (1,1,1,2), which is equivalent to scaling the kernel by two in the 4th dimension. Add this new convolution on top of the previous one.
4. repeat again for diagonal matrix (1,1,1,3), and so on up to (1,1,1,n).
5. in fact repeat this for every integer 4x4 matrix (not just diagonal matrices), each one equates to a transform of the original kernel.
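The steps above can be sketched in 2d (the 4d version is identical with 4d arrays). Only diagonal scale matrices diag(1, s) are shown, with a crude 1/s weighting so larger kernels don't dominate; the sizes and kernel values are placeholders:

```python
# Sketch of the convolution automaton: convolve the grid with the kernel
# at several integer scales of one axis and sum the results.
import numpy as np

def convolve_same(grid, kernel):
    # apply the filter at every position on the grid (zero-padded edges)
    kh, kw = kernel.shape
    padded = np.pad(grid, ((kh // 2, kh - 1 - kh // 2),
                           (kw // 2, kw - 1 - kw // 2)))
    windows = np.lib.stride_tricks.sliding_window_view(padded, (kh, kw))
    return np.einsum('ijkl,kl->ij', windows, kernel)

rng = np.random.default_rng(3)
grid = rng.uniform(size=(64, 64))      # 2d grid of real values
kernel = rng.uniform(size=(3, 3))      # 3x3 kernel of real coefficients

result = np.zeros_like(grid)
for s in range(1, 5):                  # kernel scale matrices diag(1, s)
    # scale the kernel by s in the second dimension (nearest-neighbour)
    scaled = np.kron(kernel, np.ones((1, s))) / s
    result += convolve_same(grid, scaled)
```

Extending the loop to general integer matrices means resampling the kernel under each transform rather than just stretching one axis.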

And there we have it: a matrix-independent automaton.
One problem is that the size of the result will grow as the convolution range increases, so it might need a scale factor that reduces with size, such that it appears the same at all possible scales and transforms. You could weight each convolution by the 4-volume of its transform or, perhaps more efficiently, use non-integer transforms, varying the gap between successive transforms based on their volume, so that the total effect at each volume is roughly proportional to the 4th root of that volume.
An unanswered question is whether higher resolution (small gaps between transforms) converges to a solution or whether it somehow averages out to just give a blurry mess.

Relativistic Folding Fractals

Although this may not class as an automaton, a simpler scheme to generate 4d shapes with some transform symmetry is to extend the idea of Kaleidoscopic Iterated Function Systems. It works like so:

 1. for each point in 4d space-time, apply some transform to the point, composed of:
    a. rotations
    b. translations
    c. scale (enlargement)
    d. boosts
    e. folds
 2. repeat up to n times (larger n is more accurate)
 3. draw the points that stay within a certain 4d Minkowski distance, which is defined as sqrt(x^2 + y^2 + z^2 - t^2)

The folds can be a reflection in just the x axis or the t axis; combined with the boosts, translations and rotations, this allows the equivalent of any possible fold. Equivalently, you could define fold operations about any 4d axis and 4d point.
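The escape test for a single point can be sketched as follows; the fold, boost velocity, scale, offset and escape threshold are all made-up constants for illustration:

```python
# Sketch of the relativistic folding fractal: iterate a 4d point through
# fold -> boost -> scale -> translate, and keep points that stay within
# a Minkowski distance bound.
import numpy as np

def boost(v):                       # Lorentz boost mixing x and t
    b = 0.3                         # arbitrary velocity parameter
    g = 1.0 / np.sqrt(1.0 - b * b)
    x, y, z, t = v
    return np.array([g * (x - b * t), y, z, g * (t - b * x)])

def minkowski_dist2(v):             # x^2 + y^2 + z^2 - t^2
    x, y, z, t = v
    return x * x + y * y + z * z - t * t

def in_fractal(p, n=20):            # larger n is more accurate
    v = np.array(p, dtype=float)
    for _ in range(n):
        v[0] = abs(v[0])                      # fold: reflect in the x axis
        v = boost(v)
        v = 1.5 * v                           # scale (enlargement)
        v -= np.array([0.5, 0.0, 0.0, 0.1])   # translation
        if minkowski_dist2(v) > 100.0:        # escaped the Minkowski bound
            return False
    return True
```

Drawing the shape then means evaluating `in_fractal` over a slice of space-time points; rotations and a t-axis fold slot into the loop the same way.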

Here I have a simple implementation: http://tglad.powweb.com/relativisticFractals.html

This is the bare-minimum relativistic version; we could also try expanding the fractal to include any 4x5 transform (plus folds), and the implementation makes it easy to try out different combinations.