Friday, October 31, 2014

Lorentz invariant lattices

In a previous post I was looking at relativistic cellular automata, which require grid data structures that are invariant to boosts, in addition to their discrete translation and rotation invariance. The method I used worked in 1+1 Minkowski space-time, and only partly worked in 2+1 space-time: boosts in 3 directions were fine, but boosts in the intermediate directions caused unwanted dilation of the cells.

Since then I have discovered two techniques which work properly. More broadly than cellular automata, these 'Minkowski lattices' would be useful ways to order data in any relativistic simulation, and would probably serve as good 'toy' models for quantum field theory, or for dynamics such as gas models under special relativity. Existing gas models typically have Lorentz invariance only emerging at large scales; in fact many sites imply that Lorentz invariance is impossible on lattices.

This is somewhat intuitive, as a continued Lorentz boost squashes lattice points unendingly down one diagonal axis, so one might expect any lattice to become irrevocably squashed in this way, and so unable to maintain its finite point separation. However, as this clip shows, it is in fact possible:
Though I am mainly interested in 3+1 space-time rather than this 1+1 example.

The main trick is to ensure that no lattice points ever touch the light cone (t^2 = x^2 + y^2 + z^2), since such points will indeed get bunched up.
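As a quick numerical illustration of why the light cone is the danger zone (a minimal 1+1 sketch of my own; the rapidity value is arbitrary): a boost preserves the interval t^2 - x^2, so off-cone points stay a bounded "distance" from the origin, while on-cone points have interval 0 and simply scale along the diagonal without limit.

```python
import math

def boost(t, x, rapidity):
    # 1+1 Lorentz boost; preserves the Minkowski interval t^2 - x^2
    ch, sh = math.cosh(rapidity), math.sinh(rapidity)
    return ch * t + sh * x, sh * t + ch * x

# A point on the light cone (t = x) just scales by e^rapidity:
t, x = boost(1.0, 1.0, 2.0)
print(t, x)            # both ~7.389 (= e^2), growing without bound as rapidity grows

# A point off the cone keeps its interval, so it cannot run away:
t, x = boost(2.0, 1.0, 2.0)
print(t * t - x * x)   # ~3.0, unchanged by the boost
```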

The first method comes from this 2010 paper. It provides several possible configurations, but the main one has basis lattice vectors (t,x,y,z) as:


In fact, all the lattices in this method are the same shape, but with different irrational values for time, the simplest being: √3, √2, √5/3, √6, √4/3, √3/2, √7/5. They are therefore all rectangular lattices; the different time values represent different stages at which the lattice returns to its original shape. So, although √4/3 gives a more square lattice, it has to boost further than the √3 version in order to regain its shape.
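To see how a rectangular lattice can regain its shape under a boost, here is a minimal 1+1 check (my own toy example, not taken from the paper): the lattice generated by (√3, 0) and (0, 1) is mapped onto itself by the boost with cosh = 2, sinh = √3, because each boosted basis vector has integer coordinates in the original basis.

```python
import math

s3 = math.sqrt(3)
ch, sh = 2.0, s3  # a valid boost, since cosh^2 - sinh^2 = 4 - 3 = 1

def boost(t, x):
    return ch * t + sh * x, sh * t + ch * x

# Lattice points are a*(√3, 0) + b*(0, 1) for integers a, b.
for t, x in [(s3, 0.0), (0.0, 1.0)]:
    bt, bx = boost(t, x)
    a, b = bt / s3, bx   # boosted vector expressed in the original basis
    print(a, b)          # integer coefficients, so the boost permutes lattice points
```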
The problem with this lattice, which is not made clear in the paper, is that while one can boost it in each of the 3 dimensions separately, not every boost direction keeps the point separation bounded; for example a boost in the x+y+z direction has events located directly on the light cone, so these will stretch and squash without bounds.

Another technique, highlighted in a John Baez post, uses a compiled list of possible Minkowski lattices, encoded as Dynkin diagrams. Of all the rank 4 ones (for our 3+1 space-time) it is interesting that there is only one which is both symmetrisable and compact. I'm not really sure what this means, but compactified Minkowski space is interesting as it adds extra symmetries to the group of Poincaré transforms, and is used for example by Roger Penrose in his Road to Reality book. In any case, both conditions sound like they add extra symmetries to this group in one form or another. This Dynkin diagram is labelled sr136 (simply the 136th diagram to be labelled), or H13(4) (the 13th hyperbolic Dynkin diagram of rank 4).
Calculating the root vectors is not simple, and I had to assume that we use a Minkowski metric (the t^2 term is negative) in the angle calculations. One can either find a set of vectors with angles between them defined by the diagram rules and with known relative lengths, or equivalently one can generate the Cartan matrix from the diagram and use each element as the dot product between the two vectors divided by the first vector's square magnitude. This gives basis vectors (for diagram elements 1-4) as:
Using linear combinations and Lorentz transforms we can simplify this to:

(√(1/12), 1/2, 1/2, √(1/2 + 1/12))

which is very similar to the other method, but apparently has the symmetrisable and compactness properties, which would be interesting to investigate. In both cases the sum of the square vector lengths is 0.

Saturday, September 27, 2014

Black rings

Assuming black holes exist, it is true that every one of them must spin, since the chance of one forming with exactly zero angular momentum is zero. Black holes are thought to create a singularity at their centre, which is hidden behind their Schwarzschild radius, or event horizon. It seems to me (without thinking too hard about the maths) that a spinning black hole should produce an infinitely dense ring rather than a single point; a ring spinning around its axis should hold itself open. (The Kerr solution for a rotating black hole does in fact contain a ring singularity.)

What is interesting about this is that a singularity in the form of a ring allows an analytic geometry of 3d space with two or more layers of volume. By this I mean that you could pass through the ring into a different version of 3d space without, I think, violating general relativity. This is a bit like ideas about space-time wormholes, but it is different: it doesn't connect two distant areas of space by a shortcut, it is just a particular geometry of space where everything is continuous (apart from the ring singularity) but the world through the ring is different from the world you see if you pass by the ring without going through it. Moreover, there are potentially an unbounded number of layers, a new one each time you circle around the ring and through it.

Would make a nice concept for a sci-fi movie.

I wonder if such a physical setup could be simulated... the ring doesn't have to be massive, it could be the size of a door, so long as the ring is infinitely dense (which doesn't imply a large mass).

Friday, September 12, 2014

Reaction Diffusion Fractals

Previous work with fractal automata has the disadvantage that it lacks continuous rotational symmetry (it also lacks continuous translational and scale symmetry). An interesting idea is to instead work with reaction-diffusion systems, which are already continuous in rotation and translation symmetry, and try to add scale symmetry. The normal formula (the Gray-Scott model, explained in the link above) is:

∂u/∂t = Du ∇²u - uv² + F(1 - u)
∂v/∂t = Dv ∇²v + uv² - (F + k)v

Scaling Du and Dv scales the size of the patterns formed; it seems that the pattern size is proportional to the square root of this scale. Therefore we choose to scale by t^2, where t is our scaling factor.

Next we change all of our variables to be vectors rather than scalars, representing a list of independent reaction-diffusion systems. We then choose the vector t to be (1, t, t^2, ...); in other words the difference between each reaction-diffusion system is a geometric increase in scale. If we view the first three components of v in the red, green and blue channels respectively, then the result is simply a superimposed set of three reaction-diffusion systems (using Du = 1, Dv = 0.5):
Here the red scale is the smallest, the green is noticeably twice the scale, and blue twice again.
The final ingredient is to have these separate scales interact with each other. We do that in the reaction part of the formula, the uv^2 term:
each u reacts with two vs, and we replace the v vector (one v for each scale) with a weighted average of all the vs, where the weighting is an exponential dropoff s^|x| around each component. Effectively the vector v has been convolved with an exponential dropoff function, or low-pass filtered. This gives us an extra parameter s: when s = 0 we have no interaction, like the image above; s = 1 gives equal interaction, which prevents any separation of the different components. Values in between are interesting.
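The scheme above can be sketched numerically. This is a minimal sketch assuming the standard Gray-Scott update; the grid size, time step, seed, F and k values, and the exact way the t^2 scaling is applied to Du and Dv are my own illustrative choices:

```python
import numpy as np

def laplacian(a):
    # 5-point Laplacian with periodic boundaries, applied to the last two axes
    return (np.roll(a, 1, -2) + np.roll(a, -1, -2)
            + np.roll(a, 1, -1) + np.roll(a, -1, -1) - 4 * a)

def step(u, v, F, k, Du, Dv, s, dt):
    n = u.shape[0]  # number of scales (components of the vectors u and v)
    # cross-scale weights: exponential dropoff s^|i-j| between components, normalised
    w = s ** np.abs(np.subtract.outer(np.arange(n), np.arange(n))).astype(float)
    w /= w.sum(axis=1, keepdims=True)
    v_mix = np.tensordot(w, v, axes=(1, 0))  # v low-pass filtered over the scale axis
    uvv = u * v_mix ** 2                     # the reaction term couples the scales
    u += dt * (Du[:, None, None] * laplacian(u) - uvv + F * (1 - u))
    v += dt * (Dv[:, None, None] * laplacian(v) + uvv - (F + k) * v)
    return u, v

n, size, t, s = 3, 48, 0.5, 0.05
F, k, dt = 0.04, 0.06, 0.01
scale = (1.0 / t) ** (2 * np.arange(n))  # t^2 scaling: each component doubles in size
Du, Dv = 1.0 * scale, 0.5 * scale
u = np.ones((n, size, size))
v = np.zeros((n, size, size))
u[:, 20:28, 20:28], v[:, 20:28, 20:28] = 0.5, 0.25  # small central seed
for _ in range(200):
    u, v = step(u, v, F, k, Du, Dv, s, dt)
# viewing the first three components of v as red, green, blue gives images like those below
```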

Here we choose a small value of s = 0.05 and t = 0.5, so the scale of each component (red, green, blue) doubles. I plot the reaction diffusion system with varying parameters F and k on the horizontal and vertical axes respectively (increasing right and up).
zoomed out

s=0.05. zoomed in slightly. Varying the parameters in x,y shows the different patterns possible.
Next are with s = 0.1. The three components are more correlated. Notice that the thin bridges are more red (small scale) and the large areas more blue. 

zoom around 0.027, 0.058, range 0.08

zoom around 0.026, 0.057, range 0.04

zoom around 0.023, 0.055, range 0.02

zoom around 0.023, 0.051, range 0. (unvarying in x,y)

similar area, unvarying parameters. Notice the similar worm shapes at each scale.

The system does not require that the scales are doubled for each component; here we show a zoomed-out image for t = 0.7, so green is roughly 1.4 times the width of red.
In fact the results are an approximation of a continuous scale symmetry as t → 1 from below. Make sure to change the dropoff to s^(|x| log_2(1/t)).

Friday, July 18, 2014

Symmetric binary fractals

In a previous post I discussed fractal automata, which are like cellular automata (e.g. Conway's game of life) but include larger and smaller scales as neighbours as well as nearby locations. This is summarised here:

A subset of these systems are static fractals, generated by applying binary rules to lower resolution maps.
It is useful to try to find a setup with as much symmetry as possible and with as few degrees of freedom as possible. The default system operates on a 2d grid and so exhibits square symmetry. The 'type 7' fractals displayed in the above link use a semi-cheat to allow for octagonal symmetry. It works very well but has some drawbacks: it cannot be animated, and it isn't really octagonal in its symmetry, just close. It also doesn't extend to 3d, which up to now has had just cubic symmetry.

A new idea, which is the topic of this post, is to use extra dimensions to provide extra symmetry. If I use a 3d grid, apply binary rules to convert low resolution voxels progressively to higher resolution ones (doubling the resolution each step), then take a long-diagonal cross-section (the plane defined by x+y+z = 0), the resulting 2d image has hexagonal symmetry. If I make the low resolution data just a 2x2x2 block of set voxels then the rules result in images such as these:


Of course, one can easily make a grid with hexagonal symmetry (a triangular lattice), but the problem is that we need to elegantly register each cell to its lower resolution cell on a coarser version of the grid. For triangular lattices the obvious choice is for 4 triangles to refer to one parent triangle; however, these triangles have different geometric locations relative to their parent (the centre triangle or one of the three corner triangles), so you need different rules for each case and the rule set becomes larger, which defeats the goal of minimising degrees of freedom (the number of possible rules).

By contrast, for 3d voxels, each of the 8 child voxels occupies an identical corner of the parent 2x2x2 scaled voxel in my implementation. The ruleset I chose for the above images is therefore quite succinct... I base it only on the seven closest parent voxels, which, assuming cubic symmetry, gives 40 possible configurations of these seven voxels. The ruleset provides an on/off of the child voxel for each of these combinations, therefore 2^40 possible rulesets.
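The count of 40 can be checked by brute force (a sketch under my assumptions: the seven closest parents are the 2x2x2 block of parent voxels around the child's corner minus the far corner, and the cubic symmetries fixing that corner are the 6 permutations of the three axes):

```python
from itertools import permutations, product

# The seven closest parent voxels: offsets in {0,1}^3 except the far corner (1,1,1)
cells = [c for c in product((0, 1), repeat=3) if c != (1, 1, 1)]

def canonical(state):
    # Lexicographically smallest image of the configuration under the
    # 6 axis permutations (the symmetries that fix the child's corner).
    images = []
    for p in permutations(range(3)):
        images.append(tuple(state[tuple(c[p[i]] for i in range(3))] for c in cells))
    return min(images)

configs = {canonical(dict(zip(cells, bits)))
           for bits in product((0, 1), repeat=len(cells))}
print(len(configs))  # 40, hence 2^40 possible rulesets
```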

I can reduce this large search space further by enforcing bit symmetry. This specifies that on and off are arbitrary: if a particular low res shape produces a particular high res fractal, then the negative (on and off swapped) low res shape produces a negative of the fractal. This reduces the search space down to 2^20 rulesets.

A last improvement is to acknowledge that a ruleset and its negative are functionally no different if you have bit symmetry, so we can normalise by choosing the version which converts all seven parent voxels being off into an off child. This results in 2^19 rulesets.

You can of course also view the 3d shape generated, but in 3d you miss the details behind the surface:


For the 2d images, we only need a single plane of high resolution voxels, and these trace back to a thin slice through the coarser resolution voxels. Therefore the processing required is much less than for viewing the 3d output: roughly O(n^2) for an n×n resolution image, unlike O(n^3) for the 3d case. This also means that the initial seeds are only needed in a thin slice of the 3d voxel grid.

Random seeds

I can also seed randomly, which gives a better overview of how each ruleset looks on arbitrary input:




There are clearly variations on the rulesets that I'm using: one could look at a larger neighbourhood, remove bit symmetry, etc. I could also change the scaling factor (if the parent-to-child scaling is 3 rather than 2 then I am sure one can reconstruct the Koch snowflake, for instance), but more degrees of freedom almost always gives less symmetric and less interesting (more noisy) results.

However, this method has several advantages: it probably can be made to animate (fractal automata), and it can be extended to higher dimensions.
3d fractals could be generated which have what is probably octahedral symmetry rather than cubic symmetry, by using a 3d long-diagonal cross-section of a 4d binary fractal.
I am also wondering whether one can make use of the highly symmetric D4 lattice in 4d to create even more symmetric 3d fractals... however I'm not sure how elegantly you can register one 24-cell to a parent larger 24-cell.


I'll finish with some examples from a simpler system where I just look at the 4 closest parents. In this case I haven't used bit symmetry. You can sort of see there is more simplicity in the rules. The last image uses a random seed.


Friday, June 6, 2014

Mary Morris Diaries

This is quite unlike my previous posts but I wanted to keep a record of the events of the publication of my Grandma's diaries.
It is a fascinating story of a rebellious young Irish nurse who is thrown in at the deep end as the war starts, having to care for gravely ill children with hopelessly inadequate supplies and in the midst of bombing raids (her sister hospital takes a direct strike). She follows the soldiers into France with the Normandy landings, where she tends a field hospital on the front line. Her patients are Axis and Allied alike and, in the face of daily tragedy and disease, she finds her strength in her patients with their varied and charming personalities. As a nurse, and perhaps being Irish (who were neutral), she brings a culturally un-sided and stark account of the major events of WWII, despairing at the poor treatment of German prisoners in British-run PoW camps, and relaying British foot soldiers' dismay at not getting the air backup they were promised. Through the chaos she meets a man who is to become her husband as the war is ending; she tells of his role protecting German civilians from Russian soldiers who were conducting revenge attacks.

Family blog:
Old BBC entry by Grandpa:
Irish times:
Daily Express:
Irish mail on Sunday, June 1st, 2014
Irish times book review:
Connacht tribune:
Daily mail:
Essay by Carol Acton:

Tuesday, March 11, 2014

Operator before addition

I'm sure this idea is not new, but it's new to me. The question is: what comes before the plus operator on numbers? i.e. if we have + then * then power, then tetration... what do we get when going in the other direction?
These can be defined by the three-argument Ackermann function phi(m, n, x), so phi(m, n, 0) = m+n, phi(m, n, 1) = m*n etc. The question may then be: what is phi(m, n, -1)?
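This hierarchy can be sketched recursively (a minimal sketch; the base cases are chosen so the usual operators fall out, and the name phi follows the text):

```python
def phi(m, n, x):
    # x = 0: addition, x = 1: multiplication, x = 2: exponentiation, ...
    if x == 0:
        return m + n
    if n == 1:
        return m  # m appears once at every level: m*1 = m, m^1 = m, ...
    # each level is n-fold repetition of the level below
    return phi(m, phi(m, n - 1, x), x - 1)

print(phi(3, 4, 0), phi(3, 4, 1), phi(3, 4, 2))  # 7 12 81
```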
I'm going to call this operator # (or 'with').
I say 'may' because the extension of these numerical operators is perhaps ambiguous, so there may be different correct answers depending on your interpretation of the existing operators. In a previous post I thought that # is the equality operator: it returns m if m == n, otherwise it returns void.

Today's idea is to make:
 m # n = log_2(2^m + 2^n)
So we get a sequence
a#a = a+1
a+a = a*2
a*a = a^2

The identity value for all four:
a#-inf = a
a+0 = a
a*1 = a
a^1 = a.

The operators are different because they consider the arguments in a different context... for addition the values m and n are considered as points along a line, for multiplication they are considered to be magnitudes, and for the # operator they are considered to be entropies. This is important because entropy is a very fundamental concept so having # as an operator as fundamental as + and * is useful in learning to deal with entropy and information.
For example, the information needed to store either a 16 bit colour or a 16 bit alpha is 16 # 16 = 17 bits. Anyway, # is a sum 'as though the arguments are entropies'. It also has an inverse:
m -# n, which is just log_2(2^m - 2^n).
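A quick sketch of # and its inverse as defined above (a minimal implementation; the numerically stabler form m + log_2(1 + 2^(n-m)) is equivalent):

```python
import math

def with_op(m, n):
    # m # n = log_2(2^m + 2^n): a sum 'as though the arguments are entropies'
    return math.log2(2 ** m + 2 ** n)

def with_inv(m, n):
    # m -# n = log_2(2^m - 2^n), the inverse: (m # n) -# n == m
    return math.log2(2 ** m - 2 ** n)

print(with_op(16, 16))             # 17.0: a 16 bit colour or a 16 bit alpha
print(with_op(5, float('-inf')))   # 5.0: -inf is the identity, like 0 for +
print(with_inv(with_op(3, 2), 2))  # 3.0: the inverse undoes #
```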