New Software Releases, and yet another plea.

Just today, I released new versions of two Mathematica packages of mine:

Carlson 1.1 (computing Carlson integrals)
DOI

MoleculeViewer 3.0 (molecule visualization)
DOI

The bulk of the work on these two Mathematica packages was already done many months ago, but progress was greatly hampered by my not having a working computer of my own. Recently, kind friends (including some readers of this blog) contributed toward getting my computer fixed, which enabled me to put the finishing touches on these packages.

Nevertheless, it seems my little netbook is not long for this world: the technicians at the repair center have advised me that getting it fixed again in the future would be even more difficult, owing to the limited availability of parts. Thankfully, I now have backups of my research (all in various stages of completion and rigor) and artwork on another hard disk, so the situation is not like the time this very same computer was saved from the pawn shop.

Additionally, although my software is and will always remain free to use, the electricity and Internet connection I need for research and experimentation are not. Thus, I have set up a Ko-fi page for people who may want to contribute to letting me do more research and to facilitating the release of my unpublished work. (Perhaps if I get enough help, I might finally be able to work on a 64-bit computer!)

So, for users of previous versions of my packages, please do try out these new releases, and if you can perhaps spare some change, your help would be very much appreciated!

(As always, people with my personal e-mail address can drop a line for more details.)

I owe a lot of what I know to the kindness of the Internet, and I hope I can keep on contributing back.

~ Jan

On solving differential equations with continued fractions

Recently, I was again looking through the book Continued Fractions with Applications by Lorentzen and Waadeland. Among many interesting things presented in that book is a method for solving linear differential equations using continued fractions. I shall present a slightly modified example in this entry, and refer you to the book for more details.

Read More »

Evaluating functions of quaternion argument

A recent Mathematica Stack Exchange thread got me thinking again about the evaluation of an arbitrary analytic function f(z) of quaternion argument. Working through the derivation led to a lovely formula, and I decided to write this post to put it on record.

The key idea was to exploit the equivalence of the quaternion \mathfrak{q}=(a,\mathbf v)=a+b{\mathfrak i}+c{\mathfrak j}+d{\mathfrak k} (i.e., the quaternion with scalar part a and vector part \mathbf v=b{\mathfrak i}+c{\mathfrak j}+d{\mathfrak k}) with the complex matrix

\displaystyle \mathbf P=\begin{pmatrix}a+bi&c+di\\-c+di&a-bi\end{pmatrix}

and then consider what happens if you evaluate f(\mathbf P). Using either the Jordan normal form or the Cauchy integral definition of a matrix function (see e.g. this paper by Rinehart) results in the following formula:

\displaystyle f(\mathfrak{q})=\left(s,t\frac{\mathbf v}{\|\mathbf v\|}\right)

where s+it=f(a+i\|\mathbf v\|).

For elementary analytic functions like \exp or \sin, this gives results consistent with formulae based on the quaternionic version of the Euler formula,

\displaystyle \exp(\mathfrak{q})=\exp(a)\left(\cos \|\mathbf v\|,{\mathbf v}\,\mathrm{sinc}\|\mathbf v\|\right)

(see also the book of Morais, Georgiev, and Sprößig), and the formula remains valid for other analytic (holomorphic or meromorphic) functions f. For functions with branch cuts, however, care is necessary, since the result might not be consistent with what a different method (e.g. one making a different branch choice) would give.
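
To make the recipe concrete, here is a minimal sketch; quaternionApply[] and the {a, b, c, d} list representation of the quaternion a+b{\mathfrak i}+c{\mathfrak j}+d{\mathfrak k} are my own ad hoc choices, not from any quaternion package:

(* apply an analytic f to the quaternion given as the list {a, b, c, d} *)
quaternionApply[f_, {a_, b_, c_, d_}] :=
    Module[{v = {b, c, d}, nv, w},
           nv = Norm[v];
           w = f[a + I nv];   (* s + i t = f(a + i ||v||) *)
           If[PossibleZeroQ[nv],
              {w, 0, 0, 0},                 (* zero vector part: just the scalar f(a) *)
              Prepend[Im[w] v/nv, Re[w]]]]  (* (s, t v/||v||) *)

(* quick check against the quaternionic Euler formula for exp: *)
quaternionApply[Exp, {1., 2., 3., 4.}]
Exp[1.] Prepend[{2., 3., 4.} Sinc[Norm[{2., 3., 4.}]], Cos[Norm[{2., 3., 4.}]]]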

Modifying the Turbo colormap

Recently, I came across this Wolfram Function Repository entry that implements the Turbo rainbow colormap of Mikhailov. There is a look-up table for the colors here, which can of course be used with Blend[] in Mathematica.

However, there is also a polynomial approximation due to Ruofei Du listed here. I decided to see whether this polynomial approximation could be improved, and I managed to come up with one by adding sixth- and seventh-order terms to the polynomial for the red component and by modifying the polynomial for the blue component. The following routine is the result:

(* each channel is a fitted polynomial in x (coefficients listed from highest to lowest
   degree), evaluated with Horner's scheme via Fold[] *)
turboApprox[t_?NumericQ] := With[{x = Clip[t, {0, 1}]},
     RGBColor[Fold[(#1 x + #2) &, 0,
                   {-3.88640322, -23.44894394, 141.551844, -240.965669, 
                    172.35660841, -50.24955545, 4.9960608, 0.14434465}], 
              Fold[(#1 x + #2) &, 0,
                   {2.82956604, 4.27729857, -14.18503333, 4.84296658,
                    2.19418839, 0.09140261}], 
              Fold[(#1 x + #2) &, 0,
                   {21.57469569, -72.72155837, 92.00658112, -52.17062334, 
                    11.13054206, 0.17597339}]]]
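
To preview the fitted colormap as a gradient strip (this is just one quick way to render a swatch; the 256-sample resolution is an arbitrary choice):

ArrayPlot[{Range[0., 1., 1./255.]}, ColorFunction -> turboApprox,
          ColorFunctionScaling -> False, AspectRatio -> 1/8, Frame -> False]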

Here is a comparison of the original 256-color gradient (top) and the polynomial approximation (bottom):

Turbo colormap comparison

on unpublished work

I still have a whole pile of unpublished work just sitting on a 1 TB external hard drive, currently gathering dust. This includes extensions and sequels to some of the material posted here, on Stack Exchange, and in various other places, as well as programs and code in various states of completion. It would be awfully nice to have a computer of my own to work on these again, but life has not been very kind as of late, in both financial and personal matters.

However, I write this short bit to let people know that despite these troubles, I am more or less still in one piece, albeit struggling mightily. Hopefully I can write again under much better conditions someday.

~ Jan

On inverting a tridiagonal matrix

It is well-known that in general, the inverse of a sparse matrix is dense. Nevertheless, it can still happen that the inverse of a sparse matrix with special structure also has special structure itself.

In two papers, Usmani gives a (relatively) simple expression for inverting a tridiagonal matrix,

\displaystyle\begin{pmatrix}b_1&c_1&&\\a_1&b_2&\ddots&\\&\ddots&\ddots&c_{n-1}\\&&a_{n-1}&b_n\end{pmatrix}

A Mathematica implementation of Usmani’s method is pretty short:

tridinv[a_?VectorQ, b_?VectorQ, c_?VectorQ] /;
Length[b] - 1 == Length[a] == Length[c] := 
    Module[{n = Length[b], p = a c, f, t},
           t = FoldList[Dot, IdentityMatrix[2], 
                        Transpose[{{Table[0, {n}], Table[1, {n}]},
                                   {-Append[p, 0], b}}, {2, 3, 1}]
                        ][[All, 2, 2]];
           f = Reverse[FoldList[Dot, IdentityMatrix[2], 
                                Transpose[{{Table[0, {n}], Table[1, {n}]},
                                           {-Append[Reverse[p], 0],
                                            Reverse[b]}}, {2, 3, 1}]
                                ][[All, 2, 2]]];
           Table[(-1)^(i + j) t[[Min[i, j]]] f[[Max[i, j] + 1]]
                 Product[Switch[Sign[i - j], -1, c[[l]], 0, 1, 1, a[[l]]],
                         {l, Min[i, j], Max[i, j] - 1}],
                 {i, n}, {j, n}]/t[[n + 1]]]

Some comments are in order. The arrays t and f in tridinv[] correspond respectively to the \theta_i and \phi_i in Usmani’s notation. As noted in his papers, they can be generated through a three-term recurrence; thus, another way to implement tridinv[] would perhaps use RecurrenceTable[] or even DifferenceRoot[]. However, I instead opted to implement the three-term recurrence by successively multiplying 2\times 2 matrices built from the diagonal and off-diagonal elements of the tridiagonal matrix, partly because I wanted to demonstrate this method.
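
To make the connection explicit, here is a small standalone check (with throwaway names of my own; none of this is needed to use tridinv[]) that the (2,2) entries of the cumulative 2\times 2 products are indeed the \theta_i generated by the recurrence \theta_i=b_i\theta_{i-1}-a_{i-1}c_{i-1}\theta_{i-2},\;\theta_0=1,\;\theta_1=b_1, that is, the leading principal minors of the tridiagonal matrix:

With[{a = Array[α, 4], b = Array[β, 5], c = Array[γ, 4], n = 5},
     Module[{p, mats, t, m},
            p = a c;
            (* one companion-style 2×2 factor per diagonal entry *)
            mats = Table[{{0, 1}, {-Append[p, 0][[k]], b[[k]]}}, {k, n}];
            (* (2,2) entries of the cumulative products: θ_0, θ_1, ..., θ_n *)
            t = FoldList[Dot, IdentityMatrix[2], mats][[All, 2, 2]];
            m = Normal[SparseArray[{Band[{2, 1}] -> a, Band[{1, 1}] -> b,
                                    Band[{1, 2}] -> c}]];
            (* the differences with the leading principal minors should all vanish *)
            Simplify[Rest[t] - Table[Det[m[[;; k, ;; k]]], {k, n}]]]]
   {0, 0, 0, 0, 0}

This is exactly what the first FoldList[] in tridinv[] computes for t (and, after the reversals, what the second one computes for the \phi_i).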

Here is a way to verify the routine:

With[{a = Array[α, 7], b = Array[β, 8], c = Array[γ, 7]},
     tridinv[a, b, c].SparseArray[{Band[{2, 1}] -> a, Band[{1, 1}] -> b,
                                   Band[{1, 2}] -> c}] ==
     IdentityMatrix[Length[b]] // Simplify]
   True

Timings indicate that tridinv[] is much faster than evaluating Inverse[SparseArray[(* diagonals *)]].

On constructing an elliptic function from its zeroes and poles

ellhex

A few days ago, I posted this lovely domain coloring image of an elliptic function at Mathstodon. I had been asked where this function came from, so here is a write-up on a lovely piece of classical mathematics.

It is well-known that any rational function can be expressed, up to a constant factor, in terms of its zeroes and poles:

\displaystyle r(z)=\dfrac{(z-r_1)(z-r_2)\cdots (z-r_m)}{(z-p_1)(z-p_2)\cdots (z-p_n)}

where the r_k are the zeroes, and the p_k are the poles. Less well-known is the fact that any doubly periodic (i.e., elliptic) function can also be decomposed into its zeroes and poles.

In his book, N. I. Akhiezer displays an explicit formula for an elliptic function in terms of its zeroes r_k and poles p_k, which uses the Weierstrass sigma function:

\displaystyle g(u)=\dfrac{\sigma(u-r_1^\prime)\sigma(u-r_2)\cdots \sigma(u-r_n)}{\sigma(u-p_1)\sigma(u-p_2)\cdots \sigma(u-p_n)}

where \sigma(u)=\sigma(u|\omega,\omega^\prime) and \omega,\omega^\prime are the half-periods, such that their ratio \tau=\dfrac{\omega^\prime}{\omega} has positive imaginary part. Additionally, the quantity r_1^\prime is defined to be r_1-2j\omega-2k\omega^\prime, where j and k are integers chosen such that \displaystyle \sum_{\ell=1}^n p_\ell=r_1^\prime + \sum_{\ell=2}^n r_\ell.

The documentation for Mathematica gives an implementation of this formula, using WeierstrassSigma[].

Much later in Akhiezer’s book, he presents an alternative formula based on the relationship between the sigma function and theta functions. Since the theta functions are quicker to compute in Mathematica, this is a more useful representation. (The theta function version and sigma function version of the decomposition will only differ by a multiplicative constant.)

Thus, I only needed to slightly modify the implementation given in the documentation:

makeEllipticFunction[z_, {ω1_, ω3_}, zeros_?VectorQ, poles_?VectorQ] := 
    Module[{c = π/(2 ω1), q = Exp[I π ω3/ω1]},
           (Product[EllipticTheta[1, c (z - r), q], {r, zeros}]/
            Product[EllipticTheta[1, c (z - s), q], {s, poles}])
           (EllipticTheta[1, c (z - Last[poles]), q]/
            EllipticTheta[1, c (z - Last[poles] -
                                (Total[zeros] - Total[poles])), q])] /;
    Length[zeros] == Length[poles] &&
    With[{wl = {ω1, ω3, Total[zeros] - Total[poles]}},
         PossibleZeroQ[wl.FindIntegerNullVector[wl]]]

Now that we have a method for constructing arbitrary elliptic functions, let’s make one, keeping in mind the constraints on the configuration of the zeroes and poles. A rhomboidal lattice looks nice, so I will use {ω1, ω3} = {1, (1 + I Sqrt[3])/2}.

Somewhat capriciously, I wanted an elliptic function with two triple zeroes and one triple pole. This leaves limited choices for the remaining three poles. Here is what I got:

w[z_] = makeEllipticFunction[z, {1, (1 + I Sqrt[3])/2},
            {0, 0, 0, 2 + (2 I)/Sqrt[3], 2 + (2 I)/Sqrt[3],
             2 + (2 I)/Sqrt[3]},
            {1, (1 + I Sqrt[3])/2, (3 + I Sqrt[3])/2,
             1 + I/Sqrt[3], 1 + I/Sqrt[3], 1 + I/Sqrt[3]}]
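
As a quick sanity check on this configuration, the sum of the zeroes must differ from the sum of the poles by a lattice point; here the two sums actually come out equal:

Total[{0, 0, 0, 2 + (2 I)/Sqrt[3], 2 + (2 I)/Sqrt[3], 2 + (2 I)/Sqrt[3]}] -
    Total[{1, (1 + I Sqrt[3])/2, (3 + I Sqrt[3])/2,
           1 + I/Sqrt[3], 1 + I/Sqrt[3], 1 + I/Sqrt[3]}] // Simplify
   0

(Both sums equal 6+2\sqrt{3}i, so the difference vanishes and the sum condition on the zeroes and poles holds.)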

This one has, not surprisingly, a lovely domain coloring plot, due to how the poles and zeroes were arranged:

ellhex0

I then had the further idea to take even and odd parts of this elliptic function. Of course, such functions will also be elliptic functions themselves, and one could, in principle, build them directly with makeEllipticFunction[]. I’ll leave that as an exercise for the interested reader. In particular, taking (twice) the odd part, w[z] - w[-z], led me to the lovely picture that prompted this entry.


To generate these pictures, I had to implement my own domain coloring function. (Someday, I might even release it publicly.) Users of the just-released version 12 of Mathematica can of course use the new function ComplexPlot[] instead to visualize these functions.
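
For instance, in version 12 or later, something along these lines gives a basic, default-styled rendering of the odd combination above (the plot window and sampling are arbitrary choices of mine):

(* requires ComplexPlot[], new in version 12 *)
ComplexPlot[w[z] - w[-z], {z, -2 - 2 I, 2 + 2 I}, PlotPoints -> 60]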

faking a bird’s colors, and miscellanea

Hopefully people enjoyed reading the last few scheduled posts. They had been sitting nearly finished among my drafts, needing only cosmetic changes and improved pictures. There should be more coming out in the next few weeks, possibly including sequels to earlier posts.

In the meantime, here is another quickie from my old unpublished stuff: a minimal set of colors that almost, but not quite, looks like MATLAB’s parula. I provided some related functions in this previous entry, but this one should be much shorter:

birdie = Blend[{RGBColor[0.242, 0.15, 0.66], RGBColor[0.27, 0.2135, 0.83],
                RGBColor[0.281, 0.298, 0.939], RGBColor[0.271, 0.385, 0.99],
                RGBColor[0.202, 0.479, 0.99], RGBColor[0.172, 0.564, 0.94],
                RGBColor[0.13, 0.639, 0.892], RGBColor[0.048, 0.7025, 0.827],
                RGBColor[0.07, 0.746, 0.726], RGBColor[0.203, 0.78, 0.61],
                RGBColor[0.36, 0.8, 0.463], RGBColor[0.58, 0.79, 0.29],
                RGBColor[0.786, 0.757, 0.16], RGBColor[0.946, 0.7285, 0.217],
                RGBColor[0.995, 0.79, 0.2], RGBColor[0.961, 0.891, 0.153],
                RGBColor[0.977, 0.984, 0.08]}, #] &

As an example, here’s a plot of a Zernike function using it:

DensityPlot[Cos[3 ArcTan[x, y]] ZernikeR[5, 3, Norm[{x, y}]],
            {x, y} ∈ Disk[], ColorFunction -> birdie, Exclusions -> None,
            PlotPoints -> 75, PlotLegends -> Automatic]

zernp

Not quite like MATLAB, but hopefully usable.


On another unrelated note, I notice version 12 of Mathematica just came out, and it can now do domain coloring with its new ComplexPlot[] function, in addition to a new function for generating Wegert's "analytic landscapes" in ComplexPlot3D[]. I had been planning to make a package for domain coloring for a while, but it looks as if I got scooped again. However, what I was writing also had features that the new-in-12 functions don't have (e.g. plotting on the Riemann sphere, and some shading tricks, at least from my cursory reading of the online docs), so it might be interesting to do a head-to-head comparison if I ever get a computer with Mathematica again.

On the roots of a trinomial expressed as a Meijer G function

In a paper by M. L. Glasser, expressions for the n roots of the trinomial equation

\displaystyle x^n-x-t=0,\;t\in\mathbb C

are given in terms of a finite series of hypergeometric functions. I found the formulae in the paper a little unwieldy, so I had the thought that one could instead use the Meijer G-function to represent Glasser’s solutions. (In particular, formula 16.17.2 in the DLMF indicates that this ought to be possible.)

Read More »

A differential equation for the inverse Dixon elliptic function

I am writing this post to put on record a differential equation I had derived for the inverse of the Dixon elliptic function \mathrm{sm}(z,\alpha). I will post details at another time, and will just mention that Mathematica is not able to come up with a closed-form solution for this DE.

In brief, if we have the function w(z) defined by the relation z=\mathrm{sm}(w,\alpha), then w(z) satisfies the differential equation

\displaystyle (z+1)\left(z^2-z+1\right)\left(z^6-2\left(2 \alpha^3+1\right) z^3+1\right)w^{(3)}(z)+6 z^2\left(z^6+\left(1-\alpha^3\right) z^3-3\alpha^3-2\right)w^{\prime\prime}(z)+2z\left(3z^6+\left(\alpha^3+7\right)z^3-5\alpha^3-2\right) w^\prime(z)=0

with initial conditions w(0)=0,\;w^\prime(0)=1,\;w^{\prime\prime}(0)=-\alpha.

One way to verify the inverse relationship with \mathrm{sm}(z,\alpha) in Mathematica involves series-expanding the DifferentialRoot[] corresponding to the DE above, composing that series with the series for \mathrm{sm}(z,\alpha) (obtained from the expression in terms of Weierstrass elliptic functions given in my earlier entry), and checking that the composition reduces to z.
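
Here is a sketch of that verification; the names invsm, ser, and smser are mine, and as a stand-in for the Weierstrass-based series of \mathrm{sm}(z,\alpha) (which I am not reproducing here) the snippet uses the formal inverse of the DifferentialRoot[] series itself, so as written it only checks the internal consistency of the setup. If the symbolic \alpha gives trouble, fix it to a numerical value first.

(* the inverse function w(z) as a DifferentialRoot[] object built from the DE above *)
invsm = DifferentialRoot[Function[{y, x},
            {(x + 1) (x^2 - x + 1) (x^6 - 2 (2 α^3 + 1) x^3 + 1) y'''[x] +
             6 x^2 (x^6 + (1 - α^3) x^3 - 3 α^3 - 2) y''[x] +
             2 x (3 x^6 + (α^3 + 7) x^3 - 5 α^3 - 2) y'[x] == 0,
             y[0] == 0, y'[0] == 1, y''[0] == -α}]];

ser = Series[invsm[z], {z, 0, 9}];
(* stand-in for the series of sm(z, α); the real check uses the series obtained from
   the Weierstrass form in the earlier entry *)
smser = InverseSeries[ser, z];
ComposeSeries[ser, smser]   (* composing with the true sm series should likewise give z + O[z]^10 *)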

I do not know whether other computing environments can give a symbolic solution for this DE, so please post any relevant information in the comments.

On plotting the Wente torus

wente

Quite a while back, when I was doing some research on surfaces of constant mean curvature, I found this paper by Walter, where he gives explicit parametric equations for the Wente torus. However, the formulae were a bit messy and not directly usable in Mathematica as given. So, I adjusted them to match the parameter convention of Mathematica’s built-in elliptic integrals, and I explicitly evaluated the integrals appearing in Walter’s formulae. Of note is that the results involve all three kinds of elliptic integrals, both complete and incomplete.

The implementation is a bit too messy to include as a code block in a WordPress entry, so you can instead download the notebook with the code here (unfortunately, WordPress does not allow Mathematica notebooks as direct attachments). I note that this implementation is quite a bit faster than the one in this Wolfram Demonstration, as well as being more general.