A short note on the generalized “smoothstep” function.

August 29, 2017

The Wikipedia page on RenderMan’s “smoothstep” function features an entry on higher-order generalizations of this function, where each succeeding member has more of its higher-order derivatives set to zero at the endpoints 0 and 1.

Among other things, a formula for these generalized functions is presented there, which I find rather unwieldy. I have thus written this short entry to present a conceptually much simpler formula.

One can use e.g. confluent divided differences to derive the following (IMNSHO cleaner) formula for the n-th order smoothstep \mathrm{S}_n(x):

\displaystyle \mathrm{S}_n(x)=x^{n+1}\sum\limits_{k=0}^n\binom{n+k}{k}(1-x)^k

Here is a Mathematica demonstration of this identity for the first 21 members:

And @@ Table[InterpolatingPolynomial[{PadRight[{{0}, 0}, n + 2],
                                      PadRight[{{1}, 1}, n + 2]}, x] ==
             x^(n + 1) Sum[Binomial[n + k, k] (1 - x)^k, {k, 0, n}]
             // Simplify, {n, 0, 20}]

which should yield True.

In implementations for other languages (e.g. C++, JavaScript), one would want to use a suitable modification of Horner’s rule to evaluate the polynomial factor expressed in terms of 1-x, as well as to exploit the recurrence

\displaystyle \binom{n+k+1}{k+1}=\frac{n+k+1}{k+1}\binom{n+k}{k}

for generating the coefficients.
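To illustrate, here is a short Python sketch of that evaluation strategy (Python rather than C++/JavaScript for brevity; the function name and the clamping outside the unit interval are my own choices):

```python
def smoothstep(n, x):
    """Generalized smoothstep S_n(x) = x^(n+1) * sum_{k=0}^n C(n+k, k) (1-x)^k."""
    if x <= 0.0:
        return 0.0
    if x >= 1.0:
        return 1.0
    # coefficients c_k = C(n+k, k), generated with c_{k+1} = c_k*(n+k+1)/(k+1)
    # (the division is exact, so integer arithmetic suffices)
    c = [1] * (n + 1)
    for k in range(n):
        c[k + 1] = c[k] * (n + k + 1) // (k + 1)
    # Horner's rule applied to the polynomial factor in the variable u = 1 - x
    u = 1.0 - x
    p = 0.0
    for ck in reversed(c):
        p = p * u + ck
    return x ** (n + 1) * p
```

As a sanity check, smoothstep(1, x) reproduces the classic 3x^2 - 2x^3, and smoothstep(2, x) the quintic 6x^5 - 15x^4 + 10x^3.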


Quick-and-dirty numerical inversion of the Laplace transform II

February 1, 2017

In this post, I will present an extension of the method previously posted in Quick-and-dirty numerical inversion of the Laplace transform.

The method, due to Piessens, is effectively a Gauss-Kronrod extension of the complex Gaussian quadrature method of Salzer. Recall that the Gauss-Kronrod idea is to use two different sets of quadrature abscissas and weights, where the abscissas of one rule are a subset of the other’s, so that the difference between the two results estimates the accuracy of the higher-order rule.

Here is the implementation of the extended version of NInverseLaplaceTransform[]:

SalzerRuleData[n_Integer, prec_:MachinePrecision] :=
      Transpose[Map[{#, (-1)^(n + 1) # ((2 n - 1)/
                        HypergeometricPFQ[{1 - n, n - 1}, {}, #])^2/n} &,
                    \[FormalX] /.
                    NSolve[HypergeometricPFQ[{-n, n}, {}, \[FormalX]],
                           \[FormalX], prec]]]
piessensQ[n_Integer, x_] := Module[{cof},
        cof = LinearSolve[SparseArray[{{j_, k_} /; j <= k :> 1/((k - j)!
                                      Binomial[2 n + k - j, n + k - j])},
                                      {n + 1, n + 1}],
                          -Table[1/(j! Binomial[2 n + j, n + j]),
                                 {j, n + 1, 1, -1}]];
        Expand[FromDigits[Prepend[Reverse[cof], 1], x]]]
SalzerPiessensRuleData[n_Integer, prec_: MachinePrecision] :=
      Module[{c, pk, pl, sg, sk, sl},
             sl = \[FormalX] /.
             NSolve[HypergeometricPFQ[{-n, n}, {}, \[FormalX]], \[FormalX], prec];
             pl = \[FormalX] /.
             NSolve[piessensQ[n, \[FormalX]], \[FormalX], prec];
             sk = HypergeometricPFQ[{1 - n, n - 1}, {}, sl];
             sg = (-1)^(n + 1) sl ((2 n - 1)/sk)^2/n; c = n! Binomial[2 n, n];
             sk = sg - (2 n - 1) sl/(c n sk piessensQ[n, sl]);
             pk = 1/(c pl HypergeometricPFQ[{-n, n}, {}, pl]
                     (D[piessensQ[n, \[FormalX]], \[FormalX]] /.
                      \[FormalX] -> pl));
             {Join[sl, pl], Join[sk, pk]}]
Options[NInverseLaplaceTransform] = {Method -> Automatic,
        "Points" -> 6, WorkingPrecision -> MachinePrecision};

NInverseLaplaceTransform[f_, s_, t_, opts : OptionsPattern[]] :=
        Module[{met, xa, wt},
               met = Switch[OptionValue[Method],
                            "SalzerPiessens" | Automatic, SalzerPiessensRuleData,
                            "Salzer", SalzerRuleData, _, Return[$Failed]];
               {xa, wt} = met[OptionValue["Points"],
                              OptionValue[WorkingPrecision]];
               Chop[(wt.Function[s, f][1/xa/t])/t]]

For error estimation purposes, one could then generate the approximant corresponding to both Salzer and Salzer-Piessens for the same setting of "Points" and look at their difference.

Using the Bessel function example from the previous post,

f1[t_] = NInverseLaplaceTransform[1/Sqrt[1 + s^2], s, t, 
          "Points" -> 9, WorkingPrecision -> 30];
f2[t_] = NInverseLaplaceTransform[1/Sqrt[1 + s^2], s, t, 
          Method -> "Salzer", "Points" -> 9, WorkingPrecision -> 30];
GraphicsGrid[{{Plot[{BesselJ[0, t], Re[f1[t]]}, {t, 0, 19},
                    Frame -> True, ImageSize -> Medium,
                    PlotLegends -> {TraditionalForm[BesselJ[0, t]],
                                    "Salzer-Piessens"},
                    PlotStyle -> {AbsoluteThickness[4], Automatic}]},
              {Plot[Re[f1[t] - f2[t]], {t, 0, 19}, Frame -> True,
                    PlotLabel -> Style["Difference between Salzer-Piessens and Salzer methods", Small],
                    PlotRange -> All],
               Plot[Re[f1[t] - BesselJ[0, t]], {t, 0, 19}, Frame -> True,
                    PlotLabel -> Style["Difference between Salzer-Piessens and true transform", Small],
                    PlotRange -> All]}}]


Quick-and-dirty numerical inversion of the Laplace transform

May 4, 2012

I present here a set of Mathematica routines implementing a quick-and-dirty method for numerically evaluating the inverse Laplace transform. As is well known, the problem is numerically ill-conditioned, so sanity-checking your results is especially important here. The value of the following method, compared to more elaborate transform inversion methods, is that if one wants a quick estimate for arguments not too far from the imaginary axis, the approximation derived from this method does not take much effort to evaluate.

The method, due to Herbert Salzer, is a complex-valued n-point Gaussian quadrature, where the nodes are the (complex!) roots of (a special case of) the Bessel polynomials. Consequently, if you want to use this method, the transform you wish to invert must be evaluable at complex arguments.

The method below does not give an error estimate; what you can do to verify the accuracy of your results is to evaluate with some initial number of points (the default in the routine below is ten), then increase the number of points taken and compare the results. (You might also wish to use a sufficiently high value of the option WorkingPrecision.)

Here are the Mathematica routines:

SalzerRuleData[n_Integer, prec_: MachinePrecision] :=
      Block[{x},
            Transpose[Map[{#, (-1)^(n + 1) # ((2 n - 1)/
                              HypergeometricPFQ[{1 - n, n - 1}, {}, #])^2/n} &,
                          x /. NSolve[HypergeometricPFQ[{-n, n}, {}, x],
                                      x, prec]]]]

Options[NInverseLaplaceTransform] = {Points -> 10,
        WorkingPrecision -> MachinePrecision};

NInverseLaplaceTransform[f_, s_, t_, opts : OptionsPattern[]] :=
        Module[{xa, wt},
               {xa, wt} = SalzerRuleData[OptionValue[Points],
                                         OptionValue[WorkingPrecision]];
               Chop[(wt.Function[s, f][1/xa/t])/t]]
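For readers without Mathematica, here is a rough NumPy port of the same rule (the function names are my own; NumPy’s companion-matrix root finder stands in for NSolve, so this sketch is limited to machine precision):

```python
import numpy as np
from math import factorial

def _poch(a, k):
    # Pochhammer symbol (a)_k = a (a+1) ... (a+k-1)
    p = 1.0
    for j in range(k):
        p *= a + j
    return p

def _hyp2f0(a, b, x, kmax):
    # terminating 2F0(a, b; ; x) = sum_{k=0}^{kmax} (a)_k (b)_k x^k / k!
    s, term = 0.0, 1.0
    for k in range(kmax + 1):
        s += term
        term *= (a + k) * (b + k) * x / (k + 1)
    return s

def salzer_rule(n):
    # nodes: roots of the terminating 2F0(-n, n; ; x); weights per Salzer
    c = [_poch(-n, k) * _poch(n, k) / factorial(k) for k in range(n + 1)]
    x = np.roots(c[::-1])          # np.roots expects the highest degree first
    q = np.array([_hyp2f0(1 - n, n - 1, xi, n - 1) for xi in x])
    w = (-1) ** (n + 1) * x * ((2 * n - 1) / q) ** 2 / n
    return x, w

def n_inverse_laplace(F, t, n=10):
    # f(t) ~ (1/t) sum_k w_k F(1/(x_k t)), matching the Mathematica routine
    x, w = salzer_rule(n)
    return np.sum(w * F(1 / (x * t))) / t
```

For instance, `n_inverse_laplace(lambda s: 1/(1+s)**2, 2.0)` should reproduce the t e^{-t} example that follows, with an accuracy roughly comparable to the machine-precision Mathematica run.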

As an example, we try the inverse Laplace transform of the function \frac1{(1+s)^2}, which is t\exp(-t):

f[t_] = NInverseLaplaceTransform[(1 + s)^(-2), s, t];

With[{t = N[2]}, {t Exp[-t], f[t], t Exp[-t] - f[t]}]
{0.270671, 0.270671 + 5.68434*10^-14 I, 2.91759*10^-8 - 5.68434*10^-14 I}

(The tiny imaginary part is an artifact of the evaluation; you can use Chop[] if you wish.)

As another example, here is a comparison of the approximate inverse transform of \frac1{\sqrt{1+s^2}} with the true inverse transform, J_0(t) (where J_0(t) is the Bessel function of the first kind):

f[t_] = NInverseLaplaceTransform[1/Sqrt[1 + s^2], s, t, WorkingPrecision -> 25];

GraphicsRow[{Plot[{f[t], BesselJ[0, t]}, {t, 0, 15}, Frame -> True],
Plot[Re[f[t] - BesselJ[0, t]], {t, 0, 15}, Frame -> True, PlotRange -> All]}]

Bessel function via inverse Laplace transformation

The plot on the left shows that the true function and the approximation are almost indistinguishable within the range plotted. The plot on the right shows the difference between the approximate and the true function.

Fly me to the moon

February 6, 2012

I had always wanted a nice animation depicting periodic solutions of the restricted three-body problem, and was always a skosh underwhelmed by the animations I found. I then decided, on a quiet afternoon, to see if the new (to me) capabilities of Mathematica were up to the task, and I was not at all disappointed:

visualizing the Arenstorf orbit (click on the image to see a bigger version of the animation)

This is one of my favorite Arenstorf orbits (see also these slides); I went with the route taken in Hairer/Nørsett/Wanner and used the Bulirsch-Stoer extrapolation method for numerically solving the underlying differential equation (Method -> "Extrapolation" in NDSolve[]). (Note that for this animation, I used the Arenstorf orbit depicted in the Bulirsch-Stoer paper instead of the slightly more complicated path depicted in Hairer/Nørsett/Wanner; I can be persuaded to produce an animation for that version, though… ;) )
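For the curious, the underlying equations of motion are straightforward to set down. Here is a plain-Python sketch that traces the orbit over one period with a hand-rolled fixed-step RK4 (a brute-force stand-in for the extrapolation method; the mass ratio, initial data, and period are the standard values tabulated in Hairer/Nørsett/Wanner, i.e. not the Bulirsch-Stoer variant used in the animation above):

```python
# Planar restricted three-body problem in the rotating frame, with the
# classic Arenstorf initial data from Hairer/Norsett/Wanner.
MU = 0.012277471                  # Moon / (Earth + Moon) mass ratio
MUP = 1.0 - MU
PERIOD = 17.0652165601579625588917206249

def rhs(state):
    x, y, vx, vy = state
    d1 = ((x + MU) ** 2 + y * y) ** 1.5   # distance^3 to the Earth
    d2 = ((x - MUP) ** 2 + y * y) ** 1.5  # distance^3 to the Moon
    return (vx,
            vy,
            x + 2 * vy - MUP * (x + MU) / d1 - MU * (x - MUP) / d2,
            y - 2 * vx - MUP * y / d1 - MU * y / d2)

def rk4_step(s, h):
    # one classical fourth-order Runge-Kutta step
    k1 = rhs(s)
    k2 = rhs(tuple(si + 0.5 * h * ki for si, ki in zip(s, k1)))
    k3 = rhs(tuple(si + 0.5 * h * ki for si, ki in zip(s, k2)))
    k4 = rhs(tuple(si + h * ki for si, ki in zip(s, k3)))
    return tuple(si + h * (a + 2 * b + 2 * c + d) / 6
                 for si, a, b, c, d in zip(s, k1, k2, k3, k4))

state = (0.994, 0.0, 0.0, -2.00158510637908252240537862224)
steps = 200000                    # brute force; an adaptive method needs far fewer
h = PERIOD / steps
for _ in range(steps):
    state = rk4_step(state, h)
# the orbit is periodic, so the final state should land near the initial one
```

The very many tiny steps are needed because the orbit passes close to the Moon, where an equidistant grid is wasteful; that is precisely why an adaptive method like Bulirsch-Stoer extrapolation is the better tool here.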

It wasn’t much trouble to depict the Earth, the Moon, and a spaceship. For ease of visualization, I (grudgingly) decided to exaggerate the sizes of the Moon and the spaceship; if I went with the actual scale relative to the Earth, you’d probably have a hard time seeing those. Mathematica’s Texture[] function, with texture maps obtained from here, as well as ExampleData[{"Geometry3D", "SpaceShuttle"}] for the spaceship, were useful in this regard.

The one thing I did have trouble with was depicting the starry backdrop of outer space in Graphics3D[], so I elected to skip it. Still, the animation looks quite picturesque, no?

Added 2/7/2012:

I finally decided to do the “four loop” version depicted in Hairer/Nørsett/Wanner as well:

four-looped Arenstorf orbit (click on the image to see a bigger version of the animation)