How to calculate a derivative in Python the smart way

I used to do a lot of work with linear position transducers for velocity-based training and needed to write an algorithm to process raw displacement data. But there are plenty of other contexts in which people may need to compute derivatives for their analyses.

In this post, I want to share an exercise I went through to write a flexible derivative calculator in Python while working with linear position transducers. In addition to providing some code to use, this write-up will hopefully make it clear where the coefficients in the finite difference equations you look up for your raw time series data actually come from!

Below, I will only cover the math I feel is necessary for a basic understanding of what is going on "under the hood" (there is more to this; I'd recommend a numerical methods textbook if you're interested in the technical details). If you want to skip right to the code, it can be copied from a Jupyter Notebook on my GitHub page.

The Problem

Let's start with a biomechanics 101 example. In an introductory biomechanics course, you may be given tabular data that looks like this:

Frame (\( x \))    Position (\( f(x) \))
0                  0
1                  2
2                  4
3                  6
4                  12
5                  15
6                  19

where \(x\) represents the frame of data and \(f(x)\) the position. You might then be asked to compute the derivatives of this signal to calculate velocity, acceleration, etc. Remember that the formal definition of the derivative of \(f(x)\) is:

\[ f^{'}(x) = \lim_{t \to 0} \frac{f(x+t) - f(x)}{t}  \]

When we're dealing with biomechanical data, we do not know the underlying function governing the change in position over time; we can only sample it at some frequency. Therefore, numerical methods are used to approximate the derivative (i.e. the instantaneous slope) of a signal to a specified order of accuracy. When first learning these techniques, you might solve them by hand and then use software to quickly automate the computations on a real dataset with thousands of points.
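As a minimal sketch of how the definition translates to sampled data (using the table values from above), the crudest approach is a forward difference between neighbouring samples:

import numpy as np

# Position samples from the table above, with one time unit between frames.
position = np.array([0, 2, 4, 6, 12, 15, 19], dtype=float)
t = 1.0

# Forward difference: (f(x + t) - f(x)) / t for each consecutive pair of samples.
velocity = np.diff(position) / t
print(velocity)   # [2. 2. 2. 6. 3. 4.]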

For example, a commonly used introductory technique for approximating the derivative of a sampled signal is the (symmetric) finite difference algorithm with an order of accuracy of 2. The equation for this is:

\[ f^{'}(x)=\frac{\frac{-1}{2}f(x-t) + \frac{1}{2}f(x+t)}{t}  \]

where \( f^{'}(x) \) represents the first derivative (or velocity, in this case) and \(t\) represents the time between each frame of data (\( x-t \) = one frame prior to \(x\)). Note that the equation above uses one frame of data prior to and one following the frame of interest.

For example, let's say we wanted to calculate the instantaneous rate of change at frame 2 in our table above. The equation would be represented by:

\[ f^{'}(x)=\frac{\frac{-1}{2}(2) + \frac{1}{2}(6)}{1} = 2  \]

This makes sense, as we can infer from the table that the slope is 2 at this frame of data (i.e., the position increases linearly by +2 from the previous point to the following one).

However, we may want to approximate the derivative with a higher order of accuracy than this equation provides. To obtain a higher order of accuracy, we use more data points surrounding the frame of interest (this doesn't always increase the actual accuracy of the derivative estimate; higher orders of accuracy just have a faster convergence rate. See this Stack Exchange post for more details). Instead of a finite difference equation with an order of accuracy of 2, we could use an equation with an order of accuracy of 4 (i.e. using 2 points before and after the frame of interest):

\[ f^{'}(x)=\frac{\frac{1}{12}f(x-2t) + \frac{-2}{3}f(x-t) + \frac{2}{3}f(x+t) + \frac{-1}{12}f(x+2t)}{t}  \]

At frame 2, this results in the expression:

\[ f^{'}(x)=\frac{\frac{1}{12}(0) + \frac{-2}{3}(2) + \frac{2}{3}(6) + \frac{-1}{12}(12)}{1} = 1.667  \]
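Both estimates are easy to verify with a couple of lines of NumPy (a quick sanity check I've added, reusing the table data):

import numpy as np

position = np.array([0, 2, 4, 6, 12, 15, 19], dtype=float)
t = 1.0

# Order of accuracy 2 at frame 2: one frame on either side.
v2 = (-0.5 * position[1] + 0.5 * position[3]) / t

# Order of accuracy 4 at frame 2: two frames on either side.
v4 = (position[0] / 12 - 2 * position[1] / 3 + 2 * position[3] / 3 - position[4] / 12) / t

print(v2, v4)   # 2.0 1.666...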

Moving on, what if we wanted to compute the acceleration (i.e., the derivative of velocity)? Should we just iterate the above equations twice, or is there a more general way of obtaining accelerations from position data? As it turns out, there is a simpler way: we can use a different set of coefficients that approximate the second derivative (acceleration) directly:

\[ f^{''}(x)=\frac{\frac{-1}{12}f(x-2t) + \frac{4}{3}f(x-t) + \frac{-5}{2}f(x) + \frac{4}{3}f(x+t) + \frac{-1}{12}f(x+2t)}{t^2}  \]
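Applied at frame 2 of our table data (again, just an illustrative check I've added), this gives an acceleration of -1/3:

import numpy as np

position = np.array([0, 2, 4, 6, 12, 15, 19], dtype=float)
t = 1.0

# Second derivative (acceleration) at frame 2 using the coefficients above.
a2 = (-position[0] / 12 + 4 * position[1] / 3 - 5 * position[2] / 2
      + 4 * position[3] / 3 - position[4] / 12) / t ** 2
print(a2)   # -0.333...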

The problem, though, is that it's not clear how to obtain these coefficients, and therefore how this procedure could be automated in your own code.

In the rest of this article, I'll present how we can compute the coefficients of the finite difference equation for any derivative and order of accuracy, without the need to repeatedly iterate the lower-order equations. This makes it convenient to quickly compute velocity, acceleration, jerk, etc. to a specified order of accuracy for one-dimensional data (for more information about the accuracy and order of finite difference equations, please see this Wikipedia link).

The Solution

Underlying math

Below, I show the two functions that make this code "work": the first solves for the coefficients of the finite difference equation at the specified order of accuracy, and the second uses these coefficients to compute the derivative of a signal.

There are links to other articles that explain how to obtain the coefficients and how to understand the order of accuracy in more depth (e.g. see here and here), but for the purposes of this post I will just outline the system of equations we solve to compute the coefficients:

\[ \begin{bmatrix} c_1 \\ \vdots \\ c_N \end{bmatrix} = \frac{1}{t^d} \begin{bmatrix} s^{0}_1 & \dots & s^{0}_N \\ \vdots & \ddots & \vdots \\ s^{N-1}_1 & \dots & s^{N-1}_N \end{bmatrix}^{-1} \begin{bmatrix} 0 \\ \vdots \\ d!\\ \vdots \\0 \end{bmatrix} \]

where \( d \) represents the derivative, \( N \) the total number of coefficients, and \( s_1, \dots, s_N \) the stencil offsets (each frame's position relative to the frame of interest, e.g. \( -2, -1, 0, 1, 2 \)). Let's illustrate this using an example (I will borrow the example from a page that inspired me to create this function).

Suppose we want to compute the fourth derivative of a time-series using an order of accuracy equal to 2. To compute the coefficients, we could set up a general equation like so:

\[ f^{''''}(x) = Af(x-2t) + Bf(x-t) + Cf(x) + Df(x+t) + Ef(x+2t) \]

and solve for \( A, B, C, D, \) and \(E\) (i.e. our coefficients). We will do this by using the Taylor Expansion. Recall that the Taylor Expansion of a function is:

\[ f(b) = f(a) + \frac{f^{'}(a)}{1!}(b-a) + \frac{f^{''}(a)}{2!}(b-a)^2 + \dots + \frac{f^{(n)}(a)}{n!}(b-a)^n  \]

So, if we expand the coefficient terms from the fourth derivative equation above using the Taylor Expansion (setting \(b\) in the expansion to be equal to \(x−2t \), \(x−t\), \(x\), etc. and \(a=x\)), we obtain the expanded equation:

\[Af(x) + Af^{'}(x)(-2t) + A\frac{f^{''}(x)}{2}(-2t)^2 + A\frac{f^{'''}(x)}{6}(-2t)^3 + A\frac{f^{''''}(x)}{24}(-2t)^4 + A\frac{f^{'''''}(x)}{120}(-2t)^5 + \dots \]

\[+ Bf(x) + Bf^{'}(x)(-t) + B\frac{f^{''}(x)}{2}(-t)^2 + B\frac{f^{'''}(x)}{6}(-t)^3 + B\frac{f^{''''}(x)}{24}(-t)^4 + B\frac{f^{'''''}(x)}{120}(-t)^5 + \dots \]

\[  + Cf(x)   \]

\[ + Df(x) + Df^{'}(x)(t) + D\frac{f^{''}(x)}{2}(t)^2 + D\frac{f^{'''}(x)}{6}(t)^3 + D\frac{f^{''''}(x)}{24}(t)^4 + D\frac{f^{'''''}(x)}{120}(t)^5 + \dots \]

\[+  Ef(x) + Ef^{'}(x)(2t) + E\frac{f^{''}(x)}{2}(2t)^2 + E\frac{f^{'''}(x)}{6}(2t)^3 + E\frac{f^{''''}(x)}{24}(2t)^4 + E\frac{f^{'''''}(x)}{120}(2t)^5 + \dots \]

which we will use to solve for the coefficients of the finite difference equation. (I've written each expansion on a separate line and evaluated the factorials in the denominators in an attempt to make this specific example easier to follow. There is also an error term associated with truncating the Taylor Expansion that I have omitted, but more information about the error from this method can be found elsewhere.)
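If you'd rather not grind through these expansions by hand, sympy (assuming it's installed; this check is my own addition) can expand each shifted term symbolically:

import sympy as sp

x, t = sp.symbols('x t')
f = sp.Function('f')

# Expand the shifted term f(x - 2t) around t = 0 (the A terms above), up to t**5.
print(f(x - 2*t).series(t, 0, 6))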

To actually solve for the coefficients, the next step is to set up a system of equations whereby the summation of \(A, B, C, D, \) and \(E \) (our coefficients) results in the cancellation of all terms except for \( f^{''''}(x)\) (since the fourth derivative is what we are solving for), wherein these terms must sum to \(\frac{d!}{t^d} \). This results in the system of equations:

\[ \begin{array}{rcl}A + B + C + D + E = 0 \\ -2A - B + D + 2E = 0 \\ 4A + B + D + 4E = 0 \\ -8A - B + D + 8E = 0 \\ 16A + B + D + 16E = \frac{24}{t^4} \end{array} \]

Which can be represented in matrix form as:

\[ \begin{bmatrix} 1 & 1 & 1 & 1 & 1 \\ -2 & -1 & 0 & 1 & 2 \\ 4 & 1 & 0 & 1 & 4 \\ -8 & -1 & 0 & 1 & 8 \\ 16 & 1 & 0 & 1 & 16 \end{bmatrix} \begin{bmatrix} A \\ B \\ C \\ D \\ E \end{bmatrix} = \frac{1}{t^4} \begin{bmatrix} 0 \\ 0 \\ 0 \\ 0 \\24 \end{bmatrix} \]

In this matrix, each row represents a derivative order (1st row = 0th derivative, 2nd = 1st derivative, and so on) and the values in each column represent the \((b-a) \) constants for each derivative. Hopefully, you can see the pattern emerging: the centre column of the matrix corresponds to the 0th frame (i.e. the "frame of interest"), each column to the left corresponds to -1, -2, etc. frames, and each column to the right corresponds to +1, +2, etc. frames. The terms in each row are then raised to the power of the row number (starting at row 0 and ending at row \(N-1\)). This pattern becomes useful later when writing the code.

As this matrix is invertible, we can solve this system of equations. By inverting the matrix, we now have the same general form that was presented at the start of this section:

\[ \begin{bmatrix} A \\ B \\ C \\ D \\ E \end{bmatrix} = \frac{1}{t^4} \begin{bmatrix} 1 & 1 & 1 & 1 & 1 \\ -2 & -1 & 0 & 1 & 2 \\ 4 & 1 & 0 & 1 & 4 \\ -8 & -1 & 0 & 1 & 8 \\ 16 & 1 & 0 & 1 & 16 \end{bmatrix}^{-1} \begin{bmatrix} 0 \\ 0 \\ 0 \\ 0 \\24 \end{bmatrix}\]

Which, after solving, provides us with the coefficients:

\[ \begin{bmatrix} A \\ B \\ C \\ D \\ E \end{bmatrix} = \frac{1}{t^4} \begin{bmatrix} 1 \\ -4 \\ 6 \\ -4 \\ 1 \end{bmatrix}\]
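We can confirm this result numerically in a few lines (a quick check I've added; np.linalg.solve is equivalent to applying the explicit inverse here):

import numpy as np
from math import factorial

# Stencil offsets for five points centred on the frame of interest.
offsets = np.arange(-2, 3)

# Row i holds the offsets raised to the power i (the Taylor Expansion constants).
matrix = np.array([offsets ** i for i in range(5)], dtype=float)

# Right-hand side: zeros everywhere except d! in the row of the target derivative (d = 4).
rhs = np.zeros(5)
rhs[4] = factorial(4)

print(np.linalg.solve(matrix, rhs))   # [ 1. -4.  6. -4.  1.]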

And thus our finite difference equation that we would use to approximate the fourth derivative with an order of accuracy = 2 is:

\[ f^{''''}(x)= \frac{f(x-2t) - 4f(x-t) + 6f(x) - 4f(x+t) + f(x+2t)}{t^4} \]
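As one final sanity check (again, my own addition), \( f(x)=x^4 \) has a fourth derivative of exactly 24 everywhere, and applying this stencil to samples of it recovers that value:

import numpy as np

t = 0.1
x = np.arange(-2, 3) * t          # five samples centred on x = 0
f = x ** 4

estimate = (f[0] - 4 * f[1] + 6 * f[2] - 4 * f[3] + f[4]) / t ** 4
print(estimate)                   # 24.0 (up to floating point error)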

Python Code

Finally, I'll go over how to automate this using Python. First, the libraries must be imported.

import numpy as np
from math import factorial

The next step is to define a function that computes the coefficients (i.e., automate the process outlined above):

def get_finite_difference_coefficients(d, order_accuracy):
    '''
    Compute the coefficients of the finite difference equation corresponding to the derivative,
    d, to a specified order of accuracy.

    Parameters
    -----------------
    d: int
        The order of the derivative we are approximating (e.g. 1st, 2nd, 3rd, etc.).

    order_accuracy: int
        The order of accuracy to which the finite difference equation will approximate the
        derivative. This must be a multiple of 2 for the symmetric difference.

    Returns
    -----------------
    coefficients: numpy.ndarray
        The coefficients used in the finite difference equation to approximate the derivative.
    '''
    # N is the number of coefficients. The if statement ensures the correct order of accuracy
    # is specified and raises an appropriate error if not.
    if order_accuracy % 2 == 0:
        if d % 2 == 0:
            N = d + order_accuracy - 1
        else:
            N = d + order_accuracy
    else:
        raise ValueError('Order of accuracy must be a multiple of 2 for the symmetric difference')

    # n represents the number of coefficients before and after the frame of interest. It is
    # used to create the vector of stencil offsets, "master_row", below.
    n = (N - 1) // 2
    master_row = np.arange(-n, n + 1)

    # Create an NxN matrix as a placeholder (note that the matrix indices start at 0 and end
    # at N-1, so its dimension is NxN).
    matrix = np.zeros([N, N])

    # This loop raises each element of master_row to the exponent i. As noted in the text,
    # the centre of each row is always 0, each element to the right corresponds to +1, +2,
    # +3, etc. and each element to the left to -1, -2, -3, etc. Calculating the matrix of
    # Taylor Expansion constants is therefore as simple as raising the elements of master_row
    # to the power of the row number (i.e. the index of the for loop).
    for i in range(N):
        matrix[i] = master_row ** i

    # The right-hand side of the system of equations: all zeros except d! in the row of the
    # derivative we are solving for.
    vec = np.zeros(N)
    vec[d] = factorial(d)

    # As shown above, the coefficients are then calculated by solving the system of linear
    # equations in the standard manner.
    coefficients = np.matmul(np.linalg.inv(matrix), vec)

    ## Uncommenting the code below will add a tolerance to force values that "should be" zero
    ## to return as 0. You can adjust the tolerance to whatever you feel is appropriate.
    # tolerance = 1e-10
    # coefficients[abs(coefficients.real) < tolerance] = 0

    return coefficients
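As a quick check, calling the function for the examples worked through above reproduces the coefficients we derived by hand:

# Fourth derivative, order of accuracy 2.
print(get_finite_difference_coefficients(4, 2))   # [ 1. -4.  6. -4.  1.]

# First derivative, order of accuracy 4.
print(get_finite_difference_coefficients(1, 4))   # approximately [0.0833, -0.6667, 0, 0.6667, -0.0833]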

This function gives us the flexibility to define a general derivative calculator:

def compute_derivative(signal, t, d, order_accuracy=2):
    '''
    Compute the derivative, d, of a signal sampled at a constant time interval, t, using n
    points prior to and following the frame of interest.

    Parameters
    -----------------
    signal: array_like
        An array of the signal.

    t: float
        The time interval between samples (for a vector of timestamps, t[1] - t[0] is
        sufficient). Alternatively, 1/sampling frequency can also be used to denote t.

    d: int
        The derivative to take of the signal.

    order_accuracy: int
        The order of accuracy to which the finite difference equation will approximate the
        derivative. Default = 2.

    Returns
    -----------------
    derivative: numpy.ndarray
        The dth derivative of the input signal. The length of the output is equal to that of
        the input signal. Note: NaN values are placed at the beginning and/or end of the
        array where there is insufficient data to compute the derivative (e.g. the first
        frame of data has no prior data point, so the first n frames will be NaN for the
        symmetric difference).
    '''
    # Similar to the previous function, this is just to catch any potential input errors.
    if order_accuracy % 2 == 0:
        if d % 2 == 0:
            N = d + order_accuracy - 1
        else:
            N = d + order_accuracy
    else:
        raise ValueError('Order of accuracy must be a multiple of 2 for the symmetric difference')

    n = (N - 1) // 2
    signal = np.asarray(signal, dtype=float)

    # Get the appropriate coefficients using the function defined above.
    coefficients = get_finite_difference_coefficients(d, order_accuracy)

    # Pre-fill the output with NaN; frames without enough neighbouring data remain NaN.
    derivative = np.full(len(signal), np.nan)

    # Slide the stencil across the signal, applying each coefficient to its corresponding
    # frame and dividing by t**d.
    for i in range(n, len(signal) - n):
        derivative[i] = np.sum(coefficients * signal[i - n:i + n + 1]) / (t ** d)

    return derivative
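As a usage sketch (the sampling frequency and test signal here are mine, purely for illustration), differentiating a sampled sine wave should closely match its analytical derivative:

import numpy as np

fs = 100.0                                 # sampling frequency (Hz), chosen for illustration
time = np.arange(0, 1, 1 / fs)
signal = np.sin(2 * np.pi * time)

velocity = compute_derivative(signal, t=1 / fs, d=1, order_accuracy=4)

# Compare against the analytical derivative; np.nanmax ignores the NaN edge frames.
analytical = 2 * np.pi * np.cos(2 * np.pi * time)
print(np.nanmax(np.abs(velocity - analytical)))   # a small truncation error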

Of course, this can be modified or updated depending on your unique needs. However, this should provide a useful base to start computing derivatives with more flexibility in Python.

Summary

That is all I will go over on this page, as the rest of the details are better suited to a notebook format. For additional details about the code and its performance, check out the corresponding Jupyter Notebook on my GitHub page!