The Big Picture
One of the most powerful things about mathematics is the fact that it is situation agnostic. We can draw on the works of giants of the past, such as Newton, Leibniz, Gauss, Riemann, and Galileo, and be convinced of the unmistakable rigour of the numbers laid before our eyes.
And with this powerful tool, we can carry it into a completely unrelated field and know for a fact that it will work. The math acts as a torch to light the way.
Per the title of the article, you’re probably here to satiate your curiosity. You’re here to find out what exactly the links between electromagnetism and finance are. As we shall see, they are plentiful.
In recent decades, there has been a mass exodus of physicists from their traditional domain of research labs and universities, right into the arms of financial firms.
Some History
Most readers with a passing interest in history will be familiar with the Manhattan Project, the US Government-led effort to develop the atomic bomb during the Second World War.
What few readers know is the significance of the Manhattan Project to modern-day finance.
In the 1930s, some of the brightest scientists of Europe, fleeing the unrest back home, found themselves in the deserts of New Mexico, trying to model the blast waves of nuclear detonations.
Three giants of Physics, Enrico Fermi, Stanislaw Ulam, and John von Neumann, sought to develop mathematics that could accurately model the probabilistic nature of Atomic Physics.
The historical origins of the name behind this method vary, but a commonly accepted explanation is that these physicists loved to gamble in their free time, and in the process of gambling they had an epiphany about how they would go about improving the calculations in their work.
They hence named the method after the famed casino in Monte Carlo, the district in the city-state of Monaco that is the playground of the European wealthy.
These days, the Monte Carlo method, albeit a simplified version of it, is heavily utilized in the world of finance.
It runs the gamut from risk management to asset pricing, structured products, portfolio optimization, and many others.
Physics to Finance Today
This wonderful book describes just how much more advanced the field has become in the present day.
The author, Emanuel Derman, paints a picture of the earlier part of his career, doing research on Quantum Field Theory at Princeton University.
And subsequently, how he managed to apply his knowledge of Physics as a member of the Quantitative Investment Strategies group at Goldman Sachs.
Derman describes the similarities in a truly lucid manner. Among other things, I particularly enjoyed his description of how information propagation in the financial markets is very similar to how elementary particles behave as fluctuations in the underlying quantum fields that pervade all space and time.
Comparing Electromagnetics to Financial markets
Electromagnetics
The behaviour of electromagnetic fields is described by Poisson’s equation.
A common task in the study of electromagnetism is to solve for the potential produced by a given charge distribution, starting from
- Poisson’s equation
- The given boundary conditions.
For instance, the electrostatic distribution of a charged sphere.
The Poisson Equation in spherical coordinates is shown below.
Through suitable mathematical treatment and consideration of the boundary conditions, we perform the integration to obtain V, the solution for the potential.
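For reference, the standard form of Poisson’s equation, its spherically symmetric reduction, and the textbook solution for a uniformly charged sphere of radius R and total charge Q are:

```latex
% Poisson's equation, and its spherically symmetric form:
\nabla^2 V = -\frac{\rho}{\varepsilon_0},
\qquad
\frac{1}{r^2}\frac{d}{dr}\!\left(r^2 \frac{dV}{dr}\right) = -\frac{\rho(r)}{\varepsilon_0}

% Textbook solution for a uniformly charged sphere of radius R, total charge Q:
V(r) =
\begin{cases}
\dfrac{Q}{8\pi\varepsilon_0 R}\left(3 - \dfrac{r^2}{R^2}\right), & r \le R\\[1ex]
\dfrac{Q}{4\pi\varepsilon_0 r}, & r \ge R
\end{cases}
```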
It is worth keeping in mind that, as intimidating as the solution looks, it reflects one of the simplest and most straightforward cases: a single isolated sphere.
Naturally, in the analysis of more oddly shaped structures and non-uniform charge distributions, the complexity of the analysis grows rapidly.
Financial derivatives
Similarly, we shall skip the verbose introduction of what financial derivatives are, for I assume most readers who take an interest in this article are already highly familiar with this bread and butter of quantitative finance.
I shall turn your attention once again to the familiar equation that underlies all of quantitative finance: the Black-Scholes formula.
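For reference, the Black-Scholes PDE and the resulting analytic price of a European call are:

```latex
% The Black-Scholes PDE for the value V(S, t) of a derivative on underlying S:
\frac{\partial V}{\partial t}
+ \frac{1}{2}\sigma^2 S^2 \frac{\partial^2 V}{\partial S^2}
+ rS\frac{\partial V}{\partial S} - rV = 0

% Its analytic solution for a European call of strike K and maturity T:
C(S, t) = S\,N(d_1) - K e^{-r(T-t)} N(d_2),
\qquad
d_{1,2} = \frac{\ln(S/K) + \left(r \pm \tfrac{1}{2}\sigma^2\right)(T-t)}{\sigma\sqrt{T-t}}
```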
For readers still unfamiliar with the field, I’d invite you to check out this article by another Medium writer, Jørgen Veisdal, a research fellow at the Norwegian University of Science and Technology.
Now, some readers may be tempted to think:
“Yo, Black-Scholes is old news at this point! What possible value can there be in even bringing it up?”
I’d like to drop in a nugget of wisdom from the field of Physics and invite you to hear me out, just for a moment.
Perhaps the fact that I’m a physicist makes me biased in its favour, but I’m happy to tell you that it’s a fine piece of advice that has served many of us well.
There is the harmonic oscillator, as taught in many high school physics classes.
And there is the harmonic oscillator formulation of Theoretical Physics, which is used, as part of larger formulations, in areas such as the Higgs field.
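To make the parallel concrete, the two forms look like this:

```latex
% High-school form: Newton's law for a mass on a spring
m\ddot{x} = -kx
\quad\Rightarrow\quad
x(t) = A\cos(\omega t + \phi), \qquad \omega = \sqrt{k/m}

% Theoretical-physics form: the quantum harmonic oscillator Hamiltonian,
% whose ladder operators underpin the mode expansion of quantum fields
\hat{H} = \hbar\omega\left(\hat{a}^{\dagger}\hat{a} + \tfrac{1}{2}\right)
```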
The common ground
The Black-Scholes formalism may form the basis of quantitative finance. But at a more elementary level, it is built from the mathematics of Partial Differential Equations (PDEs).
In some ways, these PDEs are our harmonic oscillators.
It may be instructive to examine the methods used in modelling the behaviour of electromagnetic fields, and see if they can offer insight into the modelling of the Black-Scholes formula and beyond.
Once again, there is no shortage of material on the internet demonstrating how to derive an analytic solution to Black-Scholes.
Such material is often replete with rigorous proofs and workings showing how to convert the differential equation into a form on which numerical integration can be performed.
However, as we have all come to realise, reality is often far more complicated.
A financial instrument most definitely does not function in an isolated vacuum.
An investment portfolio, by its very nature, contains a diversified basket of different investment instruments to hedge against various forms of market risk.
If a catastrophe occurs and the market nosedives, it does not bode well for ‘long’ positions. But it is good news for ‘shorts’.
So how do we calculate investment returns for a particular market movement, when some instruments deliver more while others deliver less?
And to make things even worse, the values of the instruments can be intimately tied to each other. Aside from being affected by external conditions, the very behaviour of said instruments can affect one another’s returns, in complex forms of interdependence.
In cases like these, the math scales up.
Instead of a singular partial differential equation, we are now dealing with multiple coupled partial differential equations.
Similar to the case of electromagnetics, the complexity of the analysis quickly gets out of hand.
Numerical solutions to Partial Differential Equations
In the days before the invention of the computer, scientists and mathematicians had to find many ingenious ways to manipulate the PDEs.
Many are codified in the Latin works of Isaac Newton, namely in the Principia.
These methods now form the basis of many high school and university freshman level courses in calculus.
Rigorous as they may be, these methods are concerned with the manipulation of analytic solutions.
An analytic solution is simply a closed-form expression that can be written in a finite number of terms of standard functions.
In the modern day, however, many such problems are simply unsolvable by analytic means.
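A one-line example of the distinction: the 1D Laplace equation with fixed endpoint values has a simple analytic solution, whereas its 2D counterpart on an irregular boundary generally does not.

```latex
u''(x) = 0,\quad u(0) = a,\ u(1) = b
\quad\Longrightarrow\quad
u(x) = a + (b - a)\,x
```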
Entire fields of research are dedicated to obtaining numerical solutions to PDEs.
Another interesting article may be found here.
It documents the research efforts of Google’s Artificial Intelligence division in finding smarter and more efficient means of resolving PDEs.
The level of complexity that Google’s scientists are dealing with, however, wades into the realms of:
- Magnetohydrodynamics of plasma in Nuclear Fusion.
- Planetary scale fluid dynamics in Hurricane analysis.
The method utilised in that study is the Finite Volume Method.
For this article, however, we shall be analysing a simpler one: the Finite Difference Method (FDM).
The method of Finite Differences (FDM)
To quote scientists from the Google blog article above,
“For most real-world problems, closed-form solutions to PDEs don’t exist. Instead, one must find discrete equations (“discretizations”) that a computer can solve to approximate the continuous PDE. Typical approaches to solve PDEs represent equations on a grid, e.g., using finite differences.”
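Before turning to the book’s full 2D example, the core idea of finite differences fits in a few lines: replace a derivative with differences of neighbouring grid values. The function below is my own illustration, not from any library.

```python
import numpy as np

def second_derivative(f, x, dx=1e-4):
    """Central-difference approximation: f''(x) ~ (f(x-dx) - 2 f(x) + f(x+dx)) / dx**2."""
    return (f(x - dx) - 2 * f(x) + f(x + dx)) / dx**2

# Sanity check against a known case: (sin x)'' = -sin x
approx = second_derivative(np.sin, 1.0)
exact = -np.sin(1.0)
print(abs(approx - exact))  # small: the scheme is accurate to O(dx**2)
```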
Once again, I wish to reference a very powerful resource.
Code samples and visualisations are adapted from the book, with modifications to account for changes in various libraries & newer, stable releases since the date of publishing (Dec 2018).
The first step, as always, is importing the relevant dependencies.
import numpy as np
import matplotlib.pyplot as plt
import matplotlib as mpl
import mpl_toolkits.mplot3d
import scipy.sparse as sp
import scipy.sparse.linalg
import scipy.linalg as la
Next, we create a grid. The region of analysis will be subdivided into discrete grid points, similar to those found on a map.
In this case, we will be working with the Dirichlet Boundary Conditions, that specify explicitly the values the function takes at the boundaries.
Additionally, we call ‘eye’ and ‘kron’, matrix construction functions from the scipy.sparse library.
As the output below demonstrates, we break up our region of analysis into a 100 x 100 grid, which yields a 10000 x 10000 sparse matrix.
More complicated series of PDEs can require splitting the grid into finer regions, at the cost of a longer run time.
N = 100
u0_t, u0_b = 5, -5
u0_l, u0_r = 3, -1  # Dirichlet boundary conditions

dx = 1. / (N + 1)  # grid separation
A_1d = (sp.eye(N, k=-1) + sp.eye(N, k=1) - 4 * sp.eye(N)) / dx**2
A = sp.kron(sp.eye(N), A_1d) + (sp.eye(N**2, k=-N) + sp.eye(N**2, k=N)) / dx**2

A
<10000x10000 sparse matrix of type '<class 'numpy.float64'>'
	with 49600 stored elements in Compressed Sparse Row format>
The FDM enables us to recast the PDE into a finite set of linear equations, allowing us to use the oldest trick in the book of Linear Algebra: solving a matrix equation of the form Au = b.
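The same discretise-and-solve pattern can be checked end to end in one dimension, where the answer is known exactly: the 1D Laplace equation with fixed endpoints has a straight line as its solution. This 1D setup is my own illustration, mirroring the 2D code that follows.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg

# 1D check: discretise u''(x) = 0 on (0, 1) with u(0) = 3, u(1) = -1.
# (Names carry a "1" suffix to avoid clobbering the 2D variables.)
n1 = 100
dx1 = 1. / (n1 + 1)
A1 = (sp.eye(n1, k=-1) + sp.eye(n1, k=1) - 2 * sp.eye(n1)) / dx1**2

b1 = np.zeros(n1)
b1[0] -= 3 / dx1**2    # left boundary value folded into the right-hand side
b1[-1] -= -1 / dx1**2  # right boundary value folded into the right-hand side

u1 = sp.linalg.spsolve(A1.tocsr(), b1)

# The analytic solution is the straight line u(x) = 3 - 4x.
x1 = np.linspace(dx1, 1 - dx1, n1)
print(np.max(np.abs(u1 - (3 - 4 * x1))))  # agrees to round-off
```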
An open source copy of teaching material from the University of Auckland is shown below.
The sparse matrix has been created, in the form of A.
The next step is to create b, the corresponding vector of boundary values on the other side of the equation Au = b.
b = np.zeros((N, N))
b[0, :] += u0_b   # bottom
b[-1, :] += u0_t  # top
b[:, 0] += u0_l   # left
b[:, -1] += u0_r  # right
b = - b.reshape(N**2) / dx**2

v = sp.linalg.spsolve(A, b)  # solve the sparse linear system A v = b
u = v.reshape(N, N)
Finally, the last block of code stitches the interior solution together with the boundary values across the region of discussion, to provide a neat colour map.
U = np.vstack([np.ones((1, N+2)) * u0_b,
               np.hstack([np.ones((N, 1)) * u0_l, u, np.ones((N, 1)) * u0_r]),
               np.ones((1, N+2)) * u0_t])

x = np.linspace(0, 1, N+2)
X, Y = np.meshgrid(x, x)

fig = plt.figure(figsize=(12, 5.5))
cmap = mpl.cm.get_cmap('cool_r')

ax = fig.add_subplot(1, 2, 1)
c = ax.pcolor(X, Y, U, vmin=-5, vmax=5, cmap=cmap)
ax.set_xlabel(r'$x_1$', fontsize=18)
ax.set_ylabel(r'$x_2$', fontsize=18)

ax = fig.add_subplot(1, 2, 2, projection='3d')
p = ax.plot_surface(X, Y, U, vmin=-5, vmax=5, rstride=3, cstride=3,
                    linewidth=0, cmap=cmap)
ax.set_xlabel(r'$x_1$', fontsize=18)
ax.set_ylabel(r'$x_2$', fontsize=18)

cb = plt.colorbar(p, ax=ax, shrink=0.75)
cb.set_label(r'$u(x_1, x_2)$', fontsize=18)
It bears noting that the colours and values on display are once again agnostic to the context.
They could well be utilised to model temperature gradients from the flow of heat, the electric charge distribution across a region, and, of course, the behaviour of a basket of investments.
As an aside and conclusion
While left unused in the writing of this article, I think it’s worth mentioning a very useful mathematical tool that I came across in the same book.
The FEniCS framework is a fairly recent addition to the scientific computing toolbox for solving PDEs.
If you ask me, it augments the current SciPy and NumPy libraries, through more efficient translation of the Python interface to lower-level libraries written in C++ and Fortran.
We have dealt with a fairly simple example today and did not need FEniCS. However, more curious readers may check out Numerical Python, the book referenced above, for calculations where FEniCS is useful.
Following in the footsteps of several renowned physicists who have quite a following on Medium, I’ve also decided to write articles such as this one, exploring the links between Physics and hitherto unrelated disciplines.
And how the tools of Physics, can be used to uncover value in said disciplines.
Here are the profiles of other Physicists you may find interesting.
I hope these articles provide as much intellectual stimulation to you as they have for me.
Thank you and do check back soon!