Two classes of algorithms are commonly applied to search for Earth models that fit observational data. The first comprises iterated linearized methods: much of this theory was already developed 50 years ago. When the problem is highly non-linear, these methods may easily converge to incorrect solutions, and they almost never provide accurate information about the uncertainty of the solution. The second class comprises Monte Carlo (MC) methods: again, these existed 50 years ago, and while in principle they provide full information about uncertainty and theoretically will always converge towards the correct solution, they are, in their present form, extremely computationally burdensome due to the need to sample many different models and parameter values. As a result, current MC methods rarely converge to the true uncertainty structure in practice for large nonlinear problems. While methods exist that fall outside these two groups, they often share features with one or both of them. There is a pressing need for new classes of algorithms for inversion in large, practical nonlinear problems.

We are currently developing new algorithms - Knowledge-Based Monte Carlo Sampling Methods - to find solutions to such problems more efficiently, and to enable currently intractable solutions to be found, by assuming a new class of prior information about the structure of the problem, based on approximate models of the relation between model parameters and data. This approach allows us to advance from the current, purely empirical viewpoint of defining a priori information, where training data (images or other statistics) form the statistical basis for building probabilistic models, to a new level where a priori information depends on known intrinsic properties of, or correlations between, the objects/parameter systems to be reconstructed. Such properties are derived from rules imposed by physics, geology, etc., and may be deterministic or (more often) probabilistic.

We will improve algorithm performance by introducing the concept of passive prior information (in contrast to the regular active prior information used in Bayes' Rule). Passive priors carry simplified information about the relation between data and unknown model parameters to improve algorithm convergence, but they do not influence the posterior distribution directly. Passive priors can be built from neural networks, simplified physics, or other simple representations of relations in Nature.
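One established way to use an approximate model to speed up sampling without biasing the posterior is delayed-acceptance (two-stage) Metropolis-Hastings; the sketch below uses it to illustrate the passive-prior idea. This is a minimal toy example of our own construction, not the project's actual method: the forward models, noise level, and proposal step are all hypothetical. A cheap surrogate screens proposals in stage one, the expensive exact model is evaluated only for survivors in stage two, and the stage-two correction factor keeps the target distribution exactly the true posterior.

```python
import numpy as np

rng = np.random.default_rng(0)

def forward_exact(m):
    # "Expensive" exact forward model (hypothetical, mildly nonlinear).
    return m + 0.2 * np.sin(m)

def forward_approx(m):
    # Cheap surrogate in the role of a passive prior:
    # linearization of forward_exact around m = 0 (slope 1.2).
    return 1.2 * m

sigma = 0.05
d_obs = forward_exact(1.2) + rng.normal(0, sigma)  # synthetic noisy datum

def log_like(pred):
    return -0.5 * ((d_obs - pred) / sigma) ** 2

def delayed_acceptance_mh(n_steps, step=0.3):
    """Two-stage Metropolis-Hastings: the surrogate screens proposals
    cheaply (stage 1); the exact model runs only for survivors (stage 2),
    and the correction ratio leaves the exact posterior unchanged."""
    m = 0.0
    ll_exact = log_like(forward_exact(m))
    ll_approx = log_like(forward_approx(m))
    chain, exact_calls = [], 0
    for _ in range(n_steps):
        m_new = m + step * rng.normal()
        ll_approx_new = log_like(forward_approx(m_new))
        # Stage 1: accept/reject using the cheap surrogate only.
        if np.log(rng.uniform()) < ll_approx_new - ll_approx:
            exact_calls += 1
            ll_exact_new = log_like(forward_exact(m_new))
            # Stage 2: exact likelihood ratio divided by the surrogate
            # ratio, so the chain still targets the exact posterior.
            log_alpha2 = (ll_exact_new - ll_exact) - (ll_approx_new - ll_approx)
            if np.log(rng.uniform()) < log_alpha2:
                m, ll_exact, ll_approx = m_new, ll_exact_new, ll_approx_new
        chain.append(m)
    return np.array(chain), exact_calls

chain, exact_calls = delayed_acceptance_mh(5000)
print(f"posterior mean ~ {chain[1000:].mean():.2f}, "
      f"exact-model calls: {exact_calls} / 5000")
```

Because the surrogate only filters proposals, a poor surrogate degrades efficiency but never correctness; this separation of convergence speed from posterior shape is exactly the role intended for passive priors.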

In particular, we focus on problems that are known a priori to have a spatial structure, such as tomographic imaging or remote sensing of a (possibly high-dimensional) body, or exploration of properties across a surface, both of which are common to many geoscientific inverse problems.

Our current understanding of physical processes in the Earth and planets depends on two distinct computational methods: simulation and data inversion. Simulation proceeds by assuming a known, initial composition and state of the Earth or a planet, after which the basic physical laws are iteratively applied to each point in the model. In this way computation imitates Nature, and from the evolution and the end product of the simulation we can learn about the macroscopic processes in the planetary interior.

Data inversion (also known as modelling) is an entirely different activity, in which we analyse observational data with the purpose of reconstructing the internal structure of the Earth. An example is the reconstruction of geological structure in the crust and mantle from observations of seismic waves generated by earthquakes. Data inversion is essentially an optimization process where we seek a model that will explain the data.
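The optimization view of inversion can be made concrete with a toy linear problem. The example below is a hypothetical illustration of our own (the operator and data are synthetic): given noisy observations d produced by a forward operator G acting on an unknown model m, inversion seeks the m that best explains d, here in the least-squares sense.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical toy setup: a linear forward operator G maps model
# parameters m to predicted data G @ m.
m_true = np.array([2.0, -1.0])                 # "Earth model" to recover
G = rng.normal(size=(20, 2))                   # linear forward operator
d = G @ m_true + 0.01 * rng.normal(size=20)    # noisy observations

# Inversion as optimization: minimize || G @ m - d ||^2 over m.
m_est, *_ = np.linalg.lstsq(G, d, rcond=None)
print(np.round(m_est, 2))   # close to m_true = [2.0, -1.0]
```

Real geophysical problems replace G with a nonlinear, expensive forward simulation and add regularization or priors, but the structure of the task - search model space for parameters that explain the data - is the same.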

Our research is motivated by the surprising fact that current approaches to inversion of data from the solid Earth are based on ideas that are disconnected from the physical laws reigning at the microscopic (grid-point) level in the object we are modelling. An example is inversion of seismic data from sedimentary basins, where the physical laws of sedimentation are ignored, although they are clearly expressed in the spatial structure of the layering.

We will develop methods that bear some similarities to those of the ocean and atmospheric sciences, where data assimilation allows us to compute the state of the fluid while imposing grid-point-level constraints from the flow equations governing the time development of the fluid motion (see, e.g., Zhang and Moore, 2015). However, data assimilation is of little use in solid Earth studies, due to the long timescales of geological processes. We have no direct access to complete records of geological structure at time instances in the past, and a full simulation of geological evolution in every iteration of a geophysical data inversion is computationally prohibitive.

To make up for the missing observations from the past, we will exploit the well-known fact that the present spatial structure of the subsurface bears the imprint of past processes, and that past structure was formed under physical laws which, in many cases, are approximately known. One example is the sedimentation of sands and shales in a deltaic environment, which can be approximately modelled as a diffusion process with additional fluid-mechanical terms. When the structure is a solution to the diffusion equation, this allows us to place constraints on the relation between layer curvature and layer thickness when we compute models of the deltaic subsurface from geophysical data. Another example is the calculation of the (slow) fluid flow at the Earth's core-mantle boundary (CMB) from satellite measurements of the geomagnetic secular variation. Here, the flow field at the CMB can be constrained by the governing flow equations.
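The curvature-thickness relation in the deltaic example can be illustrated with a one-dimensional toy model. This is a simplified sketch under our own assumptions (pure diffusion, no fluid-mechanical terms, arbitrary parameter values), not the project's actual sedimentation model: if the surface height h evolves by dh/dt = kappa * d2h/dx2, then the material deposited in one time step - the incremental layer thickness - is proportional to the local curvature of the surface, which is the kind of grid-point constraint an inversion can exploit.

```python
import numpy as np

kappa, dx, dt = 1.0, 1.0, 0.1   # diffusivity, grid spacing, time step (toy values)
x = np.arange(100, dtype=float)
h = np.exp(-0.5 * ((x - 50.0) / 8.0) ** 2)   # initial delta-front profile

def step(h):
    """One explicit finite-difference step of dh/dt = kappa * d2h/dx2,
    with fixed (Dirichlet) end points."""
    curv = np.zeros_like(h)
    curv[1:-1] = (h[2:] - 2 * h[1:-1] + h[:-2]) / dx**2
    return h + kappa * dt * curv, curv

h_new, curv = step(h)
thickness = h_new - h   # material deposited (or eroded) in this step

# Constraint check: thickness = kappa * dt * curvature. It holds by
# construction here; in an inversion, the same relation becomes a
# penalty term on candidate subsurface models.
print(np.allclose(thickness, kappa * dt * curv))
```

The point of the sketch is the last line: a structure generated by diffusion carries a deterministic link between curvature and deposited thickness, so candidate models violating that link can be down-weighted without simulating the full geological history.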

The focus of this research activity is to develop theory and methods for the solution of inverse problems with spatio-temporal physical/geological constraints.