This solver corresponds to the one developed in the paper A Generalized Conditional Gradient Method for Dynamic Inverse Problems with Optimal Transport Regularization by Kristian Bredies, Marcello Carioni, Silvio Fanzon and Francisco Romero-Hinrichsen.
A dynamic inverse problem is one in which both the data and the forward operators are allowed to change in time. For instance, consider a medical imaging setting in which the patient is not completely immobile (data changing in time), while simultaneously the measuring instrument does not take the same measurements at each time, as is the case for a rotating scanner in CT/SPECT/PET, or for changing measurement frequencies in MRI.
The presented method tackles such dynamic inverse problems by proposing a convex energy that is penalized jointly with a data discrepancy term; together these define a minimization problem whose minimizers are taken as solutions of the target dynamic inverse problem.
For clear, deep, and mathematically correct explanations, please refer to the paper. The following is a mathematically incomplete description of the considered energy and minimization problem, but it is enough to describe them intuitively.
We look for solutions in the space of dynamic Radon measures, that is, Radon measures defined on time and space [0,1] x Ω.
Given a differentiable curve γ: [0,1] -> Ω, the Lebesgue measure dt, and the Dirac delta δ, one can consider the product measure
ργ := dt x δγ(t),
representing a Dirac delta transported in time by the curve γ in space, which is a dynamic Radon measure. The energy we consider to solve the target dynamic inverse problem is parametrized by α, β > 0, and acts in the following way over such an element:
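Schematically (the precise constants should be taken from the paper), on such a transported delta the energy combines a mass term weighted by α with a kinetic term weighted by β, penalizing the squared speed of the curve:

```latex
J_{\alpha,\beta}(\rho_\gamma) \;=\; \alpha + \frac{\beta}{2} \int_0^1 |\dot{\gamma}(t)|^2 \, dt .
```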
Intuitively, the energy penalizes the squared speed of the dynamic Radon measures. Since measure spaces are in particular vector spaces, given a family of weights ωi > 0 and a family of curves γi, we can now consider μ, a weighted sum of these transported Dirac deltas,
μ := ∑i ωi ργi,
which is also a dynamic Radon measure. The measures "move continuously in time", but the measurements are gathered by sampling discretely in time. Fix these time samples as 0 = t0 < t1 < ... < tT = 1; then, at each time sample, the considered dynamic Radon measures are simply Radon measures. We therefore consider at each of these time samples ti a forward operator Kti*: M(Ω) -> Hti, mapping from the space of Radon measures into some data space Hti, where the respective data spaces are allowed to differ between time samples. Theoretically, these data spaces are real Hilbert spaces; numerically, they need to be finite dimensional.
Given data gathered at each time sample f0 ∈ H0, f1 ∈ H1, ..., fT ∈ HT, and given any dynamic Radon measure ν, the data discrepancy term of our minimization problem compares, at each time sample, the measured data against the forward operator applied to the corresponding time slice of ν:
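A hedged sketch of this term (the exact normalization and weights should be taken from the paper), writing νti for the time slice of ν at ti:

```latex
\sum_{i=0}^{T} \big\| K_{t_i}^{*}\, \nu_{t_i} - f_i \big\|_{H_{t_i}}^{2} .
```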
Putting together the data discrepancy term with the proposed energy Jα,β, we build the target functional that our algorithm minimizes.
The energy Jα,β promotes sparse dynamic measures μ, and the proposed algorithm will return one such measure.
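Schematically (again up to the precise constants used in the paper), the minimization problem therefore reads:

```latex
\min_{\mu} \;\; \sum_{i=0}^{T} \big\| K_{t_i}^{*}\, \mu_{t_i} - f_i \big\|_{H_{t_i}}^{2} \;+\; J_{\alpha,\beta}(\mu),
```

where the minimum is taken over dynamic Radon measures μ.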
For an animated example of dynamic sources, measured data, and obtained reconstructions, please see this video.
The documentation of the code is available here.
To apply this solver to your own problem, you need to provide the following ingredients (a toy sketch follows the list):
- A finite family of Hilbert spaces Hi that can be numerically represented.
- A corresponding finite set of time samples ti.
- Forward operators Ki*: M(Ω) -> Hi that represent your measurements at each time sample, with preduals Ki: Hi -> C(Ω) mapping, in particular, into differentiable functions.
- Data gi ∈ Hi corresponding to the measurements of the ground truth at each time sample.
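As a rough illustration only (the names and structures below are hypothetical and do not reflect the package's actual interface), these ingredients could be encoded along the following lines:

```python
import numpy as np

# Hypothetical encoding of the required ingredients; names and shapes are
# illustrative and not the interface expected by this package.

# Time samples 0 = t0 < t1 < ... < tT = 1.
time_samples = np.linspace(0, 1, 11)                    # T + 1 = 11 samples

# Finite-dimensional Hilbert spaces Hi, here simply R^{n_i} with the Euclidean
# inner product; the dimensions may differ between time samples.
space_dims = [20 if i % 2 == 0 else 18 for i in range(len(time_samples))]

# Measured data gi in Hi, one vector per time sample (zeros as placeholders).
data = [np.zeros(n) for n in space_dims]

# Forward operators Ki*: M(Ω) -> Hi. For a measure given by weighted Dirac
# deltas (weights w_j at locations x_j in [0,1]^2), each operator is a callable
# returning an element of Hi.
def make_forward_operator(i):
    rng = np.random.default_rng(i)
    freqs = rng.integers(-5, 6, size=(space_dims[i], 2))  # toy sampling pattern

    def K_star(weights, locations):
        # Toy real Fourier-type measurement of the discrete measure.
        phases = 2 * np.pi * locations @ freqs.T           # (num_deltas, n_i)
        return np.cos(phases).T @ weights                  # element of R^{n_i}

    return K_star

forward_operators = [make_forward_operator(i) for i in range(len(time_samples))]
```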
The following choices are hardcoded in the current implementation, together with an indication of how hard it would be to adapt them:
- The time samples lie in the interval [0,1]: very easy to adapt.
- The dimension of the domain Ω is d = 2: intermediate work.
- The 2-dimensional, non-periodic domain of interest is Ω = [0,1]x[0,1]: intermediate work; this should not be an issue as long as the desired domain is convex or the curves stay far from the boundary. Otherwise, quite challenging.
- The forward operators Ki* vanish smoothly on the boundary ∂Ω: very hard to adapt, since the whole implemented code relies on the solutions lying in the interior of the domain. To lift this requirement, the insertion step and sliding step of the algorithm would have to use projected gradient descent strategies to optimize for curves touching the boundary. However, given any forward operator Ki*, it is possible to smoothly cut off its values near ∂Ω (an illustrative cut-off is sketched after this list). The implemented Fourier measurements use such a cut-off to enforce this condition.
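For illustration only (this is not the cut-off used by the implemented Fourier measurements), a smooth cut-off on Ω = [0,1]x[0,1] that vanishes, together with its gradient, at ∂Ω could look like this:

```python
import numpy as np

def smooth_cutoff(x, margin=0.1):
    """Illustrative C^1 cut-off on [0,1]^2: equal to 1 away from the boundary,
    decaying smoothly to 0 on a strip of width `margin` around ∂Ω."""
    x = np.asarray(x, dtype=float)               # shape (..., 2), points in [0,1]^2
    # Distance to the boundary along each coordinate, capped at `margin`.
    dist = np.minimum(np.minimum(x, 1.0 - x), margin) / margin   # values in [0, 1]
    # Smoothstep profile 3s^2 - 2s^3 has zero derivative at s = 0 and s = 1.
    profile = 3 * dist**2 - 2 * dist**3
    return np.prod(profile, axis=-1)

# Any kernel defining a forward operator Ki* can be multiplied pointwise by
# this cut-off so that the resulting measurements vanish smoothly at ∂Ω.
```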
To get this repository, clone it locally with the command
git clone https://github.com/panchoop/DGCG_algorithm.git
Requirements to run this code within a virtual environment:
- Python3 (3.6 or above).
- The packages specified in the requirements.txt file.
- (optionally) ffmpeg installed and linked to matplotlib, so the reconstructed measure can be saved as .mp4 video files. If ffmpeg is not available, this option can be disabled.
Alternatively, the code can be run using Docker, without any prerequisites, on any operating system. To run with this method:
- Install Docker on your system.
- Clone the repository locally.
- Execute the command
docker run -v $(pwd):$(pwd) -w $(pwd) --rm panchoop/dgcg_alg:v0.1
It will execute your script saved as main.py in the same folder.
The code itself is not plug and play. It will require rewriting your operators and spaces to fit the implemented structures.
Specifically:
- Properly incorporate your forward operator.
- Properly specify your Hilbert space and its inner product (a toy sketch follows).
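As a toy illustration (hypothetical, not the structure expected by the package), a finite-dimensional Hilbert space can be specified through its inner product, for instance a weighted Euclidean one:

```python
import numpy as np

# Hypothetical example: H = R^n with the weighted inner product
# <u, v>_H = sum_k w_k * u_k * v_k, where the weights w_k > 0 could encode,
# e.g., quadrature weights of the measurement device.
class WeightedEuclideanSpace:
    def __init__(self, weights):
        self.weights = np.asarray(weights, dtype=float)

    def inner(self, u, v):
        return float(np.sum(self.weights * u * v))

    def norm(self, u):
        return self.inner(u, u) ** 0.5

# Usage sketch
H = WeightedEuclideanSpace(np.full(20, 0.05))
u, v = np.random.rand(20), np.random.rand(20)
print(H.inner(u, v), H.norm(u - v))
```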
Additionally, keep in mind that the output of this method is a Radon measure composed of a weighted sum of Dirac deltas transported by curves; the algorithm returns such an object encoded in the measure class implemented in the classes.py module. These objects are pickled for later use; additionally, they can be exported as .mp4 video files (via the .animate() class method).
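For instance (the file path below is purely illustrative, and the exact arguments of .animate() should be checked in classes.py), a stored reconstruction could be reloaded and animated as follows:

```python
import pickle

# Hypothetical path: adapt it to wherever your run stored its pickled results.
with open("results/reconstruction.pkl", "rb") as f:
    reconstruction = pickle.load(f)   # instance of the measure class from classes.py

# Export the reconstructed dynamic measure as an .mp4 video (requires ffmpeg);
# consult classes.py for the exact arguments accepted by .animate().
reconstruction.animate()
```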
The code is heavily sub-optimized, therefore expect long execution times. See Table 1 in the paper.
The file examples/Example_1.py runs the numerical experiment #1 presented in the paper. Run it directly from inside that folder. To further understand how to use the module, it is recommended to take a look at this file; it is well commented.
The script will generate a folder where the iteration results will be stored.
The further files Example_2_*.py and Example_3 are the other experiments presented in the paper.
One can define/construct forward operators using integration kernels. Let φ: Ω -> H be a differentiable function mapping from our domain of interest Ω into some Hilbert space H. Then we can define the forward operator K*: M(Ω) -> H and its predual K: H -> C(Ω) as
K*(μ) := ∫Ω φ(x) dμ(x),    K(h) := x ↦ ⟨φ(x), h⟩H.
Differentiability of φ is required for the differentiability of K(h), which allows us to minimize the linearized problem arising from the insertion step of this algorithm.
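As a concrete toy example (a hypothetical Gaussian-bump kernel, not the Fourier measurements shipped with the code), take H = R^16; for a measure given by weighted Dirac deltas, the integral defining K* reduces to a finite sum:

```python
import numpy as np

# Toy kernel φ: Ω = [0,1]^2 -> H = R^16, built from 16 fixed Gaussian bumps.
# Illustrative choice only, not the kernel used by the package.
centers = np.random.default_rng(0).random((16, 2))
sigma = 0.1

def phi(x):
    """Evaluate the kernel at a point x in [0,1]^2, returning a vector in R^16."""
    d2 = np.sum((centers - x) ** 2, axis=1)
    return np.exp(-d2 / (2 * sigma**2))

def K_star(weights, locations):
    """Forward operator K*: for μ = Σ_j w_j δ_{x_j}, K*(μ) = Σ_j w_j φ(x_j)."""
    return sum(w * phi(x) for w, x in zip(weights, locations))

def K(h):
    """Predual K: maps h in R^16 to the continuous function x -> <φ(x), h>."""
    return lambda x: float(phi(x) @ h)

# Usage sketch: a measure made of two Dirac deltas.
weights = np.array([0.5, 1.0])
locations = np.array([[0.3, 0.4], [0.7, 0.2]])
measurement = K_star(weights, locations)      # element of H = R^16
print(K(measurement)(np.array([0.5, 0.5])))   # evaluate K(h) at a point of Ω
```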
- When running the algorithm, nearing convergence, the energy is not monotonically decreasing!
- Answer: Try setting the tolerance value to something higher. Most likely there are rounding errors; see this issue.