Dynamic inverse problems in imaging struggle with undersampled data and unrealistic motion. Neural fields provide a lightweight, smooth representation but often miss motion detail. This study shows that combining neural fields with explicit PDE-based motion regularizers (like optical flow) significantly improves 2D+time CT reconstruction. Results demonstrate that neural fields not only outperform grid-based solvers but also generalize effectively to higher resolutions, offering a powerful path forward for medical and scientific imaging.

How PDE Motion Models Boost Image Reconstruction in Dynamic CT

2025/10/01 03:30

:::info Authors:

(1) Pablo Arratia, University of Bath, Bath, UK (pial20@bath.ac.uk);

(2) Matthias Ehrhardt, University of Bath, Bath, UK (me549@bath.ac.uk);

(3) Lisa Kreusser, University of Bath, Bath, UK (lmk54@bath.ac.uk).

:::

Abstract and 1. Introduction

  2. Dynamic Inverse Problems in Imaging

    2.1 Motion Model

    2.2 Joint Image Reconstruction and Motion Estimation

  3. Methods

    3.1 Numerical evaluation with Neural Fields

    3.2 Numerical evaluation with grid-based representation

  4. Numerical Experiments

    4.1 Synthetic experiments

  5. Conclusion, Acknowledgments, and References

ABSTRACT

Image reconstruction for dynamic inverse problems with highly undersampled data poses a major challenge: not accounting for the dynamics of the process leads to unrealistic motion with no time regularity. Variational approaches that penalize time derivatives or introduce PDE-based motion-model regularizers have been proposed to relate subsequent frames and improve image quality using grid-based discretizations. Neural fields are an alternative that parametrizes the desired spatiotemporal quantity with a deep neural network, giving a lightweight, continuous representation biased towards smoothness. This inductive bias has been exploited to enforce time regularity in dynamic inverse problems, resulting in neural fields optimized by minimizing a data-fidelity term only. In this paper we investigate and demonstrate the benefits of introducing an explicit PDE-based motion regularizer, namely the optical flow equation, into the optimization of neural fields for 2D+time computed tomography. We also compare neural fields against a grid-based solver and show that the former outperforms the latter.

1 Introduction

It is well known that, under mild conditions, neural networks can approximate functions to any desired tolerance [26], but their widespread use has been justified by other properties, such as (1) the implicit regularization they introduce, (2) their ability to overcome the curse of dimensionality, and (3) their lightweight, continuous, and differentiable representation. In [27, 28] it is shown that the number of weights needed to approximate the solution of particular PDEs grows polynomially with the dimension of the domain. For the same reason, only a few weights suffice to represent complex images, leading to a compact and memory-efficient representation. Finally, numerical experiments and theoretical results show that neural fields tend to learn smooth functions early during training [29, 30, 31]. This is both advantageous and disadvantageous: neural fields can capture smooth regions of natural images but struggle to capture edges. The latter can be overcome with Fourier feature encoding [32].
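A minimal sketch of such a Fourier feature encoding, mapping spatiotemporal coordinates to high-frequency features before the network sees them (the frequency count and scale below are illustrative, not the paper's settings):

```python
import numpy as np

def fourier_features(coords, B):
    """gamma(x) = [cos(2*pi*B x), sin(2*pi*B x)]: random-frequency encoding
    that lets a smoothness-biased MLP represent sharp image detail."""
    proj = 2.0 * np.pi * coords @ B.T                      # (n_points, n_freqs)
    return np.concatenate([np.cos(proj), np.sin(proj)], axis=-1)

rng = np.random.default_rng(0)
B = rng.normal(scale=10.0, size=(64, 3))                   # 64 frequencies for (x, y, t)
coords = rng.uniform(size=(5, 3))                          # 5 sample coordinates
print(fourier_features(coords, B).shape)                   # (5, 128)
```

The encoded features, rather than the raw coordinates, would then form the input layer of the neural field.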

In the context of dynamic inverse problems and neural fields, most of the literature relies entirely on the smoothness introduced by the network in the spatial and temporal variables to obtain a regularized solution. This allows minimizing a data-fidelity term only, without any explicit regularizers. Applications can be found in dynamic cardiac MRI in [17, 20, 19], where the network outputs the real and imaginary parts of the signal, while in [18] the neural field is used to directly fit the measurements; inference is then performed by inpainting the k-space with the neural field and taking the inverse Fourier transform. In [33, 34] neural fields are used for dynamic photoacoustic tomography reconstruction, emphasizing their memory efficiency. In [15], a 3D+time CT inverse problem is addressed with a neural field parametrizing the initial frame and a polynomial tensor warping it to obtain the subsequent frames. To the best of our knowledge, this is the only work combining neural fields with a motion model via a deformable template.

In this paper, we investigate the performance of neural fields regularized by explicit PDE-based motion models for dynamic inverse problems in CT, in a highly undersampled measurement regime with two spatial dimensions. Motivated by [4], and leveraging automatic differentiation to compute spatial and temporal derivatives, we study the optical flow equation as an explicit motion regularizer imposed as a soft constraint, as in physics-informed neural networks (PINNs). Our findings are based on numerical experiments and are summarized as follows:

• An explicit motion model constrains the neural field to a physically feasible manifold, improving the reconstruction compared to a motionless model.

• Neural fields outperform grid-based representations for dynamic inverse problems in terms of reconstruction quality.

• We show that, once trained, the neural field generalizes well to higher resolutions.
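As a sanity check of the optical-flow soft constraint, the PDE residual can be evaluated on a toy sequence that satisfies it by construction; central finite differences stand in here for the automatic differentiation a neural field would use, and all values are illustrative:

```python
import numpy as np

# Toy sequence: a Gaussian blob translating with constant velocity (vx, vy),
# which satisfies the brightness-constancy (optical flow) PDE
#   du/dt + vx * du/dx + vy * du/dy = 0   exactly.
vx, vy = 0.3, -0.2
x, y, t = np.meshgrid(np.linspace(-1, 1, 64),
                      np.linspace(-1, 1, 64),
                      np.linspace(0, 1, 32), indexing="ij")
u = np.exp(-((x - vx * t) ** 2 + (y - vy * t) ** 2) / 0.1)

# Discrete derivatives; a neural field would obtain these by autodiff instead.
du_dx = np.gradient(u, x[:, 0, 0], axis=0)
du_dy = np.gradient(u, y[0, :, 0], axis=1)
du_dt = np.gradient(u, t[0, 0, :], axis=2)
residual = du_dt + vx * du_dx + vy * du_dy
print(np.abs(residual).mean())   # close to 0 (discretization error only)
```

Penalizing the norm of this residual during training is what pushes the reconstruction onto the physically feasible manifold.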

The paper is organized as follows: in Section 2 we introduce dynamic inverse problems, motion models and the optical flow equation, and the joint image reconstruction and motion estimation variational problem from [4]; in Section 3 we state the main variational problem to be minimized and show how to minimize it with neural fields and with a grid-based representation; in Section 4 we evaluate our method on a synthetic phantom which, by construction, perfectly satisfies the optical flow constraint, and show the improvements afforded by explicit motion regularizers; we conclude in Section 5.

2 Dynamic Inverse Problems in Imaging

2.1 Motion Model
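The motion-model equation did not survive extraction. Given the later references to "the optical flow equation (5)", it is presumably the standard brightness-constancy transport PDE relating the image intensity u and the velocity field v:

```latex
\partial_t u(x, t) + v(x, t) \cdot \nabla u(x, t) = 0
```

This equation states that intensity is advected along the flow, i.e., mass is transported rather than created or destroyed between frames.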

2.2 Joint Image Reconstruction and Motion Estimation

To solve highly undersampled dynamic inverse problems, a joint variational problem is proposed in [4] in which not only the dynamic process u is sought but also the underlying motion, expressed in terms of a velocity field v. The main hypothesis is that a joint reconstruction can enhance the recovery of both quantities, image sequence and motion, improving the final reconstruction compared to motionless models. The sought solution (u*, v*) is thus a minimizer of the variational problem given below:
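The display equation is missing from this extraction; based on the surrounding description (four terms balanced by α, β, γ), the joint objective presumably has the generic form

```latex
\min_{u, v} \; \mathcal{D}(Au, f) + \alpha \, \mathcal{R}(u) + \beta \, \mathcal{M}(u, v) + \gamma \, \mathcal{S}(v)
```

with a data-fidelity term D between the forward-projected sequence and the measurements f, an image regularizer R, a motion-model penalty M coupling u and v, and a velocity regularizer S.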

with α, β, γ > 0 being regularization parameters balancing the four terms. In [4] the domain is 2D+time and, among other results, it is shown that the pure motion-estimation task on a noisy sequence can be enhanced by solving the joint task of image denoising and motion estimation.

This model was further employed for 2D+time problems in [6] and [7]. The former studies its application to dynamic CT with sparse limited angles and compares the L1 and L2 norms for the data-fidelity term, with better results for the former. The latter applies the same logic to dynamic cardiac MRI. For 3D+time domains, we mention [39] and [40] for dynamic CT and dynamic photoacoustic tomography, respectively.

3 Methods

Depending on the nature of the noise, different data-fidelity terms can be considered. In this work we consider Gaussian noise ε, so, to satisfy equation (2), we use an L2 distance between the predicted measurements and the data.
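A minimal sketch of this choice, with a random matrix standing in for the undersampled CT forward operator (the operator, sizes, and names below are hypothetical, not the paper's setup):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(20, 100))        # stand-in for the undersampled Radon transform
u_true = rng.normal(size=100)         # vectorized image frame
f = A @ u_true                        # noiseless measurements

def data_fidelity(u, A, f):
    """Squared L2 distance between predicted measurements A u and data f,
    the natural fidelity under a Gaussian noise model."""
    r = A @ u - f
    return 0.5 * float(r @ r)

print(data_fidelity(u_true, A, f))    # 0.0 for the true image on noiseless data
```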

Since u represents a natural image, a suitable choice for the regularizer R is the total variation, which promotes noiseless images while capturing edges.
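A discrete isotropic total variation can be sketched with forward differences; this is one common discretization, not necessarily the paper's:

```python
import numpy as np

def total_variation(u):
    """Isotropic TV of a 2D image: sum over pixels of the Euclidean norm
    of the forward-difference gradient (replicated edge)."""
    dx = np.diff(u, axis=0, append=u[-1:, :])
    dy = np.diff(u, axis=1, append=u[:, -1:])
    return float(np.sqrt(dx ** 2 + dy ** 2).sum())

flat = np.ones((8, 8))
step = np.zeros((8, 8)); step[:, 4:] = 1.0
print(total_variation(flat))   # 0.0: constant regions are not penalized
print(total_variation(step))   # 8.0: one unit jump along an 8-pixel edge
```

TV thus leaves flat regions and sharp edges cheap while penalizing oscillatory noise, which is exactly the prior wanted for natural images.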

For the motion model, we consider the optical flow equation (5), and we measure its deviation from zero with the L1 norm. For the regularizer on v we consider the total variation of each of its components.

Thus, the whole variational problem reads as follows:
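The display equation is missing here; assembling the terms just described (L2 data fidelity, total variation on u, L1 optical-flow penalty, total variation on the components of v = (v_1, v_2)), the objective presumably reads, up to the original's notation:

```latex
\min_{u, v} \; \frac{1}{2} \| A u - f \|_2^2
  + \alpha \, \mathrm{TV}(u)
  + \beta \, \| \partial_t u + v \cdot \nabla u \|_{L^1}
  + \gamma \sum_{i=1}^{2} \mathrm{TV}(v_i)
```

with α, β, γ > 0 the regularization parameters introduced in Section 2.2.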

3.1 Numerical evaluation with Neural Fields
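The details of this section did not survive extraction. As a rough, hypothetical sketch of the kind of representation involved, the snippet below evaluates a small coordinate MLP u_θ(x, y, t) on grids of different resolutions, illustrating the mesh-free evaluation underlying the resolution-generalization claim (layer sizes, activations, and the absence of training are all assumptions of this sketch):

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny untrained MLP u_theta(x, y, t): 3 -> 32 -> 32 -> 1, tanh activations.
W1 = rng.normal(scale=0.5, size=(3, 32)); b1 = np.zeros(32)
W2 = rng.normal(scale=0.5, size=(32, 32)); b2 = np.zeros(32)
W3 = rng.normal(scale=0.5, size=(32, 1)); b3 = np.zeros(1)

def u_theta(coords):
    """Evaluate the neural field at arbitrary (x, y, t) coordinates."""
    h = np.tanh(coords @ W1 + b1)
    h = np.tanh(h @ W2 + b2)
    return (h @ W3 + b3).squeeze(-1)

def frame(n, t):
    """Render the frame at time t on an n x n grid: the same weights
    serve any resolution, since the field is continuous in its inputs."""
    g = np.linspace(0.0, 1.0, n)
    xx, yy = np.meshgrid(g, g, indexing="ij")
    pts = np.stack([xx.ravel(), yy.ravel(), np.full(xx.size, t)], axis=-1)
    return u_theta(pts).reshape(n, n)

print(frame(64, 0.5).shape, frame(256, 0.5).shape)   # (64, 64) (256, 256)
```

In the actual method, the losses of the variational problem would be evaluated at sampled collocation points and the derivatives in the optical-flow term obtained by automatic differentiation.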

3.2 Numerical evaluation with grid-based representation

Each subproblem is convex, with non-smooth terms, and can be solved using the Primal-Dual Hybrid Gradient (PDHG) algorithm [42]; we refer to [4] for the details.
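As an illustration of PDHG on a convex, non-smooth problem of this kind, the sketch below applies the Chambolle-Pock iteration to a TV-regularized denoising (ROF) model; this is a generic example, not the exact subproblems of [4], and all parameter values are illustrative:

```python
import numpy as np

def grad(u):
    """Forward-difference gradient with Neumann boundary."""
    gx = np.zeros_like(u); gx[:-1, :] = u[1:, :] - u[:-1, :]
    gy = np.zeros_like(u); gy[:, :-1] = u[:, 1:] - u[:, :-1]
    return gx, gy

def div(px, py):
    """Discrete divergence, the negative adjoint of grad."""
    dx = np.zeros_like(px); dx[0, :] = px[0, :]
    dx[1:-1, :] = px[1:-1, :] - px[:-2, :]; dx[-1, :] = -px[-2, :]
    dy = np.zeros_like(py); dy[:, 0] = py[:, 0]
    dy[:, 1:-1] = py[:, 1:-1] - py[:, :-2]; dy[:, -1] = -py[:, -2]
    return dx + dy

def pdhg_rof(f, alpha=0.1, n_iter=200, tau=0.25, sigma=0.25):
    """PDHG for min_u 0.5*||u - f||^2 + alpha*TV(u): dual ascent on the
    TV term, proximal descent on the quadratic, plus over-relaxation."""
    u = f.copy(); u_bar = f.copy()
    px = np.zeros_like(f); py = np.zeros_like(f)
    for _ in range(n_iter):
        gx, gy = grad(u_bar)
        px += sigma * gx; py += sigma * gy
        norm = np.maximum(1.0, np.sqrt(px ** 2 + py ** 2) / alpha)
        px /= norm; py /= norm                              # project dual onto |p| <= alpha
        u_old = u
        u = (u + tau * div(px, py) + tau * f) / (1.0 + tau)  # prox of 0.5*||u - f||^2
        u_bar = 2.0 * u - u_old                              # over-relaxation step
    return u

rng = np.random.default_rng(0)
clean = np.zeros((32, 32)); clean[8:24, 8:24] = 1.0
noisy = clean + 0.1 * rng.normal(size=clean.shape)
denoised = pdhg_rof(noisy)
```

With tau * sigma = 0.0625 below 1/8 (the squared norm of the discrete gradient operator), the iteration satisfies the standard PDHG convergence condition.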

:::info This paper is available on arxiv under CC BY 4.0 DEED license.

:::
