Augmented NETT regularization of inverse problems (2024)

Various applications in medical imaging, remote sensing and elsewhere require solving inverse problems of the form

$y^\delta = \mathbf{A} x + \xi ,$ (1.1)

where $\mathbf{A} \colon X \to Y$ is an operator between Hilbert spaces modeling the forward problem, $\xi \in Y$ is the data perturbation, $y^\delta \in Y$ is the noisy data and $x \in X$ is the sought-for signal. Inverse problems are well analyzed and several established approaches for their stable solution exist [1, 2]. Recently, neural networks and deep learning appeared as new paradigms for solving inverse problems [3–7]. Several approaches based on deep learning have been developed, including post-processing networks [8–12], regularizing null-space networks [13, 14], plug-and-play priors [15–17], deep image priors [18, 19], variational networks [20, 21], network cascades [22, 23], learned iterative schemes [24–29] and learned regularizers [30–33].

Classical deep learning approaches may lack data consistency for unknowns very different from the training data. To address this issue, a deep learning approach named NETT (NETwork Tikhonov) regularization has been introduced in [31], which considers minimizers of the NETT functional

$\mathcal{T}_{\alpha, y^\delta}(x) = \mathcal{D}\big(\mathbf{A}x, y^\delta\big) + \alpha \, \psi\big(\boldsymbol{\Phi}(x)\big) .$ (1.2)

Here, $\mathcal{D} \colon Y \times Y \to [0, \infty]$ is a similarity measure, $\boldsymbol{\Phi}$ is a trained neural network, $\psi$ a functional and $\alpha > 0$ a regularization parameter. In [31] it is shown that, under suitable assumptions, NETT yields a convergent regularization method. This in particular includes provable stability guarantees and error estimates. Moreover, a training strategy has been proposed where $\boldsymbol{\Phi}$ is trained such that $x \mapsto \psi(\boldsymbol{\Phi}(x))$ favors artifact-free reconstructions over reconstructions with artifacts.

1.1. The augmented NETT

One of the main assumptions for the analysis of [31] is the coercivity of the regularizer $x \mapsto \psi(\boldsymbol{\Phi}(x))$, which requires special care in network design and training. In order to overcome this limitation, we propose an augmented form of the regularizer for which we are able to rigorously prove coercivity. More precisely, for fixed $c > 0$, we consider minimizers $x_\alpha^\delta$ of the augmented NETT functional

$\mathcal{T}_{\alpha, y^\delta}(x) = \mathcal{D}\big(\mathbf{A}x, y^\delta\big) + \alpha \Big( \psi\big(\mathbf{E}(x)\big) + \frac{c}{2} \big\| x - (\mathbf{D} \circ \mathbf{E})(x) \big\|^2 \Big) .$ (1.3)

Here, $\mathcal{D}$ is a similarity measure and $\mathbf{N} := \mathbf{D} \circ \mathbf{E}$ is an encoder–decoder network trained such that for any signal $x$ on a signal manifold we have $\mathbf{N}(x) \simeq x$ and that $\psi(\mathbf{E}(x))$ is small. We term this approach augmented NETT (aNETT) regularization. In this work we provide a mathematical convergence analysis for aNETT, present a novel modular training strategy and investigate its practical performance.
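To make the roles of the individual terms in (1.3) concrete, the following minimal Python sketch evaluates the aNETT functional for the common choice of a squared-norm similarity measure; the callables forward, encode, decode and psi are generic stand-ins for the forward operator, the trained networks and the complexity measure.

import numpy as np

def anett_functional(x, y_delta, forward, encode, decode, psi, alpha, c):
    """Evaluate the aNETT functional (1.3) with D(Ax, y) = 0.5 * ||Ax - y||^2."""
    data_fit = 0.5 * np.sum((forward(x) - y_delta) ** 2)  # similarity term D(Ax, y^delta)
    xi = encode(x)                                        # encoder coefficients E(x)
    gap = x - decode(xi)                                  # deviation x - (D o E)(x)
    regularizer = psi(xi) + 0.5 * c * np.sum(gap ** 2)    # aNETT regularizer
    return data_fit + alpha * regularizer

With psi = lambda xi: np.sum(np.abs(xi)), this corresponds to the sparse aNETT instance discussed below.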

The term $\psi(\mathbf{E}(x))$ implements learned prior knowledge on the encoder coefficients, while smallness of $\|x - (\mathbf{D} \circ \mathbf{E})(x)\|$ forces $x$ to be close to the signal manifold. The latter term also guarantees the coercivity of (1.3). In the original NETT version (1.2), coercivity of the regularizer requires coercivity conditions on the network involved. Indeed, in the numerical experiments, the authors of [31] observed a semi-convergence behaviour when minimizing (1.2), so early stopping of the iterative minimization scheme was used as additional regularization. We attribute this semi-convergence behavior to a potential non-coercivity of the regularization term. In the present paper we address this issue systematically by augmenting the NETT functional, which guarantees coercivity and allows a more stable minimization. Coercivity is also one main ingredient for the mathematical convergence analysis.

An interesting practical instance of aNETT takes $\psi$ as a weighted $q$-norm enforcing sparsity of the encoding coefficients [34, 35]. An important example of a similarity measure is given by the squared norm distance, which from a statistical viewpoint can be motivated by a Gaussian white noise model. General similarity measures allow us to adapt to different noise models, which can be more appropriate for certain problems.

1.2. Main contributions

The contributions of this paper are threefold. As described in more detail below, we introduce the aNETT framework, mathematically analyze its convergence, and propose a practical implementation that is applied to tomographic limited data problems.

  • The first contribution is to introduce the structure of the aNETT regularizer $\mathcal{R}(x) = \psi(\mathbf{E}(x)) + \frac{c}{2}\|x - (\mathbf{D} \circ \mathbf{E})(x)\|^2$. A similar approach has been studied in [36] for a linear encoder $\mathbf{E}$. However, in this paper we do not assume that the image $x$ consists of two components u and v but rather assume that there is some transformation $\mathbf{E}$ under which the signal $x$ has some desired property such as, for example, sparsity. The term $\psi(\mathbf{E}(x))$ enforces regularity of the analysis coefficients, which is an ingredient in most existing variational regularization techniques; this includes, for example, sparse regularization in frames or dictionaries, regularization with Sobolev norms and total variation regularization. On the other hand, the augmented term $\frac{c}{2}\|x - (\mathbf{D} \circ \mathbf{E})(x)\|^2$ penalizes the distance to the signal manifold. It is the combination of these two terms that results in a stable reconstruction scheme without the need for strong assumptions on the involved networks.
  • The second main contribution is the theoretical analysis of aNETT (1.3) in the context of regularization theory. We investigate the case where the image domain of the encoder is given by $\ell^2(\Lambda)$ for some countable set Λ, and $\psi \colon \ell^2(\Lambda) \to [0, \infty]$ is a coercive functional measuring the complexity of the encoder coefficients. The presented analysis is in the spirit of the analysis of NETT given in [31]. However, as opposed to NETT, the required coercivity property is derived naturally for the class of considered regularizers. This supports the use of the regularizer $\mathcal{R}$ also from a theoretical side. Moreover, the convergence rates results presented here use assumptions significantly different from [31]. While we present our analysis for the transform domain $\ell^2(\Lambda)$, we could replace the encoder space by a general Hilbert or Banach space.
  • As a third main contribution we propose a modular strategy for training $\mathbf{N} = \mathbf{D} \circ \mathbf{E}$ together with a possible network architecture. First, independently of the given inverse problem, we train a $\psi$-penalized autoencoder that learns to represent signals from the training data with low complexity. In the second step, we train a task-specific network which can be adapted to the specific inverse problem at hand. In our numerical experiments, we empirically found this modular training strategy to be superior to directly adapting the autoencoder to the inverse problem. For the $\psi$-penalized autoencoder, we train the modified version, described in [37], of the tight frame U-Net of [38] in such a way that $\psi$ poses additional constraints on the autoencoder during the training process.

1.3. Outline

In section 2 we present the mathematical convergence analysis of aNETT. In particular, as an auxiliary result, we establish the coercivity of the regularization term. Moreover, we prove stability and derive convergence rates. Section 3 presents practical aspects of aNETT. We propose a possible architecture and training strategy for the networks, and a possible ADMM based scheme to obtain minimizers of the aNETT functional. In section 4, we present reconstruction results and compare aNETT with other deep learning based reconstruction methods. The paper concludes with a short summary and discussion. Parts of this paper were presented at the ISBI 2020 conference and published in the corresponding proceedings [39]. As opposed to the proceedings, this article treats a general similarity measure $\mathcal{D}$ and considers a general complexity measure $\psi$. Further, all proofs and all numerical results presented in this paper are new.

2. Convergence analysis

In this section we prove the stability and convergence of aNETT as a regularization method. Moreover, we derive convergence rates in the form of quantitative error estimates between exact solutions for noise-free data and aNETT regularized solutions for noisy data. To this end we make the assumption that we can compute global minimizers of the functional (1.3) and analyze the properties of these solutions. This is a common assumption in variational regularization approaches and is adopted in this work. Extending the analysis to consider only local minima is considerably more difficult and is out of the scope of this paper.

2.1. Assumptions and coercivity results

For our convergence analysis we make use of the following assumptions on the underlying spaces and operators involved.

Condition 2.1.

  • (A1) $X$ and $Y$ are Hilbert spaces.
  • (A2) The encoder image space is $\ell^2(\Lambda)$ for a countable index set Λ.
  • (A3) $\mathbf{A} \colon X \to Y$ is weakly sequentially continuous.
  • (A4) $\mathbf{E} \colon X \to \ell^2(\Lambda)$ is weakly sequentially continuous.
  • (A5) $\mathbf{D} \colon \ell^2(\Lambda) \to X$ is weakly sequentially continuous.
  • (A6) $\psi \colon \ell^2(\Lambda) \to [0, \infty]$ is coercive and weakly sequentially lower semi-continuous.

We set $\mathbf{N} := \mathbf{D} \circ \mathbf{E}$ and, for given $c > 0$, define

$\mathcal{R}(x) := \psi\big(\mathbf{E}(x)\big) + \frac{c}{2} \, \big\| x - \mathbf{N}(x) \big\|^2 ,$ (2.1)

which we refer to as the aNETT (or augmented NETT) regularizer.

According to (A4)–(A6), the aNETT regularizer is weakly sequentially lower semi-continuous. As a main ingredient for our analysis we next prove its coercivity.

Coercivity of the aNETT regularizer.

Theorem 2.2. If Condition 2.1 holds, then the regularizer $\mathcal{R}$ as defined in (2.1) with $c > 0$ is coercive.

Proof. Let $(x_n)_{n \in \mathbb{N}}$ be some sequence in $X$ such that $(\mathcal{R}(x_n))_{n \in \mathbb{N}}$ is bounded. Then by the definition of $\mathcal{R}$ it follows that $(\psi(\mathbf{E}(x_n)))_{n \in \mathbb{N}}$ is bounded, and by the coercivity of $\psi$ we have that $(\mathbf{E}(x_n))_{n \in \mathbb{N}}$ is also bounded. By assumption, $\mathbf{D}$ is weakly sequentially continuous and thus $(\mathbf{N}(x_n))_{n \in \mathbb{N}} = (\mathbf{D}(\mathbf{E}(x_n)))_{n \in \mathbb{N}}$ must be bounded. Using that $\frac{c}{2}\|x_n - \mathbf{N}(x_n)\|^2 \le \mathcal{R}(x_n)$, we obtain the inequality $\|x_n\| \le \|x_n - \mathbf{N}(x_n)\| + \|\mathbf{N}(x_n)\| \le \sqrt{2 \mathcal{R}(x_n)/c} + \|\mathbf{N}(x_n)\|$. This shows that $(x_n)_{n \in \mathbb{N}}$ is bounded and therefore that $\mathcal{R}$ is coercive. □

Sparse aNETT regularizer.

Example 2.3. To obtain a sparsity promoting regularizer we can choose $\psi(\xi) = \sum_{\lambda \in \Lambda} w_\lambda |\xi_\lambda|^q$, where $q \in [1, 2]$ and $w_\lambda \ge w_0$ for some $w_0 > 0$. Since $q \le 2$ we have $\|\xi\|_{\ell^2} \le \|\xi\|_{\ell^q}$ and hence $\psi(\xi) \ge w_0 \|\xi\|_{\ell^q}^q \ge w_0 \|\xi\|_{\ell^2}^q$, which shows that $\psi$ is coercive. As a sum of weakly sequentially lower semi-continuous functionals it is also weakly sequentially lower semi-continuous [35]. Therefore Condition (A6) is satisfied for the weighted $q$-norm. Together with theorem 2.2, we conclude that the resulting weighted sparse aNETT regularizer $\mathcal{R}(x) = \sum_{\lambda \in \Lambda} w_\lambda |\mathbf{E}(x)_\lambda|^q + \frac{c}{2}\|x - \mathbf{N}(x)\|^2$ is a coercive and weakly sequentially lower semi-continuous functional.
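As a quick numerical illustration of example 2.3, the following sketch implements the weighted q-norm and checks the coercivity bound $\psi(\xi) \ge w_0 \|\xi\|_{\ell^2}^q$ on random data; the concrete weights and dimensions are arbitrary choices.

import numpy as np

def weighted_q_norm(xi, w, q):
    """Complexity measure psi(xi) = sum_lambda w_lambda * |xi_lambda|^q from example 2.3."""
    return np.sum(w * np.abs(xi) ** q)

rng = np.random.default_rng(0)
xi = rng.normal(size=1000)
w0 = 0.5
w = w0 + rng.random(1000)     # weights bounded from below by w0
for q in (1.0, 1.5, 2.0):     # q in [1, 2]
    # psi(xi) >= w0 * ||xi||_q^q >= w0 * ||xi||_2^q since ||.||_2 <= ||.||_q for q <= 2
    assert weighted_q_norm(xi, w, q) >= w0 * np.linalg.norm(xi) ** q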

For the further analysis we will make the following assumptions regarding the similarity measure $\mathcal{D} \colon Y \times Y \to [0, \infty]$.

Similarity measure.

Condition 2.4

  • (B1) $\forall y_1, y_2 \in Y \colon \mathcal{D}(y_1, y_2) = 0 \Leftrightarrow y_1 = y_2$.
  • (B2) $\mathcal{D}$ is sequentially lower semi-continuous with respect to the weak topology in the first and the norm topology in the second argument.
  • (B3) $\mathcal{D}(y_n, y) \to 0 \Rightarrow \|y_n - y\| \to 0$ (as $n \to \infty$).
  • (B4) $\|y_n - y\| \to 0 \Rightarrow \mathcal{D}(z, y_n) \to \mathcal{D}(z, y)$ as $n \to \infty$ (for all $z \in Y$).
  • (B5) $\forall y \in Y \ \exists x \in X$ with $\mathcal{D}(\mathbf{A}x, y) + \mathcal{R}(x) < \infty$.

While (B1)–(B4) restrict the choice of the similarity measure, (B5) is a technical assumption involving the forward operator, the regularizer and the similarity measure, that is required for the existence of minimizers. For a more detailed discussion of these assumptions we refer to [40].

Similarity measures using the norm.

Example 2.5. A classical example of a similarity measure satisfying (B1)–(B4) is given by $\mathcal{D}(y_1, y_2) = \|y_1 - y_2\|^p$ for some $p \ge 1$ and, more generally, by $\mathcal{D}(y_1, y_2) = h(\|y_1 - y_2\|)$, where $h \colon [0, \infty) \to [0, \infty)$ is a continuous and monotonically increasing function that satisfies $h(t) = 0 \Leftrightarrow t = 0$.
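In code, two such similarity measures read as follows: the p-th power of the norm distance from example 2.5 and, anticipating the low-dose experiments in section 4, a generalized Kullback–Leibler divergence for nonnegative data (whose exact normalization here is our assumption).

import numpy as np

def sim_norm_power(y1, y2, p=2.0):
    """Similarity measure D(y1, y2) = ||y1 - y2||^p from example 2.5."""
    return np.linalg.norm(y1 - y2) ** p

def sim_kl(y1, y2, eps=1e-12):
    """Generalized Kullback-Leibler divergence sum(y1 * log(y1/y2) - y1 + y2),
    a natural discrepancy measure for Poisson distributed data."""
    y1 = np.maximum(np.asarray(y1, dtype=float), eps)  # clip to avoid log(0)
    y2 = np.maximum(np.asarray(y2, dtype=float), eps)
    return float(np.sum(y1 * np.log(y1 / y2) - y1 + y2))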

Taking into account theorem 2.2, Conditions 2.1 and 2.4 imply that the aNETT functional $\mathcal{T}_{\alpha, y}$ defined by (1.3), (2.1) is proper, coercive and weakly sequentially lower semi-continuous. This in particular implies the existence of minimizers of $\mathcal{T}_{\alpha, y}$ for all data $y \in Y$ and regularization parameters $\alpha > 0$ (compare [2, 31]).

2.2. Stability

Next we prove the stability of minimizing the aNETT functional $\mathcal{T}_{\alpha, y}$ with respect to perturbations of the data $y$.

Stability.

Theorem 2.6. Let Conditions 2.1 and 2.4 hold, let $y \in Y$ and $\alpha > 0$. Moreover, let $(y_n)_{n \in \mathbb{N}}$ be a sequence of perturbed data with $\|y_n - y\| \to 0$ and consider minimizers $x_n \in \operatorname{arg\,min}_x \mathcal{T}_{\alpha, y_n}(x)$. Then the sequence $(x_n)_{n \in \mathbb{N}}$ has at least one weak accumulation point, and all weak accumulation points are minimizers of $\mathcal{T}_{\alpha, y}$. Moreover, for any weak accumulation point $x_+$ of $(x_n)_{n \in \mathbb{N}}$ and any subsequence $(x_{n(k)})_{k \in \mathbb{N}}$ with $x_{n(k)} \rightharpoonup x_+$ we have $\mathcal{R}(x_{n(k)}) \to \mathcal{R}(x_+)$.

Proof. Let $z \in X$ be such that $\mathcal{T}_{\alpha, y}(z) < \infty$. By the definition of $x_n$ we have $\mathcal{T}_{\alpha, y_n}(x_n) \le \mathcal{T}_{\alpha, y_n}(z)$. Since by assumption $\|y_n - y\| \to 0$ and (B4) holds, we have $\mathcal{D}(\mathbf{A}z, y_n) \to \mathcal{D}(\mathbf{A}z, y)$. This implies that $\mathcal{T}_{\alpha, y_n}(x_n)$ is bounded by some positive constant $C$ for sufficiently large n. By the definition of $\mathcal{T}_{\alpha, y_n}$ we have $\alpha \mathcal{R}(x_n) \le \mathcal{T}_{\alpha, y_n}(x_n) \le C$. Since $\mathcal{R}$ is coercive it follows that $(x_n)_{n \in \mathbb{N}}$ is a bounded sequence and hence it has a weakly convergent subsequence.

Let $(x_{n(k)})_{k \in \mathbb{N}}$ be a weakly convergent subsequence of $(x_n)_{n \in \mathbb{N}}$ and denote its limit by $x_+$. By the lower semi-continuity we get $\mathcal{D}(\mathbf{A}x_+, y) \le \liminf_{k \to \infty} \mathcal{D}(\mathbf{A}x_{n(k)}, y_{n(k)})$ and $\mathcal{R}(x_+) \le \liminf_{k \to \infty} \mathcal{R}(x_{n(k)})$. Thus for all $z \in X$ with $\mathcal{T}_{\alpha, y}(z) < \infty$ we have

$\mathcal{T}_{\alpha, y}(x_+) \le \liminf_{k \to \infty} \mathcal{T}_{\alpha, y_{n(k)}}(x_{n(k)}) \le \limsup_{k \to \infty} \mathcal{T}_{\alpha, y_{n(k)}}(x_{n(k)}) \le \lim_{k \to \infty} \mathcal{T}_{\alpha, y_{n(k)}}(z) = \mathcal{T}_{\alpha, y}(z) .$

This shows that $x_+$ is a minimizer of $\mathcal{T}_{\alpha, y}$ and, by considering $z = x_+$ in the above displayed equation, that $\mathcal{T}_{\alpha, y_{n(k)}}(x_{n(k)}) \to \mathcal{T}_{\alpha, y}(x_+)$. Moreover, we have

$\alpha \limsup_{k \to \infty} \mathcal{R}(x_{n(k)}) = \limsup_{k \to \infty} \Big( \mathcal{T}_{\alpha, y_{n(k)}}(x_{n(k)}) - \mathcal{D}\big(\mathbf{A}x_{n(k)}, y_{n(k)}\big) \Big) \le \mathcal{T}_{\alpha, y}(x_+) - \mathcal{D}\big(\mathbf{A}x_+, y\big) = \alpha \mathcal{R}(x_+) .$

This shows $\mathcal{R}(x_{n(k)}) \to \mathcal{R}(x_+)$ as $k \to \infty$ and concludes the proof. □

In the following we say that the similarity measure $\mathcal{D}$ satisfies the quasi triangle-inequality if there is some constant $C_{\mathcal{D}} \ge 1$ such that

$\forall y_1, y_2, y_3 \in Y \colon \quad \mathcal{D}(y_1, y_2) \le C_{\mathcal{D}} \big( \mathcal{D}(y_1, y_3) + \mathcal{D}(y_3, y_2) \big) .$ (2.2)

While this property is essential for deriving convergence rates results, we will show below that it is not enough to guarantee stability of minimizing the augmented NETT functional in the sense of theorem 2.6. Note that [31] assumes the quasi triangle-inequality (2.2) instead of Condition (B4). The following example shows that (2.2) is not sufficient for the stability result of theorem 2.6 to hold and therefore Condition (B4) has to be added to the list of assumptions in [31] required for stability.

Instability in the absence of Condition (B4).

Example 2.7. Consider the similarity measure $\mathcal{D} \colon Y \times Y \to [0, \infty)$ defined by

$\mathcal{D}(y_1, y_2) := \varphi\big( \|y_1 - y_2\| \big) ,$ (2.3)

where $\varphi \colon [0, \infty) \to [0, \infty)$ is monotonically increasing and lower semi-continuous with $\varphi(0) = 0$, coincides with the identity below a threshold $t_0 > 0$ and jumps upwards above $t_0$; a concrete choice of this form is $\varphi(t) := t$ if $t \le t_0$ and a strictly larger multiple of $t$ otherwise. Moreover, choose $X = Y$, let $\mathbf{A}$ be the identity operator and suppose the regularizer takes the form of a squared norm distance.

  • The similarity measure defined in (2.3) satisfies (B1)–(B3): Convergence with respect to $\mathcal{D}$ is equivalent to convergence in norm, which implies that (B3) is satisfied. Moreover, we have $\mathcal{D}(y_1, y_2) = 0 \Leftrightarrow \|y_1 - y_2\| = 0 \Leftrightarrow y_1 = y_2$, which is (B1). Consider sequences $y_n \rightharpoonup y$ and $z_n \to z$. The sequential lower semi-continuity stated in (B2) can be derived by separately looking at the cases $\|y - z\| \le t_0$ and $\|y - z\| > t_0$. In the first case, $\varphi(\|y - z\|) = \|y - z\|$ together with $\varphi(t) \ge t$ yields the claim. In the second case, we have $\|y_n - z_n\| > t_0$ for n sufficiently large along any subsequence realizing the limes inferior. In both cases, the lower semi-continuity property follows from the weak lower semi-continuity property of the norm.
  • The similarity measure defined by (2.3) does not satisfy (B4): To see this, fix $z \in Y$ with $\|z - y\| = t_0$ and define $y_n := y + \epsilon_n (y - z)/t_0$, where $(\epsilon_n)_{n \in \mathbb{N}}$ is taken as a non-increasing sequence converging to zero. We have $\|y_n - y\| = \epsilon_n \to 0$ and hence also $\mathcal{D}(y_n, y) \to 0$ as $n \to \infty$. On the other hand, $\|z - y_n\| = t_0 + \epsilon_n > t_0$ and therefore $\mathcal{D}(z, y_n) \to \lim_{t \downarrow t_0} \varphi(t) > \varphi(t_0) = \mathcal{D}(z, y)$ as $n \to \infty$. In particular, $\mathcal{D}(z, y_n)$ does not converge to $\mathcal{D}(z, y)$ and therefore (B4) does not hold. In summary, all requirements for theorem 2.6 are satisfied, except for the continuity assumption (B4).
  • We have $\varphi(s + t) \le 2 \big( \varphi(s) + \varphi(t) \big)$, which implies that the similarity measure satisfies the quasi triangle-inequality (2.2). However, this is not sufficient for stable reconstruction in the sense of theorem 2.6. Exploiting the jump of $\varphi$ at $t_0$, one can choose data $y$, perturbed data $y_n$ with $\|y_n - y\| \to 0$ and a regularization parameter $\alpha > 0$ such that the minimizers $x_n$ of $\mathcal{T}_{\alpha, y_n}$ converge to an element that is clearly different from the minimizer of $\mathcal{T}_{\alpha, y}$. In particular, minimizing $\mathcal{T}_{\alpha, y}$ does not stably depend on the data $y$. Theorem 2.6 states that stability holds if (B4) is satisfied.

While the above example may seem somewhat artificial, it shows that one has to be careful when choosing the similarity measure in order to obtain a stable reconstruction scheme.

2.3. Convergence

In this subsection we consider the limit process as the noise-level δ tends to 0. Assuming that $\mathcal{D}(y, y^\delta) \le \delta$, we would expect the regularized solutions to converge to some solution of the equation $\mathbf{A}x = y$. This raises the obvious question whether this solution has any additional properties. In fact, we prove that the minimizers of the aNETT functional for noisy data converge to such a special kind of solution, namely solutions which minimize $\mathcal{R}$ among all possible solutions. For that purpose, here and below we use the following notation.

$\mathcal{R}$-minimizing solutions.

Definition 2.8. For $y \in Y$, we call an element $x_+ \in X$ an $\mathcal{R}$-minimizing solution of the equation $\mathbf{A}x = y$ if

$\mathbf{A}x_+ = y \quad \text{and} \quad \mathcal{R}(x_+) = \min \big\{ \mathcal{R}(x) \mid x \in X \wedge \mathbf{A}x = y \big\} .$

An $\mathcal{R}$-minimizing solution always exists provided that the data satisfies $y \in \mathbf{A}(\operatorname{dom} \mathcal{R})$, which means that the equation $\mathbf{A}x = y$ has at least one solution with a finite value of $\mathcal{R}$. To see this, consider a sequence of solutions $(x_n)_{n \in \mathbb{N}}$ with $\mathcal{R}(x_n) \to \inf \{ \mathcal{R}(x) \mid \mathbf{A}x = y \}$. Since $\mathcal{R}$ is coercive there exists a weakly convergent subsequence $(x_{n(k)})_{k \in \mathbb{N}}$ with weak limit $x_+$. Using the weak sequential continuity of $\mathbf{A}$ and the weak sequential lower semi-continuity of $\mathcal{R}$ one concludes that $x_+$ is an $\mathcal{R}$-minimizing solution. We first show weak convergence.

Weak convergence of aNETT.

Theorem 2.9. Suppose Conditions 2.1 and 2.4 are satisfied. Let $y \in \mathbf{A}(\operatorname{dom} \mathcal{R})$, let $(\delta_n)_{n \in \mathbb{N}}$ be a sequence of positive noise levels with $\delta_n \to 0$ and let $(y_n)_{n \in \mathbb{N}}$ satisfy $\mathcal{D}(y, y_n) \le \delta_n$. Choose $(\alpha_n)_{n \in \mathbb{N}}$ such that $\alpha_n \to 0$ and $\delta_n / \alpha_n \to 0$, and let $x_n \in \operatorname{arg\,min}_x \mathcal{T}_{\alpha_n, y_n}(x)$. Then the following hold:

  • (a) $(x_n)_{n \in \mathbb{N}}$ has at least one weakly convergent subsequence.
  • (b) All weak accumulation points of $(x_n)_{n \in \mathbb{N}}$ are $\mathcal{R}$-minimizing solutions of $\mathbf{A}x = y$.
  • (c) For every weakly convergent subsequence $(x_{n(k)})_{k \in \mathbb{N}}$ with weak limit $\bar{x}$ it holds that $\mathcal{R}(x_{n(k)}) \to \mathcal{R}(\bar{x})$.
  • (d) If the $\mathcal{R}$-minimizing solution $x_+$ is unique, then $x_n \rightharpoonup x_+$.

Proof. (a): Because $y \in \mathbf{A}(\operatorname{dom} \mathcal{R})$, there exists an $\mathcal{R}$-minimizing solution of the equation $\mathbf{A}x = y$, which we denote by $x_+$. Because $x_n$ minimizes $\mathcal{T}_{\alpha_n, y_n}$ we have

$\mathcal{D}\big(\mathbf{A}x_n, y_n\big) + \alpha_n \mathcal{R}(x_n) \le \mathcal{D}\big(\mathbf{A}x_+, y_n\big) + \alpha_n \mathcal{R}(x_+) \le \delta_n + \alpha_n \mathcal{R}(x_+) .$ (2.4)

Because $\delta_n / \alpha_n \to 0$, this shows that $(\mathcal{R}(x_n))_{n \in \mathbb{N}}$ is bounded. Due to the coercivity of the aNETT regularizer (see theorem 2.2), this implies that $(x_n)_{n \in \mathbb{N}}$ has a weakly convergent subsequence.

(b), (c): Let $(x_{n(k)})_{k \in \mathbb{N}}$ be a weakly convergent subsequence of $(x_n)_{n \in \mathbb{N}}$ with limit $\bar{x}$. From the weak lower semi-continuity we get $\mathcal{D}(\mathbf{A}\bar{x}, y) \le \liminf_{k \to \infty} \mathcal{D}(\mathbf{A}x_{n(k)}, y_{n(k)}) \le \liminf_{k \to \infty} \big( \delta_{n(k)} + \alpha_{n(k)} \mathcal{R}(x_+) \big) = 0$, which shows that $\bar{x}$ is a solution of $\mathbf{A}x = y$. Moreover,

$\mathcal{R}(\bar{x}) \le \liminf_{k \to \infty} \mathcal{R}(x_{n(k)}) \le \limsup_{k \to \infty} \mathcal{R}(x_{n(k)}) \le \limsup_{k \to \infty} \Big( \frac{\delta_{n(k)}}{\alpha_{n(k)}} + \mathcal{R}(x_+) \Big) = \mathcal{R}(x_+) ,$

where for the second last inequality we used (2.4) and for the last equality we used that $\delta_{n(k)} / \alpha_{n(k)} \to 0$. Therefore, $\bar{x}$ is an $\mathcal{R}$-minimizing solution of the equation $\mathbf{A}x = y$. In a similar manner we derive $\limsup_{k \to \infty} \mathcal{R}(x_{n(k)}) \le \mathcal{R}(\bar{x})$, which shows $\mathcal{R}(x_{n(k)}) \to \mathcal{R}(\bar{x})$.

(d): If $\mathbf{A}x = y$ has a unique $\mathcal{R}$-minimizing solution $x_+$, then every subsequence of $(x_n)_{n \in \mathbb{N}}$ has itself a subsequence weakly converging to $x_+$, which implies that $(x_n)_{n \in \mathbb{N}}$ weakly converges to the $\mathcal{R}$-minimizing solution. □

Next we derive strong convergence of the regularized solutions. To this end we recall the absolute Bregman distance, the modulus of total nonlinearity and the notion of total nonlinearity, as defined in [31].

Absolute Bregman distance.

Definition 2.10. Let $\mathcal{F} \colon X \to [0, \infty]$ be Gâteaux differentiable at $x \in X$. The absolute Bregman distance $B_{\mathcal{F}}(\cdot, x) \colon X \to [0, \infty]$ at $x$ with respect to $\mathcal{F}$ is defined by

$B_{\mathcal{F}}(\tilde{x}, x) := \big| \mathcal{F}(\tilde{x}) - \mathcal{F}(x) - \mathcal{F}'(x)(\tilde{x} - x) \big| .$

Here and below $\mathcal{F}'(x)$ denotes the Gâteaux derivative of $\mathcal{F}$ at $x$.

Modulus of total nonlinearity and total nonlinearity.

Definition 2.11. Let $\mathcal{F} \colon X \to [0, \infty]$ be Gâteaux differentiable at $x \in X$. We define the modulus of total nonlinearity of $\mathcal{F}$ at $x$ as $\nu(x, t) := \inf \{ B_{\mathcal{F}}(\tilde{x}, x) \mid \tilde{x} \in X \wedge \|\tilde{x} - x\| = t \}$ for $t > 0$. We call $\mathcal{F}$ totally nonlinear at $x$ if $\nu(x, t) > 0$ for all $t > 0$.

Using these definitions we get the following convergence result in the norm topology.

Strong convergence of aNETT.

Theorem 2.12. Let Conditions 2.1 and 2.4 hold, let $y \in \mathbf{A}(\operatorname{dom} \mathcal{R})$ and let $\mathcal{R}$ be totally nonlinear at all $\mathcal{R}$-minimizing solutions of $\mathbf{A}x = y$. Let $(x_n)_{n \in \mathbb{N}}$ be as in theorem 2.9. Then there is a subsequence $(x_{n(k)})_{k \in \mathbb{N}}$ which converges in norm to an $\mathcal{R}$-minimizing solution $x_+$ of $\mathbf{A}x = y$. If the $\mathcal{R}$-minimizing solution is unique, then $x_n \to x_+$ in norm as $n \to \infty$.

Proof. In [31, proposition 2.9] it is shown that the total nonlinearity of $\mathcal{R}$ at $x_+$ implies that for every bounded sequence $(z_n)_{n \in \mathbb{N}}$ with $B_{\mathcal{R}}(z_n, x_+) \to 0$ it holds that $\|z_n - x_+\| \to 0$. Theorem 2.9 gives us a weakly converging subsequence $(x_{n(k)})_{k \in \mathbb{N}}$ of $(x_n)_{n \in \mathbb{N}}$ with weak limit $x_+$ and $\mathcal{R}(x_{n(k)}) \to \mathcal{R}(x_+)$. By the definition of the absolute Bregman distance it follows that $B_{\mathcal{R}}(x_{n(k)}, x_+) \to 0$, because the derivative term $\mathcal{R}'(x_+)(x_{n(k)} - x_+)$ vanishes in the limit due to the weak convergence $x_{n(k)} \rightharpoonup x_+$. Together with [31, proposition 2.9] this yields $\|x_{n(k)} - x_+\| \to 0$. If the $\mathcal{R}$-minimizing solution of $\mathbf{A}x = y$ is unique, then every subsequence has a subsequence converging to $x_+$ and hence the claim follows. □

2.4. Convergence rates

We will now prove convergence rates by deriving quantitative estimates for the absolute Bregman distance between $\mathcal{R}$-minimizing solutions for exact data and regularized solutions for noisy data. The convergence rates will be derived under the additional assumption that $\mathcal{D}$ satisfies the quasi triangle-inequality (2.2).

Convergence rates for aNETT.

Proposition 2.13. Let the assumptions of theorem 2.12 be satisfied and suppose that $\mathcal{D}$ satisfies the quasi triangle-inequality (2.2) for some $C_{\mathcal{D}} \ge 1$. Let $x_+$ be an $\mathcal{R}$-minimizing solution of $\mathbf{A}x = y$ such that $\mathcal{R}$ is Gâteaux differentiable at $x_+$ and assume there exist $\epsilon, \beta > 0$ with

$\forall x \in X \text{ with } \|x - x_+\| \le \epsilon \colon \quad B_{\mathcal{R}}(x, x_+) \le \mathcal{R}(x) - \mathcal{R}(x_+) + \beta \, \mathcal{D}\big(\mathbf{A}x, y\big) .$ (2.5)

For any $\delta > 0$, let $y^\delta \in Y$ be noisy data satisfying $\mathcal{D}(y, y^\delta) \le \delta$, $\mathcal{D}(y^\delta, y) \le \delta$, and write $x_\alpha^\delta \in \operatorname{arg\,min}_x \mathcal{T}_{\alpha, y^\delta}(x)$. Then the following hold:

  • (a) For sufficiently small α, it holds that $B_{\mathcal{R}}(x_\alpha^\delta, x_+) \le \delta / \alpha + \beta C_{\mathcal{D}} \big( 2\delta + \alpha \mathcal{R}(x_+) \big)$.
  • (b) If $\alpha \sim \sqrt{\delta}$, then $B_{\mathcal{R}}(x_\alpha^\delta, x_+) = \mathcal{O}(\sqrt{\delta})$ as $\delta \to 0$.

Proof. By the definition of $x_\alpha^\delta$ we have $\mathcal{T}_{\alpha, y^\delta}(x_\alpha^\delta) \le \mathcal{T}_{\alpha, y^\delta}(x_+) \le \delta + \alpha \mathcal{R}(x_+)$; in particular, $\mathcal{R}(x_\alpha^\delta) - \mathcal{R}(x_+) \le \delta / \alpha$ and $\mathcal{D}(\mathbf{A}x_\alpha^\delta, y^\delta) \le \delta + \alpha \mathcal{R}(x_+)$. By theorem 2.12, for sufficiently small α we can assume that $\|x_\alpha^\delta - x_+\| \le \epsilon$, so that (2.5) is applicable. Together with the quasi triangle-inequality (2.2) this implies

$B_{\mathcal{R}}(x_\alpha^\delta, x_+) \le \frac{\delta}{\alpha} + \beta C_{\mathcal{D}} \Big( \mathcal{D}\big(\mathbf{A}x_\alpha^\delta, y^\delta\big) + \mathcal{D}\big(y^\delta, y\big) \Big) \le \frac{\delta}{\alpha} + \beta C_{\mathcal{D}} \big( 2\delta + \alpha \mathcal{R}(x_+) \big) ,$

which shows (a). For the parameter choice $\alpha \sim \sqrt{\delta}$ the two competing terms $\delta / \alpha$ and $\beta C_{\mathcal{D}} \, \alpha \mathcal{R}(x_+)$ are balanced; by the inequality of arithmetic and geometric means $\sqrt{ab} \le (a + b)/2$, applied with $a = \delta/\alpha$ and $b = \beta C_{\mathcal{D}} \, \alpha \mathcal{R}(x_+)$, this choice is optimal up to a constant, and all terms on the right hand side are of order $\sqrt{\delta}$. Item (b) is thus an immediate consequence of (a). □

The following result is our main convergence rates result. It is similar to [31, theorem 3.1], but uses different assumptions.

Convergence rates for finite rank operators.

Theorem 2.14. Let the assumptions of theorem 2.12 be satisfied, take $\mathcal{D}(y_1, y_2) = \|y_1 - y_2\|^2$, assume $\mathbf{A}$ is linear with finite dimensional range and that $\mathcal{R}$ is Lipschitz continuous and Gâteaux differentiable. For any $\delta > 0$, let $y^\delta \in Y$ be noisy data satisfying $\|y - y^\delta\| \le \delta$ and write $x_\alpha^\delta \in \operatorname{arg\,min}_x \mathcal{T}_{\alpha, y^\delta}(x)$. Then for the parameter choice $\alpha \sim \delta$ we have the convergence rates result $B_{\mathcal{R}}(x_\alpha^\delta, x_+) = \mathcal{O}(\sqrt{\delta})$ as $\delta \to 0$.

Proof. According to proposition 2.13, it is sufficient to show that (2.5) holds with $\|\mathbf{A}x - y\| = \sqrt{\mathcal{D}(\mathbf{A}x, y)}$ in place of $\mathcal{D}(\mathbf{A}x, y)$; inspecting the proof of proposition 2.13 with this modified estimate yields $B_{\mathcal{R}}(x_\alpha^\delta, x_+) = \mathcal{O}(\delta^2/\alpha + \delta + \sqrt{\alpha}) = \mathcal{O}(\sqrt{\delta})$ for the parameter choice $\alpha \sim \delta$. For that purpose, let $P$ denote the orthogonal projection onto the null-space $\ker(\mathbf{A})$ and let L be a Lipschitz constant of $\mathcal{R}$. Since $\mathbf{A}$ restricted to $\ker(\mathbf{A})^\perp$ is injective with finite dimensional range, we can choose a constant $\tilde{C} > 0$ such that $\forall x \in X \colon \|(\operatorname{Id} - P)x\| \le \tilde{C} \|\mathbf{A}x\|$.

We first show the estimates

$\forall x \in X \colon \quad \mathcal{R}(x_+) - \mathcal{R}(x) \le L \tilde{C} \, \|\mathbf{A}x - \mathbf{A}x_+\| ,$ (2.6)

$\forall x \in X \colon \quad \big| \mathcal{R}'(x_+)(x - x_+) \big| \le L \tilde{C} \, \|\mathbf{A}x - \mathbf{A}x_+\| .$ (2.7)

To that end, let $x \in X$ and write $\tilde{x} := Px + (\operatorname{Id} - P)x_+$. Then $\mathbf{A}\tilde{x} = \mathbf{A}x_+ = y$. Since $x_+$ is an $\mathcal{R}$-minimizing solution, we have $\mathcal{R}(x_+) \le \mathcal{R}(\tilde{x})$. Since $\tilde{x} - x = (\operatorname{Id} - P)(x_+ - x)$, we have $|\mathcal{R}(\tilde{x}) - \mathcal{R}(x)| \le L \|(\operatorname{Id} - P)(x_+ - x)\| \le L \tilde{C} \|\mathbf{A}x - \mathbf{A}x_+\|$. The last two estimates prove (2.6). Because $x_+$ is an $\mathcal{R}$-minimizing solution, we have $\mathcal{R}'(x_+)(h) = 0$ whenever $\mathbf{A}h = 0$. On the other hand, using that $\mathcal{R}$ is Gâteaux differentiable with $|\mathcal{R}'(x_+)(v)| \le L \|v\|$ and that $\mathbf{A}$ has finite rank, shows $|\mathcal{R}'(x_+)(x - x_+)| = |\mathcal{R}'(x_+)\big((\operatorname{Id} - P)(x - x_+)\big)| \le L \tilde{C} \|\mathbf{A}x - \mathbf{A}x_+\|$ for all $x \in X$. This proves (2.7).

Inequality (2.6) implies $|\mathcal{R}(x) - \mathcal{R}(x_+)| \le \mathcal{R}(x) - \mathcal{R}(x_+) + 2 L \tilde{C} \|\mathbf{A}x - \mathbf{A}x_+\|$. Together with (2.7) this yields

$B_{\mathcal{R}}(x, x_+) \le \big| \mathcal{R}(x) - \mathcal{R}(x_+) \big| + \big| \mathcal{R}'(x_+)(x - x_+) \big| \le \mathcal{R}(x) - \mathcal{R}(x_+) + 3 L \tilde{C} \, \|\mathbf{A}x - \mathbf{A}x_+\| ,$

which proves (2.5) with $\beta = 3 L \tilde{C}$ and $\|\mathbf{A}x - y\|$ in place of $\mathcal{D}(\mathbf{A}x, y)$. □

Note that the theoretical results stated remain valid if we replace $\psi \circ \mathbf{E}$ by a general coercive and weakly sequentially lower semi-continuous functional.

3. Practical aspects of aNETT

In this section we investigate practical aspects of aNETT. We present a possible network architecture together with a possible training strategy in the discrete setting. Further, we discuss minimization of aNETT using the ADMM algorithm. For the sake of clarity we restrict our discussion to the finite dimensional case where $X = \mathbb{R}^{N \times N}$ and the encoder space is $\mathbb{R}^\Lambda$ for a finite index set Λ.

3.1. Proposed modular aNETT training

To find a suitable network $\mathbf{N} = \mathbf{D} \circ \mathbf{E}$ defining the aNETT regularizer (2.1), we propose a modular data driven approach that comes in two separate steps. In a first step, we train a $\psi$-regularized denoising autoencoder $\mathbf{N}_0 = \mathbf{D}_0 \circ \mathbf{E}_0$ independent of the forward operator $\mathbf{A}$, whose purpose is to well represent elements of a training data set by low complexity encoder coefficients. In a second step, we train a task-specific network that increases the ability of the aNETT regularizer to distinguish between clean images and images containing problem specific artifacts.

Let $x_1, \ldots, x_M \in X$ denote the given set of artifact-free training phantoms.

  • $\psi$-regularized autoencoder:

First, an autoencoder $\mathbf{N}_0 = \mathbf{D}_0 \circ \mathbf{E}_0$ is trained such that $\mathbf{N}_0(x_i)$ is close to $x_i$ and that $\psi(\mathbf{E}_0(x_i))$ is small for the given training signals. For that purpose, let $(\mathbf{D}_\theta \circ \mathbf{E}_\theta)_{\theta \in \Theta}$ be a family of autoencoder networks, where $\mathbf{E}_\theta$ are encoder and $\mathbf{D}_\theta$ decoder networks, respectively.

To achieve that unperturbed images are sparsely represented by $\mathbf{E}_\theta$, whereas disrupted images are not, we apply the following training strategy. We randomly generate images $x_i + b_i \eta_i$, where $\eta_i$ is additive Gaussian white noise with a standard deviation proportional to the mean value of $x_i$, and $b_i \in \{0, 1\}$ is a binary random variable that takes each value with probability 0.5. For the numerical results below we use a standard deviation of 0.05 times the mean value of $x_i$. To select the particular autoencoder based on the training data, we consider the following training strategy

$\theta_0 \in \operatorname{arg\,min}_{\theta \in \Theta} \sum_{i=1}^{M} \big\| (\mathbf{D}_\theta \circ \mathbf{E}_\theta)(x_i + b_i \eta_i) - x_i \big\|_2^2 + \mu \, (1 - b_i) \, \psi\big( \mathbf{E}_\theta(x_i + b_i \eta_i) \big)$ (3.1)

and set $\mathbf{N}_0 := \mathbf{D}_{\theta_0} \circ \mathbf{E}_{\theta_0}$. Here $\mu > 0$ is a regularization parameter (a PyTorch sketch of this training step is given after this list).

Including perturbed signals $x_i + \eta_i$ in (3.1) increases the robustness of the $\psi$-regularized autoencoder. To enforce regularity of the encoder coefficients only for the noise-free images, the penalty $\psi(\mathbf{E}_\theta(\cdot))$ is only used for the noise-free inputs, reflected by the pre-factor $(1 - b_i)$. Using autoencoders, regularity for a signal class could also be achieved by means of dimensionality reduction techniques, where the encoder space is used as a bottleneck in the network architecture. However, in order to get a regularizer that is able to distinguish between perturbed and unperturbed signals, we take the encoder space to be of sufficiently high dimensionality.

  • Task-specific network:

Numerical simulations showed that the $\psi$-regularized autoencoder alone was not able to sufficiently well distinguish between artifact-free training phantoms and images containing problem specific artifacts. In order to address this issue, we compose the operator independent network with another network $\boldsymbol{\Phi}_\vartheta$ that is trained to distinguish between images with and without problem specific artifacts.

For that purpose, we consider randomly generated images $\tilde{x}_i$, where either $\tilde{x}_i = x_i$ or $\tilde{x}_i = \mathbf{A}^{\sharp}(\mathbf{A}x_i + \xi_i)$ with equal probability. Here $\mathbf{A}^{\sharp}$ is an approximate right inverse of $\mathbf{A}$ (for CT, for instance, the filtered back-projection) and $\xi_i$ are error terms. We choose a network architecture $(\boldsymbol{\Phi}_\vartheta)_{\vartheta \in \Theta'}$ and select $\boldsymbol{\Phi} := \boldsymbol{\Phi}_{\vartheta_0}$, where

$\vartheta_0 \in \operatorname{arg\,min}_{\vartheta \in \Theta'} \sum_{i=1}^{M} \big\| (\boldsymbol{\Phi}_\vartheta \circ \mathbf{N}_0)(\tilde{x}_i) - x_i \big\|_2^2 + \nu \, \mathcal{P}(\vartheta)$ (3.2)

for some regularization parameter $\nu > 0$ and a penalty $\mathcal{P}$ on the network parameters. In particular, the image residuals $\tilde{x}_i - x_i$ now depend on the specific inverse problem and we can consider them to consist of operator and training signal specific artifacts.

The above training procedure ensures that the network $\boldsymbol{\Phi}$ adapts to the inverse problem at hand as well as to the $\psi$-regularized autoencoder. Training the network $\boldsymbol{\Phi}$ independently of $\mathbf{N}_0$, or directly training the autoencoder to distinguish between images with and without problem specific artifacts, we empirically found to perform considerably worse.

The final autoencoder is then given as $\mathbf{N} = (\boldsymbol{\Phi} \circ \mathbf{D}_0) \circ \mathbf{E}_0$ with modular decoder $\mathbf{D} = \boldsymbol{\Phi} \circ \mathbf{D}_0$. For the numerical results we take $\mathbf{N}_0$ as the tight frame U-Net of [38]. Moreover, we choose $\boldsymbol{\Phi}$ as the modified tight frame U-Net proposed in [37] for deep synthesis regularization. In particular, as opposed to the original tight frame U-Net, the modified tight frame U-Net does not involve skip connections.
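The following PyTorch sketch illustrates the operator independent training step (3.1) with $\psi$ taken as the 1-norm. The small convolutional encoder and decoder are simplified stand-ins for the tight frame U-Net, and the penalty weight mu is a hypothetical value; the task-specific step (3.2) proceeds analogously with the composed network.

import torch
from torch import nn

# Simplified stand-ins for the tight frame U-Net encoder/decoder networks.
encoder = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                        nn.Conv2d(16, 16, 3, padding=1))
decoder = nn.Sequential(nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
                        nn.Conv2d(16, 1, 3, padding=1))
mu = 1e-3  # weight of the psi-penalty in (3.1); hypothetical value
optimizer = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)

def training_step(x):
    """One stochastic step of the psi-regularized autoencoder training (3.1)."""
    b = (torch.rand(x.shape[0], 1, 1, 1) < 0.5).float()  # noise flag b_i in {0, 1}
    sigma = 0.05 * x.mean(dim=(1, 2, 3), keepdim=True)   # std: 0.05 times the mean of x_i
    x_in = x + b * sigma * torch.randn_like(x)           # perturbed input x_i + b_i * eta_i
    xi = encoder(x_in)                                   # encoder coefficients
    loss = ((decoder(xi) - x) ** 2).sum()                # autoencoder reconstruction term
    loss = loss + mu * ((1.0 - b) * xi.abs()).sum()      # 1-norm penalty on noise-free inputs only
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return float(loss)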

3.2. Possible aNETT minimization

For minimizing the aNETT functional (1.3) we use the alternating direction method of multipliers (ADMM) with scaled dual variable [41–43]. For that purpose, the aNETT minimization problem is rewritten as the following constrained minimization problem

$\min_{x, \xi} \ \mathcal{D}\big(\mathbf{A}x, y\big) + \alpha \, \psi(\xi) + \frac{\alpha c}{2} \, \big\| x - \mathbf{N}(x) \big\|^2 \quad \text{such that} \quad \xi = \mathbf{E}(x) .$

The resulting ADMM update scheme with scaling parameter $\rho > 0$, initialized by $\xi_0 = \mathbf{E}(x_0)$ and $u_0 = 0$, then reads as follows:

(S1) $x_{k+1} \in \operatorname{arg\,min}_x \ \mathcal{D}\big(\mathbf{A}x, y\big) + \frac{\alpha c}{2} \|x - \mathbf{N}(x)\|^2 + \frac{\rho}{2} \, \big\| \mathbf{E}(x) - \xi_k + u_k \big\|_2^2$.
(S2) $\xi_{k+1} = \operatorname{prox}_{(\alpha / \rho) \psi}\big( \mathbf{E}(x_{k+1}) + u_k \big)$.
(S3) $u_{k+1} = u_k + \mathbf{E}(x_{k+1}) - \xi_{k+1}$.

One interesting feature of the above approach is that the signal update (S1) is independent of the possibly non-smooth penalty $\psi$. Moreover, the encoder update (S2) uses the proximal mapping of $\psi$, which in important special cases can be evaluated explicitly and therefore fast and exactly. Moreover, it guarantees regular encoder coefficients during each iteration. For example, if we choose the penalty as the 1-norm, then (S2) is a soft-thresholding step which results in sparse encoder coefficients. Step (S1) in typical cases has to be computed iteratively via an inner iteration. To find an approximate solution of (S1) for the results presented below we use gradient descent with at most 10 iterations. We stop the gradient descent updates early if the difference of the functional evaluated at two consecutive iterations is below our predefined tolerance of $10^{-5}$.
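The following sketch summarizes (S1)–(S3) for the sparse aNETT functional, assuming the squared 2-norm similarity measure and $\psi$ equal to the 1-norm. The callables A, At, E and D are stand-ins for the forward operator, its adjoint and the trained encoder and decoder, and the inner (S1) problem is solved by autograd-based gradient descent with early stopping, as described above.

import torch

def soft_threshold(v, tau):
    """Proximal mapping of tau * ||.||_1; realizes the encoder update (S2)."""
    return torch.sign(v) * torch.clamp(v.abs() - tau, min=0.0)

def anett_admm(y, A, At, E, D, alpha, c, rho, gamma=1e-3, n_outer=50, n_inner=10, tol=1e-5):
    """ADMM scheme (S1)-(S3) for minimizing the sparse aNETT functional."""
    x = At(y)                                  # initialization, e.g. a backprojection
    xi = E(x).detach()
    u = torch.zeros_like(xi)
    for _ in range(n_outer):
        prev = None
        for _ in range(n_inner):               # (S1): inner gradient descent iteration
            x = x.detach().requires_grad_(True)
            val = (0.5 * ((A(x) - y) ** 2).sum()
                   + 0.5 * alpha * c * ((x - D(E(x))) ** 2).sum()
                   + 0.5 * rho * ((E(x) - xi + u) ** 2).sum())
            if prev is not None and abs(prev - float(val)) < tol:
                break                          # early stopping of the inner iteration
            prev = float(val)
            grad, = torch.autograd.grad(val, x)
            x = x - gamma * grad               # gradient step with step-size gamma
        x = x.detach()
        with torch.no_grad():
            xi = soft_threshold(E(x) + u, alpha / rho)  # (S2): soft-thresholding
            u = u + E(x) - xi                           # (S3): scaled dual update
    return x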

The concrete implementation of the aNETT minimization requires specification of the similarity measure, the total number $N$ of outer iterations, the step-size γ for the iteration in (S1) and the parameters defining the aNETT functional. These specifications are selected depending on the inverse problem at hand. Table 1 lists the particular choices for the reconstruction scenarios considered in the following section.

Table 1. Parameter specifications for the proposed aNETT functional and its numerical minimization.

              α         c      γ         N    N_φ    noise model                       similarity measure
Sparse view   10^{-4}   10^2   –         50   40     Gaussian, σ = 0.02 · mean(Ax)     2-norm
Low dose      –         10^2   10^{-3}   20   1138   Poisson, 10^4 incident photons    Kullback–Leibler
Universality  10^{-4}   10^2   –         50   160    Gaussian, σ = 0.02 · mean(Ax)     2-norm

In order to choose the parameters for the numerical simulations we tested different values and manually chose the parameters which maximized performance among the considered candidates. Another way of choosing these parameters could be to learn them from the data using some kind of machine learning approach, or to use a bilevel approach similar to [44].

In the simulations we observed that choosing c larger tends to oversmooth the resulting reconstructions. Taking a smaller value for c, we observed that the manifold term $\|x - \mathbf{N}(x)\|^2$ tends to be undervalued, resulting in worse performance. In a similar fashion we found that choosing α larger has a smoothing effect on the resulting reconstructions, while lowering α makes the reconstructions less smooth.

The ADMM scheme for aNETT minimization shares similarities with existing iterative neural network based reconstruction methods. In particular, ADMM inspired plug-and-play priors [15–17] may be most closely related. However, as opposed to the plug-and-play approach, we can deduce convergence from existing results for ADMM for non-convex problems [45]. While convergence of (S1)–(S3) and relations with plug-and-play priors are interesting and relevant, they are beyond the scope of this work. This also applies to the comparison with other iterative minimization schemes for minimizing aNETT.

4. Application to computed tomography

In this section we apply aNETT regularization to sparse view and low-dose computed tomography (CT). For the experiments we always choose $\psi$ to be the 1-norm. The parameter specifications for the proposed aNETT functional and its numerical minimization are given in table 1. For quantitative evaluation, we use the peak-signal-to-noise-ratio (PSNR) defined by

$\operatorname{PSNR}(x, \tilde{x}) := 10 \cdot \log_{10} \left( \frac{\max(x)^2}{\tfrac{1}{N^2} \| x - \tilde{x} \|_2^2} \right) .$

Here $x$ is the ground truth image and $\tilde{x}$ its numerical reconstruction. A higher value of the PSNR indicates a better reconstruction.
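In code, the evaluation metric reads as follows (a numpy sketch; using the squared maximum of the ground truth as the peak value follows the definition above).

import numpy as np

def psnr(x, x_rec):
    """Peak signal-to-noise ratio between ground truth x and reconstruction x_rec, in dB."""
    mse = np.mean((x - x_rec) ** 2)
    return 10.0 * np.log10(np.max(x) ** 2 / mse)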

4.1. Discretization and dataset

For sparse view CT as well as for low dose CT we work with a discretization of the Radon transform $\mathbf{R}$. The values $(\mathbf{R}x)(\varphi, s)$ are integrals of the function $x$ over the lines orthogonal to the direction $(\cos \varphi, \sin \varphi)$ for angle $\varphi \in [0, \pi)$ and signed distance $s \in \mathbb{R}$. We discretize the Radon transform using the ODL library [46], where we assume that the function has compact support and is sampled on an equidistant grid. We use $N_\varphi$ equidistant samples of the angular variable $\varphi$ and $N_s$ equidistant samples of the signed distance $s$. In both cases, we end up with an inverse problem of the form (1.1), where $\mathbf{A}$ is the discretized linear forward operator. Elements $x$ will be referred to as CT images and the elements $y = \mathbf{A}x$ as sinograms.
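A minimal ODL sketch of this discretization for the sparse view setting ($N_\varphi = 40$, $N_s = 768$) reads as follows; the reconstruction domain $[-1, 1]^2$ is an assumption, and evaluating the ray transform requires one of ODL's backends (e.g. astra).

import odl

# 512 x 512 reconstruction space; the support [-1, 1]^2 is an assumed normalization
space = odl.uniform_discr([-1, -1], [1, 1], shape=(512, 512), dtype='float32')
# parallel-beam geometry with N_phi = 40 angles and N_s = 768 detector bins
geometry = odl.tomo.parallel_beam_geometry(space, num_angles=40, det_shape=768)
ray_trafo = odl.tomo.RayTransform(space, geometry)  # discretized forward operator A
fbp = odl.tomo.fbp_op(ray_trafo)                    # filtered back-projection operator

phantom = odl.phantom.shepp_logan(space, modified=True)
sinogram = ray_trafo(phantom)                       # simulate sinogram data y = A x
reconstruction = fbp(sinogram)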

For all results presented below we work with image size 512×512 and use $N_s = 768$. The number of angular samples $N_\varphi$ is taken as 40 for sparse view CT and 1138 for the low dose example. In both cases we use the CT images from the Low Dose CT Grand Challenge dataset [47] provided by the Mayo Clinic. The dataset consists of 512×512 grayscale images of 10 different patients, where for each patient there are multiple CT scanning series available. We split the dataset by patient into training, validation and test sets. We use the validation set to select the networks which achieve the minimal loss on the validation set. The test set is used to evaluate the final performance. Note that by splitting the dataset according to patient we avoid validating and testing on images of patients that have already been seen during training time. An example image and the corresponding simulated sparse view and low-dose sinograms are shown in figure 1.

Figure 1. An example CT image with the corresponding simulated sparse view and low-dose sinograms.

4.2. Numerical results

We compare results of aNETT to the learned primal-dual algorithm (LPD) [48], the tight frame U-Net [38] applied as a post-processing network (CNN) and the filtered back-projection (FBP). Minimization of the loss function for all methods was done using Adam [49] for 100 epochs, a cosine learning-rate decay $l_t = \frac{l_0}{2} \big( 1 + \cos(\pi t / 100) \big)$ with initial learning rate $l_0$ in the $t$-th epoch, and a batch-size of 4. For LPD we take the hyper-parameters as in [48] and $N = 7$ network iterations and train according to [48]. Here, we choose to only use $N = 7$ network iterations because we observed instabilities during the training phase when this parameter was chosen larger and we have not performed any parameter tuning. For training of the tight frame U-Net we do not follow the patch approach of [38] but instead use full images obtained with FBP as CNN inputs. Training of all the networks was done on a GTX 1080 Ti with an Intel Xeon Bronze 3104 CPU.

  • Sparse View CT: to simulate sparse view data we evaluate the Radon transform for Augmented NETT regularization of inverse problems (464) directions. We generate noisy data Augmented NETT regularization of inverse problems (465) by adding Gaussian white noise with standard deviation taken as 0.02 times the mean value of Augmented NETT regularization of inverse problems (466). We use the 2-norm distance as the similarity measure. Quantitative results evaluated on the test set are shown in table 2. All learning-based methods yield comparable performance in terms of PSNR and clearly outperform FBP. The reconstructions shown in figure 2 indicate that aNETT reconstructions are less smooth than CNN reconstructions and less blocky than LPD reconstructions.
  • Low Dose CT: for the low dose problem we use a fully sampled sinogram and add Poisson noise corresponding to $10^4$ incident photons per pixel bin. The Kullback-Leibler divergence is a more appropriate discrepancy term than the squared 2-norm distance in the case of Poisson noise, and the reported values and reconstructions use the Kullback-Leibler divergence as the similarity measure (see the sketch after this list). Quantitative results are shown in table 2. Again, all learning-based methods give similar results and significantly outperform FBP. Visual comparison of the reconstructions in figure 3 shows that the CNN yields cartoon-like images and the LPD reconstruction again looks blocky. The aNETT reconstruction shows more texture than the CNN reconstruction and at the same time is less blocky than the LPD reconstruction.
  • Universality: in practical applications we may not have a fixed sampling pattern. If many different sampling patterns occur, then training a network for each of them is infeasible, and hence reconstruction methods should be applicable to different sampling scenarios. Additionally, it is desirable that an increased number of samples indeed increases performance. In order to test this, we consider the sparse view CT problem but with an increased number of angular samples, without retraining any of the networks. Due to the rigidity of the framework used, LPD cannot easily be adapted to this setting, and we therefore only compare aNETT with the post-processing CNN. Quantitative evaluation for this scenario is given in table 2. We see that aNETT slightly outperforms the CNN in terms of PSNR. The advantage of aNETT over the CNN, however, is best observed in figure 4. The CNN yields similar reconstructions for both angular sampling patterns. aNETT, on the other hand, is able to synergistically combine the increased sampling rate of the sinogram with the network trained on coarsely sampled data. Despite using the network trained with only 40 angular samples, aNETT reconstructs small details which are not present in the reconstruction from 40 angular samples.
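The following sketch summarizes the two noise models and the Kullback-Leibler discrepancy used above; the Beer-Lambert photon-count conversion in the low dose model, the sign convention of the divergence and all function names are our assumptions for illustration.

```python
# Sketch of the noise models for the two CT experiments and a common
# form of the Kullback-Leibler discrepancy. All names are hypothetical.
import numpy as np

rng = np.random.default_rng(seed=0)

def sparse_view_data(sino):
    """Gaussian white noise with std = 0.02 * mean of the exact data."""
    sigma = 0.02 * np.mean(sino)
    return sino + sigma * rng.standard_normal(sino.shape)

def low_dose_data(sino, n0=1e4):
    """Poisson noise for about 10^4 incident photons per pixel bin,
    assuming a Beer-Lambert photon-count model for the measurements."""
    counts = rng.poisson(n0 * np.exp(-sino))
    return -np.log(np.maximum(counts, 1) / n0)

def kl_divergence(y, y_delta):
    """Kullback-Leibler discrepancy for strictly positive count data;
    argument and sign conventions vary in the literature."""
    return np.sum(y - y_delta + y_delta * np.log(y_delta / y))
```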

Figure 2. Reconstructions for the sparse view CT problem.

Figure 3. Reconstructions for the low dose CT problem.

Figure 4. Reconstructions for the universality experiment with an increased number of angular samples.

Table 2. Overview of metric results evaluated on the test set. The values shown are the average PSNR ± the standard deviation calculated over the test dataset. Bold values indicate the best results. The entry 'na' means that LPD was not applied to this problem setting, as in the framework used there is no canonical way to apply LPD with a modified sampling pattern.

PSNR | FBP | LPD | Post | aNETT
Sparse view | 23.8±1.3 |  | 37.1±0.9 | 37.1±1.0
Low dose | 36.9±1.6 | 43.6±1.3 |  | 43.9±1.3
Universality | 32.4±1.6 | na | 37.7±0.8 | 

4.3.Discussion

The results show that the proposed aNETT regularization is competitive with prominent deep learning methods such as LPD and post-processing CNNs. We found that aNETT does not suffer as much from the over-smoothing which is often observed in other deep learning reconstruction methods. This can, for example, be seen in figure 3, where the CNN yields an over-smoothed reconstruction while the aNETT reconstruction shows more texture. Besides this, aNETT reconstructions are less blocky than LPD reconstructions. Moreover, aNETT is able to leverage higher sampling rates to reconstruct small details without retraining the networks, while other deep learning methods fail to do so. We conjecture that this advantage arises because aNETT can exploit the higher sampling rate through the data-consistency term in (1.3), while the CNN is agnostic to the change in the sampling rate. In some scenarios it may not be possible to retrain networks; especially for learned iterative schemes, network training is a time-consuming task. Training aNETT, on the other hand, is straightforward and, as demonstrated, yields a method which is robust to changes of the forward problem at test time.

While a more extensive study of the influence of noise could further clarify the advantages and disadvantages of each method, this is not our main focus here and is thus postponed to a future study.

Finally, we note that aNETT relies on minimizing (1.3) iteratively. With the ADMM minimization scheme presented in this article, aNETT is slower than the methods used for comparison. Designing faster optimization schemes for (1.3) is beyond the scope of this work, but is an important and interesting direction.
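For orientation, a generic ADMM splitting for a functional of the form $d(\mathbf{A}x, y^{\delta}) + \alpha \mathcal{R}(x)$ alternates the following updates; this is only the standard template, and the exact sub-steps of the scheme presented in this article may differ:

$$
\begin{aligned}
x^{k+1} &= \operatorname*{arg\,min}_{x}\; d(\mathbf{A}x, y^{\delta}) + \tfrac{\rho}{2}\,\lVert x - z^{k} + u^{k}\rVert^{2},\\
z^{k+1} &= \operatorname{prox}_{(\alpha/\rho)\mathcal{R}}\!\left(x^{k+1} + u^{k}\right),\\
u^{k+1} &= u^{k} + x^{k+1} - z^{k+1}.
\end{aligned}
$$

Each iteration thus requires a data-consistency solve and a proximal step for the learned regularizer, which explains why aNETT is slower per reconstruction than a single forward pass through LPD or a post-processing CNN.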

5.Conclusion

We have proposed aNETT (augmented NETwork Tikhonov) regularization, for which we derived coercivity of the regularizer under quite mild assumptions on the networks involved. Using this coercivity, we presented a convergence analysis of aNETT with a general similarity measure. We proposed a modular training strategy in which we first train an $\ell^1$-regularized autoencoder independently of the problem at hand, and then a network which is adapted to the problem and to the first autoencoder. Experimentally, we found this training strategy to be superior to directly training the autoencoder on the full task. Lastly, we conducted numerical simulations demonstrating the feasibility of aNETT.
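As a sketch of this first, problem-independent training stage, the autoencoder loss could be implemented as follows; the weight beta and the mean-squared formulation are illustrative assumptions rather than the article's exact training objective.

```python
# Hedged sketch of an l1-regularized autoencoder loss: reproduce the
# training images while keeping the encoder coefficients sparse.
import torch

def autoencoder_loss(x, encoder, decoder, beta=1e-3):
    z = encoder(x)                         # encoder coefficients E(x)
    x_rec = decoder(z)                     # reconstruction D(E(x))
    recon = torch.mean((x - x_rec) ** 2)   # keep D(E(x)) close to x
    sparsity = torch.mean(torch.abs(z))    # l1 penalty on E(x)
    return recon + beta * sparsity
```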

The experiments show that aNETT is able to keep up with classical post-processing CNNs and the learned primal-dual approach for sparse view and low dose CT. Typical deep learning methods work well for the fixed sampling pattern on which they have been trained. However, reconstruction methods are expected to perform better if an increased sampling rate is used. We have experimentally shown that aNETT is able to leverage higher sampling rates to reconstruct small details in the images which are not visible in the other reconstructions. This universality can be advantageous in applications where one is not restricted to a single sampling pattern or is not able to train a network for every sampling pattern.

D O and M H acknowledge support of the Austrian Science Fund (FWF), project P 30747-N32. The research of L N has been supported by the National Science Foundation (NSF) grants DMS 1212125 and DMS 1616904.

No new data were created or analysed in this study.
