Recursive reconstruction of piecewise constant signals by minimization of an energy function

The problem of denoising piecewise constant signals while preserving their jumps arises in many scientific areas. Several denoising approaches exist, such as total variation, convex relaxation, and Markov random field models. The DPS algorithm is a combinatorial algorithm that outperforms the classical GNC in terms of speed and robustness to noise. However, its running time grows considerably for large signals. The main reason for this bottleneck is the size and the number of linear systems that need to be solved. We develop a recursive implementation of the DPS algorithm that uses the conditional independence, created by a confirmed discontinuity between two parts of the signal, to separate the reconstruction of each part. Additionally, we propose an accelerated Cholesky solver which reduces the computational cost and memory usage. We evaluate the new implementation on a set of synthetic and real-world examples to assess the quality of our solver. The results show a significant speed-up, especially for signals with a large number of discontinuities.

1. Introduction. In this paper, we focus on recovering piecewise constant (PWC) signals from noisy measurements. A PWC signal is characterized by a set of flat regions, called plateaus or levels, separated by a finite number of instantaneous discontinuities, also called jumps, shifts, steps or singularities depending on the discipline. This class of signals arises in several fields, including geophysics [16], bioinformatics [2], biology [14] and digital imagery [3]. PWC signals carry two important pieces of information: the values of the plateaus and the locations of the discontinuities. Thus, a denoising technique must not only recover the value of each plateau of the true signal, but also detect and preserve its discontinuities. The main problem can therefore be tackled as a joint restoration-segmentation inverse problem, which generally leads to better results than executing each task independently [24].
We focus on the classical degradation model where the observed discrete signal y is related to the underlying discrete signal x by:

(1) y = x + b

The ground truth signal x is assumed to be PWC, with no restriction on the plateau values, x_i ∈ R, and an unknown number of discontinuities. Additionally, x is corrupted by a zero-mean additive white Gaussian noise b with standard deviation σ (Figure 1).
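As a small illustration, the degradation model of Eq. (1) can be simulated as follows; the plateau values, lengths and noise level below are arbitrary choices for the example:

```python
import numpy as np

# A PWC ground truth: three plateaus, hence two discontinuities.
x_true = np.repeat([0.0, 2.0, -1.0], 50)

# Eq. (1): the observation is the signal plus zero-mean additive
# white Gaussian noise with standard deviation sigma.
rng = np.random.default_rng(0)
sigma = 0.3
y = x_true + sigma * rng.normal(size=x_true.size)
```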
Usually, the inverse problem in Eq. (1) is tackled by minimizing the following energy function:

(2) E(x) = Σ_{s=1}^{n} (y_s − x_s)² + λ² Σ_{s=1}^{n−1} V(x_{s+1} − x_s)

The first term is called the data fidelity term and ensures closeness between the signal x and the observation y. In a statistical setting, the choice of an l2 data fidelity term implicitly corresponds to the assumption that the noise is white Gaussian. The second term is called the smoothness term; it is added because the information provided by the model in Eq. (1) is not sufficient to compute an adequate solution. The smoothness term is expressed as a sum of potential functions (PF) over neighboring points [4]. The PF is chosen to force the solution to exhibit a set of a priori features, such as the presence of discontinuities in our case. Finally, the parameter λ² (whose choice is a difficult problem in itself [12]) controls the trade-off between the fidelity term and the prior constraints. The minimization of the energy function in Eq. (2) is equivalent to Maximum A Posteriori (MAP) estimation in the Markov random field (MRF) framework. The global minimizer x* of Eq. (2) maximizes the conditional probability p(x|y) given the observed data y. The data fidelity term represents the likelihood in the MRF framework and the second term serves as the prior distribution p(x). Numerous PFs have been tested in the literature [5] for the denoising task. Denoising is achieved by forcing a smoothness prior among interacting sites. But an everywhere-smooth constraint, represented by the PF V(r) = r², is not adequate to preserve the large differences located at the discontinuities. Hence the need for an edge-preserving PF that allows the recovery of discontinuities in the original signal while applying smoothness elsewhere. A solution was proposed by S. Geman and D. Geman in their line process model [18], and by A. Blake and A. Zisserman with the weak string constraint [1].
The weak string model uses a truncated quadratic interaction function V(r) = min(α, r²) to switch off the smoothness prior when the gradient magnitude exceeds a given threshold. Above this threshold, the quadratic smoothness is replaced by a constant penalty indicating the presence of a discontinuity. The weak string model uses a deterministic approach to represent the signal. The same energy was obtained by S. Geman and D. Geman [18] using the MRF framework and an additional boolean variable called the line process (LP). The switch between the quadratic and constant penalties is explicitly represented by the boolean variable, which indicates the presence or absence of a discontinuity. So, given a parameter h, called the sensitivity, that represents the minimal height of a jump, a discontinuity is detected when the difference |x_{s+1} − x_s| exceeds h.
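To make the weak-string penalty concrete, here is a small sketch. The function names are ours, and the placement of λ² inside the truncation is our convention for the example; the paper's exact scaling may differ:

```python
import numpy as np

def weak_string_energy(x, y, lam, alpha):
    # Data fidelity plus the truncated quadratic smoothness term
    # V(r) = min(alpha, lam^2 * r^2); the lam^2 factor inside the
    # truncation is an assumption of this sketch.
    r = np.diff(x)
    return np.sum((x - y) ** 2) + np.sum(np.minimum(lam ** 2 * r ** 2, alpha))

def detect_jumps(x, lam, alpha):
    # A discontinuity is flagged where |x[s+1] - x[s]| exceeds the
    # sensitivity h = sqrt(alpha) / lam.
    h = np.sqrt(alpha) / lam
    return np.abs(np.diff(x)) > h
```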
As was proven in [6], the sensitivity h is related to the truncation constant α by:

(3) α = λ² h²

The weak string energy involves two variables, the estimated signal x and the LP l:

(4) E(x, l) = Σ_s (y_s − x_s)² + λ² Σ_s (1 − l_s)(x_{s+1} − x_s)² + α Σ_s l_s

The parameters α and λ are empirical constants that must be fixed a priori. Therefore, the energy function (4) must be minimized for several parameter values in order to find an adequate solution. Minimizing the energy function in Eq. (4) leads to a general, nonconvex, mixed-variable optimization problem [19]:

(5) min_{x ∈ R^n, l ∈ {0,1}^n} E(x, l)
Problem (5) is mixed since it involves a continuous variable x and a binary integer variable, the LP l. The nonconvexity comes from the use of the nonconvex truncated quadratic PF. Although considerable effort has been devoted to developing convex edge-preserving PFs [7,9], nonconvex PFs offer better performance for signals with extreme discontinuities [22]. Finding a global minimizer of E is an NP-hard problem [8] due to the use of an edge-preserving PF. The energy function is nonconvex with a large number of local minima, one for each configuration l of the LP. Moreover, the energy is not smooth at the global minimizer, which makes local minimization techniques impractical.
Several global optimization techniques have been proposed to compute an approximation of the global minimizer. These methods are based either on a stochastic approach or on continuous deterministic relaxation. In the stochastic approach, we mention Iterated Conditional Modes (ICM) [4] and Simulated Annealing (SA) [18,17]. These algorithms are inefficient in practice, since ICM is sensitive to the initial guess and the cooling schedule of SA is slow. In the relaxation approach, we mention Blake and Zisserman's Graduated Non-Convexity (GNC) algorithm [6], a deterministic annealing method for approaching a global minimum with an unconstrained continuous variable. The GNC algorithm outperforms SA in both speed and quality of the solution [6]. For a detailed account of the history, we refer the reader to [21,27].
The combinatorial Discontinuity Position Sweep (DPS) algorithm offers a better alternative to GNC in terms of running time and restoration quality [15] for short- and medium-duration signals. Unfortunately, DPS becomes very slow for long-duration signals, as it has to solve a large number of linear systems of size n, where n is the length of the signal.
In this paper, we present a recursive scheme to implement the DPS algorithm that drastically reduces its running time. The recursive scheme is based on the conditional independence between two parts of the signal that emerges once a jump position is confirmed. A direct effect of this splitting approach is the shrinkage of the linear systems. Furthermore, we propose an accelerated Cholesky solver that avoids recomputing the factorization of each system and instead deduces it from a single factorization of the initial system.
The remainder of the article is organized as follows. In section 2, we present the key ideas behind the classical DPS algorithm, which transforms Pb. (5) into a purely combinatorial problem, and we show an efficient strategy for finding the optimal configuration of the LP. In section 3, we present the splitting process based on a discontinuity detection technique, and we introduce an accelerated Cholesky solver that computes a single factorization to solve all the remaining systems. We then compare the complexity of both implementations to quantify the gain provided by the recursive approach. Finally, in section 4, we compare the reconstruction quality of DPS against recent algorithms [25,26] and provide benchmarking results illustrating the gain in running time.
2. The DPS algorithm. The key idea of the DPS algorithm is to transform the mixed problem (5) into a purely combinatorial one. This can be viewed as the dual of GNC's approach, which keeps the continuous variable x and eliminates the LP. For a fixed LP l, finding the optimal solution over x is an easy task, since it only involves solving a tridiagonal linear system. DPS must then search among all possible configurations of the LP space L. Since the number of LPs is exponential, card(L) = 2^n, a brute-force search is impractical. DPS overcomes this difficulty by structuring all the configurations of L as the nodes of a hypercube and integrates an efficient search strategy that prunes a large part of L. This strategy is based on a decomposition of the hypercube into a set of disjoint levels.
2.1. Hypercube representation of the LP. The first idea of DPS is to represent all the configurations of the LP in L = {0, 1}^n as the nodes of a hypercube, or n-cube. This graph has 2^n nodes, which we encode as n-bit words using Gray codes [20]. The resulting encoding ensures that two nodes are neighbors if and only if they differ in exactly one bit.
Each node l of the hypercube L can be represented by a set I ⊂ {1, 2, . . . , n} that collects the confirmed discontinuities. We then use the notation l = e_I, where:

(6) (e_I)_i = 1 if i ∈ I, and (e_I)_i = 0 otherwise

We associate with each set I an energy function E_I(x) = E(x, e_I). With this notation, the mixed Pb. (5) can be rewritten as:

(7) min_{I ⊂ {1,...,n}} min_{x ∈ R^n} E_I(x)

The energy function in (7) is quadratic and strictly convex; therefore, it admits a unique global minimum. The global minimum x*_I is characterized by the stationarity condition:

(8) ∇E_I(x*_I) = 0

In order to compute the global minimum x*_I of E_I, we transform (8) into a linear system:

(9) A(I) x*_I = y

where the matrix A(I) is tridiagonal, symmetric and positive-definite.
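Under the indexing conventions we adopt (zero-based, gap i between samples i and i+1), the linear system (9) can be assembled and solved as in this sketch; a production implementation would use a banded tridiagonal solver rather than a dense one:

```python
import numpy as np

def solve_fixed_lp(y, lam, I):
    # Assemble A(I) for the set I of confirmed discontinuities:
    # b_i = lam^2 when gap i carries no jump, 0 when it does
    # (our convention for this sketch), then solve Eq. (9).
    n = len(y)
    b = np.full(n - 1, lam ** 2)
    for i in I:
        b[i] = 0.0  # the smoothness link across a confirmed jump is cut
    d = np.ones(n)
    d[:-1] += b
    d[1:] += b
    A = np.diag(d) + np.diag(-b, 1) + np.diag(-b, -1)
    return np.linalg.solve(A, y)
```

With the link at gap 1 cut, a clean two-plateau signal is reproduced exactly; without any cut, the jump is smoothed away.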
By Eq. (9), we established that, for a fixed LP l indexed by I, computing its associated energy reduces to solving a linear system. The solution is cheap to compute, since the matrix has every advantage in terms of symmetry, positivity and sparsity. As a result, the mixed Pb. (5) can be transformed into the following purely combinatorial problem:

(10) min_{I ⊂ {1,...,n}} E_I(x*_I)

Problem (10) is theoretically simple [23]; nevertheless, a direct approach that enumerates all the configurations is not practical, since the cardinality of L is exponential, card(L) = 2^n. DPS structures the nodes of L in terms of levels and proposes a sweeping enumerative search strategy that reduces the number of visited nodes. In order to guide the search, we use the notion of level in a hypercube. A level L_k of the hypercube contains all the configurations that have exactly k discontinuities. As a result, the hypercube L is decomposed into the following disjoint levels (Figure 2a).
(11) L = ∪_{k=0}^{n} L_k, where L_k = {e_I ∈ {0,1}^n | card(I) = k} are the (n + 1) levels of the hypercube. We construct a sequence of optimization problems (P_k)_{k ∈ [0,n]} based on (11) to limit the search to a given level.
In order to compute the solution of (P_k), we must solve card(L_k) = C(n, k) tridiagonal linear systems, where C(n, k) is the binomial coefficient. DPS prunes the search space of (P_k) by applying a cutting enumerative search strategy that limits the domain of (P_k) to the neighbors of e_{I_{k−1}}, where e_{I_{k−1}} is the optimal solution of (P_{k−1}). In consequence, a confirmed discontinuity is always kept in the subsequent levels.
The set of neighbors of e_{I_{k−1}} will be denoted as follows:

N(e_{I_{k−1}}) = { e_{I_{k−1} ∪ {i}} | i ∉ I_{k−1} }

The pruning strategy lowers the number of visited nodes from card(L_k) = C(n, k) to the n − k neighbors of e_{I_{k−1}}.
We thereby convert the independent sequence of problems (P_k) into a cheaper recursive sequence of problems (Q_k), defined by:

(14) (Q_k): min { E_I(x*_I) | e_I ∈ N(e_{I_{k−1}}) }

Figure 2b illustrates the DPS pruning process, which only considers the neighbors of the previous optimal solution. As an example, suppose the optimal LP (0100) was found in the first level. In the second level, DPS will only consider its neighbors (red and blue nodes) and will prune the rest (empty nodes). Additionally, DPS stops the search if the energy does not decrease from one level to the next. In the example, we stop the search at the second level (red dashed line), since the energy increased from the first level to the second.
The sequence of problems is solved by first exploring the level L_0, which consists of seeking a solution with no discontinuities. Then, at each level, we keep the discontinuities found so far and search whether the introduction of a new discontinuity can decrease the energy function; if not, the remaining levels are pruned and the optimal solution lies within the current level. The resulting technique is called Discontinuity Position Sweep (DPS) and is illustrated in Figure 2b.
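The sweep above can be sketched as follows. This is a minimal dense-matrix illustration under our own conventions (energies computed directly from the weak-string form (4), brute-force minimization inside each level), not the paper's optimized implementation:

```python
import numpy as np

def dps_sweep(y, lam, alpha):
    # Level-by-level sweep: keep confirmed jumps, try every remaining
    # gap, and stop as soon as the energy no longer decreases.
    n = len(y)

    def solve(I):
        b = np.full(n - 1, lam ** 2)
        b[list(I)] = 0.0
        d = np.ones(n)
        d[:-1] += b
        d[1:] += b
        A = np.diag(d) + np.diag(-b, 1) + np.diag(-b, -1)
        x = np.linalg.solve(A, y)
        r = np.diff(x)
        keep = np.ones(n - 1, dtype=bool)
        keep[list(I)] = False
        e = np.sum((x - y) ** 2) + lam ** 2 * np.sum(r[keep] ** 2) + alpha * len(I)
        return x, e

    I = set()
    x, best = solve(I)
    while len(I) < n - 1:
        cands = [(solve(I | {i}), i) for i in range(n - 1) if i not in I]
        (x_c, e_c), i_c = min(cands, key=lambda t: t[0][1])
        if e_c >= best:   # no improvement: prune the remaining levels
            break
        I.add(i_c)
        x, best = x_c, e_c
    return x, I
```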
3. The recursive and accelerated DPS algorithm. The DPS algorithm has O(n²) time complexity and O(n) space complexity, which makes it very fast for small and medium-size signals. However, its execution time increases substantially for large signals, as in the case of images. The main bottleneck is the number and the size of the linear systems to solve. As a result, the applicability of DPS to large data is reduced tremendously.
We propose a recursive implementation, called Recursive Discontinuity Position Sweep (RDPS), that reduces the size of the solved systems by disconnecting the nodes of the hypercube L. This decomposition is based on the conditional independence created by a confirmed discontinuity: discovering a discontinuity at a position p allows us to compute the parts x_[1,p] and x_[p+1,n] independently. In consequence, the hypercube shrinks as we discover new discontinuities.

Algorithm 1: The DPS algorithm
Data: λ, h, y
Result: x*, the restored signal

We also propose an accelerated Cholesky solver that avoids the computation of each factorization and instead extracts them from a cached matrix C. This is possible because all the solved matrices share a common structure and differ only in size. We denote by A_λ the set of all the solved matrices; we prove that, if we know the Cholesky factorization of a matrix A ∈ A_λ, we can easily compute the Cholesky factorization of any matrix in A_λ of lower dimension than A. Therefore, the accelerated algorithm only computes the Cholesky factorization C of the first matrix A(∅), and then extracts all the other factorizations from C.
3.1. The recursive implementation of DPS. The DPS algorithm uses a breadth-first search over the levels of the hypercube with a cutting strategy that only considers the neighbors of the optimal solution in the previous level (14). Thus, a discontinuity i_k introduced at level k will be present in all the optimal solutions of the higher levels.
We make use of this property to stop the search process as soon as we encounter the first discontinuity. The search in the subsequent levels is then replaced by two new independent sub-problems, each concerned only with restoring one part of the signal. The independence of the two sub-problems is proved in the next proposition: Proposition 1. If we introduce a discontinuity i_k, recovering the signal x can be replaced by recovering x_[1,i_k] and x_[i_k+1,n] independently.
Proof. Restoring the signal with only one discontinuity i_k is equivalent to solving the linear system A(I)x = y, where I = {i_k}.
The matrix A(I) is symmetric and tridiagonal, with off-diagonal entries A(I)_{i,i+1} = A(I)_{i+1,i} = −b_i, where b_i = λ²(1 − (e_I)_i), and diagonal entries d_i = 1 + b_i + b_{i−1}, except for d_0 = 1 + b_0 and d_n = 1 + b_{n−1}. Since (e_I)_{i_k} = 1, the terms A(I)_{i_k,i_k+1} and A(I)_{i_k+1,i_k} vanish, and A(I) becomes a block-diagonal matrix. As a result, we can solve the two diagonal blocks separately:

A(I)_[1,i_k] x_[1,i_k] = y_[1,i_k] and A(I)_[i_k+1,n] x_[i_k+1,n] = y_[i_k+1,n]

which proves that we can compute x_[1,i_k] and x_[i_k+1,n] independently.
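Proposition 1 can be checked numerically: with the link b_k set to zero, the full solve and the two half solves agree. The matrix assembly below follows our zero-based indexing of the b_i; the data values are arbitrary:

```python
import numpy as np

def tridiag(b):
    # Assemble the tridiagonal matrix A from the link weights b_i.
    d = np.ones(len(b) + 1)
    d[:-1] += b
    d[1:] += b
    return np.diag(d) + np.diag(-b, 1) + np.diag(-b, -1)

# Cutting the link at gap k (b_k = 0) makes the matrix block-diagonal,
# so the two halves can be solved independently.
lam, k = 0.8, 2
y = np.array([1.0, 1.2, 0.9, 4.1, 3.9, 4.0])
b = np.full(len(y) - 1, lam ** 2)
b[k] = 0.0
x_full = np.linalg.solve(tridiag(b), y)
x_left = np.linalg.solve(tridiag(b[:k]), y[:k + 1])
x_right = np.linalg.solve(tridiag(b[k + 1:]), y[k + 1:])
```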
This fundamental property allows us to reformulate problem (5) as a sequence of recursive sub-problems (R_y). Since each problem is decomposed into two sub-problems, the decomposition process follows the classical binary-tree structure of a recursive formulation.
We define the problem (R_y) as:

(R_y): min { E_l(x*_l) | l ∈ L^m_01 }

where L^m_01 = { e ∈ {0, 1}^m | Σ_j e_j = 1 or Σ_j e_j = 0 } is the LP subset of dimension m containing at most one discontinuity. This means that we restrict the search to the first two levels only.
The initial problem (10) is then equivalent to solving (R_y) on the entire signal. For an instance (R_{y_[i,j]}) of the sequence (R_y), we compute the optimum l*; if l* ∈ L_0, we stop the recurrence, since the signal y_[i,j] is constant. On the other hand, if l* ∈ L_1, we confirm that the signal y_[i,j] contains a unique discontinuity at the position i_k such that l*_{i_k} = 1. The problem (R_{y_[i,j]}) is then divided into the two sub-problems (R_{y_[i,i_k]}) and (R_{y_[i_k+1,j]}).
This recursive process defines a binary tree T = (R, E), where each node is an optimization problem concerned with the recovery of a part of the signal, and each edge denotes a separation conditioned on an accepted discontinuity i_k. We propose a simple brute-force algorithm to solve an instance of the sequence (R_y). The algorithm considers all the nodes of the levels L_0 and L_1 and returns the node with the minimal energy.

Algorithm 2: Brute-force solver for the problem (R_y)
Data: Noisy signal y
n ← |y|
/* Compute the set of energies in the level L_1 */
for i_k ← 1, . . . , n do

Algorithm 2 provides an efficient test to detect the presence of a discontinuity in a given part of the signal. The recursive implementation of RDPS depends heavily on this test to differentiate a base case of the recurrence (a constant signal) from a decomposed one that needs further search for new discontinuities. As mentioned above, the decomposition of the signal follows a binary-tree structure, and the optimal solution is gathered by regrouping all the constant signals in the leaves of the tree.
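The recursion can be sketched as follows. This is a brute-force (R_y) test with dense solves under our own naming and conventions; the α penalty is charged when a jump is accepted, mirroring (4):

```python
import numpy as np

def solve_part(y, lam, cut=None):
    # Solve the block system with at most one cut link and return
    # the solution together with its (alpha-free) energy.
    b = np.full(len(y) - 1, lam ** 2)
    if cut is not None:
        b[cut] = 0.0
    d = np.ones(len(y))
    d[:-1] += b
    d[1:] += b
    A = np.diag(d) + np.diag(-b, 1) + np.diag(-b, -1)
    x = np.linalg.solve(A, y)
    r2 = np.diff(x) ** 2
    if cut is not None:
        r2[cut] = 0.0
    return x, np.sum((x - y) ** 2) + lam ** 2 * np.sum(r2)

def rdps(y, lam, alpha):
    # Base case of (R_y): the part is too short to contain a jump.
    if len(y) < 2:
        return y.copy()
    x0, e0 = solve_part(y, lam)
    cands = [solve_part(y, lam, cut=k) + (k,) for k in range(len(y) - 1)]
    xk, ek, k = min(cands, key=lambda t: t[1])
    if ek + alpha >= e0:   # level L_0 wins: the part is declared constant
        return x0
    # A jump is confirmed at gap k: recurse on the two halves.
    return np.concatenate([rdps(y[:k + 1], lam, alpha),
                           rdps(y[k + 1:], lam, alpha)])
```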
This recursive approach rapidly reduces the size of the solved systems; only the first search considers full-size matrices. This is illustrated in Figure 4, which plots the number of these systems as a function of their size. The figure shows that the majority of the systems are small, due to the shrinking effect of introducing a new discontinuity, and the number of full-size matrices becomes negligible.
3.2. Accelerated DPS implementation. The brute-force search (Algorithm 2) must check the energy value of each LP in the first two levels of the hypercube. This requires solving a large number of systems of the form (18). However, those matrices share a common structure and differ only in size. This common structure is defined by the family:

(19) A_λ = { A(n) ∈ M_n | n ∈ N }

Function ReconstructionRDPS(y, x)
Data: Noisy signal y
Result: Reconstructed piecewise constant signal x
n ← |y|
/* Compute the line process with the minimal energy. */
/* Call the subroutine on the left part. */
/* Call the subroutine on the right part. */
ReconstructionRDPS(y_r, x_r)
/* Concatenate the two parts. */

where M_n is the set of square matrices of size n and A(n) is the matrix of the form (18) with no discontinuity, i.e. the tridiagonal matrix with diagonal entries 1 + 2λ² (1 + λ² at both ends) and off-diagonal entries −λ². The accelerated solver extracts the Cholesky decomposition of any matrix of the family A_λ from the initial decomposition A(∅) = CC^t. Since any matrix of the family A_λ of size m has the same coefficients as A(∅) up to the (m − 1)-th row, the values of its Cholesky decomposition remain the same as in C, and only the elements of the last row need adjustment. This reduces the resolution of each system to a simple forward/backward substitution.
Proposition 2. Let M = A(m) and N = A(n) denote two matrices from A_λ such that m < n. Given the Cholesky decomposition N = CC^t, we can extract the Cholesky decomposition M = BB^t from C.
Proof. The coefficients of the matrices M and N coincide on all rows and columns of index strictly less than m. So we rewrite M using a block decomposition whose leading block is N_[1,m−1], bordered by the vector v^t = (0, . . . , 0, −λ²) and the diagonal entry M_{m,m}. Similarly, we rewrite N as a block matrix with the same leading block N_[1,m−1]. Since the leading blocks of M and N are equal, the uniqueness of the Cholesky factorization implies that B coincides with C on the first m − 1 rows:

(22) B_{i,j} = C_{i,j} for i, j ≤ m − 1

so that only the last row of B must be recomputed:

(23) B_{m,m−1} = −λ² / C_{m−1,m−1}
(24) B_{m,m} = sqrt( M_{m,m} − B_{m,m−1}² )

Equation (22) follows from the uniqueness of the Cholesky factorization, since the matrices M and N have positive diagonal elements. Also, the division in (23) is always possible, because C_{m−1,m−1} cannot be null: otherwise it would imply that N_{m−1,m−2} is null, whereas N_{m−1,m−2} = −λ².
The proof shows that it is possible to extract the Cholesky coefficients of any matrix in A_λ from the decomposition C of the initial matrix. The accelerated solver only computes C and uses (23, 24) to deduce the remaining decompositions.
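Proposition 2 can be verified numerically. The sketch below (zero-based indices, our naming) copies the cached factor and recomputes only the last row, following Eqs. (22)-(24):

```python
import numpy as np

def A_mat(n, lam):
    # A member of the family A_lambda: only the size changes.
    d = np.full(n, 1 + 2 * lam ** 2)
    d[0] = d[-1] = 1 + lam ** 2
    off = np.full(n - 1, -lam ** 2)
    return np.diag(d) + np.diag(off, 1) + np.diag(off, -1)

def extract_cholesky(C, m, lam):
    # Copy the first m-1 rows of the cached factor C and recompute
    # only the last row (the factor of a tridiagonal matrix is
    # lower bidiagonal, so two entries suffice).
    B = C[:m, :m].copy()
    B[m - 1, :] = 0.0
    B[m - 1, m - 2] = -lam ** 2 / C[m - 2, m - 2]
    B[m - 1, m - 1] = np.sqrt(1 + lam ** 2 - B[m - 1, m - 2] ** 2)
    return B
```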
3.3. Complexity comparison. In this section, we study the complexity of DPS and its alternative implementation to illustrate the reduction achieved by RDPS. We consider a signal y of size n with p discontinuities. We denote by T(n, p) the number of basic operations needed to restore this signal, and we compute T(n, p) for each implementation.
Firstly, the classical implementation of DPS searches all the configurations in the first p + 1 levels. For each level L_k, it solves n − k linear systems of size n, except for the initial level L_0, where only one system is needed. The solver must compute the Cholesky decomposition and perform a two-pass forward/backward substitution. So each system takes 4n divisions, 4(n − 1) multiplications and 4(n − 1) subtractions or additions.
Thus, the number of basic operations needed by DPS is approximately:

T_DPS(n, p) ≈ 12n ( 1 + Σ_{k=1}^{p+1} (n − k) )

Secondly, for RDPS, all the work is done inside the test (Algorithm 2), which checks the existence of a discontinuity in a part y_[i,j] of the signal, of size m. To confirm this, the test must solve the problem (R_{y_[i,j]}), which considers all the nodes of the set L^m_01. Thus, the test must solve m + 1 linear systems of size m but, with the accelerated solver, each system only consumes about 6m operations instead of 12m.
Given the cost of each problem (R_y), we can write T(n, p) as a recursive formula based on the reconstruction scheme dictated by the decomposition binary tree (R, E).
The general expression of T(n, p) cannot be developed further, since it depends on the positions of the discontinuities, which are random. On the other hand, we can consider the best case for RDPS: a signal with uniformly distributed discontinuities where each confirmed jump lies in the middle of its block. Denoting by S(m) the cost of one test on a block of size m, the recursion becomes:

T(n, p) = S(n) + 2 T(n/2, (p − 1)/2), with T(n, 0) = S(n)

Developing this recursion shows that RDPS performs considerably fewer arithmetic operations than DPS, and the reduction becomes more pronounced as the number of discontinuities grows.
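The operation counts can be compared with a small sketch. The constants follow the estimates above (about 12n operations per system for the classical solver, about 6m with the accelerated one); S is the hypothetical per-test cost of this sketch:

```python
def S(m):
    # Hypothetical cost of one (R_y) test on a block of size m:
    # m + 1 systems, about 6m operations each with the accelerated solver.
    return 6 * m * (m + 1)

def T_rdps(n, p):
    # Best case: each confirmed jump splits size and jump count in half.
    if p == 0:
        return S(n)
    return S(n) + 2 * T_rdps(n // 2, (p - 1) // 2)

def T_dps(n, p):
    # Classical DPS: one system at level 0, then n - k systems of size n
    # at level k, each costing about 12n operations.
    return 12 * n * (1 + sum(n - k for k in range(1, p + 2)))
```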
4. Numerical results. In this section, we compare the restoration quality of RDPS with two modern algorithms. The first is the Energy-based step-finder (EBS) [25], which is based on graph cuts [8]; the second takes a variational approach and minimizes the well-known l2-Potts functional [26]. We also illustrate the speed-up achieved by the recursive implementation on a set of synthetic signals with different sizes and numbers of discontinuities, the two main factors slowing DPS. The results show significant gains, especially with a higher number of discontinuities.

4.1. Reconstruction of synthetic data. The algorithm RDPS was implemented in C++ and tested on an Apple machine with an Intel i7 processor at 2.2 GHz. The reconstruction of a signal of size 2 · 10³ took 68 ms. One of the main features of DPS is that its running time is independent of the signal-to-noise ratio.
In the first example, we apply the RDPS algorithm to determine the steps in the trajectories of molecular motors, which allows the study of the motion of individual enzymes. The initial signal is a synthetic signal from [25], simulated based on experimental data of RNA polymerase II using published rates [11]. Each point in the data is a noisy measurement of the molecular motor position, in base pairs (bp), at a given time t in seconds. The simulated data was used to assess the accuracy of the Energy-based step-finder (EBS) presented in [25]. EBS applies a two-step reconstruction process to detect the steps in high-bandwidth signals. The first step receives the full noisy data and attenuates the noise by solving a total-variation regularized least-squares problem [10]. The proposed solver for this step has linear complexity, but the resulting signal x* contains several false discontinuities. In the second step, EBS filters out those false discontinuities by compressing the signal x* into tuples of amplitudes and lengths. Then, EBS applies a combinatorial clustering algorithm based on graph cuts to connect the tuples between false discontinuities and assign steps only to the true ones. Figure 5 outlines the reconstruction of the simulated data by EBS and by our algorithm RDPS. As can be seen from the two reconstructions, RDPS reports a single false discontinuity, at t ≈ 1.7, whereas EBS detects several false steps. We also computed the mean square error (MSE) of each reconstruction: RDPS has a lower error (MSE = 0.012) than EBS (MSE = 0.091).
In the second example, we compare DPS to a recent algorithm taking a different modeling approach [13]. The presented algorithm formulates the reconstruction problem in a variational framework, which leads to the minimization of the well-known l2-Potts functional:

(33) P_λ(x) = λ ||∇x||_0 + ||y − x||²

Figure 5. Restoration of synthetic simulated data of the movement of a molecular motor. The initial data contains 10^4 data points regularly distributed in t ∈ [0, 2s]. Left (5a): DPS exactly recovers all parts except the hard jump at t ≈ 1.7. Right (5b): the signal restored by the EBS algorithm, with several missed jumps.

We use the classical example illustrated in [13] and published in (pottslab). The signal in Figure 6a has 256 data points x_i ∈ [0, 1] and is corrupted by white Gaussian noise with standard deviation σ = 0.2. Both reconstructions detect all the discontinuities, with comparable MSE errors. Indeed, DPS reports an error of 5.36 × 10⁻⁴ (Figure 6b), compared to 5.28 × 10⁻⁴ for the l2-Potts solver (Figure 6c).
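For reference, the l2-Potts functional (33) is straightforward to evaluate, with ||∇x||_0 counted here as the number of nonzero first differences:

```python
import numpy as np

def potts_energy(x, y, lam):
    # Eq. (33): lam * ||grad x||_0 + ||y - x||^2, with the l0 term
    # counted as the number of nonzero first differences.
    jumps = int(np.count_nonzero(np.diff(x)))
    return lam * jumps + np.sum((y - x) ** 2)
```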

4.2. Benchmarking results. We study empirically the MSE of the DPS approximate solution as a function of the hyper-parameters. We consider a case-study signal with 4 discontinuities and a minimal jump h_m = 1. For each choice of λ ∈ [0.1, 50] and h ∈ [0.01, 1.2], we compute the MSE averaged over 10 noise realizations. The results are depicted in Figure 7. We can distinguish three regions in the plot: the orange one represents the overfitted solutions, with a set of false discontinuities due either to low values of h or to high values of λ. The yellow region corresponds to suitable values of the hyper-parameters, since the model detects all the true discontinuities. Finally, the red region matches the underfitted solutions, where the model fails to detect the true discontinuities due to high values of the sensitivity h, typically h > h_m.
The DPS algorithm presents many features, such as its robustness to noise and the independence of its running time from the noise standard deviation σ. This is not the case for other algorithms, which either suffer in restoration quality, since they cannot differentiate the plateaus from the noise, or take more time to converge, like the GNC algorithm, whose number of iterations is an increasing function of σ. Figure 8 shows the robustness of DPS and GNC for restoring highly corrupted signals. Both algorithms minimize the same energy (4) from the weak string model [1].

Figure 6. (6a) represents the initial signal; (6b) shows the signal reconstructed by DPS, with no false discontinuities but with some errors in the height of some plateaus; (6c) plots the optimal solution of the l2-Potts solver, which also detects all the discontinuities.

We also compared DPS and RDPS on a set of simulated signals with different sizes and numbers of discontinuities. The results of the experiments, presented in Figures 9, 10a and 10b, confirm the important reduction achieved by RDPS.
Conclusion. In this paper, we presented a recursive and accelerated implementation of the DPS algorithm for the MRF reconstruction of PWC signals corrupted by Gaussian noise. The recursive approach is made possible by the conditional independence created by the confirmation of a discontinuity. This allows a rapid reduction of the signal size, the main factor slowing DPS. We also presented an accelerated Cholesky solver, specially designed for the family of matrices solved by DPS. The accelerated solver computes a single factorization and deduces all the subsequent factorizations from it. Numerical results confirmed an important speed-up by the new implementation; this speed-up is more pronounced with a higher number of discontinuities. One direction for future research is to reduce the workload of the first level by using an unsupervised classification algorithm. The classifier could be applied as a preprocessing step to discard a part of the hypercube; the sweep strategy would then only be applied to the remaining parts.

Figure 9. The figure plots the difference T_b − T_r, in seconds, between the classical implementation time T_b and the recursive implementation time T_r. The difference varies with two factors: the signal size n and the number of discontinuities k. The figure shows that the reduction is important, especially with a higher number of discontinuities.