The whole mesh deformation model: a fast image segmentation method suitable for effective parallelization
© Lenkiewicz et al.; licensee Springer, 2013
Received: 8 January 2013
Accepted: 27 February 2013
Published: 19 March 2013
In this article, we propose a novel image segmentation method called the whole mesh deformation (WMD) model, which aims at addressing the problems of modern medical imaging. Such problems have arisen from the combination of several factors: (1) significant growth of medical image volume sizes due to the increasing capabilities of medical acquisition devices; (2) the desire to increase the complexity of image processing algorithms in order to explore new functionality; (3) a change in processor development, turning towards multiple processing units instead of growing bus speeds and the number of operations per second of a single processing unit. Our solution is based on the concept of deformable models and is characterized by a very effective and precise segmentation capability. The proposed WMD model uses a volumetric mesh instead of a contour or a surface to represent the segmented shapes of interest, which allows exploiting more information in the image and obtaining results in shorter times, independently of image contents. The model also offers a good ability for topology changes and allows effective parallelization of the workflow, which makes it a very good choice for large datasets. We present a precise model description, followed by experiments on artificial images and real medical data.
Keywords: Segmentation; Parallelization; Deformable models; Medical imaging
The image segmentation procedure is one of the most important steps in the process of image analysis. It refers to the task of partitioning a given image into multiple regions and is typically used to locate and mark objects and boundaries in input scenes. After segmentation, the image represents a set of data far more suitable for further algorithmic processing and decision making. Image segmentation algorithms form a very broad field and have received a significant amount of research interest.
A very interesting family of image segmentation algorithms, which has been gaining attention for many years, is that of deformable models. They are based on the concept of placing a geometrical object in the scene of interest and deforming it until it assumes the shape of the objects of interest. This process is usually guided by several forces, which originate in mathematical functions, features of the input images, and other constraints of the deformation process, such as object curvature or continuity. Highly desirable features of deformable models include their great capability for customization and specialization for different tasks, as well as their extensibility with various approaches for incorporating prior knowledge. This set of characteristics makes deformable models a very efficient approach, capable of delivering results in competitive times and with very good segmentation quality, robust to noisy and incomplete data.
Numerous examples of deformable model usage can be found, from the early years of image processing to recent research efforts. The first work on the subject was presented by Kass et al. These authors described a method called Snakes, which proposed placing a single contour in the scene of interest and then subjecting it to deformations until it assumed the shape of the objects present in that scene. The deformations were constrained by external and internal energies, which described the features of the scene and of the contour itself, respectively. This idea generated considerable interest in the field of image segmentation in general and in relation to medicine. Numerous authors proposed improvements and changes to the original formulation, among the most noteworthy being the geometric active contour models (applied to magnetic resonance images of the brain) and active contours for segmenting objects without strictly defined edges (tested only on artificial images and pictures).
Solutions derived from the original method formulated by Kass et al. usually used an Euler differential equation to determine the solution. A different approach was presented by Amini et al. in their early publication, where the authors proposed a solution based on dynamic programming. The method allowed the introduction of a new type of constraint, called hard constraints, describing rules that could not be violated. It also guaranteed the numerical stability of the solution, thus addressing a serious disadvantage of the Kass method, in which the iterations forming the intermediate steps of execution showed a large degree of instability and had to be considered meaningless for the final solution. The drawback of the Amini solution was a large overhead in terms of memory requirements and execution times. The algorithmic approach was further examined by Williams and Shah, who proposed a greedy algorithm and also introduced some advances regarding the estimation of the energy function. Their solution delivered a significant improvement in execution time and memory needs, since by definition it considers only local information in each iteration. Consequently, this approach cannot guarantee that the resulting solution is the global minimum. However, those authors argued that tests of their method proved its ability to deliver results very close in quality to those of the dynamic programming version. Tao et al. extended a very popular formulation of the external force, called the gradient vector flow (GVF). Their formulation, called the fluid vector flow, improved some suboptimal features of the GVF, namely insufficient capture range and poor convergence for concavities. Xie experimented with gradient vector interaction in order to deal with the initialization dependency problem. An interesting approach was presented by De Santis and Iacoviello.
These authors experimented with a discrete formulation of the energy function instead of defining the algorithms in the continuum. Their experiments showed that with this approach it was possible to obtain more accurate segmentation with significantly lower computational costs.
The ability to change the topology of the shape has been a very significant component of deformable models, and numerous works have been presented with the aim of classifying and describing different aspects of topology changing [10, 11]. McInerney and Terzopoulos [12, 13] considered the inability of parametric deformable models to undergo topological transformations without additional mechanisms. They introduced a model called the T-snake, which was able to dynamically adapt its topology to that of the target object, flow around objects embedded within the target object, and/or automatically merge with other models interactively introduced by the user. A very important solution was proposed by Caselles, who presented a model based on a curve evolution approach instead of an energy minimization one. It allowed automatic changes in the topology when implemented using the level-set-based numerical algorithm and also naturally prevented self-intersection, which is costly to handle in parametric deformable models. Their solution has served as the base for numerous subsequent works [15–17]. However, the ability to change the topology is not always desired. Especially in the field of medical imaging, we often deal with situations in which the topology of the object of interest is known from anatomical knowledge and can be defined before algorithm execution. In order to provide such functionality, a topology-preserving formulation was proposed. It achieved this goal, however, by imposing a hard constraint on the number of connected components, which had to be known before the segmentation and could not be modified during it. This was too restrictive for some applications, so a more subtle method was proposed. Its authors formulated their solution in a way that allows the components of the segmented shape to merge, split, or vanish without changing the genus of the initial deformable model.
Thanks to these attractive features, one of the fields in which deformable models perform very well, and are thus a very popular choice, is medical image analysis. Digital image processing has been successfully applied to this area for more than three decades. Its numerous benefits include, in particular, improved interpretation of examined data, full or nearly full automation of tasks normally performed by a physician, better precision and accuracy of obtained results, and the possibility of exploring new imaging modalities, leading to new anatomical or functional insights.
Despite the large amount of work carried out in this area, deformable models still suffer from a number of drawbacks. Those that have received the most attention are:
sensitivity to the initial position and shape of the model—without a proper initialization the model might get trapped in local minima and thus not reach the objects of interest or not detect some of their features correctly;
sensitivity to noise in the input images and to flawed input data;
problematic topology changing—whenever the scene of interest includes more than one object or when the objects present in the scene contain discontinuities, deformable models need to change the topology of their shape. This is not straightforward in the parametric formulation of the method and requires specific algorithms;
the need for user supervision over the process.
Some of the above drawbacks have been successfully addressed in an interesting work presented by Barreira and Penedo and further described elsewhere. They formulated the topological active volumes (TAV) model, introduced as a general model for automatic segmentation of 3D scenes. The TAV model differed from the first methods based on deformable models, in which the segmentation was performed using solely boundary information and the result included only a contour or surface describing the shape of the objects of interest. In the case of TAV, the deformed shape is represented by a volumetric mesh, with some nodes responsible for describing the boundaries of objects and others for modeling their interior structure. The mesh is initialized over the entire image and then converges towards the objects present in the scene. Thanks to the mesh structure, the TAV model is able to describe segmented scenes with more detail and more resemblance to the real-world objects. It also shows very good potential for topology changing and solves the issue of initialization, thanks to the presence of nodes across the entire image. However, the initialization of the mesh is done in a way that can be compared to initializing any other active contour method over the entire input image. Although this ensures that the entire area of interest is covered, it also introduces disadvantages, such as starting the segmentation process from the most distant location possible. This increases the overall segmentation time and also raises the chances of making an error during segmentation, because more irrelevant objects and noise are encountered during the process.
Modern medical imaging is focused on high-resolution image data, which delivers increasingly more information to medical practitioners, leading in turn to improved detection rates and increased precision of computer-aided surgery and planning. However, processing such large sets of data presents new challenges, partly as a result of current advances in the field of microprocessors. We are no longer witnessing significant growth in the capabilities of a single processing unit, but rather a trend towards multi-unit processing. Numerous attempts have been made to parallelize the workflow of medical image processing using computer clusters [22–25].
In this article, we present our innovative model for 3D image segmentation, called the whole mesh deformation (WMD) model. It presents a set of very desirable solutions that successfully address the above-mentioned disadvantages: it completely eliminates any reliance on the initialization of the process, allows efficient topology changes, and shows low dependence on user interaction. Compared to the TAV solution, it also offers a number of significant advantages, namely much better computational efficiency, high suitability for effective parallelization, and a much better solution to the initialization problem. Because of a flexible and parameterized implementation of the energy function, the WMD model can also easily be extended with further possibilities, such as the ability to incorporate prior knowledge using, e.g., statistical models. The proposed method is designed to be highly suitable for modern applications of image processing, namely the treatment of large datasets composed of 3D, high-resolution images while taking advantage of multi-processor execution environments. A preliminary version of this model has been presented previously. In this article, we describe our model in more detail, propose a new mechanism for topology changes and a method for workload parallelization, and present more experiments using real images as input.
The remainder of this article is organized as follows: in Section 2, we describe our model for image segmentation, the WMD model, and explain how it can deal with the most significant problems encountered during 3D image segmentation tasks. In Section 3, we describe the numerical parameters used for configuring the model and explain the high robustness of the method when non-optimal values are selected. In Sections 2.5 and 4, we describe the process of shape optimization, along with the condition for segmentation finalization and the recognition of irrelevant parts of the images. In Section 5, we describe a very important part of segmentation techniques based on deformable models, namely the topology changes scheme. In Section 6, we describe the parallelization of the model, namely how efficiently it shares the workload when executed in a multi-processing-unit environment. In Section 7, we present our experiments with the WMD model, divided into the following parts: comparison of the segmentation time with the TAV method using a set of artificial images, comparison of the segmentation time using different levels of precision, performance with real medical images, and the performance gain when executed in a parallel environment. Finally, in Section 8, we present our conclusions.
2 Description of the WMD approach
2.1 General description of the ideas
The number of nodes that segment the boundary of the object is usually significantly lower than the number of nodes that model their interior structure. This might not be the case in the exemplary image given in Figure 1, but in real-world scenarios the density of the mesh is significantly higher, resulting in different proportions between the external and internal nodes. Furthermore, because of the mesh structure, every increase in the number of external nodes results in an even bigger increase in the number of internal nodes. As a result, the computational power is not used efficiently, because distributing the internal nodes inside the objects of interest is a trivial task and the greatest precision is required for boundary detection.
The segmentation process is not complete until the external nodes place themselves on the borders of the objects of interest. This is a very inefficient approach, because the distance that the external nodes need to travel during the segmentation process is relatively large. The mesh is always initialized to cover the entire image (or volume), with the external nodes placed on the edges of the mesh. This puts them in the most distant location possible from the objects of interest, which in turn results in longer segmentation times.
The shapes present in the scene of interest are approached from the image borders by a single layer of external nodes. This makes the segmentation time highly dependent on the contents of the image, because the distance that the nodes need to travel can vary between images or even between different parts of a single image. This makes the segmentation time very difficult to predict or control, which is a highly undesirable characteristic if we want to implement the algorithm in parallel or segment a three-dimensional volume composed of a number of slice images. Without the possibility of predicting the segmentation time for each part of the image, it is very hard or impossible to guarantee an even distribution of the workload, and the overall segmentation time in these cases will always be longer than necessary.
In order to take advantage of the great potential of the TAV method while coping with the above-described drawbacks, we have proposed the WMD model, which uses a different organization of the mesh to process the scene of interest. An exemplary segmentation using the ideas of our model is presented in the bottom row of Figure 1. The major difference is the lack of division between internal and external nodes of the mesh. All nodes are treated equally and their behavior depends only on the image features in their nearest neighborhood. However, no functionality of the segmentation method is lost. The model maintains the ability to describe segmented scenes with high detail and great resemblance to the real-world objects, offers very good potential for topology changing, and is independent of initialization. The overall idea is to simulate the behavior of both types of nodes from the TAV model, but without dividing them strictly into two groups. During the segmentation process, each node should discover by itself which behavior it should carry out and proceed in the desired way for the rest of the process. As a consequence of the above-described difference, the segmentation process is performed in a different manner: the nodes that have been initialized near the boundaries of the object begin to progress towards them, while the remaining nodes change their positions only enough to maintain a stable, regular structure of the mesh. At the end of the process, the irrelevant nodes are discarded.
Enforcing the correct behavior on all nodes of the mesh without dividing them strictly into two groups is achieved with an innovative formulation of the energy function of the mesh. The task of defining a single energy function that incorporates all necessary behaviors and performs a correct segmentation is naturally more complex than in the case of two energy functions, but it opens the possibility of addressing all of the above-described disadvantages and offers a significant performance improvement.
2.2 Formulation of the energy function
The energy function of the model, which guides the segmentation process, is defined so that it assumes its minimal values when the nodes of the mesh position themselves on the shapes of interest. In order to simulate the deforming behavior of the mesh, the TAV model needs two energy functions, one for each type of mesh node. Naturally, the WMD model needs only one energy function, because its mesh is composed of only one type of node.
The energy function consists of two groups of forces, internal and external, which are responsible, respectively, for preserving the structure of the objects and for applying the features of the input images. The division of the energy function into two forces has a purely organizational purpose and originates from the first publications on deformable models.
In order to calculate the energy for a given model state, the parameter domain [0,1]×[0,1]×[0,1] is discretized as a regular mesh defined by the internode spacings G_x, G_y, G_z, and each of the contributing forces described in Sections 2.3 and 2.4 is calculated using the appropriate algorithm.
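As an illustration, the discretization step can be sketched as below. This is a minimal sketch under our own naming: the node layout over the unit cube follows the text, while the function itself and its rounding of the node counts are assumptions.

```python
import itertools

def build_mesh(gx, gy, gz):
    """Discretize the [0,1]x[0,1]x[0,1] parameter domain into a regular
    mesh with internode spacings gx, gy, gz, returning the node
    coordinates as a flat list of (x, y, z) tuples."""
    nx = int(round(1.0 / gx)) + 1   # nodes along x: one more than cells
    ny = int(round(1.0 / gy)) + 1
    nz = int(round(1.0 / gz)) + 1
    return [(i * gx, j * gy, k * gz)
            for i, j, k in itertools.product(range(nx), range(ny), range(nz))]
```

For example, a spacing of 0.5 along each axis yields a 3×3×3 lattice of 27 nodes.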
2.3 Internal energy formulation
The parameter k stands for the total number of nodes in the mesh. Another parameter represents the average distance between neighboring nodes (the average edge length). The parameters m_n and μ_n are, respectively, the average distance and angle between the edges incident at a node n and are calculated by dividing the sum of the lengths (or angles) of the links in the neighborhood of the node by their number. Only those links that are not longer than the flexibility parameter allows are considered for this step. The flexibility parameter of the links is a feature used for the task of topological changes of the WMD model and is described in more detail in Section 5 of this article. The symbols α and β next to the sum expressions represent their weights and serve to balance their impact on the whole equation. As we can see from Equation (1), the continuity force attracts the nodes of the mesh to maintain equal distances between each other, whereas the curvature term attracts the nodes to keep a 90-degree angle. This serves to preserve the initial structure of the mesh based on regular, cubic elements.
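The behavior described above can be sketched as a per-node energy term. The quadratic penalties and the default weights below are our assumptions; the paper's Equation (1) sums over all nodes and may use a different functional form.

```python
import math

def dist(p, q):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

def angle(a, center, b):
    """Angle (radians) at `center` between edges to nodes a and b."""
    va = [x - c for x, c in zip(a, center)]
    vb = [x - c for x, c in zip(b, center)]
    dot = sum(x * y for x, y in zip(va, vb))
    na = math.sqrt(sum(x * x for x in va))
    nb = math.sqrt(sum(x * x for x in vb))
    return math.acos(max(-1.0, min(1.0, dot / (na * nb))))

def internal_energy(node, neighbors, avg_edge_len, alpha=0.4, beta=0.1):
    """Per-node internal energy sketch: the continuity term penalizes
    incident edges whose length deviates from the mesh-wide average edge
    length, and the curvature term penalizes angles between consecutive
    incident edges that deviate from 90 degrees (pi/2)."""
    continuity = sum((dist(node, nb) - avg_edge_len) ** 2 for nb in neighbors)
    curvature = sum((angle(p, node, q) - math.pi / 2) ** 2
                    for p, q in zip(neighbors, neighbors[1:]))
    return alpha * continuity + beta * curvature
```

A node whose incident edges have the average length and meet at right angles contributes zero energy, which is exactly the regular cubic configuration the text describes.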
2.4 External energy formulation
The symbol I(v) represents the intensity values taken directly from the input images. The symbol G(v) stands for the gradient vector flow (GVF) values, which are calculated using the GVF algorithm over the input images. Similarly, the symbol Edg(v) represents the values from the edge detector, which are computed using the Canny edge detector algorithm over the input images. As in Equation (1), the symbols γ, δ, and ε represent the weights of the particular components.
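A minimal sketch of such an external term is given below, assuming that lower energy should attract nodes towards bright voxels, strong GVF responses, and detected edges; the sign convention and the lookup structures are our assumptions, not the paper's exact Equation (2).

```python
def external_energy(v, intensity, gvf, edges, gamma=0.7, delta=0.5, eps=0.9):
    """Per-node external energy sketch at voxel v.  intensity, gvf and
    edges map a voxel to I(v), the GVF magnitude G(v) and the Canny
    response Edg(v); the negative sign makes energy minimization attract
    nodes towards bright voxels, strong GVF values and detected edges."""
    return -(gamma * gvf[v] + delta * intensity[v] + eps * edges[v])
```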
2.5 Proposed optimization of the shape of the mesh
where μ is a small value near zero. Equation (4) verifies the number of nodes that have changed their position in the last algorithm iteration. Whenever this number decreases to zero (or very near zero), the mesh is assumed to be in its stable position and the segmentation is finished.
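The stopping test can be sketched as follows, assuming Equation (4) compares the fraction of nodes that moved in the last iteration against the small threshold μ; the exact normalization is our assumption.

```python
def segmentation_finished(moved_count, total_nodes, mu=0.01):
    """Finish when the fraction of nodes that changed position in the
    last iteration drops to zero or very near zero (threshold mu)."""
    return moved_count / total_nodes <= mu
```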
2.6 Complexity of the WMD model
The proposed WMD model follows a greedy optimization pattern for all nodes of the mesh. The steps performed in that pattern are: (a) calculation of the energy (Equations 1 and 2) at all neighboring locations of a given node; (b) selection of the location with the lowest energy as the new location for the given node; (c) repetition of steps (a) and (b) for all nodes of the mesh, as long as the overall energy continues to decrease. From these steps we can see that the execution time of the segmentation task depends linearly on the number of nodes in the mesh and on the necessary number of algorithm iterations (calculation of a new position for each node of the mesh). Furthermore, the number of necessary iterations depends strictly on the distance that the nodes need to travel during the optimization step. In the WMD model, we have focused on decreasing this value as much as possible. As a result, the general complexity of the WMD and TAV models is similar, but the worst-case execution time of the WMD model is considerably lower than that of the TAV-based algorithm.
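Steps (a) through (c) can be sketched as a generic greedy loop. The energy callback stands in for the combination of Equations (1) and (2), and the candidate-move set is our assumption.

```python
def greedy_optimize(nodes, moves, energy, max_iter=1000):
    """Greedy shape optimization sketch: (a) evaluate the energy at the
    current and each candidate neighboring position of every node,
    (b) move the node to the lowest-energy position, (c) sweep all nodes
    repeatedly until no node moves, i.e. the overall energy stops
    decreasing.  `energy(i, candidate, nodes)` is caller-supplied."""
    for _ in range(max_iter):
        moved = 0
        for i, pos in enumerate(nodes):
            candidates = [pos] + [tuple(p + m for p, m in zip(pos, mv))
                                  for mv in moves]
            best = min(candidates, key=lambda c: energy(i, c, nodes))
            if best != pos:
                nodes[i] = best
                moved += 1
        if moved == 0:          # stable: no move lowered the energy
            break
    return nodes
```

Because ties prefer the current position, the loop terminates as soon as a full sweep produces no improvement.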
3 Proposed numerical parametrization of the energy function
From left to right, the symbols represent the weights of the curvature, continuity, GVF, image intensity, and edge detector terms. The justification for the above assumption is the following:
The continuity and curvature terms need to be weighted with values just high enough to maintain a high regularity of the mesh. This will not decrease the quality of segmentation as the majority of nodes will have to travel only small distances during the entire process.
The nodes that are initialized in the proximity of objects of interest need to be attracted to their borders, and they should be the only ones attracted—nodes that are initialized farther away should maintain their positions and the regularity of the mesh. That is why we give the GVF force a weight slightly higher than the continuity and curvature ones, but keep the range of the GVF at a low level.
A high weight of the image intensity values serves to impose the correct behavior on the nodes of the mesh that have been initialized inside the objects of interest. The weight needs to be higher than the one of GVF, as the nodes inside the objects should not be attracted by the GVF force. Instead, they should be distributed near their initial locations and maintain a stable structure, thanks to the continuity and curvature forces. This scenario assumes that the objects present in the images are bright and the background is dark. However, this can easily be modified whenever required by the scenario of application.
The weight of the edge detector energy parameter needs to be set to the highest value, as it will serve to fix permanently the position of the nodes that reach the edges of objects of interest.
During the process of establishing and confirming the correctness of (5), we have also noted that the proposed WMD model proves to be robust and not highly sensitive to changes in the values of the numerical parameters, as long as they follow (5). Each segmentation scenario has a certain choice of optimal parameter values, but in most cases, within a margin of 2.5% of the optimal values, it was not possible to notice any change in the delivered result. A margin of 3.5% introduced small changes but still delivered correct and precise results.
4 Recognition of the unwanted parts of the mesh
where σ represents the standard deviation of the image intensity values in the sub-mesh, N_E stands for the number of nodes finalized on the edges of objects of interest, n is the total number of nodes in the mesh, I(V_i) is the intensity value of the input image at the point occupied by the node V_i, and the remaining symbol is the average intensity value for the whole input image. The parts that we want to discard from the final result show very low values in these two equations, because most of the nodes in those sub-meshes reside over a background area. Therefore, a simple clustering procedure into two groups is enough to separate the relevant parts from the irrelevant ones.
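The separation step can be sketched as below. The two features follow the description above (the intensity standard deviation and the N_E/n edge ratio), but their exact normalization and the midpoint threshold that stands in for the two-group clustering are our assumptions.

```python
def submesh_features(node_positions, edge_node_count, image):
    """Two relevance measures for a sub-mesh: sigma, the standard
    deviation of the image intensities under its nodes, and the ratio
    of nodes finalized on object edges."""
    vals = [image[p] for p in node_positions]
    mean = sum(vals) / len(vals)
    sigma = (sum((x - mean) ** 2 for x in vals) / len(vals)) ** 0.5
    edge_ratio = edge_node_count / len(node_positions)
    return sigma, edge_ratio

def keep_relevant(submeshes, image):
    """Two-group separation: sub-meshes scoring clearly lower on both
    measures sit over background and are discarded.  A midpoint
    threshold on the summed features stands in for the clustering."""
    scores = [sum(submesh_features(nodes, edges, image))
              for nodes, edges in submeshes]
    cut = (max(scores) + min(scores)) / 2
    return [sm for sm, s in zip(submeshes, scores) if s >= cut]
```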
5 Proposed dynamic topology changes model
The proposed WMD model requires an efficient and precise scheme for topology changes, because some parts of the mesh can become irrelevant during the segmentation process and consequently need to be disconnected from the relevant part and discarded. To cope with these needs, we have developed a topology changes mechanism as a component of the WMD model. It allows the topology of the mesh to change during the segmentation process and discontinuities to be created in the mesh structure while the optimization step progresses. Whenever a need for mesh reconfiguration is encountered during shape optimization, the model reacts in the desired way and starts creating discontinuities in the mesh structure. On the other hand, the topology changes feature is constrained at all times by the mesh energy function, which guarantees that it will not cause the mesh to progress out of its stable state. This twofold dependency between the two processes ensures the correct behavior of the mesh and a stable progression towards its optimal shape. This contrasts with the TAV method, where topology changing is a separate step of the whole segmentation process [20, 21].
taking into consideration the size of discontinuities in the input images. This range has been established experimentally using 30 artificial images as input. Whenever a link fails to satisfy (8), it is marked as broken. We can safely assume that this process is carried out only when desired, because the mesh is defined to have a rigid and stable structure and, as a consequence, the majority of the links of the mesh extend their lengths only by small amounts during the whole process. When a link does break, we know that this is the result of a node being attracted to the border of an object present in the scene of interest, and thus the connection should indeed be broken.
This sort of chain reaction is much desired, as it triggers the movement of nodes and the breaking of connections in a small neighborhood. This in turn leads to a successful detection of the entire discontinuity in the mesh structure. The process stops at the right locations, namely where the nodes of the mesh encounter the edges of the objects of interest. This is guaranteed by setting the energy of the edge detector to a high value, which always stops the progression of nodes.
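The link-breaking test described above can be sketched as follows; the 1.5× flexibility factor is a placeholder for the experimentally established range in condition (8).

```python
def link_length(p, q):
    return sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5

def update_topology(links, positions, rest_length, flexibility=1.5):
    """Return the links that survive this iteration: a link between
    nodes a and b is marked broken (dropped) once its length exceeds
    the flexibility range, opening a discontinuity in the mesh at that
    location."""
    return [(a, b) for a, b in links
            if link_length(positions[a], positions[b]) <= flexibility * rest_length]
```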
6 Parallel implementation
The WMD model has been designed from the beginning with execution on parallel architectures in mind. We have implemented a parallel image segmentation algorithm using our model as the foundation. It is able to take advantage of several processing units by dividing the segmentation task into equal parts, which are carried out by those units simultaneously. In this scenario, the workload distribution is performed in the following way: all the nodes of the mesh are divided into as many groups as there are processing units available. Each of the units has full access to the input data, in order to allow full mesh deformation and to avoid the situation in which a given node is unable to move outside a specific region because its processing unit does not hold the necessary image data. Naturally, this approach imposes the overhead of propagating the input data to all the processing units, but our tests with a computer cluster have shown that this time is not very significant in terms of the whole segmentation time. In a scenario with a multi-core machine as the execution environment, this time would be even less considerable.
The processing units perform the mesh optimization simultaneously using the steps described in Section 2.5 and test Equation (4) in order to decide when to conclude the segmentation.
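The workload division described above amounts to splitting the node list into near-equal groups, one per processing unit, with each unit holding the full input volume. A sketch under our own naming:

```python
def partition(nodes, units):
    """Split the mesh nodes into `units` near-equal contiguous groups,
    one per processing unit; group sizes differ by at most one node."""
    base, extra = divmod(len(nodes), units)
    groups, start = [], 0
    for i in range(units):
        size = base + (1 if i < extra else 0)
        groups.append(nodes[start:start + size])
        start += size
    return groups
```

Because every node's per-iteration cost is roughly the same in the WMD model, equal-size groups give an even per-unit workload without any content-dependent balancing.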
It is important to note that in the case of TAV, the equal distribution of the workload would not be as straightforward as dividing the nodes into equally large groups. This capability of our model is a result of the approach we have taken, namely the lack of division between internal and external nodes and the implementation of the dynamic topology changes scheme.
Thanks to these features, as stated before, the time required to perform segmentation with our method depends very little on the contents of the input image. As a consequence, this time is very uniform for different parts of the mesh. In the case of TAV, on the other hand, the distance that the external nodes need to travel during the segmentation task depends highly on the contents of the image, which results in a large variation of the segmentation times for different parts of the mesh. This is the reason why our method is much more suitable for parallelization than TAV or any method based on the concept of contour evolution.
7 Experiments and results
7.1 Performance experiments
We have performed our experiments using two algorithms introduced in the previous sections. One of them was an implementation of the TAV model and the other was based on the WMD model. Both implementations include the ability to perform topological changes. The values of the numerical parameters of the energy function used for all the following experiments are: α=0.4, β=0.1, γ=0.7, δ=0.5, ε=0.9. As can be seen, they follow the general rule specified in (5). The algorithms were implemented in C#. The execution environment was a machine equipped with a 3-GHz processor and 2 GB of RAM. The following tests were prepared in order to exercise the most low-level features of the model, i.e., the energy formulation, the process of shape optimization, and the definition of the mesh, which are the unique features of the WMD model compared to the TAV solution and other similar approaches.
7.2 Comparison of segmentation time
This confirms our expectations described in the previous section. What is also very notable is that the WMD model delivered very similar segmentation times in all the scenarios, whereas the TAV model showed much higher variation in this value. This shows that the segmentation time and the number of iterations are nearly independent of the contents of the images. To explain this, let us consider the following.
The size of the mesh cells in those experiments was 16×16 pixels, which results in a maximum possible distance of 16 pixels that each node needs to travel during the segmentation process. This holds if we assume that the segmentation parameters are chosen correctly and that every given edge is approached only by the nodes initialized in its nearest neighborhood. As we can see, the maximum travel distance of a node can be estimated with high confidence without any information about the contents of the input images.
The size of the two input images was 512×512 pixels, which results in a mesh of 33×33 nodes for each image, giving a total of 1,089 nodes forming 1,024 mesh cells. The number of mesh cells is relatively high, which lets us assume with high probability that in at least one of the cells a node will have to travel the maximum possible distance. And since the whole segmentation time is determined by the node that has to move the largest distance of all, we can assume that this pessimistic scenario is the actual scenario for the majority of algorithm executions. Tracing the segmentation process iteration by iteration, we have confirmed that the above assumption is correct. Through the initial 16 iterations the nodes progress towards the object borders, after which the step of breaking unwanted connections between nodes is taken in order to perform topology changes and separate unwanted parts from the mesh. This part proves to be less predictable and presents some variation, which is mainly responsible for the small differences in the number of iterations required for different scenarios. What can be seen from these experiments is that the WMD model is by design not highly dependent on the contents of the segmented image and delivers identical results in much shorter times.
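The arithmetic above can be reproduced directly; this is a small Python check with a helper name of our own choosing:

```python
def mesh_dimensions(image_size, cell_size):
    # One node every cell_size pixels along each axis, plus the closing
    # node on the far edge of the image.
    nodes_per_axis = image_size // cell_size + 1
    n_nodes = nodes_per_axis ** 2          # total nodes in the mesh
    n_cells = (nodes_per_axis - 1) ** 2    # quadrilateral cells
    max_travel = cell_size                 # worst-case node displacement (px)
    return nodes_per_axis, n_nodes, n_cells, max_travel

# For a 512x512 image with 16x16-pixel cells this yields 33 nodes per
# axis, 1,089 nodes, 1,024 cells, and a 16-pixel maximum travel distance.
```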
7.3 Dependence between the grid density and segmentation speed
The execution times for all those scenarios are depicted in Figure 8 (right). The time required to carry out segmentation using the TAV method grows steeply with mesh complexity, whereas in the case of WMD we notice only a very slight growth of the execution time. As we can see, the smaller number of iterations is balanced by the increased number of nodes to process in the whole mesh. As a result, the execution times are maintained at a nearly constant level until the optimal value is reached (12×12 pixels). Beyond that point we see a slightly faster increase of the execution time, as the growing number of nodes in the mesh is no longer balanced by the smaller number of iterations.
This low dependence of the segmentation time on mesh density is a much desired feature of the WMD model, as it allows performing precise segmentations of very complex shapes with almost no penalty in execution time. With most known segmentation methods, including TAV, an increase in segmentation accuracy also imposes a significant growth of the segmentation time.
7.4 Performance with real-world medical images
It is important to notice the large difference in the contents of the consecutive slices in each of the sets. In every case of 3D volume segmentation, the whole execution time depends strongly on the slice that requires the largest amount of time to process. In our CT skull scan example, as well as in many similar real-world medical images, the last slices of the sets contain very little data, located far from the image borders. Performing the volume segmentation under the assumptions of TAV, the nodes of the mesh would need to travel through nearly the entire image to reach this data; the whole segmentation would therefore be very time consuming. We have examined the last slice of the CT volume and found that the longest distance from the edge of the image to the object of interest is 220 pixels. This means that the number of iterations required by the TAV method to complete the segmentation should be around 220. Our WMD method, on the other hand, is nearly independent of the contents of the image, so the segmentation times should be almost equal for all the slices and the total segmentation time should be significantly lower.
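A back-of-envelope comparison of the two models on such a slice can be written down as follows, assuming roughly one pixel of node travel per iteration (our simplification for illustration, not a measured result):

```python
def tav_iterations(border_to_object_px):
    # TAV: external nodes start at the image border and must cross the
    # empty region to reach the object, about one pixel per iteration.
    return border_to_object_px

def wmd_iterations(cell_size_px):
    # WMD: every node starts at most one cell away from the features it
    # will settle on, regardless of where the object lies in the image.
    return cell_size_px

# Last CT slice: the object lies 220 px from the border and the grid
# cells are 16 px, so TAV needs about 220 iterations to WMD's 16; the
# slowest slice dominates the total time of a TAV volume segmentation.
```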
7.5 Parallel implementation results
8 Conclusions and future work
In this article, we have presented a novel model for 3D image segmentation, called the WMD model. This method proves to be highly efficient, giving good segmentation results in short times. More precisely, our model shows the following desirable features:
The segmentation procedure is significantly faster than other similar approaches such as TAV, thanks to the allocation of more nodes to the boundary detection process and to their more favorable distribution.
The segmentation procedure is predictable and stable. This allows very efficient parallelization, which is very hard to obtain with other popular methods based on the deformable models framework.
The segmentation time is virtually independent of the density of the mesh. This is a very desirable and also unique feature of our method, as it allows obtaining high-quality results without the drawback of increased segmentation time.
The segmentation time also depends very little on the distribution of the objects in the input image. The only factor that defines the time in a noteworthy manner is the resolution of the input image, and even that has much less impact on the segmentation time than in the case of the TAV model.
The dependency on user interaction and on initialization of segmentation is significantly lower in the WMD model than in the TAV model.
Execution on a multi-processing unit environment is straightforward and offers nearly linear performance gain.
Tests on real-world images have shown that those features allow our solution to perform very well in medical applications. It has been able to detect and segment objects in CT, MRI, and X-ray scans of different parts of the human body. The WMD model can be implemented on any simple multi-core platform, which makes it very easy to apply in any medical office.
This study was partially supported by the Portuguese Fundação para a Ciência e a Tecnologia through the grant contract SFRH/BD/61265/2009, by MSFT-Microsoft Portugal through the HIPERCAMBIO project: High Performance Computing Applied to Medical Data and Bioinformatics, by Instituto de Telecomunicações, and by University of Beira Interior.
- Kass M, Witkin A, Terzopoulos D: Snakes: active contour models. Int. J. Comput. Vis. 1988, 1(4):321-331. doi:10.1007/BF00133570
- Caselles V: Geometric models for active contours. In International Conference on Image Processing. Piscataway, NJ, USA: IEEE Press; 1995:9-12.
- Chan TF, Vese LA: Active contours without edges. IEEE Trans. Image Process. 2001, 10(2):266-277. doi:10.1109/83.902291
- Amini A, Tehrani S, Weymouth T: Using dynamic programming for minimizing the energy of active contours in the presence of hard constraints. In International Conference on Computer Vision. Piscataway, NJ, USA: IEEE Press; 5–8 Dec. 1988.
- Williams D, Shah M: A fast algorithm for active contours and curvature estimation. CVGIP: Image Understand. 1992, 55:14-26. doi:10.1016/1049-9660(92)90003-L
- Tao W, Cheng I, Basu A: Fluid vector flow and applications in brain tumor segmentation. IEEE Trans. Biomed. Eng. 2009, 56(3):781-789.
- Xu C, Prince J: Snakes, shapes, and gradient vector flow. IEEE Trans. Image Process. 1998, 7(3):359-369. doi:10.1109/83.661186
- Xie X: Active contouring based on gradient vector interaction and constrained level set diffusion. IEEE Trans. Image Process. 2010, 19:154-164.
- Santis AD, Iacoviello D: A discrete level set approach to image segmentation. Signal, Image Video Process. 2007, 1(4):303-320. doi:10.1007/s11760-007-0032-5
- Delingette H, Montagnat J: Shape and topology constraints on parametric active contours. Comput. Vis. Image Understand. 2000, 83(2):140-171.
- McInerney T, Terzopoulos D: Deformable models in medical image analysis: a survey. Med. Image Anal. 1996, 1:91-108. doi:10.1016/S1361-8415(96)80007-7
- McInerney T, Terzopoulos D: Topologically adaptable snakes. In Proceedings of the International Conference on Computer Vision (ICCV'95), Cambridge, MA. Piscataway, NJ, USA: IEEE Press; 1995:840-845.
- McInerney T, Terzopoulos D: Topology adaptive deformable surfaces for medical image volume segmentation. IEEE Trans. Med. Imag. 1999, 18(10):840-850. doi:10.1109/42.811261
- Osher S, Sethian J: Fronts propagating with curvature dependent speed: algorithms based on Hamilton–Jacobi formulations. J. Comput. Phys. 1988, 79:12-49. doi:10.1016/0021-9991(88)90002-2
- Fritscher K, Schubert R: 3D image segmentation by using statistical deformation models and level sets. Int. J. Comput. Assist. Radiol. Surg. 2006, 1(3):123-135. doi:10.1007/s11548-006-0048-2
- Leventon M, Eric W, Grimson L, Faugeras O: Statistical shape influence in geodesic active contours. In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition. Piscataway, NJ, USA: IEEE Press; 2000:316-323.
- Malladi R, Sethian J, Vemuri B: Shape modeling with front propagation: a level set approach. IEEE Trans. Pattern Anal. Mach. Intell. 1995, 17(2):158-175. doi:10.1109/34.368173
- Han X, Xu C, Prince J: A topology preserving level set method for geometric deformable models. IEEE Trans. Pattern Anal. Mach. Intell. 2003, 25(6):755-768. doi:10.1109/TPAMI.2003.1201824
- Segonne F, Pacheco J, Fischl B: Active contours under topology control genus preserving level sets. In Computer Vision for Biomedical Image Applications. New York, NY, USA: Springer; 2005:135-145.
- Barreira N, Penedo MG: Topological active volumes. EURASIP J. Appl. Signal Process. 2005, 2005:1939-1947. doi:10.1155/ASP.2005.1939
- Barreira N, Penedo M, Alonso C, Marino C, Ansia F: Handling topological changes in the topological active volumes model. In Proceedings of the 5th International Conference on Image Analysis and Recognition. Berlin, Germany: Springer; 2008:121-131.
- Natarajan R, Krishnaswamy D: A case study in parallel scientific computing: the boundary element method on a distributed-memory multicomputer. In Proceedings of the IEEE/ACM Supercomputing Conference, San Diego, CA, USA. New York, NY, USA: ACM; 1995:33.
- Taniguchi RI, Makiyama Y, Tsuruta N, Yonemoto S, Arita D: Software platform for parallel image processing and computer vision. In SPIE Proceedings Parallel and Distributed Methods for Image Processing. Edited by Shi H, Coffield PC. Cardiff, UK: SPIE; 2001:2-10.
- Nicolescu C, Jonker P: A data and task parallel image processing environment. Parallel Comput. 2002, 28(7–8):945-965.
- Ataseven Y, Akalin-Acar Z, Acar C, Gencer N: Parallel implementation of the accelerated BEM approach for EMSI of the human brain. Med. Biol. Eng. Comput. 2008, 46(7):671-679. doi:10.1007/s11517-008-0316-0
- Lenkiewicz P, Pereira M, Freire M, Fernandes J: The whole mesh deformation model for 2D and 3D image segmentation. In International Conference on Image Processing. Piscataway, NJ, USA: IEEE Press; 2009:4045-4048.
- Canny J: A computational approach to edge detection. IEEE Trans. Pattern Anal. Mach. Intell. 1986, 8(6):679-698. http://portal.acm.org/citation.cfm?id=11275
This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.