Image segmentation is one of the most important steps in the process of image analysis. It refers to the task of partitioning a given image into multiple regions and is typically used to locate and mark objects and boundaries in input scenes. After segmentation, the image represents a set of data far more suitable for further algorithmic processing and decision making. Image segmentation is a very broad field, and its algorithms have received a significant amount of research interest.
A very interesting family of image segmentation algorithms, which has been attracting attention for many years, is that of deformable models. They are based on the concept of placing a geometrical object in the scene of interest and deforming it until it assumes the shape of the objects of interest. This process is usually guided by several forces, which originate in mathematical functions, in features of the input images, and in other constraints of the deformation process, such as object curvature or continuity. Particularly desirable features of deformable models include their high capability for customization and specialization to different tasks, as well as their extensibility with various approaches for incorporating prior knowledge. This set of characteristics makes deformable models a very efficient approach, capable of delivering results in competitive times and with very good segmentation quality, robust to noisy and incomplete data.
Numerous examples of the usage of deformable models can be found, from the early years of image processing up to recent research efforts. The first work on the subject was presented by Kass et al. [1]. These authors described a method called Snakes, which proposed placing a single contour in the scene of interest and then subjecting it to deformations until it assumed the shape of the objects present in that scene. The deformations were constrained by external and internal energies, which described the features of the scene and of the contour itself, respectively. This idea attracted a lot of interest in the field of image segmentation in general and in relation to medicine in particular. Numerous authors proposed improvements and changes to the original formulation, with the geometric active contour models [2] (applied to magnetic resonance images of the brain) and active contours for segmenting objects without strictly defined edges [3] (tested only on artificial images and pictures) among the most noteworthy ones.
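In the standard presentation of this formulation, the contour is a parametric curve $v(s) = (x(s), y(s))$, $s \in [0, 1]$, that minimizes an energy functional of the form

```latex
E_{\mathrm{snake}} \;=\; \int_{0}^{1}
  \underbrace{\tfrac{1}{2}\!\left( \alpha \,\lvert v'(s)\rvert^{2} + \beta \,\lvert v''(s)\rvert^{2} \right)}_{\text{internal energy}}
  \;+\;
  \underbrace{E_{\mathrm{ext}}\!\big(v(s)\big)}_{\text{external (image) energy}}
  \; ds ,
```

where the weights $\alpha$ and $\beta$ control the elasticity and rigidity of the contour, and $E_{\mathrm{ext}}$ is derived from image features such as intensity or gradient magnitude.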
Solutions derived from the original method formulated by Kass et al. [1] usually determined the solution using the Euler–Lagrange differential equations. A different approach was presented by Amini et al. [4] in their early publication, where the authors proposed a solution based on dynamic programming. The method allowed the introduction of a new type of constraints, called hard constraints, describing rules that could not be violated. It also guaranteed the numerical stability of the solution, thus addressing a serious disadvantage of the Kass method, in which the iterations forming the intermediate steps of execution showed a large level of instability and had to be considered meaningless for the final solution. The drawback of the Amini solution was a significant overhead in terms of memory requirements and execution times. The algorithmic approach was further examined by Williams and Shah [5], who proposed a greedy algorithm and also introduced some advances in the estimation of the energy function. Their solution delivered a significant improvement in execution time and memory needs, as by definition it considers only local information in each iteration. Therefore, this approach cannot guarantee that the resulting solution is the global minimum. However, the authors argued that tests of their method proved its ability to deliver results very close in quality to those of the dynamic programming version. Tao et al. [6] extended a very popular formulation of the external force, called the gradient vector flow (GVF) [7]. Their formulation, called the fluid vector flow, improved some suboptimal features of the GVF, namely its insufficient capture range and poor convergence into concavities. Xie [8] experimented with gradient vector interaction in order to deal with the initialization dependency problem. An interesting approach was presented by De Santis and Iacoviello [9].
These authors experimented with a discrete formulation of the energy function instead of defining the algorithms in the continuum. Their experiments showed that with this approach it was possible to obtain more accurate segmentation at significantly lower computational cost.
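To illustrate the greedy strategy of Williams and Shah, the sketch below shows one iteration in the style of their method: every contour point examines a small pixel neighbourhood and moves to the position that minimizes a local estimate combining continuity, curvature, and image energy. This is a minimal illustration, not the authors' implementation; the function name, parameters, and the exact finite-difference terms are our simplified choices.

```python
import numpy as np

def greedy_snake_step(points, image_energy, alpha=1.0, beta=1.0, gamma=1.0, win=1):
    """One greedy iteration: move each contour point (y, x) to the position in
    its (2*win+1)^2 neighbourhood that minimizes a local energy estimate."""
    n = len(points)
    # Average spacing between consecutive points (closed contour).
    avg_dist = np.mean(np.linalg.norm(np.diff(points, axis=0, append=points[:1]), axis=1))
    new_points = points.copy()
    for i in range(n):
        prev_pt, next_pt = new_points[i - 1], points[(i + 1) % n]
        best, best_e = points[i], np.inf
        for dy in range(-win, win + 1):
            for dx in range(-win, win + 1):
                cand = points[i] + np.array([dy, dx], dtype=points.dtype)
                y, x = int(cand[0]), int(cand[1])
                if not (0 <= y < image_energy.shape[0] and 0 <= x < image_energy.shape[1]):
                    continue
                # Continuity: keep the spacing close to the average distance.
                e_cont = abs(avg_dist - np.linalg.norm(cand - prev_pt))
                # Curvature: finite-difference estimate |v_{i-1} - 2 v_i + v_{i+1}|^2.
                e_curv = np.sum((prev_pt - 2 * cand + next_pt) ** 2)
                e = alpha * e_cont + beta * e_curv + gamma * image_energy[y, x]
                if e < best_e:
                    best, best_e = cand, e
        new_points[i] = best
    return new_points
```

Because each point considers only its local neighbourhood, one iteration costs O(n · win²) energy evaluations, which is the source of the efficiency gain over dynamic programming; the trade-off, as noted above, is that only a local minimum can be guaranteed.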
The ability to change the topology of the shape has been a very significant component of deformable models, and numerous works have been presented with the aim of classifying and describing different aspects of topology changes [10, 11]. McInerney and Terzopoulos [12, 13] considered the incapability of parametric deformable models to perform topological transformations without additional mechanisms. They introduced a model called the T-snake, which was able to dynamically adapt its topology to that of the target object, flow around objects embedded within the target object, and/or automatically merge with other models interactively introduced by the user. A very important solution was proposed by Caselles [2], who presented a model based on a curve evolution approach instead of an energy minimization one. It allowed automatic changes in the topology when implemented using a level-set-based numerical algorithm [14] and also naturally prevented self-intersection, whose prevention is a costly procedure in parametric deformable models. This solution has served as the basis for numerous subsequent works [15–17]. However, the ability to change the topology is not always desired. Especially in the field of medical imaging, we often deal with a situation where the topology of the object of interest is known from anatomical knowledge and can be defined before the algorithm is executed. In order to provide such functionality, a topology-preserving formulation was proposed [18]. It achieved this goal, however, by imposing a hard constraint on the number of connected components, which had to be known before the segmentation and could not be modified during it. This was too restrictive for some applications, so a more subtle method was proposed in [19]. Its authors formulated their solution in a way that allows the components of the segmented shape to merge, split, or vanish without changing the genus of the initial deformable model.
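The reason level-set implementations change topology "for free" is that the contour is represented implicitly as the zero set of a function φ, so two fronts that meet simply become one sublevel set, with no mesh surgery required. The toy numpy sketch below illustrates this with the standard upwind (Osher–Sethian) update for φ_t + F·|∇φ| = 0; it is our minimal demonstration under simplified assumptions (constant speed F, periodic borders via np.roll), not the algorithm of [2, 14].

```python
import numpy as np

def evolve(phi, F, dt=0.5, steps=1):
    """Upwind update for phi_t + F * |grad phi| = 0 with F > 0, which expands
    the region {phi < 0}. Borders are treated as periodic (np.roll)."""
    for _ in range(steps):
        dmx = phi - np.roll(phi, 1, axis=1)   # backward difference in x
        dpx = np.roll(phi, -1, axis=1) - phi  # forward difference in x
        dmy = phi - np.roll(phi, 1, axis=0)   # backward difference in y
        dpy = np.roll(phi, -1, axis=0) - phi  # forward difference in y
        grad = np.sqrt(np.maximum(dmx, 0) ** 2 + np.minimum(dpx, 0) ** 2
                       + np.maximum(dmy, 0) ** 2 + np.minimum(dpy, 0) ** 2)
        phi = phi - dt * F * grad
    return phi

def count_components(mask):
    """Count 4-connected components of a boolean mask (tiny flood fill)."""
    mask = mask.copy()
    n = 0
    for i in range(mask.shape[0]):
        for j in range(mask.shape[1]):
            if mask[i, j]:
                n += 1
                stack = [(i, j)]
                while stack:
                    y, x = stack.pop()
                    if 0 <= y < mask.shape[0] and 0 <= x < mask.shape[1] and mask[y, x]:
                        mask[y, x] = False
                        stack += [(y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)]
    return n

# Two disjoint disks, represented implicitly as the sublevel set {phi < 0}
# of a signed-distance-like function.
yy, xx = np.mgrid[0:40, 0:60]
d1 = np.hypot(yy - 20, xx - 20) - 8
d2 = np.hypot(yy - 20, xx - 40) - 8
phi = np.minimum(d1, d2)
```

Growing the two disks until they touch merges them into a single component without any explicit handling of the event, which is exactly the behaviour that parametric models need dedicated mechanisms (such as the T-snake reparameterization) to reproduce.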
Thanks to these attractive features, one of the fields in which deformable models perform very well, and are thus a very popular choice, is medical image analysis. Digital image processing has been successfully applied in this area for more than three decades. The numerous benefits that it offers include, in particular, improved interpretation of examined data, full or nearly full automation of tasks normally performed by a physician, better precision and accuracy of obtained results, and also the possibility of exploring new imaging modalities, leading to new anatomical or functional insights.
Despite the large amount of work carried out in this area, deformable models still suffer from a number of drawbacks. Those that have received the most attention are:

sensitivity to the initial position and shape of the model—without a proper initialization the model might get trapped in local minima and thus not reach the objects of interest or not detect some of their features correctly;

sensitivity to noise in the input images and to flawed input data;

problematic topology changes—whenever the scene of interest includes more than one object, or when the objects present in the scene contain discontinuities, deformable models need to change the topology of their shape. This is not straightforward in the parametric formulation of the method and requires specific algorithms;

the need for user supervision over the process.
Some of the above drawbacks have been successfully addressed in an interesting work presented by Barreira and Penedo [20] and further described in [21]. They formulated a model called topological active volumes (TAV), introduced as a general model for the automatic segmentation of 3D scenes. The TAV model differed from the first methods based on deformable models, in which the segmentation was performed using solely boundary information and the result included only a contour or a surface describing the shape of the objects of interest. In the case of TAV, the deformed shape was represented by a volumetric mesh, with some nodes responsible for describing the boundaries of objects and others for modeling their interior structure. The mesh was initialized over the entire image and then converged towards the objects present in the scene. Thanks to the mesh structure, the TAV model was able to describe segmented scenes with more detail and more resemblance to the real-world objects. It also showed very good potential for topology changing capabilities and solved the issue of initialization, thanks to the presence of nodes in the entire image. However, the initialization of the mesh was done in a way that can be compared to initializing any other active contour method over the entire input image. Although this ensures that the entire area of interest is covered, it also introduces disadvantages, such as starting the segmentation process from the most distant location possible. This increases the overall segmentation time and also raises the chances of making an error during segmentation, because more irrelevant objects and noise are encountered during the process.
Modern medical imaging is focused on high-resolution image data, which delivers increasingly more information to medical practitioners and in turn leads to improved detection rates and increased precision of computer-aided surgeries and planning. However, processing such large sets of data presents new challenges, partly as a result of current trends in the field of microprocessors. We are no longer witnessing significant growth in the capabilities of a single processing unit, but rather a trend towards multi-unit processing. Numerous attempts have been made to parallelize the workflow of medical image processing using computer clusters [22–25].
In this article, we present our innovative model for 3D image segmentation, called the whole mesh deformation (WMD) model. It offers a set of very desirable properties that successfully address the above-mentioned disadvantages: it completely eliminates any reliance on the initialization of the process, allows efficient topology changes, and shows low dependence on user interaction. Compared to the TAV solution, it also offers a number of significant advantages, namely much better computational efficiency, high suitability for effective parallelization, and a much better solution to the initialization problem. Because of a flexible and parameterized implementation of the energy function, the WMD model can also easily be extended with further capabilities, such as the ability to incorporate prior knowledge using, e.g., statistical models. The proposed method is designed to be highly suitable for modern applications of image processing, namely the treatment of large datasets composed of 3D, high-resolution images while taking advantage of multiprocessor execution environments. A preliminary version of this model was presented in [26]. In this article, we describe our model in more detail, propose a new mechanism for topology changes and a method for workload parallelization, and present more experiments using real images as input.
The remainder of this article is organized as follows: in Section 2, we describe our model for image segmentation, the WMD model, and explain how it deals with the most significant problems that can be encountered during 3D image segmentation tasks. In Section 3, we describe the numerical parameters that are used to configure the model and explain the high robustness of the method to the selection of non-optimal values. In Sections 2.5 and 4, we describe the process of shape optimization along with the condition for segmentation finalization and the recognition of irrelevant parts of the images. In Section 5, we describe a very important part of segmentation techniques based on deformable models, namely the topology change scheme. In Section 6, we describe the parallelization of the model, namely how efficiently it shares the workload when executed in a multi-processing-unit environment. In Section 7, we present our experiments with the WMD model, divided into the following parts: comparison of the segmentation time with the TAV method using a set of artificial images, comparison of the segmentation time using different levels of precision, performance with real medical images, and the performance gain when executed in a parallel environment. Finally, in Section 8, we present our conclusions.