Review of satellite image segmentation for an optimal fusion system based on the edge and region approaches

Ahmed Rekik, Mourad Zribi, Ahmed Ben Hamida, Mohammed Benjelloun
1 Université du Littoral Côte d'Opale, Laboratoire d'Analyse des Systèmes du Littoral (LASL-EA 2600), 50 rue Ferdinand Buisson, B.P. 699, 62228 Calais Cedex, France
2 Ecole Nationale d'Ingénieurs de Sfax, Unité de Recherche: Technologie de l'Information et Electronique Médicale 'TIEM', B.P.W, 3038 Sfax, Tunisie
Summary
Image analysis usually refers to the processing of images by a computer in order to find the objects within the image. Image segmentation is one of the most critical tasks in automatic image analysis. It consists of subdividing an image into its constituent parts and extracting them. A great variety of segmentation algorithms have been developed in the last few decades, but no single algorithm can be applied to all images. Some of them are not suitable for particular situations, especially for satellite images, which often contain differently textured regions or a varying background and are often subject to illumination changes or environmental effects. Searching for more precision, we propose in this work a fusion of the edge approach with the region approach for satellite image segmentation. This paper therefore presents an overview of image fusion techniques applied to satellite image segmentation. The aim is to exploit the advantages of the two approaches in order to obtain closed contours and homogeneous areas for optimal image segmentation.

Key words: Satellite images, segmentation approaches, edge approaches, region approaches, satellite image fusion.

1. Introduction

The study and extraction of the different elements that compose an image is a fundamental task in the image processing and analysis chain. In fact, the need to replace the human observer by a computer for image analysis was at the origin of the development of image processing: computing allows off-line and real-time analysis more powerful than what a human being can perform [1]. Moreover, the importance of vision to human beings led to the introduction of image processing into many fields and to the growth of research topics in image processing. Among these fields one can cite remote sensing, which has seen in recent years a growing interest,
particularly in satellite image segmentation and interpretation techniques. These techniques are often mandatory in artificial vision systems, and they greatly affect the quality of the subsequent steps of the analysis. The techniques employed generally remain dependent on the specificity of the image to process (richness in textures with different orientations and/or scales, blurred transitions between regions, occluded contours), on the types of visual cues to extract (edges, regions that are uniform in grey level, textures, shapes), and on the nature of the problem to be solved downstream of the segmentation (3D reconstruction, pattern recognition, image understanding, automated object tracking). The problem of segmentation thus remains open.

In fact, image segmentation is one of the primary steps in image analysis for object identification. The main aim is to recognize homogeneous regions within an image as distinct and as belonging to different objects; the segmentation stage is not concerned with the identity of the objects, which can be labeled later [2]. The segmentation process can be based on finding the maximum homogeneity in grey levels within the identified regions.

In this paper, we propose an image segmentation system adapted to satellite images (textured region extraction). The architecture of our proposed system combines two concepts: (i) the evaluation of the two approaches (edge, region) and the choice of the optimal method in each approach; (ii) the integration of the information resulting from these optimal complementary segmentation methods, edge detection and region extraction. This allows us to exploit the advantages of each.

2. Satellite Images

In today's world of advanced technology, where most satellite images are recorded in digital format,
virtually all image interpretation and analysis involve some element of digital processing. Digital image processing may involve numerous procedures, including formatting and correcting the data, digital enhancement to facilitate better visual interpretation, or even automated classification of targets and features done entirely by computer [3]. The development and application of various remote sensing platforms result in the production of huge amounts of satellite image data, so there is an increasing need for effective querying and browsing in these image databases. In order to take advantage of and make good use of satellite image data, we must be able to extract meaningful information from the imagery. Indeed, interpretation and analysis of satellite imagery involve the identification and/or measurement of various targets in an image in order to extract useful information about them [4]. Targets in satellite images may be any feature or object which can be observed in an image, and have the following characteristics:
· A target may be a point, line, or area feature. This means it can have any form: a bus in a parking lot or a plane on a runway, a bridge or roadway, a large expanse of water or a field.
· A target must be distinguishable; it must contrast with the other features around it in the image.

2.1. Processing Functions:
The satellite image processing techniques available in image analysis systems can be classified into the following categories:
· Pre-processing
· Image transformation
· Image segmentation and analysis

Pre-processing techniques involve different operations applicable to satellite images, among which one can cite: analysis and extraction of information; correction of the data for sensor irregularities and unwanted sensor or atmospheric noise; and data conversion, so that the data accurately represent the reflected or emitted radiation measured by the sensor; as well as correction of geometric distortions due to variations in the sensor-Earth geometry. Image segmentation and analysis operations are used to digitally identify and classify pixels in the data. Segmentation is usually performed on multi-channel data sets, and the process assigns each pixel in an image to a particular class or theme based on characteristics of the pixel brightness values. There is a variety of approaches to digital segmentation. We will describe the two generic approaches most used in the literature: edge and region segmentation.
3. Segmentation approaches
Segmentation can be considered the first step and a key issue in object recognition, scene understanding and image understanding. Applications range from industrial quality control to medicine, robot navigation, geophysical exploration, remote sensing, and military applications. In all these areas, the quality of the final result depends largely on the quality of the segmentation [5]. During the past years, many image segmentation techniques have been developed, and different classification schemes for these techniques have been proposed [6].
3.1. Thresholding segmentation:
This method consists in comparing the measure associated with each pixel to one or more thresholds in order to determine the class to which the pixel belongs. The attribute is generally the grey level, although color or a simple texture descriptor can also be used. The threshold may be applied globally across the image (static threshold) or locally, so that the threshold varies dynamically across the image. Several algorithms have been proposed for thresholding segmentation of satellite images, offering an automatic selection of an adequate threshold for different satellite image classes.
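As an illustration, a minimal sketch in Python/NumPy (our assumption; the paper prescribes no implementation) of applying a global (static) threshold once a value t has been selected by one of the algorithms below:

```python
import numpy as np

def apply_global_threshold(image: np.ndarray, t: int) -> np.ndarray:
    """Binarize a grayscale image with a single global threshold t:
    pixels brighter than t are mapped to 1 (object), the rest to 0."""
    return (image > t).astype(np.uint8)
```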
3.1.1 Proposed algorithms:
In the algorithms developed below, one uses the following partial sums, for $j = 0, \ldots, n$:

$$A_j = \sum_{i=0}^{j} h(i), \quad B_j = \sum_{i=0}^{j} i\,h(i), \quad C_j = \sum_{i=0}^{j} i^2 h(i), \quad D_j = \sum_{i=0}^{j} i^3 h(i) \qquad (1)$$

where h(i) is the image histogram.
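For illustration, a minimal sketch (in Python with NumPy, our assumption) of how these partial sums can be computed once from a grey-level histogram:

```python
import numpy as np

def partial_sums(hist: np.ndarray):
    """Cumulative moments A_j, B_j, C_j, D_j of a grey-level
    histogram h(i), as defined in Eq. (1)."""
    i = np.arange(hist.size)       # grey levels 0..n
    A = np.cumsum(hist)            # sum of h(i)
    B = np.cumsum(i * hist)        # sum of i*h(i)
    C = np.cumsum(i**2 * hist)     # sum of i^2*h(i)
    D = np.cumsum(i**3 * hist)     # sum of i^3*h(i)
    return A, B, C, D
```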
(a) Minimum algorithm:
This algorithm was developed by Prewitt and Mendelsohn [7]. It consists in choosing the threshold t so as to minimize h(t) between two maxima of the histogram.

(b) Inter-mode algorithm:
This algorithm is an alternative to the minimum algorithm [7]. It consists in choosing the threshold t as the average of the two grey levels corresponding to the two histogram maxima h(max1) and h(max2), relating to the object and its background, that is: t = (max1 + max2)/2.
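A hedged sketch of the inter-mode rule; finding the "two maxima" robustly requires smoothing the histogram until it is bimodal, a detail the text leaves open, so the mode search below is deliberately naive:

```python
import numpy as np

def intermode_threshold(hist: np.ndarray) -> int:
    """Inter-mode rule: average of the two histogram modes.
    Assumes the histogram is (already) bimodal."""
    peaks = [i for i in range(1, hist.size - 1)
             if hist[i] >= hist[i - 1] and hist[i] >= hist[i + 1]]
    # Keep the two most populated modes (assumed to be the object
    # and background peaks referred to in the text).
    max1, max2 = sorted(peaks, key=lambda i: hist[i], reverse=True)[:2]
    return (max1 + max2) // 2
```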
(c) Moment algorithm:
This algorithm, designed by Tsai [8], chooses t so that the binary image and its original have the same first three moments. Thus, the value of t is the one that brings the fraction $A_t/A_n$ closest to the value of $x_0$:

$$x_0 = \frac{1}{2} - \frac{B_n/A_n + x_2/2}{\sqrt{x_2^2 - 4x_1}}, \quad \text{where} \quad x_1 = \frac{B_n D_n - C_n^2}{A_n C_n - B_n^2}, \quad x_2 = \frac{B_n C_n - A_n D_n}{A_n C_n - B_n^2} \qquad (2)$$

(d) Entropy algorithm:
Entropy thresholding, established by Kapur et al. [9], consists in evaluating, for all $j = 0, \ldots, n$, the expression

$$E_j/A_j - \log(A_j) + (E_n - E_j)/(A_n - A_j) - \log(A_n - A_j) \qquad (3)$$

where $E_j = \sum_{i=0}^{j} h(i)\log h(i)$. The value of the threshold t is that of j for which this expression is minimal.
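A sketch of the entropy criterion (3), reusing the histogram notation above; the masking of empty bins and classes is our own implementation detail:

```python
import numpy as np

def entropy_threshold(hist: np.ndarray) -> int:
    """Kapur-style thresholding: choose j minimizing expression (3)."""
    h = hist.astype(float)
    A = np.cumsum(h)                    # A_j from Eq. (1)
    logh = np.zeros_like(h)
    np.log(h, out=logh, where=h > 0)    # log h(i), log 0 masked to 0
    E = np.cumsum(h * logh)             # E_j = sum of h(i) log h(i)
    An, En = A[-1], E[-1]
    best_j, best_val = 0, np.inf
    for j in range(h.size - 1):
        if A[j] == 0 or A[j] == An:     # both classes must be non-empty
            continue
        val = (E[j] / A[j] - np.log(A[j])
               + (En - E[j]) / (An - A[j]) - np.log(An - A[j]))
        if val < best_val:
            best_j, best_val = j, val
    return best_j
```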
[Figure panels for Figs. 1 and 2: a SPOT 5 satellite image representing the city of Athens and a Formosat-2 satellite image representing the city of Rio de Janeiro, each with the thresholding results of the algorithms described in this section.]

Fig.1. Thresholding segmentation results for the minimum and the inter-mode algorithms.

Fig.2. Thresholding segmentation results for the inter-means, the moment, the triangle and the entropy algorithms.
(e) Inter-means algorithm:
The inter-means algorithm developed by Otsu [10] maximizes the expression

$$X_j = A_j (A_n - A_j)(\mu_j - \nu_j)^2, \quad \text{for } j = 0, \ldots, n-1 \qquad (4)$$

where $\mu_j = B_j/A_j$ and $\nu_j = (B_n - B_j)/(A_n - A_j)$ are the mean grey levels of the two classes. The threshold value t is that of j which maximizes $X_j$. This mathematical formulation has the effect of positioning the threshold t in the middle of the averages of the two image classes.

(f) Triangle algorithm:
A line is constructed between the maximum of the histogram, at brightness imax, and the lowest value imin in the image. The distance d between the line and the histogram h[i] is computed for all values of i from i = imin to i = imax. The brightness value io where the distance between h[io] and the line is maximal is the threshold value, that is, t = io. This technique is particularly effective when the object pixels produce a weak peak in the histogram.

For the evaluation of these different algorithms, we used the segmentation quality criterion (psycho-visual criterion) of [11]; the results given by the inter-means and inter-mode algorithms are the most satisfying.
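A sketch of the inter-means criterion (4) described above, again built on the partial sums of Eq. (1); the exhaustive scan over j is the straightforward implementation:

```python
import numpy as np

def intermeans_threshold(hist: np.ndarray) -> int:
    """Otsu / inter-means: choose j maximizing X_j of expression (4)."""
    h = hist.astype(float)
    i = np.arange(h.size)
    A = np.cumsum(h)          # A_j: number of pixels at or below level j
    B = np.cumsum(i * h)      # B_j: first moment up to level j
    An, Bn = A[-1], B[-1]
    best_j, best_x = 0, -1.0
    for j in range(h.size - 1):
        if A[j] == 0 or A[j] == An:
            continue
        mu = B[j] / A[j]                  # mean grey level, low class
        nu = (Bn - B[j]) / (An - A[j])    # mean grey level, high class
        x = A[j] * (An - A[j]) * (mu - nu) ** 2
        if x > best_x:
            best_j, best_x = j, x
    return best_j
```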
3.2. Edge segmentation:
Edge segmentation is a particularly simple and effective means of increasing geometric detail in an image. It is performed by first detecting edges and then either adding these back into the original image to increase contrast in the vicinity of an edge, or highlighting edges using saturated (black, white or color) overlays on borders. Indeed, edges in images are areas with strong intensity contrasts: a jump in intensity from one pixel to the next.
Edge detecting an image significantly reduces the amount of data and filters out useless information, while preserving the important structural properties of the image. The fundamental importance of line and edge information in satellite images has long been recognized [12]. Indeed, local features such as lines and edges can describe the structure of a scene relatively independently of the illumination. Image segmentation techniques based on edges have long been in use. A variety of edge detection methods have been suggested; they may be grouped into two categories: derivative methods [13] and optimal filtering methods.
3.2.1 Derivative methods:
In computer vision, edge detection is traditionally implemented by convolving the signal with some form of linear filter, usually a filter that approximates a first or second derivative operator. An odd symmetric filter approximates a first derivative, and peaks in the convolution output correspond to edges (luminance discontinuities) in the image. An even symmetric filter approximates a second derivative operator: zero-crossings in its convolution output correspond to edges, while maxima correspond to tangent discontinuities, often referred to as bars or lines.

a) First order derivative methods:
Most edge detection methods work on the assumption that an edge occurs where there is a discontinuity in the intensity function or a very steep intensity gradient in the image. Under this assumption, if we take the derivative of the intensity values across the image and find the points where the derivative is a maximum, we will have marked our edges. We present below three operators belonging to the first order derivative methods.

- Sobel operator:
The Sobel operator performs a 2-D spatial gradient measurement on an image and so emphasizes regions of high spatial frequency that correspond to edges. Typically it is used to find the approximate absolute gradient magnitude at each point in an input grayscale image. In theory at least, the operator consists of a pair of 3×3 convolution kernels, one kernel being the other rotated by 90°. These kernels are designed to respond maximally to edges running vertically and horizontally relative to the pixel grid, one kernel for each of the two perpendicular orientations. The kernels can be applied separately to the input image, to produce separate measurements of the gradient component in each orientation (see the sketch below).

Fig.3. Segmentation results using Sobel operator.
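A minimal sketch of this gradient measurement (using SciPy's ndimage.convolve for the 2-D convolutions; the kernel values are the standard Sobel masks):

```python
import numpy as np
from scipy import ndimage

# Pair of 3x3 Sobel kernels; the second is the first rotated by 90 degrees.
SOBEL_X = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=float)
SOBEL_Y = SOBEL_X.T

def sobel_magnitude(image: np.ndarray) -> np.ndarray:
    """Approximate absolute gradient magnitude at each pixel."""
    gx = ndimage.convolve(image.astype(float), SOBEL_X)  # vertical edges
    gy = ndimage.convolve(image.astype(float), SOBEL_Y)  # horizontal edges
    return np.hypot(gx, gy)
```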
- Roberts operator:
The Roberts Cross operator performs a simple, quick-to-compute, 2-D spatial gradient measurement on an image. It thus highlights regions of high spatial frequency, which often correspond to edges. In its most common usage, the input to the operator is a grayscale image, as is the output. Pixel values at each point in the output represent the estimated absolute magnitude of the spatial gradient of the input image at that point. In theory, the operator consists of a pair of 2×2 convolution kernels, one kernel being the other rotated by 90°; this is very similar to the Sobel operator. These kernels are designed to respond maximally to edges running at 45° to the pixel grid, one kernel for each of the two perpendicular orientations. The kernels can be applied separately to the input image, to produce separate measurements of the gradient component in each orientation.

Fig.4. Segmentation results using Roberts operator.

- Prewitt operator:
The Prewitt operator, similarly to the Sobel operator, approximates the first derivative.

Fig.5. Segmentation results using Prewitt operator.

b) Second order derivative methods:
All of the previous edge detectors approximate the first order derivatives of pixel values in an image. It is also possible to use second order derivatives to detect edges: in second order derivative methods, optimal edges (maxima of gradient magnitude) are found by searching for places where the second derivative is zero. The isotropic generalization of the second derivative to two dimensions is the Laplacian.
- Laplacian operator:
The Laplacian is a 2-D isotropic measure of the second spatial derivative of an image. The Laplacian of an image highlights regions of rapid intensity change and is therefore often used for edge detection. The Laplacian of a function f(x, y), denoted by $\nabla^2 f(x, y)$, is defined by:

$$\nabla^2 f(x, y) = \frac{\partial^2 f(x, y)}{\partial x^2} + \frac{\partial^2 f(x, y)}{\partial y^2} \qquad (7)$$
We can use discrete difference approximations to estimate the derivatives and represent the Laplacian operator with a 3×3 convolution mask, for either the 4-neighborhood or the 8-neighborhood. Using one of these kernels, the Laplacian can be calculated by standard convolution methods. Because these kernels approximate a second derivative measurement on the image, they are very sensitive to noise. To counter this, the image is often Gaussian smoothed before applying the Laplacian filter; this pre-processing step reduces the high frequency noise components prior to the differentiation step. A sketch of both masks follows.
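A sketch of the discrete Laplacian with the standard 4- and 8-neighborhood masks, including the Gaussian pre-smoothing recommended above (the value of sigma is an illustrative assumption):

```python
import numpy as np
from scipy import ndimage

LAPLACE_4 = np.array([[0,  1, 0],
                      [1, -4, 1],
                      [0,  1, 0]], dtype=float)   # 4-neighborhood mask
LAPLACE_8 = np.array([[1,  1, 1],
                      [1, -8, 1],
                      [1,  1, 1]], dtype=float)   # 8-neighborhood mask

def laplacian(image: np.ndarray, eight: bool = True,
              sigma: float = 1.0) -> np.ndarray:
    """Gaussian-smooth the image, then convolve with a Laplacian mask;
    smoothing first tames the noise sensitivity of second derivatives."""
    smoothed = ndimage.gaussian_filter(image.astype(float), sigma)
    mask = LAPLACE_8 if eight else LAPLACE_4
    return ndimage.convolve(smoothed, mask)
```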
-Laplacian of Gaussian operator (LoG):
In fact, since the convolution operation is associative, we can first convolve the Gaussian smoothing filter with the Laplacian filter, and then convolve this hybrid filter with the image to achieve the required result. Doing things this way has two advantages:
- Since both the Gaussian and the Laplacian kernels are usually much smaller than the image, this method usually requires far fewer arithmetic operations.
- The LoG ('Laplacian of Gaussian') kernel can be precalculated, so only one convolution needs to be performed at run-time on the image.
The Gaussian distribution function in two variables, g(x, y), is defined by:
$$g(x, y) = \frac{1}{2\pi\sigma^2}\, e^{-(x^2 + y^2)/2\sigma^2} \qquad (8)$$

where $\sigma$ is the standard deviation, representing the width of the Gaussian distribution.
The LoG operator calculates the second spatial derivative of an image. This means that in areas where the image has a constant intensity (i.e. where the intensity gradient is zero), the LoG response will be zero. In the vicinity of a change in intensity, however, the LoG response will be positive on the darker side and negative on the lighter side. This means that at a reasonably sharp edge between two regions of uniform but different intensities, the LoG response will be:
- zero at a long distance from the edge,
- positive just to one side of the edge,
- negative just to the other side of the edge,
- zero at some point in between, on the edge itself.
It is possible to approximate the LoG filter with a filter that is just the difference of two differently sized Gaussians. Such a filter is known as a DoG filter (`Difference of Gaussians').
-Difference of Gaussians operator (DoG):
The DoG operator works on the same principle as the LoG operator, but rather than applying the Laplacian of Gaussian operator directly to the image, one uses the fact that the LoG can be validly approximated by the difference of two Gaussians.
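A sketch of the DoG approximation; the ratio of the two standard deviations (1.6 here, the classical choice) is our assumption, as the text does not fix it:

```python
import numpy as np
from scipy import ndimage

def difference_of_gaussians(image: np.ndarray, sigma: float = 1.0,
                            ratio: float = 1.6) -> np.ndarray:
    """Approximate the LoG response by subtracting a wide Gaussian
    smoothing of the image from a narrow one."""
    img = image.astype(float)
    narrow = ndimage.gaussian_filter(img, sigma)
    wide = ndimage.gaussian_filter(img, ratio * sigma)
    return narrow - wide
```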
[Figure panels: original image; LoG, DoG and 8-neighborhood Laplacian results.]
Fig.6. Segmentation results using second order derivative methods.

3.2.2 Optimal filtering methods:
These methods consist in the following steps: given a model of an ideal edge and the type of detection operator, we seek the optimal filter which obeys certain criteria and allows the detection of this model in a given context. The approach used as a reference for a great number of methods remains the Canny approach [34], which derives an optimal operator according to three criteria.

a) Canny detector:
Among all edge detection methods, the Canny edge detector is the most rigorously defined operator and is widely used; it is the current standard edge detection scheme. Its popularity can be attributed to its optimality according to the three criteria of good detection, good localization, and single response to an edge. In 1983, Canny proposed a theoretical study of edge detection and was the first to formalize the three criteria that an edge detector must satisfy. He treated edge detection as a signal processing problem and aimed to design the 'optimal' edge detector, formally specifying an objective function to be optimized and using it to design the operator. The objective function was designed to achieve the following optimization constraints:
- Maximize the signal to noise ratio to give good detection. This favors the marking of true positives.
- Achieve good localization to accurately mark edges.
- Minimize the number of responses to a single edge. This favors the identification of true negatives, that is, non-edges are not marked.

The overall edge detection procedure developed by Canny [14] was as follows:
1. Find the maxima of the partial derivative of the image function I in the direction orthogonal to the edge direction, after smoothing the signal along the edge direction. Many implementations of the Canny edge detector actually approximate this process by first convolving the image with a Gaussian to smooth the signal, and then looking for maxima in the first partial derivatives of the resulting signal (using masks similar to the Sobel masks). Thus we can convolve the image with 4 masks, looking for horizontal, vertical and diagonal edges; the direction producing the largest result at each pixel point is marked, and the convolution result and the direction of the edge are recorded at each pixel.
2. Perform non-maximal suppression: any gradient value that is not a local peak is set to zero. The edge direction is used in this process.
3. Find connected sets of edge points and form them into lists.
4. Threshold these edges to eliminate 'insignificant' edges. Canny introduced the idea of hysteresis thresholding. This involves having two different threshold values, usually the higher threshold being 3 times the lower. Any pixel in an edge list that has a gradient greater than the higher threshold value is classed as a valid edge point. Any pixels connected to these valid edge points that have a gradient value above the lower threshold value are also classed as edge points. That is, once an edge has been started, it is not stopped until the gradient on the edge has dropped considerably.

The purpose of the criteria defined by Canny is to find an analytical expression for the optimal filter for edge detection. However, experiments showed that this filter presents two major drawbacks. The first concerns the implementation of this filter, which is difficult. The second relates to the errors of
discretization and quantification of the filter, which distort the obtained edges.
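In practice the full pipeline (Gaussian smoothing, gradient, non-maximal suppression, hysteresis) is available in image libraries; a hedged sketch using OpenCV's Canny, with the higher hysteresis threshold set to 3 times the lower as suggested above (the actual values are illustrative):

```python
import cv2

def canny_edges(gray, low=50):
    """Canny edge map; gray must be an 8-bit image. The high hysteresis
    threshold is fixed at 3x the low one, as mentioned in the text."""
    return cv2.Canny(gray, low, 3 * low)
```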
Fig.7. Segmentation results using optimal filtering methods.

3.2.3 Insufficiency of the edge approach:
It is worth noting that an edge detector, even a sophisticated one, always produces an imperfect result, for various reasons: presence of noise, difficulty in tuning the parameters of the detector in an optimal way, presence of edges with variable contrast. Consequently, we cannot carry out segmentation on the basis of edges alone. In effect, edges are generally discontinuous and incomplete, and in many cases parasitic edges are detected, coming from noise or macroscopic textures.

3.3. Region segmentation:
Another way of extracting and representing information from an image is to group pixels together into regions of similarity, for example according to the rate of change of their intensity over a region. In fact, the formation of regions can be realized in two different ways: (i) a split and/or merge process founded on global criteria, or (ii) a point aggregation procedure based on local similarity criteria. Due to the global level of the analysis, a split and merge process does not consider local information. On the other hand, aggregating a point to a region takes into account some global information about the region as well as local information relative to the pixel. For this reason, the region extraction through point aggregation approach has been retained.

3.3.1 Region growing segmentation:
A range of image segmentation algorithms are based on region growing. Region growing algorithms take one or more pixels, called seeds, and grow regions around them based upon a certain homogeneity criterion. If the adjoining pixels are similar to the seed, they are merged with it into a single region. The process
continues until all the pixels in the image are assigned to one or more regions. Indeed, region growing is one of the most used region-based segmentation algorithms. It starts by choosing a starting point or seed pixel; the most common way is to select the seeds by randomly choosing a set of pixels in the image, or by following an a priori direction of scan of the image. The region grows by successively adding neighboring pixels that are similar, according to a certain homogeneity criterion (for instance, that the difference between a candidate pixel and the average grey level of the region is small), increasing step by step the size of the region. The growing process continues until a pixel not sufficiently similar to be aggregated is found; such a pixel belongs to another object, and the growing in this direction is finished. When there is no neighboring pixel similar to the region, the segmentation of the region is completed. A sketch of this procedure follows.
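A minimal sketch of this seeded growing procedure; the 4-neighborhood, the seed choice, and the tolerance tol are our illustrative assumptions:

```python
import numpy as np
from collections import deque

def grow_region(image: np.ndarray, seed, tol: float = 10.0) -> np.ndarray:
    """Grow one region from `seed`, aggregating 4-neighbors whose grey
    level stays within `tol` of the running region mean."""
    h, w = image.shape
    mask = np.zeros((h, w), dtype=bool)
    mask[seed] = True
    total, count = float(image[seed]), 1
    queue = deque([seed])
    while queue:
        y, x = queue.popleft()
        for ny, nx in ((y-1, x), (y+1, x), (y, x-1), (y, x+1)):
            if 0 <= ny < h and 0 <= nx < w and not mask[ny, nx]:
                if abs(image[ny, nx] - total / count) <= tol:
                    mask[ny, nx] = True
                    total += float(image[ny, nx])
                    count += 1
                    queue.append((ny, nx))
    return mask
```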
Fig.8. Segmentation results with the region growing process.

3.3.2 Split and Merge segmentation:
One of the basic properties of segmentation is the existence of a predicate P which measures region homogeneity. If this predicate is not satisfied for some region, the region is inhomogeneous and should be split into sub-regions. On the other hand, if the predicate is satisfied for the union of two adjacent regions, then these regions are collectively homogeneous and should be merged into a single region. A way of working toward the satisfaction of these homogeneity criteria is the split and merge algorithm. This technique consists, as its name denotes, of two basic steps. First, the image is recursively split until all the regions verify the homogeneity criterion. Then, in a second step, adjacent regions are reassembled in such a way that the resulting regions still satisfy the homogeneity criterion. A quad-tree structure is often used for the splitting step: starting from the whole image, every region that does not verify the homogeneity criterion is recursively decomposed into four square sub-regions, so that an inverse pyramidal structure is built. The merging step consists in merging the
adjacent blocks which represent homogeneous regions but have been divided by the regular decomposition. A sketch of the splitting step follows.
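A sketch of the recursive quad-tree splitting step; the homogeneity predicate P (grey-level variance below a bound) and the minimum block size are our assumptions, and the merging of adjacent leaves is omitted for brevity:

```python
import numpy as np

def split(image: np.ndarray, y=0, x=0, size=None,
          var_max=100.0, min_size=4):
    """Recursively split square blocks that fail the homogeneity
    predicate P (here: grey-level variance <= var_max).
    Returns the homogeneous leaf blocks as (y, x, size) triples."""
    if size is None:
        size = image.shape[0]     # assumes a square, power-of-two image
    block = image[y:y+size, x:x+size]
    if block.var() <= var_max or size <= min_size:
        return [(y, x, size)]
    half = size // 2
    leaves = []
    for dy, dx in ((0, 0), (0, half), (half, 0), (half, half)):
        leaves += split(image, y+dy, x+dx, half, var_max, min_size)
    return leaves
```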
Fig.9. Segmentation results with the split and merge process.

4. Fusion

Extraordinary advances in sensor and communication technology have brought a need for processing techniques that can effectively combine information from different sources into a single composite for interpretation. Image fusion provides the means to integrate multiple images into a composite image that is more suitable for the purposes of human visual perception and computer-processing tasks such as segmentation, feature extraction and target recognition [15].

4.1. Satellite image fusion:
In the early days of analog remote sensing, when the only remote sensing data source was aerial photography, the capability for fusing data from different sources was limited. Today, with most data available in digital format from a wide array of sensors, data fusion is a common method used for interpretation and analysis. Data fusion fundamentally involves the combining or merging of data from multiple sources in an effort to extract better and/or more information. This may include data that are multitemporal, multiresolution, multisensor, or multi-data-type in nature. Imagery collected at different times is integrated to identify areas of change. Multiresolution data merging is useful for a variety of applications: merging data of a higher spatial resolution with data of lower resolution can significantly sharpen the spatial detail in an image and enhance the discrimination of features. Data from different sensors may also be merged, bringing in the concept of multisensor data fusion. An excellent example of this technique is the combination of multispectral optical data with radar imagery. These two diverse spectral representations of the surface can provide complementary information. The optical data
provide detailed spectral information useful for discriminating between surface cover types, while the radar imagery highlights the structural detail in the image.

4.2. Fusion of the edge and region approaches:
Segmentation based on edge detection provides precise but open edges, whereas methods based on region growing techniques offer connected areas but with less precisely defined edges. It is thus interesting to combine these approaches in order to obtain closed areas with well localized edges. There are various techniques for image fusion, even at the pixel level. The fusion technique proposed in this paper is a cooperative fusion [16], whose principle is described below. Let us denote by C the set of pixels obtained by the edge segmentation approach and by Cr the set of pixels resulting from the region segmentation approach. Our approach allows correcting the edges of Cr using the edges C (here we have used the 8-neighborhood Laplacian); conversely, it makes it possible to close C with the information provided by Cr. In fact, the fusion principle consists in a region growing that takes into account the edges extracted beforehand from the original picture. This can be considered a constraint reflecting the more important role given to edges [41].
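A hedged sketch of one plausible reading of this cooperative principle (the paper gives no algorithmic specification): region growing as in section 3.3.1, but with the edge pixels C acting as barriers that a region may never cross, so the extracted contours bound and close the regions:

```python
import numpy as np
from collections import deque

def grow_with_edge_constraint(image, seed, edge_map, tol=10.0):
    """Region growing constrained by a boolean edge map (e.g. from the
    8-neighborhood Laplacian): edge pixels are never aggregated, so the
    detected contours bound the grown regions."""
    h, w = image.shape
    mask = np.zeros((h, w), dtype=bool)
    mask[seed] = True
    total, count = float(image[seed]), 1
    queue = deque([seed])
    while queue:
        y, x = queue.popleft()
        for ny, nx in ((y-1, x), (y+1, x), (y, x-1), (y, x+1)):
            if (0 <= ny < h and 0 <= nx < w and not mask[ny, nx]
                    and not edge_map[ny, nx]   # constraint: stop at edges
                    and abs(image[ny, nx] - total / count) <= tol):
                mask[ny, nx] = True
                total += float(image[ny, nx])
                count += 1
                queue.append((ny, nx))
    return mask
```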
[Figure panels: edge segmentation and region segmentation maps entering the fusion.]
Fig.10. Illustration of the cooperative fusion approach.
Fig.11. Segmentation results with fusion approach.
5. Conclusion

Satellite image segmentation techniques, in general, are still in need of considerable improvement. The techniques we have looked at (threshold, edge, region) still have some faults and there is, as yet, no perfect segmentation algorithm, something which is vital for the advancement of the remote sensing domain and its applications. However, the integration of region and edge information has brought improvements over previous results. Work in this field of research has generated numerous proposals in the last few years. This current interest encourages us to predict that further work on the improvement of segmentation will focus on integrating algorithms as well as information.

References
[1] T.R. Reed, J.M.H. Du Buf, "A review of recent texture segmentation and feature extraction techniques", Computer Vision, Graphics and Image Processing: Image Understanding, vol. 57, pp. 359-372, 1993.
[2] A. Rosenfeld, A. Kak, Digital Picture Processing, 2nd ed., Vols. 1-2, Academic Press, Orlando, Florida, 1982.
[3] J.B. Campbell, Introduction to Remote Sensing, The Guilford Press, New York, 1997.
[4] T.M. Lillesand, R.W. Kiefer, Remote Sensing and Image Interpretation, John Wiley and Sons Inc., New York, 1994.
[5] J.P. Cocquerez, S. Philipp, Analyse d'images: filtrage et segmentation, Collection Enseignement de la physique, Masson, Paris, 1995.
[6] R.M. Haralick, L.G. Shapiro, "Image segmentation techniques", Computer Vision, Graphics and Image Processing, vol. 29, pp. 100-132, 1985.
[7] J. Fan, J. Yu, G. Fujita, T. Onoye, L. Wu, I. Shirakawa, "Spatiotemporal segmentation for compact video representation", Signal Processing: Image Communication, vol. 16, pp. 553-566, 2001.
[8] J.M.S. Prewitt, M.L. Mendelsohn, "The analysis of cell images", Annals of the New York Academy of Sciences, vol. 128, pp. 1035-1053, 1966.
[9] J.N. Kapur, P.K. Sahoo, A.K.C. Wong, "A new method for gray-level picture thresholding using the entropy of the histogram", Computer Vision, Graphics and Image Processing, vol. 29, pp. 273-285, 1985.
[10] N. Otsu, "A threshold selection method from gray-level histograms", IEEE Transactions on Systems, Man and Cybernetics, vol. SMC-9, pp. 62-66, 1979.
[11] S. Chabrier, Etude psychovisuelle de la segmentation d'images, Laboratoire Vision et Robotique, ENSI, 2005.
[12] A. Cumani, "Edge detection in multispectral images", CVGIP: Graphical Models and Image Processing, vol. 53, pp. 40-51, 1991.
[13] E. Davies, Machine Vision: Theory, Algorithms and Practicalities, Academic Press, Chap. 5, 1990.
[14] D. Demigny, T. Kamle, "A discrete expression of Canny's criteria for step edge detector performance evaluation", IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 19, no. 11, pp. 1199-1211, November 1997.
[15] M.A. Abidi, R.C. Gonzalez (Eds.), Data Fusion in Robotics and Machine Intelligence, Academic Press, San Diego, 1992.
[16] C. Pohl, J.L. van Genderen, "Multisensor image fusion in remote sensing: concepts, methods and applications", International Journal of Remote Sensing, vol. 19, no. 5, pp. 823-854, 1998.

Ahmed Rekik was born in Sfax, Tunisia, in 1978. He received the electrical engineering degree from the engineering school of Gabes, Tunisia, in 2001, and the Master's degree from the engineering school of Sfax, Tunisia, in 2003. Since 2004 he has been a Ph.D. student in the LETI Laboratory at the Sfax engineering school (Tunisia) and in the LASL Laboratory at the Littoral University in France. Since 2006 he has been a university assistant at the Biotechnology Institute, Biomedical Department, in Sfax, Tunisia. His research interests include image segmentation, medical image processing, statistical segmentation, and information fusion.
