An Unsupervised Real-Time Multi-Task Vision System

Konstantine Kiy

Abstract-- In this paper, a new unsupervised real-time multi-task vision system is presented. The system is based on a new approach to real-time image description and segmentation of color images developed by the author [2-7]. An early version of the theory (for gray-scale images) was successfully applied to autonomous real-time road following [1]. The proposed system adapts itself to lighting conditions and recognizes colors even in low-saturated images. In this paper, we explain the mechanisms of this adaptation, as well as new methods for including texture in real-time image analysis.

Index Terms--computer vision, image understanding, robotics, autonomous vehicles.

Konstantine Kiy is with the State Institute of Physics and Technology, 13/7 Prechistenka st., 119034 Moscow, Russia. E-mail: [email protected], Phone: (095) 3935410.

I. INTRODUCTION

In this paper, we describe a new unsupervised real-time multi-task vision system. The system is based on a new approach to real-time image description and segmentation of color images developed by the author [2-7]. An early version of the theory (for gray-scale images) was successfully applied to autonomous real-time road following [1]. The proposed system adapts itself to lighting conditions and recognizes colors even in low-saturated images. In this paper, we explain the mechanisms of this adaptation, as well as new methods for including texture in real-time image analysis. Certain psychophysical aspects of human vision that enable people to cope with low-saturated images are implemented in the proposed system. The developed technique gives a simultaneous geometric and color description of selected objects and allows us to build subsystems of real-time approximate reasoning that solve image understanding problems. In [6], a subsystem of approximate reasoning was presented for the qualitative classification of images obtained under perspective projection and for finding a road in images. In this paper, we give further examples of approximate reasoning in image understanding based on the developed theory. For instance, we present a subsystem aimed at putting parts together to obtain the desired entire object. We also present a subsystem of conceptual image description based on the notions of visibility and purity. We give applications of the developed methods to the analysis of road scenes (including the analysis of image sequences) for autonomous road following and for traffic control. The main emphasis is placed on the relative properties of objects in the scene. We not only find local characteristics of the extracted objects, but also study the interaction of objects in the scene and formulate their description on the basis of relations and linguistic descriptions rather than numerical characteristics. This allows us to produce algorithms that do not require adjustment to particular images.

II. SYSTEM OVERVIEW AND BASIC PRINCIPLES

The system is based on a new approach to processing color images developed by the author [2-7]. This approach reduces the processing of color images to three opponent processes (R/(R+G), G/(G+B), I). The motivation, parallels with human vision, and the main reasons for introducing this system can be found in [6] and, especially, in the detailed paper [7]. It must be stressed that the developed theory is the result of an attempt to exclude the human from the control loop in image processing. Clearly, when a human makes even a small effort to evaluate images in advance, to find the methods applicable to a particular class of images, and to adjust certain parameters in programs, he implicitly brings into action the entire power of his intelligence. There are two reasons for excluding the human from image processing. First, we will see what our methods are worth by themselves when applied to a changing environment; this focuses our efforts on developing mechanisms of self-adaptation in computer vision. Second, it forces us to study and understand the methods and models of knowledge representation and reasoning that human beings apply so successfully in solving vision problems. When we try to compensate for the loss of even a little bit of human intelligence, we realize what this little bit actually is. As a result, the vast experience of neurophysiologists, which still awaits application in computer vision, comes to the forefront in our investigations. The first idea we have to adopt is that, in actual real-world images, the color at a single pixel is meaningless: different points of a real-world object may have different colors or be colorless.
As we know from the work of neurophysiologists [8], human beings and animals evaluate the entire object and extract the dominating color on the basis of a simultaneous analysis of geometry and colors, taking the illumination into account. To be able to treat colors in this way in computer vision, a special technique was developed in [2-7]. Below, we briefly describe its basic ideas. At the first stage of processing, we prepare a concise description of the images determined by the feature functions of the opponent processes, on the basis of the method presented in [3, 4, 6, 7]. We divide each of the images into a number of vertical (horizontal) strips and, for each strip, extract cluster grains that represent parts of objects in the strip in a compressed form. These cluster grains have a simultaneous geometric and numerical (with respect

1998 IEEE International Conference on Intelligent Vehicles


to the corresponding feature function) description. The geometric description comprises a number of intervals on the vertical axis that approximately indicate the location of the cluster grain in the strip. For definiteness, we assume below that the image is divided into vertical strips. The numerical description provides a range of feature values that corresponds to this particular cluster grain. The feature values from this range are close to the feature value of the cluster grain both as integers and with respect to the geometry of their distribution in the strip. It is worth noting that every feature value has its own system of intervals that describes its distribution in the strip [2-4]. Moreover, the value that corresponds to the cluster grain dominates the other values from the range. Comparing the results for the feature functions defined by the ratios R/(R+G) and G/(G+B) allows us to generate compound cluster grains that comprise a set of intervals on the vertical axis and two ranges of values for the ratio feature functions [6, 7]. In the cited papers, we also construct global compressed geometric objects, which are continuous systems of cluster grains [2-7] and approximately correspond to homogeneous parts of actual geometric objects in the scene. We call these continuous systems sections. In [6, 7], we explained how to approximately classify these objects and evaluate their shape. In this paper, we focus our attention on contrasts in intensity and color and on a new method of texture description, as well as on assembling actual objects from the homogeneous parts obtained. We also show how to use these ideas of qualitative and conceptual image description to separate objects in low-saturated images. To be able to classify images without adjusting parameters in algorithms and programs, we have to produce a system of qualitative and conceptual image description in a way similar to that of the human vision system.
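The first processing stage described above can be illustrated with a small sketch: computing the opponent feature functions R/(R+G) and G/(G+B) together with the intensity, and dividing the image into vertical strips. The function names and the intensity normalization are illustrative assumptions, not taken from the original implementation.

```python
# Sketch of the first processing stage: the opponent feature functions
# R/(R+G) and G/(G+B) plus intensity, and the division of the image
# into vertical strips.  Names and normalization are illustrative.

def opponent_features(r, g, b, eps=1e-9):
    """Map an RGB pixel to the three opponent features used above."""
    rg = r / (r + g + eps)        # red/green opponent ratio
    gb = g / (g + b + eps)        # green/blue opponent ratio
    i = (r + g + b) / 3.0         # intensity
    return rg, gb, i

def vertical_strips(width, n_strips):
    """Return the (start, end) column ranges of n_strips vertical strips."""
    bounds = [round(k * width / n_strips) for k in range(n_strips + 1)]
    return [(bounds[k], bounds[k + 1]) for k in range(n_strips)]
```

For instance, `vertical_strips(320, 16)` splits a 320-pixel-wide image into 16 equal strips, matching the 16 strips of the example discussed later.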
Moreover, we have to develop a subsystem of approximate reasoning that produces this description. To solve these problems, we obtain a qualitative and conceptual description of the extracted local objects (cluster grains) and global objects (sections) by studying their interaction in the image plane, as approximate geometric objects, and in colors and intensity. The interaction in colors and intensity is studied using the corresponding ranges of values of the feature functions of the opponent processes mentioned above. It must be stressed that these ranges of feature values for cluster grains are obtained with the use of the systems of intervals mentioned above. These systems of intervals correspond to a particular feature value [3, 4] and approximately describe the geometry of the distribution of that value in the strip. We consider two feature values to be close if their systems of intervals are geometrically close to each other. This geometric closeness is measured by special measures of proximity between systems of intervals introduced in [2-4]. Thus, feature values from the ranges of variation of cluster-grain feature values are close to each other not only as integers, but also in the sense that their systems of intervals are close. This makes it possible to take the particular geometry of the image into account when classifying the ranges of feature values and to separate objects with poor contrast.
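The closeness of feature values via their interval systems can be sketched as follows. The exact proximity measures of [2-4] are not reproduced here; an overlap ratio over the union is used as an illustrative stand-in, and each interval system is assumed to be a list of non-overlapping (start, end) pairs.

```python
# Illustrative comparison of two feature values through the geometry of
# their interval systems in a strip.  This overlap-ratio measure is an
# assumed stand-in for the proximity measures of [2-4].

def total_length(intervals):
    """Total length of a system of non-overlapping intervals."""
    return sum(b - a for a, b in intervals)

def overlap_length(sys1, sys2):
    """Total length of the pointwise intersection of two interval systems."""
    total = 0
    for a1, b1 in sys1:
        for a2, b2 in sys2:
            total += max(0, min(b1, b2) - max(a1, a2))
    return total

def proximity(sys1, sys2):
    """1.0 when the systems coincide, 0.0 when they are disjoint."""
    common = overlap_length(sys1, sys2)
    union = total_length(sys1) + total_length(sys2) - common
    return common / union if union else 1.0
```

Two feature values whose interval systems occupy the same parts of the strip then score close to 1, even if the values differ as integers.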

III. CONCEPTUAL DESCRIPTION OF EXTRACTED OBJECTS AND CONTRASTING IN COLORS AND INTENSITY

The basic concept of the proposed image description and classification is the extensive use of relations between objects and of their linguistic values rather than quantitative characteristics. We try to avoid exact figures in evaluating the scene. We prefer to call an object "small" if it is smaller than, or about the same size as, its neighbors, rather than to specify its size in pixels. We treat the contrast between parts on the basis of the geometric distribution of feature values, which allows us to derive exact thresholds and to adjust the algorithms to a particular scene automatically. We also take into account the peculiarities of the feature space considered. The processes of finding cluster grains and their close feature values are slightly different for the ratio and intensity feature functions. This difference reflects the nonhomogeneity of the color space near the point of zero saturation [6, 7]. For the ratios, a change of color corresponds to crossing the point 0.5. To take this property into account, we cut the ranges of cluster grains located near this point to prevent them from crossing it. This allows us to separate cluster grains that have different colors but are almost the same in the digital representation. Presumably, a similar procedure is implemented in human vision; it allows a human to sharpen color differences in perception. Since image segmentation is an ill-posed problem, we need to introduce additional structures, linguistic descriptions, and systems of approximate reasoning to resolve the contradictions. Consider, for instance, a local subsystem of approximate reasoning that produces a qualitative and conceptual description of local objects in a strip of the image. This subsystem does not use particular knowledge about objects and can be applied to unknown scenes.
It relies on the evaluation of the interaction between the extracted local objects in color, intensity, and space. To evaluate the spatial interaction between cluster grains, represented as geometric objects by collections of intervals on the vertical axis, we need to recall certain notions from [2-4] and to introduce a number of new ones that will be explained in detail in [7]. First, we classify all intervals of cluster grains with respect to their visibility [4]. This notion characterizes the number of points in the part of the strip defined by the particular interval of the cluster grain considered. We can introduce a linguistic characteristic for each of the intervals by assigning to it a level of visibility. These definitions clearly conform to the corresponding ideas of human vision.
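A minimal sketch of assigning a linguistic level of visibility to an interval is given below. The particular level names and thresholds are illustrative assumptions; in the system itself such levels are derived from the strip under analysis.

```python
# Assumed sketch of a linguistic visibility level for an interval of a
# cluster grain: the larger the fraction of the strip the interval
# occupies, the more visible it is.  Levels and cutoffs are illustrative.

def visibility_level(interval, strip_height):
    """Classify an interval by the fraction of the strip it occupies."""
    a, b = interval
    fraction = (b - a) / strip_height
    if fraction > 0.5:
        return "dominant"
    if fraction > 0.2:
        return "well visible"
    if fraction > 0.05:
        return "visible"
    return "barely visible"
```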

Fig. 1. An image of a road scene with an extracted approximate global geometric object superimposed.


The interaction of a particular interval of the considered cluster grain with the intervals of the other cluster grains is characterized by its purity. Let us consider all intervals of the other cluster grains of approximately equal visibility that intersect this interval. We can split these intervals into three categories: intervals with close, comparatively distinct, and completely different ranges of feature values. It is worth noting that this partition into three categories depends only on the particular image, does not rely on predetermined estimates, and provides a mechanism for self-adaptation of the vision system. Based on the fraction of pixels of the considered interval shared with the intervals of the mentioned categories, we generate a linguistic characteristic of its purity. In this way, we obtain a linguistic characteristic of the parts of the strip that correspond to the most visible intervals of cluster grains covering the vertical projection of the strip considered. Any interval of a cluster grain has lower and upper adjacent intervals that belong to other cluster grains. The gaps between intervals of a cluster grain may be explained by both random and fundamental effects. To produce linguistic characteristics of the contrast between one of the most visible intervals and its adjacent most visible intervals, we again consider the three mentioned categories of possible adjacent intervals. Based on the categories into which the adjacent intervals fall, we also produce linguistic characteristics of the lower and upper contrasts for the interval considered. We use the notion of purity and its linguistic characteristics to generate compound cluster grains that consist of pairs of cluster grains constructed for the ratios.
It is worth noting that we not only describe the contrast between each particular interval of the considered cluster grain and the adjacent intervals of other cluster grains, but also find the most visible adjacent intervals.
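The purity characteristic described above can be sketched as follows. The split of intersecting intervals into the three categories is taken as given; only the step from pixel fractions to a linguistic grade is shown, and the grade boundaries are illustrative assumptions.

```python
# Assumed sketch of the purity grade: overlap with intervals whose
# feature ranges are close does not contaminate the interval, while
# overlap with comparatively distinct or completely different ranges
# does.  Grade boundaries are illustrative.

def purity_grade(own_length, close_px, distinct_px, different_px):
    """Grade an interval's purity from its pixel overlaps with the
    three categories of intersecting intervals."""
    contested = distinct_px + different_px
    fraction = contested / own_length if own_length else 0.0
    if fraction < 0.1:
        return "pure"
    if fraction < 0.4:
        return "almost pure"
    return "mixed"
```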

Fig. 2. The G/(G+B) image that corresponds to the image of Fig. 1.

To illustrate the introduced notions, we consider the results of processing the tenth strip of the image of Fig. 1. Figures 2 and 3 show the G/(G+B) image and the results of extracting cluster grains, together with their geometric description and classification, respectively. The cluster grains are shown in bold in Fig. 3. The vertical axis of this figure corresponds to the vertical axis of the image. A point on the horizontal axis indicates the value of the G/(G+B) feature function. The other ratio is approximately equal to 0.5 over the whole image. It is also difficult to separate the road from the right roadside in intensity, owing to poor contrast and the nonhomogeneity of the road. Therefore, in this case, the problem of color classification is reduced to a lower-dimensional problem involving only one ratio. The image has low-saturated colors.

Fig. 3. The geometry of the value distribution for the tenth strip of the image of Fig. 2.

Although a human clearly sees the difference between the low-saturated road and the grass on the right roadside (the saturation is approximately equal to 0.1 in both cases), the difference in the digital representation is insignificant. In Fig. 3, the cluster grains that correspond to the road and to the grass plot have feature values equal to 31 and 34, respectively. These values correspond to low-saturated blue and green, respectively. The intermediate values 32 and 33 may belong to both the road and the grass plot, as can be seen from Fig. 3. Except for colorless values, the intervals of low-saturated blue values are clearly separated from the intervals of low-saturated green values. Therefore, despite the small numerical difference between the parts in the digital representation, both human beings and the proposed algorithm find that the contrast is good. Figures 3 and 4 show how the geometric approach (separating cluster grains whose feature values are close with respect to their systems of intervals) and color contrasting (separating ranges of feature values) work together to separate local objects with poor contrast in the digital representation. Compared with the construction of global objects described in [3, 4], we have improved the procedure for extending local objects to adjacent strips by using characteristics such as visibility and purity. This allows us to avoid ambiguity in constructing global objects and to suppress false objects. Based on the visibility and purity characteristics of the cluster grains included in a section, we are also able to assign a level of visibility and purity to the entire section. In this way, we can extract dominating objects and suppress less visible or less pure objects. On the other hand, we can extract mixed objects with combined characteristics.
Since the properties of being visible, pure, or mixed rely on local relations and linguistic description rather than on quantitative characteristics, even small, but visible and pure, objects can be extracted in this way. The interaction of selected objects in the image allows us to give integral characteristics of image texture that can be used in real-time algorithms of image segmentation. We apply this idea to the evaluation of image texture on the basis of a single image component. A similar analysis based on the interaction of objects (local and global) for several image components (for instance, for compound color cluster grains and sections and the same


objects for the intensity image) will allow us to obtain even subtler texture characteristics.
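One simple reading of such an integral texture characteristic is sketched below: a strip is graded by how many mutually contrasting visible cluster grains it contains. This is an illustrative stand-in only; the paper does not fix a concrete formula, and the grain representation (feature-value ranges) and grade boundaries are assumptions.

```python
# Assumed sketch of an integral texture grade for one strip, derived
# from the interaction of extracted objects rather than pixel
# statistics: many well-separated cluster-grain ranges suggest texture.

def strip_texture(grain_ranges, min_gap=2):
    """Grade a strip by counting pairs of consecutive grain ranges
    separated by at least min_gap feature values."""
    contrasting = 0
    ordered = sorted(grain_ranges)
    for (lo1, hi1), (lo2, hi2) in zip(ordered, ordered[1:]):
        if lo2 - hi1 >= min_gap:
            contrasting += 1
    if contrasting >= 3:
        return "highly textured"
    if contrasting >= 1:
        return "textured"
    return "smooth"
```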

Fig. 4. Cluster grains and global objects for the image of Fig. 1.

Consider an example that illustrates the aforesaid. The results of local classification for all 16 strips of the image of Fig. 1 and the global objects found are presented in Fig. 4. The vertical axis of this figure indicates the value of the feature function. The integer numbers on the horizontal axis indicate the numbers of the corresponding strips. The vertical lines represent strips. The polygonal lines and small rectangles correspond to global sections and cluster grains, respectively. Every sufficiently visible and pure section gives a compressed representation of an actual homogeneous object or of its part. For certain purposes, it is convenient to obtain a conventional representation of this object as a region in the image plane. The information encapsulated in the section allows us to assign a region to it. The simplest region that can be assigned to a section is determined by the set of ranges of feature values for each of its cluster grains in the corresponding strips. This collection of ranges provides a collection of thresholds in the strips. The standard thresholding process that corresponds to this collection of lower and upper thresholds determines a region in the image plane. However, its straightforward application gives adequate results only for simple model scenes. It should be noted that the ranges of close (with respect to feature values) cluster grains, as a rule, intersect each other. Numerous examples show that this is inevitable for real-world images. For instance, for the image of Fig. 1, the road and the grass plot at the right roadside both contain a colorless component. In Fig. 3, this component is represented by the values 32 and 33, whose systems of intervals have considerable intersection with those of both the road cluster grain and the cluster grain of the grass plot.
If we do not take into account the geometry of the distribution of the values 32 and 33 in the strip, the binary images of both the road and the grass plot will have serious defects. On the other hand, it is impossible to infer whether two adjacent cluster grains represent two parts of the same object or parts of different objects on the basis of the local information within the strip alone. In the particular case considered, these two cluster grains have

different colors. This is some evidence that they are different objects (or parts of different objects). In general, however, this is not the case: cluster grains that are close in space and in the digital representation may have almost the same color. It is worth noting that the notion "close in the digital representation" itself depends on the particular image. Since the proposed method is unsupervised, our algorithm, like a human being, must take global arguments into account and determine whether these close cluster grains belong to different global objects. This means that the thresholding process cannot be purely local and must depend on the global behavior of objects. The ranges of feature values of a particular cluster grain may also contain random outliers. Hence, to improve the binary image that corresponds to a section, we must introduce procedures of filtering and smoothing for the sequence of local thresholds. These procedures provide averaged and smoothed thresholds obtained from the analysis of the thresholds of adjacent strips. For the sequences of lower and upper thresholds, we apply procedures based on local maxima and minima, respectively. If a lower threshold is a local minimum, we replace it by the local maximum among the nearest thresholds; we operate with the upper thresholds in a similar way. These procedures are repeated a number of times to eliminate random effects. It is also clear that we must find geometric constraints on the applicability of thresholds within the strip in order to separate objects with poor contrast. For instance, as is clear from Fig. 3, the feature values 32 and 33 must be included in the road only inside the interval that corresponds to its cluster grain. The final refinement of the thresholds is performed on the basis of an analysis of the entire information about the particular compressed global object and its adjacent objects.
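The filtration of the lower-threshold sequence can be sketched as follows. This is one illustrative reading of the rule stated above (a strict local minimum is pulled up to the larger of its neighbors, and the pass is repeated to damp random effects); the exact procedure in the system may differ.

```python
# Assumed sketch of threshold filtration for a section: suppress
# local-minimum outliers in the per-strip sequence of lower thresholds
# by replacing them with the maximum of the neighboring thresholds.
# Upper thresholds would be treated symmetrically (local maxima
# replaced by the minimum of neighbors).

def filter_lower_thresholds(th, passes=2):
    """Smooth a sequence of lower thresholds over adjacent strips."""
    th = list(th)
    for _ in range(passes):
        for k in range(1, len(th) - 1):
            if th[k] < th[k - 1] and th[k] < th[k + 1]:
                th[k] = max(th[k - 1], th[k + 1])
    return th
```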
In contrast to existing thresholding methods, which are purely heuristic and suffer from the lack of a sound theoretical background [9], the presented method relies on the introduced geometric representation of the distribution of feature values. This allows us to solve the problem of finding local thresholds on a reliable and logical basis. The process is purely deductive, transparent, and mathematically rigorous. It does not rely on restrictive conditions and is applicable to a wide variety of images. It seems that this solution of the local thresholding problem is close to final.

IV. FINDING OBJECTS ON THE BASIS OF ASSEMBLING THEIR HOMOGENEOUS PARTS

The algorithms presented in the previous sections cover the subconscious level of processing and do not require any particular knowledge about the objects in the scene. They rely only on a general linguistic description of the local properties of cluster grains, their interaction, and general ideas about the similarity of local objects in adjacent strips, applied when extending local objects to generate global objects. However, it is impossible to solve image understanding problems without a proper use of knowledge. To apply knowledge about the objects in the scene adequately, in addition to the conceptual local description presented in the previous sections, we must produce a global conceptual description of the homogeneous compressed objects obtained. In [6], we explained how to classify the approximate compressed


geometric objects determined by sections for images obtained under perspective projection. This classification provides a qualitative and conceptual description of the geometric properties of homogeneous parts. We assign to each of the parts linguistic characteristics such as "long", "short", "narrow", "wide", "close", "distant", "high", "planar", "compact", "extensive", etc. These linguistic characteristics have estimates of their validity; i.e., to each section we assign a value of the measure of confidence that the property holds [6]. This classification is performed by a subsystem of approximate reasoning on the basis of an analysis of the geometry of the systems of intervals that determine the geometry of the compressed object. It is possible that several objects in the scene satisfy the approximate geometric and color description of the sought object, which may be a road, a car of a certain color, a building, or something else. However, it is also possible that no section found completely fits the idea of the entire object, or that the characteristics of the entire object cannot be recovered from any single section. The second part of the preceding sentence means that the completeness of object recovery need only be reasonable, i.e., sufficient for the particular application. For instance, if we seek a road as a long narrow object of approximately constant width that begins near the lower side of the image rectangle, we may find several objects that satisfy only some of these conditions, while the entire road cannot be completely reconstructed from any single section found. To find the entire road, or to decide that it does not exist in the frame, we have to consider objects that fit each other and may generate the entire object. Since we do not use knowledge at the earlier stages of processing, the correspondence between sections and homogeneous parts of actual objects may be ambiguous as well.
The same holds for the correspondence between cluster grains and local objects. Since, except in obvious cases, this problem cannot be reliably solved at the local level without knowledge, we postpone the correspondence problem until the stage of image understanding. Consider a section that satisfies, with certain levels of confidence, all the qualitative properties listed in the linguistic description of the desired object, or some of them. Suppose, however, that its characteristics do not completely correspond to the desired object. For instance, the metric characteristics found from the binary image may not fit the pattern: the road may be too narrow (for instance, it determines only one road edge that matches the previous results or satisfies certain necessary conditions), too short, or may begin rather far from the lower side of the image rectangle. If certain important characteristics of the object cannot be reliably found (otherwise, we do nothing), we try to complete it with other parts. Based on the levels of confidence of linguistic characteristics such as "long", "begins near the lower side of the image rectangle", "of constant width", "its left (right) adjacent section is an object that is not a part of the road", and so on, we find out what kind of adjacent objects it is reasonable to adjoin. It follows from the aforesaid that, to assemble objects from parts, we must

be able to find the set of adjacent sections for every section. Consider, for instance, how to describe the set of adjacent objects. In this section, we show how to use the introduced tools for a qualitative and conceptual description of the interaction between the compressed objects found. For every interval of a cluster grain, we have a list of adjacent intervals and a list of close cluster grains that considerably intersect this interval. Left adjacent sections can be either horizontally overlapping or nonoverlapping. We call two sections horizontally overlapping if their projections on the horizontal axis overlap; otherwise, the two sections are nonoverlapping. Two overlapping sections are geometrically distinguishable if the intersection of their systems of intervals is insignificant (this notion can be rigorously defined via estimates of their measures of proximity [2-4]). Two overlapping sections may also be partly geometrically indistinguishable if their systems of intervals have a considerable intersection on the common part of several strips. Two geometrically distinguishable sections are called adjacent if there are no other sections between them (this can also be described in the language of systems of intervals). Based on the definition of good or bad contrast between adjacent cluster grains, we can produce the corresponding definition of the quality of contrast between adjacent sections, with the corresponding levels of confidence. As is clear from Fig. 1, we can infer approximate characteristics of an object's boundary by analyzing the boundary of the approximate geometric object that corresponds to a section. We can consider the subset of adjacent objects of the section and analyze the boundary of the resulting union. Then we join those sections whose boundary parts meet the requirements for the boundary of the entire object. In this inference, we also use the global linguistic characteristics of the sections considered.
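The horizontal-overlap relation defined above can be sketched with a minimal example. A section is reduced here to its horizontal span, i.e., the inclusive range of strip indices it covers; this representation is an illustrative assumption.

```python
# Sketch of the horizontal-overlap relation between sections, with each
# section reduced to its inclusive range of strip indices (assumed
# representation).

def horizontally_overlapping(sec1, sec2):
    """Two sections overlap if their strip-index projections intersect."""
    (a1, b1), (a2, b2) = sec1, sec2
    return max(a1, a2) <= min(b1, b2)

def overlap_strips(sec1, sec2):
    """Number of strips on which the two sections coexist (0 if none)."""
    (a1, b1), (a2, b2) = sec1, sec2
    return max(0, min(b1, b2) - max(a1, a2) + 1)
```

On the common strips returned by `overlap_strips`, the systems of intervals of the two sections would then be compared to decide geometric distinguishability.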
A section whose adjacent sections have different linguistic characteristics (for instance, "high" and/or "extensive") and a good contrast with it can reasonably be considered an extreme part of the object. Certain overlapping sections can correspond to intersecting polygonal lines (see Fig. 4). It is worth noting that we perform the extension of cluster grains from left to right (however, this is only a matter of convention, and the process can be run in the opposite direction). This means that their overlapping parts can be divided into two subparts, the second of which is the same for both sections. Such sections are the first candidates to be united. For these sections, measures of their agreement can be defined on the basis of their systems of intervals in the strips. If these measures take sufficiently small values, we check whether their union conforms to the prescribed pattern better than the individual parts do. These ideas are applicable to both large and small objects. Consider, for instance, an example of a complex road scene of a busy street under rather difficult conditions (a wet surface and puddles after rain). Figures 5 and 6 show the image with a section superimposed and the results of classification of the R/(R+B) image. For complex scenes with objects of various colors, it is reasonable to use an extended, redundant system of opponent colors such as (R/(R+B), B/(B+G), G/(G+R), I). The computational cost does not increase considerably because of the introduction of


the redundant component. However, the chances of finding reliable compound cluster grains are improved, since the cluster grains that correspond to a colored object can be clearly extracted on two of the three ratio color components.

Fig. 5. A car extracted in an image of a traffic scene.

Fig. 6. The results of classification of the image in Fig. 5.

It is easy to see that sections 3 and 4, which correspond to the car, overlap, and both are clearly separated in color and geometry from the adjacent sections. Since the color of the car may vary within a certain range due to illumination and mud on its surface, we have to deal with several sections and cluster grains in the strip in order to recover the whole object. The technique presented above allows us to find the whole object.

The developed approach also enables us to detect changes in the scene under consideration. The appearance of an unknown object that clearly differs from the stable scene in color and/or geometry can be recognized, and its motion in an image sequence can be tracked, solely on the basis of the sequence of concise classification results like those presented in Fig. 6. This can be performed without a detailed and exhaustive shape analysis of the extracted object.

It is worth noting that the proposed construction (cluster grains for the opponent processes defined by ratios and the intensity function) makes it possible to build a subsystem of approximate reasoning for automatically determining the color of a particular region. As is known from the work of neurophysiologists [8], a particular color is determined in the human brain on the basis of simultaneous analysis of the three opponent processes (red-green, yellow-blue, and black-white) for a region and its adjacent regions. The developed theory allows us to find the adjacent regions and to analyze our ratio-based opponent processes for them, as well as for the extracted region, in real time. This provides the foundation for a subsystem of approximate reasoning able to automatically infer the color of a particular region under difficult and changing conditions, in the way the human visual system does. Doing so requires experimentation with the developed software and analysis of the results for a wide variety of model and real-world scenes. This development is currently under investigation and will be the subject of future publications.

V. CONCLUSIONS

In this paper, we continued the study of a new method of conceptual and qualitative image description and segmentation [2-7] and of its application to real-time image understanding. We considered the problems of color contrasting and of assembling objects from parts, and proposed systems of approximate reasoning for these purposes. The developed mathematical technique allows us to find the set of adjacent regions for every region in the image and to simultaneously analyze the geometry, color, and intensity characteristics of the extracted objects and their adjacent regions. This means that the mathematical technique required for determining the colors of actual objects in unknown scenes under difficult conditions, in the way a human being does, appears to have been developed. Future work with the program package will provide an experimental foundation for a subsystem of approximate reasoning for automated color classification in unknown scenes and will sharpen our understanding of the problem.

REFERENCES

[1] K.I. Kiy, A.V. Klimontovich, and G.A. Buivolov, "Vision-based system for road following in real time", in Proc. of ICAR'95, 7th International Conference on Advanced Robotics, Barcelona, Spain, 1995, vol. 1, pp. 115-124.
[2] K.I. Kiy, "Topologically geometric method of image processing and its application to analysis of road scenes", Izv. Ross. Akad. Nauk, Ser. Technical Cybernetics, no. 5, pp. 244-248, 1992.
[3] K.I. Kiy, "New topological and fuzzy structures in image segmentation and image understanding tasks", in Proc. of IDA-95, the Intelligent Data Analysis Symposium, Baden-Baden, Germany, Aug. 17-19, 1995, vol. 1, pp. 85-89.
[4] K.I. Kiy, "A new fuzzy theory based approach to image processing. Its implementation and applications", in Proc. of EUFIT'96, Fourth European Congress on Intelligent Techniques and Soft Computing, Aachen, Germany, 1996.
[5] K.I. Kiy, "An implementation of a new approach to image concise description and segmentation", in Proc. of CVPR'97, IEEE International Conference on Computer Vision and Pattern Recognition, Demo Session, San Juan, USA, 1997, p. 4.
[6] K.I. Kiy, "An unsupervised color vision system for driving unmanned vehicles", to appear in Proc. of SPIE AeroSense'98 Symposium (Enhanced and Synthetic Vision 1998), Orlando, USA, 1998.
[7] K.I. Kiy, "New unsupervised real-time methods of image description and segmentation and their application to image understanding", to appear.
[8] D.H. Hubel, Eye, Brain, and Vision, Scientific American Library, New York, 1988.
[9] P.K. Sahoo, S. Soltani, and A.K. Wong, "A survey of thresholding techniques", Computer Vision, Graphics, and Image Processing, vol. 41, pp. 233-260, 1988.


1998 IEEE International Conference on Intelligent Vehicles
