[Figure: the original image, extraction by RGB, and extraction by HLS]

For delicate hues like those in the image below, the extraction results differ between RGB and HLS, so use whichever suits your purpose.

To select the colors of the three examples above by specifying a range in RGB, you would have to allow R and G from 0 to 108 and B from 147 to 255, which would also pick up many unintended colors, so it is impractical. If you specify the range in HLS instead, you can fix H at 160 and L at 120, and vary only S from 36 to 240.
H: 160, L: 120, S: 36  →  R: 108, G: 108, B: 147
H: 160, L: 120, S: 125 →  R: 61, G: 61, B: 194
H: 160, L: 120, S: 240 →  R: 0, G: 0, B: 255

For example, if you select blue (R: 0, G: 0, B: 255) in the color settings dialog box and then move straight down within the color selection area on the right side of the dialog, the hue and luminosity values do not change, yet the apparent color and the RGB values change completely.

RGB is suitable for extracting a reasonably uniform color, but selecting an extraction range in RGB becomes very difficult when, for example, you want to extract the same hue at different brightness levels.

You can see similar HLS values when opening the color settings dialog box in Windows. In the lower right corner of the dialog there are fields for hue, saturation, and luminosity; the hue field corresponds directly to H.
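As a rough sketch of range specification in HLS, the conversion and range test above can be written as follows. It assumes the Windows-style scale (H, L, S each mapped to 0 to 240); Python's standard `colorsys` module works in the 0.0 to 1.0 range, so the values are rescaled.

```python
import colorsys

def rgb_to_windows_hls(r, g, b):
    # colorsys works in 0.0-1.0; rescale to the Windows-style 0-240 range
    # (an assumption made for this sketch, with rounding to integers).
    h, l, s = colorsys.rgb_to_hls(r / 255, g / 255, b / 255)
    return round(h * 240), round(l * 240), round(s * 240)

def in_hls_range(rgb, h_range, l_range, s_range):
    """True if the pixel's H, L, and S each fall in the given (min, max) range."""
    h, l, s = rgb_to_windows_hls(*rgb)
    return (h_range[0] <= h <= h_range[1]
            and l_range[0] <= l <= l_range[1]
            and s_range[0] <= s <= s_range[1])
```

With this conversion, pure blue (R 0, G 0, B 255) maps to H 160, L 120, S 240, matching the table above, and the whole blue family can be selected by fixing H and L and varying only S.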
In the "Digital Image" page, we explained the Red / Green / Blue pixel values as the basis of color images. When handling color images, you may also use H (hue) / L (luminosity) / S (saturation) in addition to RGB.

Range specification in HLS

[Figure: the original image and an image in which only the red color is extracted]

So far, the image processing shown here has operated on gray images, but as industrial color cameras have become more common, processing of color images is now frequently seen. Extracting the range of a certain color directly from a color image, instead of first converting it to grayscale, is one such process.

By labeling, you can check the area and length of each blob and make selections based on the results.

[Figure: original image and labeling result]

When the binarized image is labeled, the result is as follows.
The same number is attached to each group of connected pixels. Attaching a label (number) to each blob in an image is called labeling. Labeling is useful for identifying objects and counting them.

[Figure: original image, after expansion, after contraction]

In the example below, we reconnect figures that were broken apart.
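As a minimal sketch of the labeling operation described above, here is a pure-Python 4-neighbor flood fill; production systems would typically use an optimized library routine instead.

```python
from collections import Counter, deque

def label(image):
    """Assign a number to each 4-connected blob of 1-pixels in a binary image."""
    h, w = len(image), len(image[0])
    labels = [[0] * w for _ in range(h)]
    count = 0
    for y in range(h):
        for x in range(w):
            if image[y][x] == 1 and labels[y][x] == 0:
                count += 1                       # start a new blob
                queue = deque([(y, x)])
                labels[y][x] = count
                while queue:                     # flood-fill the whole blob
                    cy, cx = queue.popleft()
                    for ny, nx in ((cy - 1, cx), (cy + 1, cx),
                                   (cy, cx - 1), (cy, cx + 1)):
                        if (0 <= ny < h and 0 <= nx < w
                                and image[ny][nx] == 1 and labels[ny][nx] == 0):
                            labels[ny][nx] = count
                            queue.append((ny, nx))
    return labels, count

binary = [[1, 1, 0, 0],
          [1, 0, 0, 1],
          [0, 0, 1, 1]]
labels, n = label(binary)
areas = Counter(v for row in labels for v in row if v)  # pixel count per label
```

With the sample image above, the two blobs receive labels 1 and 2, and `areas` gives the pixel count (area) of each blob, which can then be used for selection.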
Performing expansion N times and then contraction N times is called closing. Closing produces effects such as filling holes in figures and joining broken parts.

Closing

[Figure: original image, after contraction, after expansion]

In the example below, we remove small stray pixels around the figure, leaving only the shape we want to extract.

Performing contraction N times and then expansion N times is called opening. Opening removes parts protruding from a figure and separates touching parts.
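A minimal sketch of these four operations on a binary image, using one-pixel expansion and contraction in the four directions as described in this section (border pixels are treated as touching the background, which is one common convention):

```python
def expand(img):
    """Expansion: turn on every background pixel 4-adjacent to a figure pixel."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(h):
        for x in range(w):
            if img[y][x]:
                for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                    if 0 <= ny < h and 0 <= nx < w:
                        out[ny][nx] = 1
    return out

def contract(img):
    """Contraction: turn off every figure pixel 4-adjacent to the background."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(h):
        for x in range(w):
            if img[y][x]:
                for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                    if not (0 <= ny < h and 0 <= nx < w) or img[ny][nx] == 0:
                        out[y][x] = 0
                        break
    return out

def closing(img, n=1):
    for _ in range(n):
        img = expand(img)
    for _ in range(n):
        img = contract(img)
    return img

def opening(img, n=1):
    for _ in range(n):
        img = contract(img)
    for _ in range(n):
        img = expand(img)
    return img

# Closing fills the hole in a ring-shaped figure.
ring_with_hole = [[0, 0, 0, 0, 0],
                  [0, 1, 1, 1, 0],
                  [0, 1, 0, 1, 0],
                  [0, 1, 1, 1, 0],
                  [0, 0, 0, 0, 0]]
filled = closing(ring_with_hole)

# Opening removes the isolated noise pixel while keeping the main figure.
noisy = [[1, 0, 0, 0],
         [0, 1, 1, 1],
         [0, 1, 1, 1],
         [0, 1, 1, 1]]
cleaned = opening(noisy)
```

Note that with this cross-shaped one-pixel neighborhood, opening also rounds off the corners of the figure; that is the expected behavior of the operation, not a bug in the sketch.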
Opening

[Figure: original image and contraction-processed image]

In the example below, in contrast to the expansion process, each pixel of the figure that touches the background on the top, bottom, left, or right is removed.

Contraction processing

[Figure: original image and image after expansion processing]

There are several methods of expansion processing, but in the example below, the figure is expanded by one pixel up, down, left, and right around each figure pixel. One square in the figure below represents one pixel.
Expansion processing

The process of combining expansion and contraction several times is called morphological processing. It is effective for smoothing binarized images (reducing unevenness in outlines), removing isolated points, filling holes, and so on.

In a binary black-and-white image, the process that inflates a figure by one pixel is called expansion, and the process that conversely shrinks it by one pixel is called contraction.

Morphology

In addition, while simple binarization uses a single threshold as described above, you can also specify two thresholds and extract the range of luminance between them.
[Figure: original image and binarized image]

Binarizing an image makes it easy to extract the detection target from the image. In addition, subsequent judgment processing can be executed at high speed.
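A minimal sketch of binarization, including the two-threshold variant mentioned earlier (pixel values assumed to lie in 0 to 255):

```python
def binarize(image, threshold, upper=None):
    """Replace each pixel with white (255) or black (0).

    With one threshold: white if the pixel value is at or above `threshold`.
    With `upper` also given: white only if the value lies in [threshold, upper].
    """
    result = []
    for row in image:
        if upper is None:
            result.append([255 if v >= threshold else 0 for v in row])
        else:
            result.append([255 if threshold <= v <= upper else 0 for v in row])
    return result

shaded = [[30, 100, 200],
          [99, 150, 255]]
mono = binarize(shaded, 100)           # single threshold, as in the figure
band = binarize(shaded, 100, 200)      # keep only the 100-200 luminance band
```

`binarize(shaded, 100)` corresponds to the threshold-100 example in this section.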
[Figure: an image binarized with the threshold value 100 for each pixel]

Binarization is the process of converting an image with shading into the two tones of white and black. We define a threshold value and replace each pixel with white if its value is at or above the threshold, and with black if it is below. One square in the figure below represents one pixel.

Binarization

The median filter replaces the value of each pixel with the median value of the surrounding pixels. Compared with the moving average filter, this processing produces images that preserve the edges of the input image.

Median filter processing

[Figure: original image and the image after applying the average filter]

The moving average filter replaces the value of each pixel with the average value of the neighboring pixels. This processing produces an image whose edges are blurred as a whole.

Moving average filter processing

Typical noise removal methods include moving average filter processing and median filter processing.
Select the noise removal method according to what you want to extract from the image.

In order to extract target information from images efficiently, it is necessary to remove noise as much as possible. This preprocessing is called noise removal or smoothing.
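The two smoothing filters described above can be sketched as follows, using a 3x3 neighborhood (at the image border, only the pixels that actually exist are used, which is one possible convention):

```python
def _neighbors(image, y, x):
    """Values in the 3x3 neighborhood of (y, x), clipped to the image."""
    h, w = len(image), len(image[0])
    return [image[ny][nx]
            for ny in range(max(0, y - 1), min(h, y + 2))
            for nx in range(max(0, x - 1), min(w, x + 2))]

def moving_average_filter(image):
    """Replace each pixel with the average of its neighborhood (blurs edges)."""
    h, w = len(image), len(image[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            values = _neighbors(image, y, x)
            out[y][x] = sum(values) // len(values)
    return out

def median_filter(image):
    """Replace each pixel with the median of its neighborhood (preserves edges)."""
    h, w = len(image), len(image[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            values = sorted(_neighbors(image, y, x))
            out[y][x] = values[len(values) // 2]
    return out

# A single bright noise spike in a flat area:
spiky = [[10, 10, 10],
         [10, 255, 10],
         [10, 10, 10]]
```

On this sample, the median filter removes the spike completely (the center becomes 10), while the moving average only spreads it out (the center becomes 37), which illustrates why the median filter is preferred for impulse-like noise.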
[Figure: example of a noisy image]

An image may contain noise caused by defects in the camera's image sensor or similar sources. Noise is a random, fine fluctuation component; it carries no useful information for image processing and interferes with it.

Which element to extract when performing image processing is decided by criteria such as which conversion makes the features of the object under inspection appear most conspicuous. Alternatively, the color image data may be processed using all of the RGB element values as they are, or a monochrome camera may be used instead of a color camera at the time of shooting.

Notice how the brightness of the dark red part compares with that of the dark blue part. Do you not feel that this image is less jarring than the images above in which only a single RGB element was extracted? This is roughly how scenes appeared in the era of old black-and-white television.

Conversion by the NTSC weighted average method

To produce a conversion that looks natural to the human eye, we assign a fixed weight to each value (that is, we decide the proportions of the three values) and convert the result to grayscale.
This weighting (the NTSC coefficients) is the same standard used for television broadcasting in Japan and the United States.

Instead of extracting only one of the R, G, and B element values, there is also a method of taking the average of the three values. However, an image converted simply as (R + G + B) / 3 feels unnatural compared with the original color image. This is because the human eye perceives brightness differently depending on hue: it picks up changes in the brightness of green well, but is insensitive to changes in the brightness of blue.

NTSC weighted average method

[Figure: extracting only the Green element and extracting only the Blue element]

Similarly, extracting only the G element or only the B element yields the following images, whose brightness differs from the image above. The whitish portions of the original image remain whitish (close to 255) after conversion because all of their RGB values are high (close to 255).
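The two grayscale conversions compared above can be sketched per pixel as follows; the weights 0.299, 0.587, and 0.114 are the commonly quoted NTSC coefficients.

```python
def to_grayscale_ntsc(pixel):
    """NTSC weighted average: Y = 0.299 R + 0.587 G + 0.114 B."""
    r, g, b = pixel
    return round(0.299 * r + 0.587 * g + 0.114 * b)

def to_grayscale_simple(pixel):
    """Simple average (R + G + B) / 3, which tends to look unnatural."""
    r, g, b = pixel
    return (r + g + b) // 3
```

Under the simple average, pure green and pure blue map to the same gray value; under the NTSC weights, green comes out much brighter than blue, matching how the human eye perceives them.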
[Figure: the original image and the image with only the Red element extracted]

Below, the left is the original image and the right is the image in which only the R element value has been extracted. The red parts become whitish because their R element values are high (close to 255). Conversely, the blue parts have low R values (close to 0), so they become darker.

One conversion method takes only the R element value from each pixel of the color image and adopts it as the 8-bit grayscale value. For example, if a pixel of the color image is (R 255, G 0, B 0), the grayscale value at that position is 255; if it is (R 128, G 0, B 255), the value at that position is 128; and so on, extracting only the R value.
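A sketch of this element-extraction conversion, with pixels assumed to be (R, G, B) tuples:

```python
def extract_element(image, channel):
    """Adopt one RGB element per pixel as the 8-bit grayscale value.

    channel: 0 for R, 1 for G, 2 for B.
    """
    return [[pixel[channel] for pixel in row] for row in image]

# The two pixels from the example above:
color_img = [[(255, 0, 0), (128, 0, 255)]]
r_only = extract_element(color_img, 0)
```

For the example pixels, (R 255, G 0, B 0) yields 255 and (R 128, G 0, B 255) yields 128, exactly as described.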
Extract the R element

If you convert a color image with 24-bit RGB to an 8-bit grayscale image, you can process it faster. Grayscale conversion methods include extracting one of the RGB element values, and the NTSC (television broadcasting standard) weighted average method, which averages the RGB elements with fixed weights. In image processing, grayscale images are used more often than color images so that calculations can be performed efficiently.

For those who are wondering: image data carries header information containing the size (width and height) of the whole image and the number of bits per pixel, in addition to the data representing the value of each pixel. Because this header holds the information needed for display and printing, such as whether the image is 8-bit or 24-bit and what its size is, the computer displays and prints based on that information.

A color image uses 24 bits per pixel, and a grayscale image uses 8 bits per pixel. Have you ever wondered why both are displayed properly on a PC screen or in print, even though the size (number of bits) of one pixel differs?

How are color and grayscale images distinguished?

[Figure: a grayscale image with 256 gradations]

In contrast to RGB color images, images that express only black-and-white shading are called grayscale images.
A grayscale image represents one pixel with 8 bits; it contains no color information, only brightness information. An 8-bit image can express 2^8 = 256 tones of shading. The pixel value 0 is black, and the pixel value 255 is white.

Grayscale image

Incidentally, color printing uses "subtractive color mixing", based on the three primary colors C (cyan), M (magenta), and Y (yellow), which becomes darker as the colors are mixed. RGB elements, by contrast, combine by "additive color mixing" to generate colors; overlapping all of them produces white.
The white part is (R 255, G 255, B 255).

For example, the red part of the figure below is (R 255, G 0, B 0), the blue part is (R 0, G 0, B 255), and the green part is (R 0, G 255, B 0).

In a color image, the color of one pixel is represented by the three primary colors R (red), G (green), and B (blue). A 24-bit image, in which each of the RGB elements of one pixel is represented by 8 bits, is commonly used. That is, in a 24-bit image, one pixel consists of 24 bits (8 bits × 3 colors).

Color image

Color images and grayscale images

The value of each pixel is called a pixel value, and images are classified into color images, grayscale images, and so on, depending on the size and properties of their pixel values.
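To illustrate the 24-bit layout (8 bits × 3 colors), one pixel's RGB elements can be packed into and unpacked from a single 24-bit value. This is only a sketch of the bit layout, not a specific file format; placing R in the highest byte is an arbitrary choice made here.

```python
def pack24(r, g, b):
    """Pack three 8-bit RGB elements into one 24-bit pixel value."""
    return (r << 16) | (g << 8) | b

def unpack24(value):
    """Split a 24-bit pixel value back into its 8-bit RGB elements."""
    return (value >> 16) & 0xFF, (value >> 8) & 0xFF, value & 0xFF
```

White, with all three elements at 255, packs to the maximum 24-bit value 0xFFFFFF, matching the (R 255, G 255, B 255) example above.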
The smallest elements that constitute the image (the grains in the image above), arranged in this lattice form, are called pixels (picture elements). Each pixel expresses light intensity and color through its numerical value.

Grains of various colors are lined up like this ...
A digital image is composed of elements arranged in a lattice pattern. For example, enlarging the portion surrounded by the yellow rectangle in the image below ...

Selecting the right camera, lens, lighting conditions, and so on is important in order to extract objects efficiently with simple image processing.

The image processing introduced here is only a sample; there are various other processes. Although complicated image processing can also accomplish the purpose, to increase processing speed, image processing in a program should be kept as simple as possible and performed efficiently.
Digital images / Color images and grayscale images / Grayscale conversion / Noise removal / Binarization / Morphology / Labeling / Color extraction

Various kinds of image processing

We develop systems that capture images with industrial area cameras and line sensor cameras and perform various inspections and measurements by image processing. By analyzing an image taken with a camera, you can extract the outline of an object and check its length and area.

[Figure: a shot image and the processed image]

High-resolution cameras perform inspection and measurement in place of the human eye. They can also inspect fine parts that are difficult to examine with the human eye.
"Surpassing the human eye, replacing the human eye"

A method of recognizing and measuring objects by processing digital image data is called image processing. With image processing, tasks such as defect detection and color judgment of industrial products can be performed.

Image Processing