Feature Detection Using Python GUI (PyQt) Part 3
In this tutorial, you will learn how to use OpenCV, NumPy, and other libraries to perform feature extraction with a Python GUI (PyQt). The feature detection techniques used in this chapter are Harris Corner Detection, Shi-Tomasi Corner Detector, Scale-Invariant Feature Transform (SIFT), Speeded-Up Robust Features (SURF), Features from Accelerated Segment Test (FAST), Binary Robust Independent Elementary Features (BRIEF), and Oriented FAST and Rotated BRIEF (ORB).
Tutorial Steps To Detect Image Features Using Shi-Tomasi Corner Detection
Below is the case when you want to find the 30 best corners in an image. OpenCV provides the function cv.goodFeaturesToTrack(), which finds the N strongest corners in the image by the Shi-Tomasi method (or by Harris Corner Detection, if you specify it). As usual, the input should be a grayscale image. You then specify the number of corners you want to find and the quality level, a value between 0 and 1 denoting the minimum corner quality below which every corner is rejected. Finally, you provide the minimum Euclidean distance between detected corners.
With this information, the function finds corners in the image. All corners below the quality level are rejected, and the remaining corners are sorted by quality in descending order. The function then takes the strongest corner, throws away all nearby corners within the minimum distance, and repeats until it has the N strongest corners.
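The reject/sort/suppress procedure described above can be sketched in pure NumPy. The function name select_corners and its toy inputs are illustrative only, not part of OpenCV's API:

```python
import numpy as np

def select_corners(points, quality, min_quality, min_dist, n):
    """Mimic goodFeaturesToTrack's post-processing: reject corners below
    the quality threshold, sort strongest-first, then greedily keep only
    corners at least min_dist away from every corner already kept."""
    keep = quality >= min_quality * quality.max()   # reject weak corners
    pts, q = points[keep], quality[keep]
    chosen = []
    for idx in np.argsort(-q):                      # strongest first
        p = pts[idx]
        if all(np.hypot(*(p - c)) >= min_dist for c in chosen):
            chosen.append(p)
        if len(chosen) == n:
            break
    return np.array(chosen)
```

With four candidate corners of qualities 1.0, 0.9, 0.8, and 0.005, a quality level of 0.01 rejects the weakest candidate, and a minimum distance of 5 then suppresses any corner that sits too close to a stronger one.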
#shithomasi_corner.py
import numpy as np
import cv2 as cv

img = cv.imread('chessboard.png')
gray = cv.cvtColor(img, cv.COLOR_BGR2GRAY)

# Find the 30 strongest corners (quality level 0.01, min distance 10)
corners = cv.goodFeaturesToTrack(gray, 30, 0.01, 10)

# np.int0 was removed in NumPy 2.0; np.intp is the equivalent alias
corners = np.intp(corners)

for i in corners:
    x, y = i.ravel()
    cv.circle(img, (int(x), int(y)), 3, (0, 255, 0), -1)

cv.imshow('Shi-Tomasi Corner Detection', img)
if cv.waitKey(0) & 0xff == 27:
    cv.destroyAllWindows()
Run shithomasi_corner.py and see the result as shown in the figure below.
Now, you will modify feature_detection.ui to implement Shi-Tomasi Corner Detection. Add a Group Box widget and set its objectName property to gbShiTomasi.
Inside the group box, put two Spin Box widgets. Set their objectName properties to sbCorner and sbEuclidean, the value property of sbCorner to 25, and that of sbEuclidean to 10.
Then, add a Double Spin Box widget. Set its objectName property to dsbQuality, its value property to 0.01, its maximum property to 1.00, and its singleStep property to 0.01.
Now, the form looks as shown in the figure below.
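If you prefer building the same controls in code rather than in Qt Designer, the equivalent widget setup can be sketched as follows. The group-box title, the layout labels, and the spin-box maximum are illustrative assumptions; the objectName values and parameter defaults match the steps above:

```python
import os
import sys

# Allow this sketch to run without a display server
os.environ.setdefault("QT_QPA_PLATFORM", "offscreen")

from PyQt5.QtWidgets import (QApplication, QDoubleSpinBox, QFormLayout,
                             QGroupBox, QSpinBox)

app = QApplication(sys.argv)

gbShiTomasi = QGroupBox("Shi-Tomasi")
gbShiTomasi.setObjectName("gbShiTomasi")

sbCorner = QSpinBox()
sbCorner.setObjectName("sbCorner")
sbCorner.setValue(25)

sbEuclidean = QSpinBox()
sbEuclidean.setObjectName("sbEuclidean")
sbEuclidean.setValue(10)

dsbQuality = QDoubleSpinBox()
dsbQuality.setObjectName("dsbQuality")
dsbQuality.setMaximum(1.00)
dsbQuality.setSingleStep(0.01)
dsbQuality.setValue(0.01)

# Illustrative layout; in the tutorial this is done in Qt Designer
layout = QFormLayout()
layout.addRow("Corners:", sbCorner)
layout.addRow("Min distance:", sbEuclidean)
layout.addRow("Quality:", dsbQuality)
gbShiTomasi.setLayout(layout)
```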
Modify the initialization() method so that it also disables the gbShiTomasi group box at start-up:

def initialization(self, state):
    self.cboFeature.setEnabled(state)
    self.gbHarris.setEnabled(state)
    self.gbShiTomasi.setEnabled(state)
Define a new method, shi_tomasi_detection(), to implement Shi-Tomasi Corner Detection as follows:
def shi_tomasi_detection(self):
    self.test_im = self.img.copy()
    number_corners = self.sbCorner.value()
    euclidean_dist = self.sbEuclidean.value()
    min_quality = self.dsbQuality.value()
    gray = cv2.cvtColor(self.test_im, cv2.COLOR_BGR2GRAY)
    corners = cv2.goodFeaturesToTrack(gray, \
        number_corners, min_quality, euclidean_dist)

    # np.int0 was removed in NumPy 2.0; np.intp is the equivalent alias
    corners = np.intp(corners)

    for i in corners:
        x, y = i.ravel()
        cv2.circle(self.test_im, (int(x), int(y)), 5, (0, 255, 0), -1)

    cv2.cvtColor(self.test_im, cv2.COLOR_BGR2RGB, self.test_im)
    self.display_image(self.test_im, self.labelResult)
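The loop in shi_tomasi_detection() relies on the (N, 1, 2) array shape that cv2.goodFeaturesToTrack() returns. The coordinates below are made-up stand-ins for that output, just to show what the integer conversion and ravel() do:

```python
import numpy as np

# goodFeaturesToTrack returns float32 corners with shape (N, 1, 2);
# these coordinates are hypothetical stand-ins for that output.
corners = np.array([[[10.2, 5.7]], [[40.9, 33.1]]], dtype=np.float32)

# np.intp truncates toward zero, giving integer pixel coordinates
# (np.int0 was an alias for np.intp that NumPy 2.0 removed).
corners = np.intp(corners)

points = []
for i in corners:
    x, y = i.ravel()   # flatten the (1, 2) row into two scalars
    points.append((int(x), int(y)))
```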
Modify choose_feature() so that when the user chooses Shi-Tomasi Corner Detector from the combo box, it invokes shi_tomasi_detection() as follows:
def choose_feature(self):
    strCB = self.cboFeature.currentText()
    if strCB == 'Harris Corner Detection':
        self.gbHarris.setEnabled(True)
        self.gbShiTomasi.setEnabled(False)
        self.harris_detection()
    if strCB == 'Shi-Tomasi Corner Detector':
        self.gbHarris.setEnabled(False)
        self.gbShiTomasi.setEnabled(True)
        self.shi_tomasi_detection()
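As later parts of this chapter add more detectors, the if-chain in choose_feature() can alternatively be written as a dictionary dispatch. The stand-in handler functions below are hypothetical; in the real class the dictionary values would be bound methods such as self.harris_detection:

```python
# Hypothetical stand-ins for the real detection methods
def harris_detection():
    return "harris"

def shi_tomasi_detection():
    return "shi-tomasi"

HANDLERS = {
    'Harris Corner Detection': harris_detection,
    'Shi-Tomasi Corner Detector': shi_tomasi_detection,
}

def choose_feature(text):
    """Look up and run the handler for the selected combo-box text."""
    handler = HANDLERS.get(text)
    return handler() if handler is not None else None
```

This keeps the selection logic in one table, so adding a detector is a one-line change instead of another if block.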
Connect the valueChanged() signal of sbCorner, sbEuclidean, and dsbQuality to the shi_tomasi_detection() method inside the __init__() method as follows:
def __init__(self):
    QMainWindow.__init__(self)
    loadUi("feature_detection.ui", self)
    self.setWindowTitle("Feature Detection")
    self.pbReadImage.clicked.connect(self.read_image)
    self.initialization(False)
    self.cboFeature.currentIndexChanged.connect(self.choose_feature)
    self.hsBlockSize.valueChanged.connect(self.set_hsBlockSize)
    self.hsKSize.valueChanged.connect(self.set_hsKSize)
    self.hsK.valueChanged.connect(self.set_hsK)
    self.hsThreshold.valueChanged.connect(self.set_hsThreshold)
    self.sbCorner.valueChanged.connect(self.shi_tomasi_detection)
    self.sbEuclidean.valueChanged.connect(self.shi_tomasi_detection)
    self.dsbQuality.valueChanged.connect(self.shi_tomasi_detection)
Run feature_detection.py, open an image, and choose Shi-Tomasi Corner Detector from the combo box. Change the parameters; the results are shown in the two figures below:
Below is the full script of feature_detection.py so far:
#feature_detection.py
import sys
import cv2
import numpy as np
from PyQt5.QtWidgets import *
from PyQt5 import QtGui, QtCore
from PyQt5.uic import loadUi
from matplotlib.backends.backend_qt5agg import (NavigationToolbar2QT as NavigationToolbar)
from PyQt5.QtWidgets import QDialog, QFileDialog
from PyQt5.QtGui import QIcon, QPixmap, QImage

class FormFeatureDetection(QMainWindow):
    def __init__(self):
        QMainWindow.__init__(self)
        loadUi("feature_detection.ui", self)
        self.setWindowTitle("Feature Detection")
        self.pbReadImage.clicked.connect(self.read_image)
        self.initialization(False)
        self.cboFeature.currentIndexChanged.connect(self.choose_feature)
        self.hsBlockSize.valueChanged.connect(self.set_hsBlockSize)
        self.hsKSize.valueChanged.connect(self.set_hsKSize)
        self.hsK.valueChanged.connect(self.set_hsK)
        self.hsThreshold.valueChanged.connect(self.set_hsThreshold)
        self.sbCorner.valueChanged.connect(self.shi_tomasi_detection)
        self.sbEuclidean.valueChanged.connect(self.shi_tomasi_detection)
        self.dsbQuality.valueChanged.connect(self.shi_tomasi_detection)

    def read_image(self):
        self.fname = QFileDialog.getOpenFileName(self, 'Open file', \
            'd:\\', "Image Files (*.jpg *.gif *.bmp *.png)")
        self.pixmap = QPixmap(self.fname[0])
        self.labelImage.setPixmap(self.pixmap)
        self.labelImage.setScaledContents(True)
        self.img = cv2.imread(self.fname[0], cv2.IMREAD_COLOR)
        self.cboFeature.setEnabled(True)

    def initialization(self, state):
        self.cboFeature.setEnabled(state)
        self.gbHarris.setEnabled(state)
        self.gbShiTomasi.setEnabled(state)

    def set_hsBlockSize(self, value):
        self.leBlockSize.setText(str(value))
        self.harris_detection()

    def set_hsKSize(self, value):
        self.leKSize.setText(str(value))
        self.harris_detection()

    def set_hsK(self, value):
        self.leK.setText(str(round((value/100), 2)))
        self.harris_detection()

    def set_hsThreshold(self, value):
        self.leThreshold.setText(str(round((value/100), 2)))
        self.harris_detection()

    def choose_feature(self):
        strCB = self.cboFeature.currentText()
        if strCB == 'Harris Corner Detection':
            self.gbHarris.setEnabled(True)
            self.gbShiTomasi.setEnabled(False)
            self.harris_detection()
        if strCB == 'Shi-Tomasi Corner Detector':
            self.gbHarris.setEnabled(False)
            self.gbShiTomasi.setEnabled(True)
            self.shi_tomasi_detection()

    def harris_detection(self):
        self.test_im = self.img.copy()
        gray = cv2.cvtColor(self.test_im, cv2.COLOR_BGR2GRAY)
        gray = np.float32(gray)
        blockSize = int(self.leBlockSize.text())
        kSize = int(self.leKSize.text())
        K = float(self.leK.text())
        dst = cv2.cornerHarris(gray, blockSize, kSize, K)

        # Dilate to mark the corners
        dst = cv2.dilate(dst, None)

        # Threshold for an optimal value; it may vary depending on the image
        Thresh = float(self.leThreshold.text())
        self.test_im[dst > Thresh*dst.max()] = [0, 0, 255]
        cv2.cvtColor(self.test_im, cv2.COLOR_BGR2RGB, self.test_im)
        self.display_image(self.test_im, self.labelResult)

    def display_image(self, img, label):
        height, width, channel = img.shape
        bytesPerLine = 3 * width
        qImg = QImage(img, width, height, \
            bytesPerLine, QImage.Format_RGB888)
        pixmap = QPixmap.fromImage(qImg)
        label.setPixmap(pixmap)
        label.setScaledContents(True)

    def shi_tomasi_detection(self):
        self.test_im = self.img.copy()
        number_corners = self.sbCorner.value()
        euclidean_dist = self.sbEuclidean.value()
        min_quality = self.dsbQuality.value()
        gray = cv2.cvtColor(self.test_im, cv2.COLOR_BGR2GRAY)
        corners = cv2.goodFeaturesToTrack(gray, \
            number_corners, min_quality, euclidean_dist)

        # np.int0 was removed in NumPy 2.0; np.intp is the equivalent alias
        corners = np.intp(corners)

        for i in corners:
            x, y = i.ravel()
            cv2.circle(self.test_im, (int(x), int(y)), 5, (0, 255, 0), -1)

        cv2.cvtColor(self.test_im, cv2.COLOR_BGR2RGB, self.test_im)
        self.display_image(self.test_im, self.labelResult)

if __name__ == "__main__":
    app = QApplication(sys.argv)
    w = FormFeatureDetection()
    w.show()
    sys.exit(app.exec_())