This content is powered by Balige Publishing, in collaboration with Rismon Hasiholan Sianipar. See PART 1 for the earlier steps of this project.
Step 29: Define the display_corr() method to display the correlation matrix of the training and testing data:
def display_corr(self, widget, df, title):
    widget.canvas.axis1.clear()
    corr = df.corr()
    sns.heatmap(corr, cmap="Blues", annot=True, cbar=False,
        ax=widget.canvas.axis1)
    widget.canvas.axis1.set_title(title)
    widget.canvas.draw()
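As a side note, df.corr() returns the pairwise Pearson correlation of the numeric columns only, which is what the heatmap above visualizes. The tiny standalone illustration below uses made-up toy values; the column names Width, Height, and ClassId simply mirror the kind of numeric columns found in the GTSRB annotation files:

import pandas as pd

# Toy frame with numeric columns only; corr() returns a 3x3 matrix
# of pairwise Pearson correlation coefficients
df = pd.DataFrame({'Width':   [25, 30, 45, 60],
                   'Height':  [26, 31, 44, 62],
                   'ClassId': [0, 0, 1, 1]})
print(df.corr())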
Step 30: Define the rb_corr() method to read the text property of the rbCorrTraining and rbCorrTesting widgets and determine which correlation matrix is displayed on the widgetGraph widget:
def rb_corr(self, b):
    if b.text() == "Training Data":
        if b.isChecked() == True:
            self.display_corr(self.widgetGraph, self.df_train,
                'Correlation Matrix of Training Data')
        else:
            self.display_corr(self.widgetGraph, self.test_df,
                'Correlation Matrix of Testing Data')

    if b.text() == "Testing Data":
        if b.isChecked() == True:
            self.display_corr(self.widgetGraph, self.test_df,
                'Correlation Matrix of Testing Data')
        else:
            self.display_corr(self.widgetGraph, self.df_train,
                'Correlation Matrix of Training Data')
Step 31: Connect the toggled() event of both the rbCorrTraining and rbCorrTesting widgets to the rb_corr() method inside the __init__() method as follows:
def __init__(self):
    QMainWindow.__init__(self)
    loadUi("gui_traffic.ui", self)
    self.setWindowTitle("GUI Demo of Recognizing Traffic Signs")
    self.addToolBar(NavigationToolbar(self.widgetGraph.canvas, self))
    self.set_state(False)
    self.pbLoad.clicked.connect(self.load_data)
    self.rbDataTraining.toggled.connect(
        lambda: self.rb_dataset(self.rbDataTraining))
    self.rbDataTesting.toggled.connect(
        lambda: self.rb_dataset(self.rbDataTesting))
    self.rbHistTraining.toggled.connect(
        lambda: self.rb_histogram(self.rbHistTraining))
    self.rbHistTesting.toggled.connect(
        lambda: self.rb_histogram(self.rbHistTesting))
    self.rbCorrTraining.toggled.connect(
        lambda: self.rb_corr(self.rbCorrTraining))
    self.rbCorrTesting.toggled.connect(
        lambda: self.rb_corr(self.rbCorrTesting))
Step 32: Run recognize_traffic_sign.py. Click Training Data in the gbCorr group box. You will see the correlation matrix of the training data, as shown in the figure below.
Step 32: Define the create_model() method to create and train the CNN model and save its history dictionary to a file:
def create_model(self):
    # Building the model
    model = Sequential()
    model.add(Conv2D(filters=32, kernel_size=(5, 5),
        activation='relu', input_shape=self.X_train.shape[1:]))
    model.add(Conv2D(filters=32, kernel_size=(5, 5), activation='relu'))
    model.add(MaxPooling2D(pool_size=(2, 2)))
    model.add(Dropout(rate=0.25))
    model.add(Conv2D(filters=64, kernel_size=(3, 3), activation='relu'))
    model.add(Conv2D(filters=64, kernel_size=(3, 3), activation='relu'))
    model.add(MaxPooling2D(pool_size=(2, 2)))
    model.add(Dropout(rate=0.25))
    model.add(Flatten())
    model.add(Dense(256, activation='relu'))
    model.add(Dropout(rate=0.5))
    model.add(Dense(43, activation='softmax'))

    # Compilation of the model
    model.compile(loss='categorical_crossentropy',
        optimizer='adam', metrics=['accuracy'])

    # Training the model
    epochs = 15
    history = model.fit(self.X_train, self.y_train,
        batch_size=64, epochs=epochs,
        validation_data=(self.X_test, self.y_test))
    model.save('traffic_classifier.h5')

    # Save history as a dictionary
    np.save('my_history.npy', history.history)
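Because the trained network and its history are written to disk, they can also be inspected outside the GUI. Below is a minimal sketch (the file name check_model.py is hypothetical, not part of the application); it assumes create_model() has already produced traffic_classifier.h5 and my_history.npy in the working directory:

#check_model.py (hypothetical helper)
import numpy as np
from tensorflow.keras.models import load_model

# Reload the trained network saved by create_model()
model = load_model('traffic_classifier.h5')
model.summary()

# Reload the training history dictionary saved with np.save()
history = np.load('my_history.npy', allow_pickle=True).item()
print(history.keys())  # accuracy, val_accuracy, loss, val_loss
print('final validation accuracy:', history['val_accuracy'][-1])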
Step 33: Define the display_history() helper and the show_history() method, which reads the history dictionary from the file, reads the selected item in cbAccuracy, and determines which plot is displayed in widgetGraph:
def display_history(self, widget, historydict1, label1,
        historydict2, label2, xlabel, ylabel, title):
    widget.canvas.axis1.clear()
    widget.canvas.axis1.plot(historydict1, label=label1, linewidth=3.0)
    widget.canvas.axis1.plot(historydict2, label=label2, linewidth=3.0)
    widget.canvas.axis1.set_xlabel(xlabel)
    widget.canvas.axis1.set_ylabel(ylabel)
    widget.canvas.axis1.set_title(title)
    widget.canvas.axis1.grid()
    widget.canvas.axis1.legend()
    widget.canvas.draw()

def show_history(self):
    history = np.load('my_history.npy', allow_pickle='TRUE').item()
    strCB = self.cbAccuracy.currentText()

    if strCB == 'Accuracy vs Epoch':
        self.display_history(self.widgetGraph,
            history['accuracy'], 'training accuracy',
            history['val_accuracy'], 'val accuracy',
            'Epoch', 'Accuracy', 'Accuracy')

    if strCB == 'Loss vs Epoch':
        self.display_history(self.widgetGraph,
            history['loss'], 'training loss',
            history['val_loss'], 'val loss',
            'Epoch', 'Loss', 'Loss')
Step 34: Connect the currentIndexChanged() event of the cbAccuracy widget to show_history() inside the __init__() method as follows:
def __init__(self):
    QMainWindow.__init__(self)
    loadUi("gui_traffic.ui", self)
    self.setWindowTitle("GUI Demo of Recognizing Traffic Signs")
    self.addToolBar(NavigationToolbar(self.widgetGraph.canvas, self))
    self.set_state(False)
    self.pbLoad.clicked.connect(self.load_data)
    self.rbDataTraining.toggled.connect(
        lambda: self.rb_dataset(self.rbDataTraining))
    self.rbDataTesting.toggled.connect(
        lambda: self.rb_dataset(self.rbDataTesting))
    self.rbHistTraining.toggled.connect(
        lambda: self.rb_histogram(self.rbHistTraining))
    self.rbHistTesting.toggled.connect(
        lambda: self.rb_histogram(self.rbHistTesting))
    self.rbCorrTraining.toggled.connect(
        lambda: self.rb_corr(self.rbCorrTraining))
    self.rbCorrTesting.toggled.connect(
        lambda: self.rb_corr(self.rbCorrTesting))
    self.pbTraining.clicked.connect(self.create_model)
    self.cbAccuracy.currentIndexChanged.connect(self.show_history)
Step 35: Run recognize_traffic_sign.py. Click the LOAD DATA button and then the TRAINING MODEL button. From the cbAccuracy combo box, choose the Accuracy vs Epoch item to see the accuracy-versus-epoch graph shown in the figure below.
Then, from the cbAccuracy combo box, choose the Loss vs Epoch item to see the loss-versus-epoch graph shown in the figure below.
Step 36: Write the following Python script to create the class dictionary and save it as dict_class.py:
#dict_class.py
import csv

# Dictionary to map classes to traffic sign names
classes = {
    0: 'Speed limit (20km/h)',
    1: 'Speed limit (30km/h)',
    2: 'Speed limit (50km/h)',
    3: 'Speed limit (60km/h)',
    4: 'Speed limit (70km/h)',
    5: 'Speed limit (80km/h)',
    6: 'End of speed limit (80km/h)',
    7: 'Speed limit (100km/h)',
    8: 'Speed limit (120km/h)',
    9: 'No passing',
    10: 'No passing veh over 3.5 tons',
    11: 'Right-of-way at intersection',
    12: 'Priority road',
    13: 'Yield',
    14: 'Stop',
    15: 'No vehicles',
    16: 'Veh > 3.5 tons prohibited',
    17: 'No entry',
    18: 'General caution',
    19: 'Dangerous curve left',
    20: 'Dangerous curve right',
    21: 'Double curve',
    22: 'Bumpy road',
    23: 'Slippery road',
    24: 'Road narrows on the right',
    25: 'Road work',
    26: 'Traffic signals',
    27: 'Pedestrians',
    28: 'Children crossing',
    29: 'Bicycles crossing',
    30: 'Beware of ice/snow',
    31: 'Wild animals crossing',
    32: 'End speed + passing limits',
    33: 'Turn right ahead',
    34: 'Turn left ahead',
    35: 'Ahead only',
    36: 'Go straight or right',
    37: 'Go straight or left',
    38: 'Keep right',
    39: 'Keep left',
    40: 'Roundabout mandatory',
    41: 'End of no passing',
    42: 'End no passing veh > 3.5 tons'
}

# newline='' prevents the csv writer from inserting blank rows on Windows
with open('dict.csv', 'w', newline='') as csv_file:
    writer = csv.writer(csv_file)
    for key, value in classes.items():
        writer.writerow([key, value])
Step 37: Run dict_class.py to create the class dictionary, which is saved as dict.csv.
Step 38: Define the read_class() method to read the dictionary from the csv file:
def read_class(self):
    # Reads dictionary from csv file (header=None because dict.csv
    # is written without a header row)
    self.dict_class = {row[0]: row[1] for _, row in
        pd.read_csv("dict.csv", header=None).iterrows()}
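Outside the GUI class, the same mapping can be rebuilt and spot-checked in a short interactive session. This is just a sketch, assuming dict.csv has already been generated in Step 37:

import pandas as pd

# Rebuild the class dictionary exactly as read_class() does
dict_class = {row[0]: row[1] for _, row in
              pd.read_csv("dict.csv", header=None).iterrows()}

print(len(dict_class))   # 43 classes
print(dict_class[14])    # Stop
print(dict_class[25])    # Road work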
Step 39: Invoke the read_class() method at the end of the load_data() method as follows:
def load_data(self):
    self.data = []
    self.labels = []
    classes = 43
    self.curr_path = os.getcwd()

    # Retrieving the images and their labels
    for i in range(classes):
        path = os.path.join(self.curr_path, 'train', str(i))
        images = os.listdir(path)

        for a in images:
            try:
                image = Image.open(path + '\\' + a)
                image = image.resize((30, 30))
                image = np.array(image)
                self.data.append(image)
                self.labels.append(i)
            except:
                print("Error loading image")

    # Converting lists into numpy arrays
    self.data = np.array(self.data)
    self.labels = np.array(self.labels)
    print(self.data.shape, self.labels.shape)

    table = {'image_path': path, 'target': self.labels}
    self.df = pd.DataFrame(data=table)
    self.df = self.df.sample(frac=1).reset_index(drop=True)

    # Creates dataset and dataframe
    self.create_dataset_dataframe()

    # Disables pbLoad widget
    self.pbLoad.setEnabled(False)

    # Enables back widgets
    self.set_state(True)

    # Checks rbDataTraining widget
    self.rbDataTraining.setChecked(True)

    # Displays data on table widget
    self.display_table(self.df_train, self.twData)

    # Clears and displays histogram of training data
    hist_train = self.df.target.value_counts()
    self.display_histogram(self.widgetGraph, hist_train,
        'Class', 'Samples',
        'The distribution of number of training samples in each class')

    # Checks rbHistTraining widget
    self.rbHistTraining.setChecked(True)

    # Reads class dictionary
    self.read_class()
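Note that load_data() relies on the dataset layout used in Part 1: a train folder with one sub-folder per class id (0-42), plus Train.csv and Test.csv next to the script. The small helper below (check_dataset_layout() is a hypothetical name, not part of the application) can be used to verify that layout before clicking LOAD DATA:

import os

def check_dataset_layout(root='.', classes=43):
    # Report any missing pieces of the layout expected by load_data()
    missing = [f for f in ('Train.csv', 'Test.csv', 'train')
               if not os.path.exists(os.path.join(root, f))]
    missing += ['train/' + str(i) for i in range(classes)
                if not os.path.isdir(os.path.join(root, 'train', str(i)))]
    print('missing:', missing if missing else 'nothing, layout looks OK')

check_dataset_layout()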
Step 40: Open gui_traffic.ui with Qt Designer. Put another Group Box onto the form and set its objectName property to gbTesting. Inside it, place three Label widgets and set their objectName properties to labelImage, labelPred1, and labelPred2.
Step 41: Inside the group box, put two Push Button widgets and set their text properties to OPEN IMAGE and RECOGNIZE. Set their objectName properties to pbOpen and pbRecog.
Step 42: To the right of those two push buttons, put a Widget from the Containers panel. Set its objectName property to widgetHistIm. Right-click the widget and promote it to the same widget_class.
The modified form now looks as shown in the figure below:
Step 43: In recognize_traffic_sign.py, modify the set_state() method to disable the gbTesting widget when the form starts running:
def set_state(self, state):
    self.gbHistogram.setEnabled(state)
    self.gbCorr.setEnabled(state)
    self.pbTraining.setEnabled(state)
    self.cbAccuracy.setEnabled(state)
    self.gbDataset.setEnabled(state)
    self.gbTesting.setEnabled(state)
Step 44: Define the open_image(), display_image(), and hist_image() methods to open a file dialog, display the image on the labelImage widget, and display the histogram of the image on the widgetHistIm widget:
def open_image(self):
    self.fname = QFileDialog.getOpenFileName(self, 'Open file',
        'd:\\', "Image Files (*.jpg *.gif *.bmp *.png)")
    self.pixmap = QPixmap(self.fname[0])
    self.img = cv2.imread(self.fname[0], cv2.IMREAD_COLOR)
    self.display_image(self.pixmap, self.labelImage)
    self.hist_image(self.img, self.widgetHistIm,
        'Histogram of Test Image')

def display_image(self, pixmap, label):
    label.setPixmap(pixmap)
    label.setScaledContents(True)

def hist_image(self, img, qwidget1, title):
    qwidget1.canvas.axis1.clear()
    channel = len(img.shape)

    if channel == 2:
        # grayscale image
        histr = cv2.calcHist([img], [0], None, [256], [0, 256])
        qwidget1.canvas.axis1.plot(histr, color='yellow', linewidth=3.0)
        qwidget1.canvas.axis1.set_ylabel('Frequency', color='red')
        qwidget1.canvas.axis1.set_xlabel('Intensity', color='red')
        qwidget1.canvas.axis1.tick_params(axis='x', colors='red')
        qwidget1.canvas.axis1.tick_params(axis='y', colors='red')
        qwidget1.canvas.axis1.set_title(title, color='red')
        qwidget1.canvas.axis1.set_facecolor('xkcd:light tan')
        qwidget1.canvas.axis1.grid()
        qwidget1.canvas.draw()
    else:
        # color image
        color = ('b', 'g', 'r')
        for i, col in enumerate(color):
            histr = cv2.calcHist([img], [i], None, [256], [0, 256])
            qwidget1.canvas.axis1.plot(histr, color=col, linewidth=3.0)
        qwidget1.canvas.axis1.set_ylabel('Frequency', color='red')
        qwidget1.canvas.axis1.set_xlabel('Intensity', color='red')
        qwidget1.canvas.axis1.tick_params(axis='x', colors='red')
        qwidget1.canvas.axis1.tick_params(axis='y', colors='red')
        qwidget1.canvas.axis1.set_title(title, color='red')
        qwidget1.canvas.axis1.set_facecolor('xkcd:light tan')
        qwidget1.canvas.axis1.grid()
        qwidget1.canvas.draw()
Step 45: Run recognize_traffic_sign.py, click the LOAD DATA button, and then click the OPEN IMAGE button to see the image and its histogram, as shown in the figure below.
Open another image to verify the result, as shown in the figure below.
Step 46: Define the recog_image() method to recognize the sign in the image using the class dictionary and the model that was created before:
def recog_image(self):
    # Loads model
    model = load_model('traffic_classifier.h5')

    # Reads class dictionary
    self.read_class()

    # Resizes image
    image = cv2.resize(self.img, (30, 30))

    # Normalizes image to range [0, 1]
    image = cv2.normalize(image, None, alpha=0, beta=1,
        norm_type=cv2.NORM_MINMAX, dtype=cv2.CV_32F)

    # Reshapes image
    image = image.reshape(1, 30, 30, 3)

    # Prediction of this image
    pred = model.predict_classes(image)[0]
    self.labelPred1.setText('Predicted Label = ' + str(pred))

    sign = self.dict_class[pred]
    self.labelPred2.setText('Sign= ' + str(sign))
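Note that Sequential.predict_classes() was removed in TensorFlow 2.6 and later. If the call above raises an AttributeError on your installation, the prediction line can be replaced with an equivalent argmax over the softmax output; a sketch under that assumption:

# Equivalent prediction without predict_classes() (newer TensorFlow versions)
probs = model.predict(image)             # shape (1, 43) softmax output
pred = int(np.argmax(probs, axis=1)[0])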
Step 47: Connect the clicked() event of pbRecog to the recog_image() method and put it inside the __init__() method as follows:
def __init__(self):
    QMainWindow.__init__(self)
    loadUi("gui_traffic.ui", self)
    self.setWindowTitle("GUI Demo of Recognizing Traffic Signs")
    self.addToolBar(NavigationToolbar(self.widgetGraph.canvas, self))
    self.set_state(False)
    self.pbLoad.clicked.connect(self.load_data)
    self.rbDataTraining.toggled.connect(
        lambda: self.rb_dataset(self.rbDataTraining))
    self.rbDataTesting.toggled.connect(
        lambda: self.rb_dataset(self.rbDataTesting))
    self.rbHistTraining.toggled.connect(
        lambda: self.rb_histogram(self.rbHistTraining))
    self.rbHistTesting.toggled.connect(
        lambda: self.rb_histogram(self.rbHistTesting))
    self.rbCorrTraining.toggled.connect(
        lambda: self.rb_corr(self.rbCorrTraining))
    self.rbCorrTesting.toggled.connect(
        lambda: self.rb_corr(self.rbCorrTesting))
    self.pbTraining.clicked.connect(self.create_model)
    self.cbAccuracy.currentIndexChanged.connect(self.show_history)
    self.pbOpen.clicked.connect(self.open_image)
    self.pbRecog.clicked.connect(self.recog_image)
Step 48: Run recognize_traffic_sign.py, click LOAD DATA, click TRAINING MODEL to create and train the model, click the OPEN IMAGE button to load a test image, and then click the RECOGNIZE button to predict the traffic sign and its label, as shown in the figures below.
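As an optional sanity check outside the GUI, the saved model can also be scored on the held-out split produced by load_data(). The method below is a hypothetical helper (evaluate_model() is not part of the tutorial code); it assumes load_data() and create_model() have already been run in the same session, so that self.X_test and self.y_test exist:

def evaluate_model(self):
    # Reload the saved network and score it on the held-out split
    model = load_model('traffic_classifier.h5')
    # y_test is one-hot encoded, matching the categorical_crossentropy loss
    loss, acc = model.evaluate(self.X_test, self.y_test, verbose=0)
    print('test loss: %.4f, test accuracy: %.4f' % (loss, acc))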
Below is the final version of recognize_traffic_sign.py:
#recognize_traffic_sign.py
from PyQt5.QtWidgets import *
from PyQt5.QtGui import QIcon, QPixmap, QImage
from PyQt5.uic import loadUi
from matplotlib.backends.backend_qt5agg import (NavigationToolbar2QT as NavigationToolbar)
from matplotlib.colors import ListedColormap
import csv
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import cv2
from PIL import Image
import os
from sklearn.model_selection import train_test_split
from keras.utils import to_categorical
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras.models import Sequential, load_model
from tensorflow.keras.layers import Conv2D, MaxPooling2D, Dense, Flatten, Dropout
from tensorflow.keras import backend as K
from sklearn.metrics import accuracy_score
from widget_class import widget_class

class DemoGUI_TrafficSign(QMainWindow):
    def __init__(self):
        QMainWindow.__init__(self)
        loadUi("gui_traffic.ui", self)
        self.setWindowTitle("GUI Demo of Recognizing Traffic Signs")
        self.addToolBar(NavigationToolbar(self.widgetGraph.canvas, self))
        self.set_state(False)
        self.pbLoad.clicked.connect(self.load_data)
        self.rbDataTraining.toggled.connect(
            lambda: self.rb_dataset(self.rbDataTraining))
        self.rbDataTesting.toggled.connect(
            lambda: self.rb_dataset(self.rbDataTesting))
        self.rbHistTraining.toggled.connect(
            lambda: self.rb_histogram(self.rbHistTraining))
        self.rbHistTesting.toggled.connect(
            lambda: self.rb_histogram(self.rbHistTesting))
        self.rbCorrTraining.toggled.connect(
            lambda: self.rb_corr(self.rbCorrTraining))
        self.rbCorrTesting.toggled.connect(
            lambda: self.rb_corr(self.rbCorrTesting))
        self.pbTraining.clicked.connect(self.create_model)
        self.cbAccuracy.currentIndexChanged.connect(self.show_history)
        self.pbOpen.clicked.connect(self.open_image)
        self.pbRecog.clicked.connect(self.recog_image)

    def read_class(self):
        # Reads dictionary from csv file (header=None because dict.csv
        # is written without a header row)
        self.dict_class = {row[0]: row[1] for _, row in
            pd.read_csv("dict.csv", header=None).iterrows()}

    def set_state(self, state):
        self.gbHistogram.setEnabled(state)
        self.gbCorr.setEnabled(state)
        self.pbTraining.setEnabled(state)
        self.cbAccuracy.setEnabled(state)
        self.gbDataset.setEnabled(state)
        self.gbTesting.setEnabled(state)

    def load_data(self):
        self.data = []
        self.labels = []
        classes = 43
        self.curr_path = os.getcwd()

        # Retrieving the images and their labels
        for i in range(classes):
            path = os.path.join(self.curr_path, 'train', str(i))
            images = os.listdir(path)

            for a in images:
                try:
                    image = Image.open(path + '\\' + a)
                    image = image.resize((30, 30))
                    image = np.array(image)
                    self.data.append(image)
                    self.labels.append(i)
                except:
                    print("Error loading image")

        # Converting lists into numpy arrays
        self.data = np.array(self.data)
        self.labels = np.array(self.labels)
        print(self.data.shape, self.labels.shape)

        table = {'image_path': path, 'target': self.labels}
        self.df = pd.DataFrame(data=table)
        self.df = self.df.sample(frac=1).reset_index(drop=True)

        # Creates dataset and dataframe
        self.create_dataset_dataframe()

        # Disables pbLoad widget
        self.pbLoad.setEnabled(False)

        # Enables back widgets
        self.set_state(True)

        # Checks rbDataTraining widget
        self.rbDataTraining.setChecked(True)

        # Displays data on table widget
        self.display_table(self.df_train, self.twData)

        # Clears and displays histogram of training data
        hist_train = self.df.target.value_counts()
        self.display_histogram(self.widgetGraph, hist_train, 'Class',
            'Samples',
            'The distribution of number of training samples in each class')

        # Checks rbHistTraining widget
        self.rbHistTraining.setChecked(True)

        # Reads class dictionary
        self.read_class()

    def create_dataset_dataframe(self):
        # Splitting training and testing dataset
        self.X_train, self.X_test, self.y_train, self.y_test = \
            train_test_split(self.data, self.labels, test_size=0.2,
                random_state=42)
        print(self.X_train.shape, self.X_test.shape,
            self.y_train.shape, self.y_test.shape)

        # Converting the labels into one hot encoding
        self.y_train = to_categorical(self.y_train, 43)
        self.y_test = to_categorical(self.y_test, 43)

        # Creates testing dataframe
        self.test_df = pd.read_csv(str(self.curr_path) + '/Test.csv')

        # Creates training dataframe
        self.df_train = pd.read_csv(str(self.curr_path) + '/Train.csv')
        nRow, nCol = self.df_train.shape
        print(f'There are {nRow} rows and {nCol} columns')
        print(self.df_train.head(5))

    def display_table(self, df, tableWidget):
        # Show data on table widget
        self.write_df_to_qtable(df, tableWidget)

        styleH = "::section {""background-color: red; }"
        tableWidget.horizontalHeader().setStyleSheet(styleH)

        styleV = "::section {""background-color: red; }"
        tableWidget.verticalHeader().setStyleSheet(styleV)

    # Takes a df and writes it to a qtable provided.
    # df headers become qtable headers
    @staticmethod
    def write_df_to_qtable(df, table):
        headers = list(df)
        table.setRowCount(df.shape[0])
        table.setColumnCount(df.shape[1])
        table.setHorizontalHeaderLabels(headers)

        # Getting data from df is computationally costly,
        # so convert it to an array first
        df_array = df.values
        for row in range(df.shape[0]):
            for col in range(df.shape[1]):
                table.setItem(row, col,
                    QTableWidgetItem(str(df_array[row, col])))

    def rb_histogram(self, b):
        hist_train = self.df.target.value_counts()
        hist_test = self.test_df.ClassId.value_counts()

        if b.text() == "Training Data":
            if b.isChecked() == True:
                self.display_histogram(self.widgetGraph, hist_train,
                    'Class', 'Samples',
                    'The distribution of number of training samples in each class')
            else:
                self.display_histogram(self.widgetGraph, hist_test,
                    'Class', 'Samples',
                    'The distribution of number of testing samples in each class')

        if b.text() == "Testing Data":
            if b.isChecked() == True:
                self.display_histogram(self.widgetGraph, hist_test,
                    'Class', 'Samples',
                    'The distribution of number of testing samples in each class')
            else:
                self.display_histogram(self.widgetGraph, hist_train,
                    'Class', 'Samples',
                    'The distribution of number of training samples in each class')

    def display_histogram(self, widget, hist, xlabel, ylabel, title):
        widget.canvas.axis1.clear()
        sns.barplot(hist.index, hist, ax=widget.canvas.axis1)
        widget.canvas.axis1.set_xlabel(xlabel)
        widget.canvas.axis1.set_ylabel(ylabel)
        widget.canvas.axis1.set_title(title)
        widget.canvas.axis1.grid()
        widget.canvas.draw()

    def rb_dataset(self, b):
        if b.text() == "Training Data":
            if b.isChecked() == True:
                self.display_table(self.df_train, self.twData)
            else:
                self.display_table(self.test_df, self.twData)

        if b.text() == "Testing Data":
            if b.isChecked() == True:
                self.display_table(self.test_df, self.twData)
            else:
                self.display_table(self.df_train, self.twData)

    def display_corr(self, widget, df, title):
        widget.canvas.axis1.clear()
        corr = df.corr()
        sns.heatmap(corr, cmap="Blues", annot=True, cbar=False,
            ax=widget.canvas.axis1)
        widget.canvas.axis1.set_title(title)
        widget.canvas.draw()

    def rb_corr(self, b):
        if b.text() == "Training Data":
            if b.isChecked() == True:
                self.display_corr(self.widgetGraph, self.df_train,
                    'Correlation Matrix of Training Data')
            else:
                self.display_corr(self.widgetGraph, self.test_df,
                    'Correlation Matrix of Testing Data')

        if b.text() == "Testing Data":
            if b.isChecked() == True:
                self.display_corr(self.widgetGraph, self.test_df,
                    'Correlation Matrix of Testing Data')
            else:
                self.display_corr(self.widgetGraph, self.df_train,
                    'Correlation Matrix of Training Data')

    def create_model(self):
        # Building the model
        model = Sequential()
        model.add(Conv2D(filters=32, kernel_size=(5, 5),
            activation='relu', input_shape=self.X_train.shape[1:]))
        model.add(Conv2D(filters=32, kernel_size=(5, 5), activation='relu'))
        model.add(MaxPooling2D(pool_size=(2, 2)))
        model.add(Dropout(rate=0.25))
        model.add(Conv2D(filters=64, kernel_size=(3, 3), activation='relu'))
        model.add(Conv2D(filters=64, kernel_size=(3, 3), activation='relu'))
        model.add(MaxPooling2D(pool_size=(2, 2)))
        model.add(Dropout(rate=0.25))
        model.add(Flatten())
        model.add(Dense(256, activation='relu'))
        model.add(Dropout(rate=0.5))
        model.add(Dense(43, activation='softmax'))

        # Compilation of the model
        model.compile(loss='categorical_crossentropy',
            optimizer='adam', metrics=['accuracy'])

        # Training the model
        epochs = 15
        history = model.fit(self.X_train, self.y_train, batch_size=64,
            epochs=epochs, validation_data=(self.X_test, self.y_test))
        model.save('traffic_classifier.h5')

        # Save history as a dictionary
        np.save('my_history.npy', history.history)

    def display_history(self, widget, historydict1, label1,
            historydict2, label2, xlabel, ylabel, title):
        widget.canvas.axis1.clear()
        widget.canvas.axis1.plot(historydict1, label=label1, linewidth=3.0)
        widget.canvas.axis1.plot(historydict2, label=label2, linewidth=3.0)
        widget.canvas.axis1.set_xlabel(xlabel)
        widget.canvas.axis1.set_ylabel(ylabel)
        widget.canvas.axis1.set_title(title)
        widget.canvas.axis1.grid()
        widget.canvas.axis1.legend()
        widget.canvas.draw()

    def show_history(self):
        history = np.load('my_history.npy', allow_pickle='TRUE').item()
        strCB = self.cbAccuracy.currentText()

        if strCB == 'Accuracy vs Epoch':
            self.display_history(self.widgetGraph, history['accuracy'],
                'training accuracy', history['val_accuracy'],
                'val accuracy', 'Epoch', 'Accuracy', 'Accuracy')

        if strCB == 'Loss vs Epoch':
            self.display_history(self.widgetGraph, history['loss'],
                'training loss', history['val_loss'], 'val loss',
                'Epoch', 'Loss', 'Loss')

    def open_image(self):
        self.fname = QFileDialog.getOpenFileName(self, 'Open file',
            'd:\\', "Image Files (*.jpg *.gif *.bmp *.png)")
        self.pixmap = QPixmap(self.fname[0])
        self.img = cv2.imread(self.fname[0], cv2.IMREAD_COLOR)
        self.display_image(self.pixmap, self.labelImage)
        self.hist_image(self.img, self.widgetHistIm,
            'Histogram of Test Image')

    def display_image(self, pixmap, label):
        label.setPixmap(pixmap)
        label.setScaledContents(True)

    def hist_image(self, img, qwidget1, title):
        qwidget1.canvas.axis1.clear()
        channel = len(img.shape)

        if channel == 2:
            # grayscale image
            histr = cv2.calcHist([img], [0], None, [256], [0, 256])
            qwidget1.canvas.axis1.plot(histr, color='yellow', linewidth=3.0)
            qwidget1.canvas.axis1.set_ylabel('Frequency', color='red')
            qwidget1.canvas.axis1.set_xlabel('Intensity', color='red')
            qwidget1.canvas.axis1.tick_params(axis='x', colors='red')
            qwidget1.canvas.axis1.tick_params(axis='y', colors='red')
            qwidget1.canvas.axis1.set_title(title, color='red')
            qwidget1.canvas.axis1.set_facecolor('xkcd:light tan')
            qwidget1.canvas.axis1.grid()
            qwidget1.canvas.draw()
        else:
            # color image
            color = ('b', 'g', 'r')
            for i, col in enumerate(color):
                histr = cv2.calcHist([img], [i], None, [256], [0, 256])
                qwidget1.canvas.axis1.plot(histr, color=col, linewidth=3.0)
            qwidget1.canvas.axis1.set_ylabel('Frequency', color='red')
            qwidget1.canvas.axis1.set_xlabel('Intensity', color='red')
            qwidget1.canvas.axis1.tick_params(axis='x', colors='red')
            qwidget1.canvas.axis1.tick_params(axis='y', colors='red')
            qwidget1.canvas.axis1.set_title(title, color='red')
            qwidget1.canvas.axis1.set_facecolor('xkcd:light tan')
            qwidget1.canvas.axis1.grid()
            qwidget1.canvas.draw()

    def recog_image(self):
        # Loads model
        model = load_model('traffic_classifier.h5')

        # Reads class dictionary
        self.read_class()

        # Resizes image
        image = cv2.resize(self.img, (30, 30))

        # Normalizes image to range [0, 1]
        image = cv2.normalize(image, None, alpha=0, beta=1,
            norm_type=cv2.NORM_MINMAX, dtype=cv2.CV_32F)

        # Reshapes image
        image = image.reshape(1, 30, 30, 3)

        # Prediction of this image
        pred = model.predict_classes(image)[0]
        self.labelPred1.setText('Predicted Label = ' + str(pred))
        print("Predicted label = ", pred)

        sign = self.dict_class[pred]
        self.labelPred2.setText('Sign= ' + str(sign))

if __name__ == '__main__':
    import sys
    app = QApplication(sys.argv)
    ex = DemoGUI_TrafficSign()
    ex.show()
    sys.exit(app.exec_())