Award Date

May 2019

Degree Type

Thesis

Degree Name

Master of Science in Computer Science

Department

Computer Science

First Committee Member

Justin Zhan

Second Committee Member

Laxmi Gewali

Third Committee Member

Wolfgang Bein

Fourth Committee Member

Ge Kan

Number of Pages

80

Abstract

We present a method for boosting the performance of Convolutional Neural Networks (CNNs) by reducing the covariance between the feature maps of the convolutional layers.

In a CNN, the units of a hidden layer are segmented into feature (activation) maps. The units within a feature map share the same weight matrix (filter); in simple terms, they look for the same feature. A feature map is the output of one filter applied to the previous layer. A CNN searches for features such as straight lines, and wherever such a feature is detected, it is recorded in the corresponding feature map. During training, the network learns which features it considers important. Each feature map looks for something different: one may respond to horizontal lines while another responds to vertical lines or curves. Reducing the covariance between the feature maps of a convolutional layer maximizes the variance among the feature maps output by that layer. This reduces the redundancy of the feature maps and consequently maximizes the information they represent.
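The following is a minimal sketch, not taken from the thesis, of one way a covariance penalty between the feature maps of a convolutional layer could be measured. It assumes a hypothetical array layout of shape (num_maps, height, width) for a single input and uses plain NumPy; the thesis's actual formulation and training procedure may differ.

```python
import numpy as np

def feature_map_covariance_penalty(feature_maps):
    """Measure how correlated the feature maps of one layer are.

    feature_maps: array of shape (num_maps, height, width) holding the
    activations of one convolutional layer for a single input
    (hypothetical layout, chosen for illustration).
    Returns the sum of squared off-diagonal covariance entries: larger
    values mean more redundant feature maps.
    """
    num_maps = feature_maps.shape[0]
    # Flatten each feature map into a vector of activations.
    flat = feature_maps.reshape(num_maps, -1)
    # Covariance matrix between feature maps (num_maps x num_maps rows = maps).
    cov = np.cov(flat)
    # Drop the diagonal (per-map variances) and penalize the covariances.
    off_diag = cov - np.diag(np.diag(cov))
    return np.sum(off_diag ** 2)

# Example: eight random 5x5 feature maps.
maps = np.random.randn(8, 5, 5)
print(feature_map_covariance_penalty(maps))
```

In a training loop, a term of this form could be added to the loss so that minimizing it pushes the feature maps toward lower covariance (less redundancy), which is the effect the abstract describes.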

Keywords

Artificial Neural Network; Convolutional Neural Network; Covariance; Data Shadow Tween; Feature maps; Image Classification

Disciplines

Computer Sciences

File Format

pdf

Degree Grantor

University of Nevada, Las Vegas

Language

English

Rights

IN COPYRIGHT. For more information about this rights statement, please visit http://rightsstatements.org/vocab/InC/1.0/

