Files

Download Full Text (598 KB)

Description

With the surge in popularity of machine learning, many researchers have sought optimization methods to reduce the complexity of neural networks; only recently, however, have attempts been made to optimize neural networks via quantum computing. In this paper, we describe the training process of a feed-forward neural network (FFNN) and analyze its time complexity. We highlight the inefficiencies of FFNN training, particularly when implemented with gradient descent, and issue a call to action for optimizing the FFNN. We then discuss the strides quantum computing has made toward improving the time complexity of machine learning, studying recent attempts to speed up neural networks through both a complete quantum analog and a hybrid analog. We propose using the Quantum Approximate Optimization Algorithm (QAOA) to create a hybrid analog of an FFNN; supported by previous implementations of QAOA, we believe there is potential for extending the QAOA algorithm toward a neural network. Finally, we describe the process by which we hope to implement the QAOA algorithm and how we will compare its runtime to that of an FFNN.
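
As a point of reference for the complexity discussion above, here is a minimal sketch of full-batch gradient-descent training for a one-hidden-layer FFNN. The layer sizes, learning rate, and toy data are illustrative assumptions, not taken from the paper; the point is that each training step is dominated by the matrix products in the forward and backward passes.

import numpy as np

# Minimal illustrative FFNN trained with full-batch gradient descent.
# All sizes and hyperparameters below are hypothetical.
rng = np.random.default_rng(0)
n_in, n_hid, n_out = 4, 8, 1
W1 = rng.normal(scale=0.1, size=(n_in, n_hid))
W2 = rng.normal(scale=0.1, size=(n_hid, n_out))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

X = rng.normal(size=(32, n_in))            # toy inputs
y = rng.integers(0, 2, size=(32, n_out))   # toy binary targets
lr = 0.5

for epoch in range(1000):
    # Forward pass: cost dominated by the two matrix products.
    H = sigmoid(X @ W1)
    Y = sigmoid(H @ W2)
    # Backward pass (squared-error loss): same asymptotic cost.
    dZ2 = (Y - y) * Y * (1 - Y)
    dZ1 = (dZ2 @ W2.T) * H * (1 - H)
    W2 -= lr * (H.T @ dZ2)
    W1 -= lr * (X.T @ dZ1)

On the quantum side, the standard QAOA ansatz that a hybrid analog would build on alternates a cost unitary and a mixing unitary for p rounds,

\[ |\psi(\boldsymbol{\gamma}, \boldsymbol{\beta})\rangle = \prod_{k=1}^{p} e^{-i\beta_k H_M}\, e^{-i\gamma_k H_C}\, |+\rangle^{\otimes n}, \]

with a classical outer loop tuning the angles \(\gamma_k, \beta_k\) to extremize \(\langle \psi | H_C | \psi \rangle\); the choice of cost Hamiltonian \(H_C\) for the FFNN replacement is left to the paper itself.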

Publication Date

Spring 2021

Language

English

Keywords

QAOA; Machine learning; Neural networks; Quantum computing

Disciplines

Computer Sciences | OS and Networks | Theory and Algorithms

File Format

pdf

File Size

535 KB

Comments

Faculty Mentor: Bernard Zygelman, Ph.D.

Toward a Quantum Neural Network: Proposing the QAOA Algorithm to Replace a Feed Forward Neural Network

