Document Type
Conference Proceeding
Publication Date
6-2002
Publisher
University of Nevada, Las Vegas
Publisher Location
Las Vegas, NV
Volume
1
First page number
249
Last page number
255
Abstract
The article presents methods for handling large data sets in the domain of neural networks. The decomposition of neural networks is introduced, and its efficiency is demonstrated by the authors’ experiments. Examinations of the effectiveness of argument reduction in this field are also presented. The authors indicate that decomposition can reduce the size and complexity of the learned data, making the learning process faster or, in the case of large data sets, feasible at all. According to the authors’ experiments, argument reduction in some cases makes the learning process harder.
Keywords
Back propagation (Artificial intelligence); Computer algorithms; Decomposition method; Field programmable gate arrays; Neural networks (Computer science)
Disciplines
Computer Engineering | Controls and Control Theory | Digital Circuits | Electrical and Computer Engineering | Signal Processing | Systems and Communications
Language
English
Repository Citation
Selvaraj, H., Niewiadomski, H., Buciak, P., Pleban, M., Sapiecha, P., Luba, T., & Muthukumar, V. (2002). Implementation of Large Neural Networks Using Decomposition. 1, 249-255. Las Vegas, NV: University of Nevada, Las Vegas.
https://digitalscholarship.unlv.edu/ece_fac_articles/306
Included in
Controls and Control Theory Commons, Digital Circuits Commons, Signal Processing Commons, Systems and Communications Commons