Alexandria Digital Research Library

Towards efficient implementation of neuromorphic systems with emerging device technologies

Author:
Merrikh Bayat, Farnood
Degree Grantor:
University of California, Santa Barbara. Electrical & Computer Engineering
Degree Supervisor:
Dmitri Strukov
Place of Publication:
[Santa Barbara, Calif.]
Publisher:
University of California, Santa Barbara
Creation Date:
2015
Issued Date:
2015
Topics:
Computer engineering
Keywords:
Flash
Memristor
Neuromorphic system
Floating-gate transistor
Crossbar
Genres:
Online resources and Dissertations, Academic
Dissertation:
Ph.D.--University of California, Santa Barbara, 2015
Description:

Nowadays, with the unbounded expansion of the digital world, powerful information processing systems governed by deep learning algorithms are becoming more and more popular. In this situation, the use of fast, powerful, intelligent, and trainable deep learning methods seems critical and unavoidable. Despite their inherent structural and conceptual differences, all of these intelligent methods and systems share one common property: an enormous number of trainable parameters. From a hardware point of view, however, the size of a practical computing system is always determined by the available resources. In this dissertation, we study these deep learning methods from a hardware point of view and investigate the possibility of their hardware implementation based on two emerging technologies: resistive switching (memristive) and floating-gate (flash) devices. For this purpose, memristive devices are fabricated at high density in a crossbar structure to create a network, which is then trained with a modified RPROP rule to successfully classify images. In addition, the biologically plausible spike-timing-dependent plasticity rule and its dependence on the initial state are demonstrated experimentally on these nanoscale devices. A similar procedure is followed for the other technology, flash devices. We modified and fabricated the conventional design of digital flash memories to enable individual programming of floating-gate transistors. With large-scale neural networks in mind, an efficient and high-speed tuning method is developed based on the acquired dynamic and static models and is then tested experimentally on commercial devices. We have also experimentally investigated the possibility of implementing a vector-by-matrix multiplier using these devices, which is the main building block of most deep learning methods. Finally, a multi-layer neural network is designed and fabricated using this technology to classify handwritten digits.
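
The abstract's claim that the vector-by-matrix multiplier is the main building block of deep learning hardware can be illustrated with a minimal sketch (not taken from the dissertation): a crossbar applies an input voltage vector to its rows and, by Ohm's law and Kirchhoff's current law, each column sums currents weighted by the programmable conductances (memristors or floating-gate transistors) at its crosspoints. The array sizes, voltage and conductance ranges, and the differential-pair weight encoding below are illustrative assumptions only.

```python
import numpy as np

# Illustrative crossbar vector-by-matrix multiplication.
# Column current: I[j] = sum_i G[i, j] * V[i]
# Signed weights are assumed to be encoded as the difference of two
# conductances, W = G_plus - G_minus (a common convention, assumed here).

rng = np.random.default_rng(0)

n_inputs, n_outputs = 4, 3
V = rng.uniform(0.0, 0.5, size=n_inputs)                        # row voltages (V)
G_plus = rng.uniform(1e-6, 1e-4, size=(n_inputs, n_outputs))    # conductances (S)
G_minus = rng.uniform(1e-6, 1e-4, size=(n_inputs, n_outputs))   # conductances (S)

# Each column integrates the currents from all rows (analog dot product).
I_plus = V @ G_plus
I_minus = V @ G_minus

# Differential readout yields the signed result I = V @ (G_plus - G_minus).
I_out = I_plus - I_minus
print("output currents (A):", I_out)
```

In this picture, one forward pass of a neural-network layer is a single parallel analog operation per crossbar, which is why both the memristive and the flash-based implementations in the dissertation center on realizing and tuning such arrays.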

Physical Description:
1 online resource (157 pages)
Format:
Text
Collection(s):
UCSB electronic theses and dissertations
ARK:
ark:/48907/f31834pm
ISBN:
9781339084589
Catalog System Number:
990045715970203776
Rights:
In Copyright
Copyright Holder:
Farnood Merrikh Bayat
Access:
This item is restricted to on-campus access only.