
Please use this identifier to cite or link to this item: http://hdl.handle.net/2108/761

Full metadata record

DC Field: Value [Language]
contributor.advisor: Del Giudice, Paolo
contributor.advisor: Salina, Gaetano
contributor.advisor: Indiveri, Giacomo
contributor.advisor: Douglas, Rodney
contributor.author: Giulioni, Massimiliano
date.accessioned: 2009-01-15T13:44:34Z
date.available: 2009-01-15T13:44:34Z
date.issued: 2009-01-15T13:44:34Z
identifier.uri: http://hdl.handle.net/2108/761
description: Joint Italy-Switzerland doctorate (Co-dottorato Italia-Svizzera) [en]
description.abstract: The brain is an incredible system, with a computational power that goes far beyond that of our standard computers. It consists of a network of 10^11 neurons connected by about 10^14 synapses: a massively parallel architecture, which suggests that the brain performs computation according to completely new strategies that we are far from understanding. To study the nervous system, a reasonable starting point is to model its basic units, neurons and synapses, extract their key features, and try to put them together in simple, controllable networks. The research group I have been working in focuses on network dynamics and chooses to model neurons and synapses at a functional level: in this work I consider networks of integrate-and-fire neurons connected through synapses that are plastic and bistable. A synapse is said to be plastic when, according to some internal dynamics, it is able to change the "strength", the efficacy, of the connection between the pre- and post-synaptic neurons. The adjective bistable refers to the number of stable efficacy states a synapse can have; we consider synapses with two stable states: potentiated (high efficacy) or depressed (low efficacy). The synaptic model considered here is also endowed with a new stop-learning mechanism, particularly relevant when highly correlated patterns have to be learnt. The ability of this kind of system to reproduce in simulation behaviors observed in biological networks motivates the attempt to implement the studied network in hardware. This is where this thesis is situated: the goal of this work is to design, control and test hybrid analog-digital, biologically inspired hardware systems that behave in agreement with theoretical and simulation predictions. This class of devices typically goes under the name of neuromorphic VLSI (Very-Large-Scale Integration).
Neuromorphic engineering was born from the idea of designing bio-mimetic devices; it represents a useful research strategy that helps inspire new models, stimulates theoretical research, and offers an effective way of implementing stand-alone, power-efficient devices. In this work I present two chips, a prototype and a larger device, that are a step towards endowing neuromorphic VLSI systems with autonomous learning capabilities adequate for non-trivial statistics of the stimuli to be learnt. The main novel features of these chips are the type of synaptic plasticity implemented and the configurability of the synaptic connectivity. The reported experimental results demonstrate that the circuits behave in agreement with theoretical predictions, and show the advantages of the stop-learning synaptic plasticity when highly correlated patterns have to be learnt. The high degree of flexibility of these chips in defining the synaptic connectivity is relevant in the perspective of using such devices as building blocks of parallel, distributed multi-chip architectures, which will make it possible to scale the network up to systems with interesting computational abilities, capable of interacting with real-world stimuli. [en]
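As a rough illustration of the model summarized in the abstract, the following is a minimal Python sketch of a leaky integrate-and-fire neuron driven by plastic, bistable synapses. All parameter values, variable names, and the simplified plasticity rule here are hypothetical illustrations, not the circuits or equations of the thesis.

```python
import numpy as np

# Illustrative sketch only: a leaky integrate-and-fire neuron driven by a
# population of plastic, bistable synapses. Each synapse has an internal
# variable X that drifts toward one of two stable states (0 = depressed,
# 1 = potentiated); its transmitted efficacy is binary (J_low or J_high).
# Parameters below are arbitrary, chosen only to make the sketch run.

rng = np.random.default_rng(42)

dt = 1e-3                  # simulation step (s)
tau_m = 20e-3              # membrane time constant (s)
V_th, V_reset = 1.0, 0.0   # firing threshold and reset potential
J_low, J_high = 0.02, 0.3  # the two stable efficacies: depressed / potentiated

n_syn = 50
X = rng.random(n_syn)      # internal synaptic variable in [0, 1]
X_th = 0.5                 # bistability threshold separating the two basins
drift = 2.0                # refresh drift rate toward the nearest stable state

V = 0.0
spike_count = 0
for _ in range(1000):
    pre = rng.random(n_syn) < 0.02           # Poisson-like presynaptic spikes
    # Efficacy is binary: potentiated synapses transmit J_high, else J_low
    J = np.where(X > X_th, J_high, J_low)
    V += dt * (-V / tau_m) + np.sum(J[pre])  # leaky integration of input
    # Candidate plasticity: on a presynaptic spike, X jumps up if the
    # postsynaptic neuron is depolarized, down otherwise (a crude stand-in
    # for the Calcium-gated stop-learning rule studied in the thesis)
    X[pre] += 0.2 if V > 0.5 * V_th else -0.2
    # Bistable refresh: between jumps, X relaxes toward 0 or 1
    X += dt * drift * np.sign(X - X_th)
    np.clip(X, 0.0, 1.0, out=X)
    if V >= V_th:                            # fire and reset
        V = V_reset
        spike_count += 1

print(spike_count, float(X.min()), float(X.max()))
```

After the run, each synapse sits near one of its two stable states, the key property that makes such a synapse a robust one-bit memory in the face of noise, and the property the VLSI circuits described below are built to realize.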
description.tableofcontents [en]:
1 Introduction
2 Models for a compact VLSI implementation
  2.1 Neurons
    2.1.1 Hodgkin and Huxley model
    2.1.2 A VLSI implementation of the Hodgkin and Huxley model
    2.1.3 Two-dimensional neuron models
    2.1.4 Morris-Lecar model
    2.1.5 FitzHugh-Nagumo model
    2.1.6 IF model
    2.1.7 IF model on silicon
  2.2 Synapses
    2.2.1 Fixed synapses in a simple VLSI network
    2.2.2 Plastic synapses
    2.2.3 Effective model of a plastic bistable synapse
    2.2.4 VLSI implementation of the effective synaptic model
    2.2.5 The Calcium self-regulating mechanism
  2.3 Conclusions
3 CLANN
  3.1 Introduction: main ideas
  3.2 Architecture
  3.3 Signal flow
  3.4 Neuron and synapse, block level
  3.5 Measuring parameters through neural and synaptic dynamics
  3.6 LTP/LTD probabilities: measurements vs chip-oriented simulation
  3.7 Learning overlapping patterns
  3.8 Summary and discussion
  C.1 Circuit details and layout
    C.1.1 Synapse
    C.1.2 Neuron
    C.1.3 Calcium
    C.1.4 Shaper and other circuits
4 FLANN
  4.1 Architecture
  4.2 Signal flow
  4.3 Block-level description
  4.4 Synapse and shaper: circuits and layout
    4.4.1 Synapse
    4.4.2 Shaper
    4.4.3 Synapse layout
  4.5 Calcium circuit
    4.5.1 Differential pair integrator
    4.5.2 Comparators
    4.5.3 Current conveyors
  4.6 New AER input circuit
  4.7 Preliminary characterization tests: synaptic efficacy
  4.8 Conclusions
Conclusions
format.extent: 2756521 bytes
format.mimetype: application/pdf
language.iso: en [en]
subject: spiking neurons [en]
subject: neural networks [en]
subject.classification: FIS/01 Fisica sperimentale (experimental physics) [en]
title: Networks of spiking neurons and plastic synapses: implementation and control [en]
type: Doctoral thesis [en]
degree.name: Dottorato in fisica (doctorate in physics) [en]
degree.level: Dottorato (doctorate) [en]
degree.discipline: Facoltà di Scienze Matematiche Fisiche e Naturali [en]
degree.grantor: Università degli studi di Roma Tor Vergata [en]
date.dateofdefense: A.A. 2007/2008 (academic year) [en]
Appears in Collections:Tesi di dottorato in scienze matematiche e fisiche

Files in This Item:

File            Description  Size     Format
PhD_thesis.pdf  Thesis       2691 Kb  Adobe PDF


All items in DSpace are protected by copyright, with all rights reserved.