Monday, August 5, 2019
Modelling of Neuromorphic Retina
Modelling of Neuromorphic Retina

CHAPTER 1: INTRODUCTION AND LITERATURE REVIEW

1. INTRODUCTION

The world depends on how we sense it: we perceive through what we sense, and we act according to that perception. Leaving the psychological aspects aside, the senses in humans and other animals are the faculties by which outside information is received for evaluation and response; the actions of humans therefore depend on what they sense. Aristotle divided the senses into five: hearing, sight, smell, taste and touch. These are still regarded as the classical five senses, although scientists have identified as many as 15 additional ones. Sense organs buried deep in the tissues of muscles, tendons and joints, for example, give rise to sensations of weight, body position and the amount of bending of the various joints; these organs are called proprioceptors. Within the semicircular canals of the ear is the organ of equilibrium, concerned with the sense of balance. General senses, which produce information about bodily needs (hunger, thirst, fatigue and pain), are also recognized. The foundation of all of these, however, is still Aristotle's list of five.

Our world is a visual world. Visual perception is by far the most important sensory process by which we gather and extract information from our environment. Vision is the ability to see the features of objects we look at, such as color, shape, size, detail, depth and contrast, and it is achieved when the eyes and brain work together to form pictures of the world around us. Vision begins with light rays bouncing off the surfaces of objects. Light reflected from objects in our world is a very rich source of information. Its short wavelength and high propagation speed allow spatially accurate and fast localization of reflecting surfaces, while the spectral variations in wavelength and intensity of the reflected light correspond to the physical properties of object surfaces and provide a means to recognize them. The sources that light our world are usually inhomogeneous; the sun, our natural light source, is to a good approximation a point source. Inhomogeneous light sources cause shadows and reflections that are highly correlated with the shape of objects, so knowledge of the spatial position and extent of the light source enables further extraction of information about the environment.

Our world is also a world of motion. We and most other animals are moving creatures. We navigate successfully through a dynamic environment, and we use predominantly visual information to do so. A sense of motion is crucial for the perception of our own motion in relation to other moving and static objects, and we must predict accurately the relative dynamics of objects in the environment in order to plan appropriate actions. Take, for example, the following situation, which illustrates the nature of such a perceptual task: the batsman of a cricket team is facing a bowler. In order to hit the ball to the boundary, he needs an accurate estimate of the real motion trajectory of the ball so that he can precisely plan and orchestrate his body movements to hit it. He has little more than visual information available to solve this task.
Once he is in motion the situation becomes even more complicated, because the visual motion information now represents the relative motion between him and the ball while the relevant coordinate frame remains static. Yet, despite its difficulty, with appropriate training some of us become astonishingly good at performing this task.

High performance matters because we live in a highly competitive world. The survival of the fittest applies to us as to any other living organism, although the fields of competition have shifted during recent evolutionary history. This competitive pressure not only promotes a visual motion perception system that can determine quickly what is moving where, in which direction and at what speed; it also forces this system to be efficient. Efficiency is crucial in biological systems: it favors solutions that consume the smallest amount of time, substrate and energy. The requirement for efficiency is advantageous because it drives the system to be quicker, to go further, to last longer, and to keep more resources free for other tasks performed at the same time. Being the complex sensory-motor system that he is, the batsman cannot dedicate all of his resources to a single task. Compared with human perceptual abilities, nature provides even more astonishing examples of efficient visual motion perception. Consider the various flying insects that navigate by vision: they weigh only fractions of a gram, yet they navigate successfully at high speed through complicated environments in which they must resolve visual motions of up to 2000 deg/s.

1.1 ARTIFICIAL SYSTEMS

What applies to biological systems also applies, to a large extent, to any artificial autonomous system that behaves freely in a real-world environment. When humankind started to build artificial autonomous systems, it was commonly accepted that such systems would be part of our everyday life by the year 2001. Countless science-fiction stories and films have encouraged visions of how such agents should behave and interact with human society, and many of these scenarios seem realistic and desirable. In short, we have a rather good sense of what these agents should be capable of, but their construction is still elusive; the semi-autonomous rovers of NASA's recent Mars missions and demonstrations of artificial pets are among the few examples. Remarkably, progress in this field has been slower than in other fields of electronics. Unlike transistor technology, where the growth in density follows Moore's law, the performance of autonomous systems, also in terms of computational power, is still not on a par. To find the reason we have to understand the limitations of traditional approaches.

An autonomous system is one that perceives, takes decisions and plans actions at a cognitive level, and in doing so it must show some degree of intelligence. Returning to the batsman example: he knows exactly what he has to do to dispatch the ball to the boundary; he has to get into the right position and then hit the ball with precise timing. In this process photons hit the retina and, eventually, muscle force is applied, yet the batsman is not aware of how much is going on inside his body. The batsman has a nervous system, and one of its many functions is to instantiate a transformation layer between the environment and his cognitive mind.
The brain reduces and preprocesses the huge amount of noisy sensory data, categorizes and extracts the relevant information, and translates it into a form that is accessible to cognitive reasoning. It is clear, then, that a whole cluster of processes takes place in a biological cognitive system within a very short time, and that transduction, although an important part of this chain, cannot by itself perform the whole complex task. Perception is therefore the interpretation of sensory information with respect to the perceptual goal. The process is shown in figure 1.

1.2 DIFFERENCE BETWEEN BIOLOGICAL SYSTEMS AND COMPUTERS

The brain is organized fundamentally differently from a computer, and science is still a long way from understanding how the whole thing works; a computer is comparatively easy to understand. Features (or organizational principles) that clearly distinguish a brain from a computer are massive parallelism, distributed storage, asynchronous processing and self-organization. The computer is still basically a serially driven machine with centralized storage and minimal self-organization. Table 1.1 lists these differences.

Table 1.1 Differences in the organizational principles and operation of computer and brain

Digital computation may become so fast that it solves the present problems, and it may even become possible to build autonomous systems from digital components that are as powerful, as efficient and as intelligent as we may imagine in our wildest dreams. There are doubts about this, however, and so we have to switch to an implementation framework that can realize all of these things.

1.3 NEURAL COMPUTATIONS WITH THE HELP OF ANALOG INTEGRATED CIRCUITS

It was Carver Mead who, inspired by the course "The Physics of Computation" that he taught jointly with John Hopfield and Richard Feynman at Caltech in 1982, first proposed the idea of embodying neural computation in silicon analog very-large-scale integrated (aVLSI) circuits. Biological neural networks are examples of wonderfully engineered and efficient computational systems. When researchers first began to develop mathematical models of how nervous systems actually compute and process information, they soon realized that one of the main reasons for the impressive computational power and efficiency of neural networks is the collective computation that takes place among their highly connected neurons. It is also well established that these computations are not carried out digitally, although the digital way is much simpler. Real neurons have a cell membrane whose capacitance acts as a low-pass filter on the incoming signal arriving through the dendrites; they have dendritic trees that non-linearly add signals from other neurons, and so forth. Network structure and analog processing appear to be two key properties that give nervous systems their efficiency and computational power, and they are two properties that digital computers typically do not share or exploit. A minimal sketch of the membrane low-pass behavior is given below.
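As an illustration of the low-pass point above, a neuron's passive membrane can be approximated by a leaky RC integrator. The sketch below is not taken from the present work; the membrane resistance R, capacitance C, time step and input are arbitrary illustrative values.

    # Minimal sketch (not part of the original design): a passive cell membrane
    # modelled as a first-order RC low-pass filter, dV/dt = (I*R - V) / (R*C).
    # R, C, dt and the input step are illustrative values only.

    R = 100e6       # membrane resistance, ohms (assumed)
    C = 100e-12     # membrane capacitance, farads (assumed)
    dt = 1e-4       # time step, seconds
    tau = R * C     # membrane time constant (10 ms here)

    def membrane_lowpass(current_trace):
        """Filter an input current trace (amperes) through the RC membrane."""
        v = 0.0
        out = []
        for i in current_trace:
            # forward-Euler update of the membrane voltage
            v += dt * (i * R - v) / tau
            out.append(v)
        return out

    # Example: a 50 pA step; the voltage rises exponentially towards I*R = 5 mV.
    step = [50e-12] * 2000
    print(round(membrane_lowpass(step)[-1] * 1e3, 2), "mV")

Fast input fluctuations are attenuated while slow ones pass, which is the filtering role the text attributes to the membrane capacitance.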
1.4 LITERATURE REVIEW

1. Biological information-processing systems operate on completely different principles from those with which most engineers are familiar. For many problems, particularly those in which the input data are ill-conditioned and the computation can be specified in a relative manner, biological solutions are many orders of magnitude more effective than those we have been able to implement using digital methods. This advantage can be attributed principally to the use of elementary physical phenomena as computational primitives, and to the representation of information by the relative values of analog signals rather than by the absolute values of digital signals. This approach requires adaptive techniques to mitigate the effects of component differences, and this kind of adaptation leads naturally to systems that learn about their environment. Large-scale adaptive analog systems are more robust to component degradation and failure than more conventional systems, and they use far less power. For this reason, adaptive analog technology can be expected to utilize the full potential of wafer-scale silicon fabrication.

2. The architecture and realization of microelectronic components for a retina-implant system that will provide visual sensations to patients suffering from photoreceptor degeneration. Special circuitry has been developed for a fast single-chip CMOS image sensor system, which provides a high dynamic range of more than seven decades (without any electronic or mechanical shutter), corresponding to the performance of the human eye. This image sensor system is directly coupled to a digital filter and a signal processor that compute the so-called receptive-field function for generation of the stimulation data. These external components are wirelessly linked to an implanted flexible silicon multielectrode stimulator, which generates electrical signals for electro-stimulation of the intact ganglion cells. All components, including additional hardware for digital signal processing and wireless data and power transmission, have been fabricated using in-house standard CMOS technology.

3. Circuits inspired by the nervous system that either help to verify neuron physiological models or serve as useful components in artificial perception/action systems; research also aims at using them in implants. These circuits are computational devices and intelligent sensors that are organized very differently from digital processors. Their storage and processing capacity is distributed, they are asynchronous and use no clock signal, they are often purely analog and operate in continuous time, and they adapt or can even learn at a basic level instead of being programmed. A short introduction to the area of brain research is also included in the course: students learn to exploit mechanisms employed by the nervous system to build compact, energy-efficient analog integrated circuits, gain insight into a multidisciplinary research area, learn to analyze analog CMOS circuits and acquire basic knowledge of brain-research methods.

4. Smart vision systems will be an inevitable component of future intelligent systems. Conventional vision systems, based on system-level (or even chip-level) integration of an image camera (usually a CCD) and a digital processor, do not have the potential for application in general-purpose consumer electronic products, simply because of their cost, size and complexity. For these reasons conventional vision systems have mainly been limited to specific industrial and military applications. Vision chips, which include both the photosensors and parallel processing elements (analog or digital) on the same device, have been under research for more than a decade and demonstrate promising capabilities.

5. Dr. Carver Mead, professor emeritus of the California Institute of Technology (Caltech), Pasadena, pioneered this field.
He reasoned that biological evolution over millions of years has produced organisms that engineers can study in order to develop better artificial systems. By giving senses and sensory-based behavior to machines, such systems can begin to approach human senses, and the work sits at the intersection of biology, computer science and electrical engineering. Analog circuits, electrical circuits that operate on continuously varying signals, are used to implement these algorithmic processes, with transistors operated in the sub-threshold or weak-inversion region (a region of operation in which a transistor conducts a small current even though the gate voltage is slightly below the threshold voltage required for normal conduction), where they exhibit exponential current-voltage characteristics and very low currents; a numerical illustration of this relationship is given after this review. This circuit paradigm yields high-density, low-power implementations of functions that are computationally intensive in the other operating regions (triode and saturation). (In the triode region the transistor operates with the gate voltage above the threshold voltage and the drain-source voltage below the difference between the gate-source voltage and the threshold voltage; in the saturation region the gate voltage is still above the threshold voltage but the drain-source voltage exceeds that difference. A MOS transistor has four terminals: drain, gate, source and bulk. Current flows between drain and source when enough voltage is applied to the gate to enable conduction; the bulk is the body of the transistor.) As such systems mature, replacement of human parts is expected to become a major application area of neuromorphic electronics. The fundamental principle is that robust artificial systems are designed by observing how biological systems perform these functions.

6. The proposed work is a circuit-level model of a neuromorphic retina, a crude electronic model of biologically inspired smart visual sensors. These visual sensors integrate image acquisition with parallel processing, and with these features the neuromorphic retina mimics the neural circuitry of a bionic eye. The proposed electronic model contains adaptive photoreceptors as light sensors together with other circuit components, such as averaging circuits, circuits representing ganglion cells and neuronal firing circuits, that act together to sense brightness, size, orientation and shape in order to distinguish objects in close proximity. Image-processing features are available in modern robots, but most image-processing issues are handled by software resources, whereas machine vision based on a neuromorphic retina performs image processing at the front end. With added hardware resources, processing at the front end can save a great deal of engineering effort in building electronic devices with a sense of vision.
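The weak-inversion relationship mentioned in item 5 can be illustrated numerically. The sketch below uses the standard exponential sub-threshold expression Id ≈ I0·exp(κ·Vgs/UT); the pre-factor I0 and slope factor κ are process-dependent, and the values used here are assumed, order-of-magnitude numbers rather than data from any circuit in this work.

    # Minimal sketch of the weak-inversion (sub-threshold) MOS characteristic:
    # Id ~ I0 * exp(kappa * Vgs / Ut). I0 and kappa are illustrative only.

    from math import exp

    I0 = 1e-15      # leakage-scale pre-factor, amperes (assumed)
    kappa = 0.7     # sub-threshold slope factor (typical order of magnitude)
    Ut = 0.025      # thermal voltage kT/q at room temperature, volts

    def subthreshold_current(vgs):
        """Drain current in weak inversion for a gate-source voltage vgs (volts)."""
        return I0 * exp(kappa * vgs / Ut)

    # Roughly every Ut/kappa * ln(10) ~ 82 mV of gate voltage changes the current
    # by one decade, which is how a few hundred millivolts span many decades.
    for vgs in (0.1, 0.2, 0.3, 0.4):
        print(f"Vgs = {vgs:.1f} V  ->  Id = {subthreshold_current(vgs):.2e} A")

This exponential dependence is what gives sub-threshold circuits their very low currents and wide dynamic range.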
1.5 OBJECTIVES OF THE PRESENT WORK

This project work describes a circuit-level model of a neuromorphic retina, which is a crude electronic model of biologically inspired smart visual sensors. These visual sensors have integrated image acquisition and parallel processing, and with these features the neuromorphic retina mimics the neural circuitry of a bionic eye. The proposed electronic model contains adaptive photoreceptors as light sensors together with neuronal firing circuits and other blocks that sense brightness, size, orientation and shape in order to distinguish objects in close proximity. Image-processing features are available in modern robots, but most image-processing issues are handled by software resources, whereas machine vision based on a neuromorphic retina performs image processing at the front end. This work shows that, with added hardware resources, front-end processing can save a great deal of engineering effort and time in building electronic devices with a sense of vision.

The objectives of the present work are:
- Modelling of the neuromorphic retina
- The photoreceptor block
- The horizontal cell block
- The transistor mesh implemented in CMOS technology
- The integrated block
- The integrated block of photoreceptors, horizontal cells and bipolar cells
- The spike generation circuit

1.6 CONCLUDING REMARKS

This chapter has described the role of artificial systems and the differences between how the brain and the computer work. The present work is focused on the design of neuromorphic retina layer circuits. Many studies have been carried out to examine the behavior and limitations of the neuromorphic retina, and some investigators have performed experimental work on the phenomenon. Chapter 2 covers biological neurons and the electronics of the neuromorphic retina, with descriptions of silicon neurons, electrical nodes as neurons, perceptrons, integrate-and-fire neurons, the biological significance of neuromorphic systems, neuromorphic electronics engineering methods and the process of developing a neuromorphic chip. Chapter 3 describes the artificial silicon retina, the physiology of vision, the retina, the conversion of photons to electrons, why a neuromorphic retina is required, the equivalent electronic structure and the visual pathway to the brain. Chapter 4 covers the design and implementation of the neuromorphic retina, describing the photoreceptor block, the horizontal cell block, the integrated block, the integrated block of photoreceptors, horizontal cells and bipolar cells, and the spike generation circuit. Chapter 5 presents the design analyses and test results of the neuromorphic retina layers. The results are summarized as conclusions in Chapter 6.

CHAPTER 2: BIOLOGICAL NEURONS AND NEUROMORPHIC ELECTRONICS

2.1 INTRODUCTION

Neuromorphic systems are inspired by the structure, function and plasticity of biological nervous systems. They are artificial neural systems that mimic the algorithmic behavior of biological systems through efficient, adaptive and intelligent control techniques. They are designed to adapt, learn from their environments and make decisions like biological systems, not to outperform them, and no effort is made to eliminate the deficiencies inherent in biological systems. This field, called neuromorphic engineering, is opening a new era in computing with great promise for future medicine, healthcare delivery and industry. It relies on the wealth of examples that nature offers in order to develop functional, reliable and effective artificial systems. Neuromorphic computational circuits, designed to mimic biological neurons, are primitives based on the optical and electronic properties of semiconductor materials.

2.2 BIOLOGICAL NEURONS

Biological neurons have a fairly simple large-scale structure, although their operation and small-scale structure are immensely complex. Neurons have three main parts: a central cell body, called the soma, and two different types of branched, tree-like structures that extend from the soma, called dendrites and axons.
Information from other neurons, in the form of electrical impulses, enters the dendrites at connection points called synapses. The information flows from the dendrites to the soma, where it is processed, and the output signal, a train of impulses, is then sent down the axon to the synapses of other neurons. The dendrites carry impulses towards the soma, while the axon carries impulses away from it. Functionally, there are three different types of neurons:
- Sensory neurons carry information from sense receptors (the nerves that let us see, smell, hear, taste and feel) to the central nervous system, which comprises the brain and the spinal cord.
- Motor neurons carry information from the CNS to effectors (muscles, or glands that release all kinds of substances, from water to hormones to ear wax).
- Interneurons connect sensory neurons and motor neurons.

A neuron has a cell body (or soma) and root-like extensions called neurites; among the neurites, one major outgoing trunk is the axon and the others are dendrites. The signal-processing capability of a neuron lies in its ability to vary its intrinsic electrical potential (the membrane potential) through special electro-physical and chemical processes. The portion of the axon immediately adjacent to the cell body is called the axon hillock; this is the point at which action potentials are usually generated. The branches that leave the main axon are often called collaterals. Certain types of neurons have axons or dendrites coated with a fatty insulating substance called myelin; the coating is called the myelin sheath and the fiber is said to be myelinated. In some cases the myelin sheath is surrounded by another insulating layer, sometimes called the neurilemma. This layer, thinner than the myelin sheath and continuous over the nodes of Ranvier, is made up of thin cells called Schwann cells.

Now, how do these things work? Inside and just outside the neuron are sodium ions (Na+) and potassium ions (K+). Normally, when the neuron is at rest and not sending any messages, K+ accumulates inside the neuron while Na+ is pumped out to the area just outside it, so there is a lot of K+ inside the neuron and a lot of Na+ just outside. This is called the resting potential. Keeping the K+ in and the Na+ out is not easy; it requires energy from the body. An impulse arriving from the dendrites reverses this balance, causing K+ to leave the neuron and Na+ to enter; this is known as depolarization. As K+ leaves and Na+ enters, energy is released, since the neuron is no longer doing any work to keep K+ in and Na+ out. This creates an electrical impulse, or action potential, that is transmitted from the soma along the axon. As the impulse leaves, the neuron repolarizes, taking K+ back in and pumping Na+ out, restoring the resting potential and becoming ready to send another impulse. This process occurs extremely quickly; a neuron can theoretically send roughly 266 messages in one second. The electrical impulse may stimulate other neurons from its synaptic knobs and so propagate the message. Experiments have shown that the membrane voltage during the generation of an action potential generally takes the form of a spike (a short pulse, figure 2.2), and the shape of this pulse is rather stereotyped and mathematically predictable across neurons.
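For orientation only (this relation is not part of the original text), the potential that such an ionic concentration gradient sets up for a single ion species can be quantified by the standard Nernst equation; the K+ concentrations in the comment are typical textbook figures, not measurements from this work.

    % Nernst equation for the equilibrium potential of one ion species.
    % R: gas constant, T: absolute temperature, z: ion valence, F: Faraday constant.
    \[
      E_{\mathrm{ion}} = \frac{RT}{zF}\,
      \ln\!\left(\frac{[\mathrm{ion}]_{\text{outside}}}{[\mathrm{ion}]_{\text{inside}}}\right)
    \]
    % Example (textbook values): for K+ at 310 K with about 5 mM outside and
    % 140 mM inside, E_K is roughly -89 mV, close to a neuron's resting potential.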
2.3 SILICON NEURONS

Neuromorphic engineers are more interested in the physiological than the anatomical model of a neuron, that is, in its functionality rather than in merely classifying its parts, and their preference lies with models that can be realized in aVLSI circuits. Fortunately, many models of neurons have traditionally been formulated as electronic circuits, since many of the varying observables in biological neurons are voltages and currents, so it was relatively straightforward to implement them in VLSI electronic circuits. There now exist many aVLSI models of neurons, which can be classified by the level of detail they represent; a summary can be found in Table 2.1. The most detailed are known as 'silicon neurons'. Somewhat cruder in their level of detail are 'integrate-and-fire neurons', and simpler still are 'perceptrons', also known as 'McCulloch-Pitts neurons'. The simplest way of representing a neuron in electronics, however, is as an electrical node.

Table 2.1 VLSI models of neurons

2.3.1 Electrical Nodes as Neurons

The simplest of all neuronal models is to represent a neuron's activity by a voltage or a current in an electrical circuit; input and output are identical, with no transfer function in between. If a voltage node represents a neuron, excitatory bidirectional connections can be realized simply by resistive elements between the neurons. If inhibitory or unidirectional connections are required, followers can be used instead of resistors; and if a current represents neuronal activity, a simple current mirror can implement a synapse. Many useful processing networks can be implemented in this or similar ways. For example, a resistive network can compute local averages of current inputs, as the sketch below illustrates.
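The following sketch is a generic illustration of that local-averaging idea, not the circuit designed in this work: a one-dimensional resistive network with a leak conductance at each node and lateral conductances between neighbors, driven by input currents. The conductance values and input pattern are arbitrary.

    # Minimal sketch (illustrative values): a 1-D resistive network driven by
    # input currents. Each node has a leak conductance g_leak to ground and
    # lateral conductances g_lat to its neighbours, so the node voltages settle
    # to a locally averaged (smoothed) version of the inputs.

    import numpy as np

    n = 21
    g_leak = 1.0          # conductance from each node to ground (arbitrary units)
    g_lat = 5.0           # conductance between neighbouring nodes

    # Kirchhoff's current law at every node gives a tridiagonal system G @ v = i.
    G = np.zeros((n, n))
    for k in range(n):
        G[k, k] = g_leak
        for nb in (k - 1, k + 1):
            if 0 <= nb < n:
                G[k, k] += g_lat
                G[k, nb] -= g_lat

    i_in = np.zeros(n)
    i_in[n // 2] = 1.0    # a single unit current injected at the centre node

    v = np.linalg.solve(G, i_in)
    print(np.round(v, 4))  # a smooth bump spread over several neighbouring nodes

The ratio of lateral to leak conductance sets how far each input spreads, which is the spatial-averaging behavior the text describes.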
2.3.2 Perceptrons

A perceptron is a simple mathematical model of a neuron. Like a real neuron, it is an entity connected to others of its kind by one output and several inputs, and simple signals pass through these connections. In the case of the perceptron these signals are not action potentials but real numbers; to draw the analogy with real neurons, these numbers may be thought of as average firing rates. The output of a perceptron is a monotonic function (referred to as the activation function) of the weighted sum of its inputs (see figure 2.3). Perceptrons are not often implemented in analog hardware: they were originally formulated as a mathematical rather than an electronic model, and traditional computers handle such mathematics well, whereas it is not straightforward to implement even simple arithmetic in aVLSI. Nevertheless, aVLSI implementations of perceptrons exist, since they promise a genuinely parallel, energy- and area-efficient implementation. A simple aVLSI implementation of a perceptron is shown in the schematic of figure 2.4. This particular implementation works well enough in theory; in practice, however, it is on the one hand not flexible enough (particularly in its activation function) and on the other hand difficult to tune through its bias voltages and prone to on-chip noise. Circuits that have actually been used are based on this one but are more elaborate in order to deal with these problems. A minimal numerical sketch of the perceptron computation follows.
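The sketch below shows the computation just described in software form: a weighted sum of the inputs followed by a monotonic activation function. The weights, bias and inputs are illustrative numbers only, not values taken from any circuit in this work.

    # Minimal sketch of the perceptron computation: output =
    # activation(weighted sum of inputs + bias). All numbers are illustrative.

    import math

    def perceptron(inputs, weights, bias):
        """Return the activation of a single perceptron (logistic activation)."""
        weighted_sum = sum(x * w for x, w in zip(inputs, weights)) + bias
        return 1.0 / (1.0 + math.exp(-weighted_sum))   # monotonic activation function

    # Example: three inputs interpreted as average firing rates (arbitrary units).
    inputs = [0.2, 0.9, 0.5]
    weights = [1.5, -0.8, 0.3]
    bias = 0.1
    print(round(perceptron(inputs, weights, bias), 3))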
2.3.3 Integrate-and-Fire Neurons

This model of a neuron stays closer to the original in terms of its signals: its inputs and output are pulse signals. In terms of firing rates it can be modelled by a perceptron and vice versa, but it is much better suited to implementation in aVLSI, and spike communication also has distinct advantages in noise robustness, which is thought to be one reason why the nervous system uses this kind of communication. An integrate-and-fire neuron integrates weighted charge inputs triggered by presynaptic action potentials; if the integrated voltage reaches a threshold, the neuron fires a short output pulse and the integrator is reset. These basic properties are depicted in figure 2.5, and a minimal numerical sketch is given below.
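The following sketch mimics that integrate, threshold, fire and reset cycle in software. The synaptic weights, threshold and input spike trains are assumed, illustrative values, not parameters of the circuits designed in this work.

    # Minimal sketch of integrate-and-fire behaviour: weighted charge packets are
    # accumulated and, when the integrated voltage crosses a threshold, the neuron
    # emits a spike and the integrator is reset. All numbers are illustrative.

    def integrate_and_fire(presynaptic_spikes, weights, threshold=1.0):
        """presynaptic_spikes: list of time steps; each entry is a list of 0/1
        flags, one per input synapse. Returns the output spike train."""
        v = 0.0
        output = []
        for spikes_now in presynaptic_spikes:
            # add one weighted charge packet per presynaptic spike this time step
            v += sum(w * s for w, s in zip(weights, spikes_now))
            if v >= threshold:
                output.append(1)   # fire a short output pulse
                v = 0.0            # reset the integrator
            else:
                output.append(0)
        return output

    # Two input synapses with weights 0.4 and 0.3; input 1 spikes every step,
    # input 2 every other step, so the neuron fires on alternate steps.
    train = [[1, 1], [1, 0], [1, 1], [1, 0], [1, 1], [1, 0]]
    print(integrate_and_fire(train, [0.4, 0.3]))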
2.4 BIOLOGICAL SIGNIFICANCE OF NEUROMORPHIC SYSTEMS

The fundamental philosophy of neuromorphic engineering is to use the algorithmic inspiration of biological systems to engineer artificial ones. It is a kind of technology transfer from biology to engineering that involves understanding the functions and forms of biological systems and then morphing them into silicon chips. The fundamental biological unit mimicked in the design of neuromorphic systems is the neuron: the animal brain is composed of these individual units of computation, and neurons are the elementary signaling elements of the nervous system. By examining the retina, for instance, artificial neurons that mimic the retinal neurons and their chemistry are fabricated on silicon (the most common material), gallium arsenide (GaAs) or, prospectively, organic semiconductor materials.

2.5 NEUROMORPHIC ELECTRONICS ENGINEERING METHODS

Neuromorphic system design methods involve mapping models of perception and sensory processing in biological systems onto analog VLSI systems that emulate the biological functions while resembling their structural architecture. These systems are mainly designed with complementary metal-oxide-semiconductor (CMOS) transistors, which enable low power consumption, higher chip density and integration, and lower cost. The transistors are biased to operate in the sub-threshold region to realize the high dynamic range of currents that is essential for neural system design. Elements of adaptation and learning (a higher level of adaptation in which past experience is used to readjust the response of a system to previously unseen input stimuli) are incorporated into neuromorphic systems, since they are expected to emulate the behavior of biological systems and compensate for imperfections in t