There is a close relationship between neural networks and neuromorphic computing. The term neuromorphic has evolved over time. Initially, it referred to emulating the biophysics of neurons and synapses. More recently, it has grown to include spike-based processing systems and neural architectures that implement neuron and synapse circuits. Definitions of neuromorphic computing range from a high-fidelity mimicking of neuroscience principles to a higher-level, loosely brain-inspired set of design principles. There is also a fruitful interchange between the more accurate neuron models of spiking neural networks (SNNs) and the lower-fidelity replication approach of artificial neural networks (ANNs) (Christensen 2022).

This paper will not explore neuromorphic computing in depth; a future article will discuss it and consider how a brain computing architecture can help improve computational capability. A few brief comments are in order here. Although neuromorphic computing systems differ significantly in implementation, they all retain a von Neumann computing construct. Quantizing a fixed number of neurons onto a chip is not how a biological brain operates. Brain interconnects occur in three-dimensional space; electronics cannot do this. Crossbar interconnects are inefficient and do not match the dynamic, programmable, and low-power manner in which synapses connect. Neuron computational engines are embedded in a fundamentally different way than in electronics: simple, streamlined, and optimized neuron computational engines are very different from the CPUs found in electronic computing platforms (Shrestha 2022).

1. Neuron network models

Since most neurons are deep within the brain, it is difficult to access them and experimentally uncover their functions. Fig. 21 shows the two types of neural network models considered. An artificial neural network is much easier to implement.
Depending on the model used, it can range from a modest to an average realization of what occurs in a spiking neural network. Fig. 22 shows a high-level process flow of the Loihi neural network microarchitecture, which represents its level of adoption of biological neural network concepts. The Loihi and Loihi 2 chips have sought to implement a spiking neural network in chip electronics that can be used with conventional electronics. Biomimicry occurs at a functional level because the system is not realized in biological materials (Davies 2018).

2. Artificial neural networks

Artificial neural networks aim to reflect the behavior of the human brain and provide a basis for creating brain-inspired computing models. A neural network structure aims to create computer architectures that recognize patterns and solve problems. If done well, the resulting capability should approach a focused cognitive function seen in the human brain. Thus, applied neuroscience can be viewed as intersecting with the overlapping fields of artificial intelligence, machine learning, and deep learning. Deep learning is part of the machine learning family of methods that deals with representation learning. Schaeffer et al. argue that there is no free lunch for deep learning in neuroscience. Researchers have been using neural networks to model and mimic the function of the brain's grid cells. Grid cells are a type of neuron crucial to the brain's navigation system: they help individuals know where they are in three-dimensional space and move within it. Feeding training data into a deep learning neural network is not enough to produce the brain function that yields successful navigation with grid cells. Only when specific constraints that are not part of the neural network are applied can successful navigation take place. As a result, it takes more than just neural network hardware to generate an operational neural network.
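As a concrete sketch of the artificial neural network approach described above (a minimal, hypothetical example with illustrative weights, not the grid-cell models studied by Schaeffer et al.), each artificial neuron computes a weighted sum of its inputs and passes it through a nonlinearity, and a network is built by feeding neuron outputs into further neurons:

```python
import math

def artificial_neuron(inputs, weights, bias):
    """One artificial neuron: weighted sum of inputs plus bias,
    squashed through a sigmoid nonlinearity."""
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-z))  # sigmoid activation

def tiny_network(inputs):
    """A tiny feedforward network: two hidden neurons feed one output
    neuron. All weights and biases here are arbitrary illustrations."""
    h1 = artificial_neuron(inputs, [0.5, -0.4], 0.1)
    h2 = artificial_neuron(inputs, [-0.3, 0.8], 0.0)
    return artificial_neuron([h1, h2], [1.2, -0.7], 0.2)

out = tiny_network([1.0, 0.5])  # a single value between 0 and 1
```

In a trained network the weights would be set by a learning procedure rather than fixed by hand; this sketch only shows the pattern-recognition substrate, which, as the grid-cell result above indicates, is not by itself sufficient to reproduce a brain function.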
The authors' main observation is that deep learning models cannot reproduce grid cell capability simply from task training (Schaeffer 2022). When looking to design a system, how should a neural network be architected? A classical von Neumann architecture must have a processing unit, memory, and input and output capabilities. A neural network is composed only of neurons, so each designed neuron must have processing, memory, and input and output functions. These are straightforward concepts, but at what biomimicry level should a design operate?

[Figure 21. Neural network models. A SysML block definition diagram (bdd Neural Net Models) showing Neural Network Models with parts Spiking Neural Networks (data-driven computing) and Artificial Neural Networks (conventional rule-based computing), with blocks for Spike-Timing-Dependent Plasticity (STDP) and the Hodgkin-Huxley, Leaky Integrate-and-Fire, and Izhikevich models.]

JOHANSEN Human brain function and the creation model 2023 ICC 306
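The point that each neuron must supply its own processing, memory, and input/output can be sketched with a leaky integrate-and-fire neuron, one of the models named in Figure 21. In this hypothetical sketch (the parameter values are illustrative, not biophysically fitted), the membrane potential is the neuron's memory, leaky integration is its processing, and spikes are its output:

```python
class LIFNeuron:
    """Leaky integrate-and-fire neuron. The membrane potential `v` is the
    neuron's memory, integration of input current is its processing, and
    threshold-triggered spikes are its output."""

    def __init__(self, tau=20.0, v_thresh=1.0, v_reset=0.0):
        self.tau = tau            # membrane time constant (illustrative)
        self.v_thresh = v_thresh  # spike threshold
        self.v_reset = v_reset    # potential after a spike
        self.v = 0.0              # membrane potential (internal state)

    def step(self, input_current, dt=1.0):
        # Leaky integration: dv/dt = (-v + I) / tau (Euler step)
        self.v += dt * (-self.v + input_current) / self.tau
        if self.v >= self.v_thresh:   # threshold crossing
            self.v = self.v_reset     # reset the neuron's memory
            return 1                  # emit an output spike
        return 0                      # remain silent

neuron = LIFNeuron()
# A constant suprathreshold input drives the neuron to spike repeatedly.
spikes = sum(neuron.step(input_current=2.0) for _ in range(100))
```

Unlike a von Neumann machine, where processing, memory, and I/O are separate units, here all three functions are co-located in every neuron, which is the architectural contrast the paragraph above draws.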