• Spiking Neural Networks is a term usually referring to artificial neural network implementations that seek to capture biological neural networks' capability to "spike," or pulse, when neurons are activated as required to participate in a group-distributed task. Otherwise, the neurons stay in a very low-energy mode. Since the term is based on biological function, it is sometimes also used to describe biological neural circuits.

B. Research questions

Three research questions that build upon each other are answered in this paper. These questions bound the scope of this research effort. The portions of the Results section that refer to each question are highlighted. The Discussion section provides the most complete answer to each question that can be inferred from the architectural model and assessment.

First, how does a Creation Model provide additional insight and context for the implementation and mission of human beings? Since God created the heavens and the Earth, God the engineer had a master plan for His implementation. With human beings made last as the crowning part of creation, there are many ways, and many levels, at which they engage with these resources.

Second, what modifications to the full compute stack model are required to capture unique human brain function? The full compute stack model cannot capture our Imago Dei faculties without modifications. Human brain function, above all others, shows a clear differentiation from animals through the human spirit and the manifold engagements that occur with the Holy Spirit.

Third, what observations about human brain function can be made from the neuron and neural network architectural models? Much work has been done in both neuroscience and neuromorphic computing. With so many basic features in neuroscience still unknown, capturing architectural models can provide a framework for viewing this complex information.
With these three questions answered, it will be clearer why human brain function is above all else and how this occurs, with this paper's focus being the perspective of neuronal and neural circuits.

C. Scope

This paper focuses on the contributions of neurons and neural networks to brain function. Since both neuroscience and neuromorphic computing research is being done at this level, it is beneficial to draw these two domains together in the context of the three research questions shared above. This paper does not explore the higher-level implications of neural networks, such as generative artificial intelligence. Thoughts from Jovanovic and Campbell, who discuss generative artificial intelligence capability, help draw out a few scope-related points: "Generative modeling is an artificial intelligence (AI) technique that generates synthetic artifacts by analyzing training examples, learning their patterns and distributions, and then creating realistic facsimiles. Generative AI (GAI) uses generative modeling and advances in deep learning (DL) to produce diverse content at scale by utilizing existing media such as text, graphics, audio, and video" (Jovanovic 2022). Regarding GAI and this paper's research questions, there are similarities and differences between GAI and human brain function. Human brains learn, but not like GAI. Human brains can think abstractly and interface with the Holy Spirit, which GAI cannot. This difference is highlighted by the limitations found in the full compute stack model without modification. The details of these differences will be explored in future papers that build upon the neuron and neural network observations in this paper.

D. Use of systems engineering tools

Systems engineering is an engineering discipline that focuses on successfully designing and integrating functional modules to work together as a system.
Often, a systems engineer will know only some of the details of how each module works, yet they have a great deal of insight into how all the parts integrate. When considering the human brain, this approach can uncover interrelationships and dependencies that must be accounted for.

E. Compute architecture

Computing architectural levels can be considered as layers feeding into one another, starting at the lowest level and working up to the more involved levels. The full compute stack provides a framework for placing functional elements in a structured context. It is used in computer science and applied to neuromorphic computing, the discipline of computing that mimics brain function.

Deoxyribonucleic acid (DNA) sequencing has found an innovative way to access the bits of information in the double helix and retrieve that densely packed information. Unfortunately, it is a destructive process that breaks the DNA strands into many pieces. Then, through a painstaking process, the pieces are assembled in an orderly fashion back into the original sequence. A layout of what the sequence should look like is commonly used to help fit the pieces together; like the picture on jigsaw pieces, it guides how to make them fit. As discussed by Waterson, the term used for the mapping and assembly of DNA sequences is scaffolding. A set of overlapping DNA sequences can be put together into a consensus region called a contig. Merging contigs can form scaffolds. These steps help construct the full genetic sequence of an organism (Waterson 2002). The methodological concept of scaffolding is applied in this architecture modeling exploration. For the application in this paper, the full compute stack is an architecture that can be used as scaffolding, or a guide, to determine how functional elements should relate to one another (Schuman 2022).

F. Neurons

The neuron is the primary building block of the brain, the central nervous system, and the interfaces with senses, muscles, and control functions. This powerful nerve cell has computing, memory, and input and output capabilities; it is, in effect, a small computer. A unique feature of neurons is that they link together. The brain is full of these linked nerve cells, and the number of connections between neurons is orders of magnitude greater than the number of neurons. In contrast to a classical computer, which within a Von Neumann architecture has separate units for (1) the central processing unit, (2) the memory, and (3) the input and output, each neuron has a version of all three of these features in the same unit. As will be discussed in the Results section, the computing capability of one neuron is not analogous to a laptop's central processing unit; rather, it is at a lower level of computing functionality within the central processing unit.

JOHANSEN Human brain function and the creation model 2023 ICC 289
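The neuron's combination of memory, compute, and output in a single unit, along with the spiking behavior described earlier, can be sketched with a minimal leaky integrate-and-fire (LIF) model, a standard abstraction in neuromorphic computing. The function name, threshold, leak factor, and input values below are illustrative assumptions, not values from this paper:

```python
# Minimal leaky integrate-and-fire (LIF) neuron sketch.
# Threshold, leak factor, and input currents are illustrative assumptions.

def lif_run(input_currents, threshold=1.0, leak=0.9):
    """Simulate one LIF neuron over discrete time steps.

    The membrane potential `v` is the neuron's local memory, the leaky
    integration is its compute, and the emitted spike train is its
    output -- all held in one unit, unlike a Von Neumann machine.
    """
    v = 0.0
    spikes = []
    for current in input_currents:
        v = v * leak + current      # leaky integration of the input
        if v >= threshold:
            spikes.append(1)        # fire a spike ...
            v = 0.0                 # ... and reset the potential
        else:
            spikes.append(0)        # otherwise remain in low-energy mode
    return spikes

# Sustained input drives the neuron past threshold; weak or absent
# input leaves it silent.
print(lif_run([0.5, 0.5, 0.5, 0.0, 0.0, 0.9]))  # -> [0, 0, 1, 0, 0, 0]
```

Linking many such units with weighted connections, where each emitted spike becomes an input current to downstream neurons, is the basic construction of a spiking neural network.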