For Engineers 



(This page demands that we be hard on the problem in order to arrive at the correct solution)


1. How do we know that we have reached a solution?


To understand how the nervous system generates internal sensations—such as perception and memory—we need a hypothesis that explains these first-person experiences. Since these sensations cannot be directly studied within biological systems, the mechanism must be theoretically derived, its predictions tested, compared across species, and ultimately replicated in engineered systems—the gold standard of verification. Unlike biology, engineering and physical sciences routinely handle abstract models, offering useful theoretical tools for this task. The diverse and constrained findings across levels of the nervous system suggest there may be a unique solution. By applying principles from physics and mathematics, we aim to derive a mechanism that accounts for all observed features and enables triangulation of disparate findings. Loss- and gain-of-function states offer further clues for validation. However, engineers typically want to understand how a mechanism is derived before attempting replication. This page outlines the derivation process to clarify the necessary experimental steps and to motivate efforts toward engineering a working model of the proposed system.


Let’s consider how experts from different scientific fields approach complex problems. A mathematician would say, “Identify all the equations—non-redundant ones—and make sure every variable appears at least once. Then solve the system.” A physicist might respond, “Propose a model—preferably an equation. I’ll be interested only if it can explain observations across all levels. Let me know when it does.” This approach requires assembling all available data and using it to construct a unifying framework that reveals the system’s underlying structure. An engineer, on the other hand, would ask, “Where’s your blueprint? I want to verify it and see the system in action.” Engineers focus on how the parts fit and function together in a working model. Despite their differences, all these perspectives share a core principle: to solve a complex system, we must reach a point where all its components can be meaningfully interconnected.


"A problem well stated is a problem half-solved.” - Charles Kettering


The nature of the problem can be illustrated with an analogy from physics and engineering. Imagine you’ve completed graduate studies in these fields—but without ever learning about electromagnetism (EM). You’re familiar only with direct current, and no resources on EM are available to you. Now, you're handed an electric fan and a hydroelectric power generator, with the freedom to examine and dismantle both. Your task is to uncover the underlying principle that connects their operation—a principle you've never been taught. To succeed, you must work backward from observation, distilling each system’s functions until you discover a common mechanism that explains them both. This mirrors the challenge of understanding internal sensations in the nervous system: reverse-engineering an unknown principle from complex, interrelated systems.


How can you tell if you're on the right track? One way is to determine whether you've identified the need for brushes in the hydroelectric generator. If so, it's a strong indication that you've grasped the alternating nature of the current. Along the way, you'll make various correlations and use them to understand the system's basic operations. True understanding comes when you can explain all your findings in a coherent, interconnected manner; only then can you claim to have uncovered the underlying mechanism. Each observation serves as a piece of a larger puzzle, prompting you to refine your hypothesis repeatedly. Sometimes, initial pieces may seem to fit, only to be dismantled when a new piece doesn't align. Eventually, you're likely to uncover the core principle of electromagnetism, even if you call it something else. If derived correctly, you'll test whether a current-carrying conductor is deflected in a magnetic field, and whether a current is induced when a conductor cuts through a magnetic field. Confirming both would be the perfect conclusion.


Similarly, we are faced with a complex system whose basic operational principle must be discovered to truly understand and repair it. The first step is to logically analyze findings across multiple levels and test whether the proposed principle consistently explains them. A correct solution should align with all prior experimental observations—though not necessarily with every past interpretation. Just as the principle of electromagnetism can be demonstrated with a U-shaped magnet, a conductor, and a battery, the nervous system's core mechanism might one day be illustrated using a simple circuit.


In this work, I aim to address the following questions: 1) How do neurons generate and transmit electrical potentials? 2) In what ways does this differ from conventional electric current? 3) What features of neuronal signaling make translation into electronic circuits possible? 4) How can we explain complex brain functions simply, and how feasible is it to replicate them? 5) How does the semblance hypothesis enable the derivation of a circuit-like mechanism? 6) How can we compare electromagnetic principles with the induction of inner sensations? 7) How do these functions relate to fundamental electronic circuit principles?


Footnotes


1. What is there in an equation? An explanation - pdf.


2. A physicist's way of explaining "explanation" - Video. This presentation naturally leads to the question: how can we make hard-to-vary assertions about the mechanisms of brain functions? How can we seek good explanations, the ones that cannot be easily varied while still explaining? To reach that stage, we must first solve the underlying system, even if the solution is not immediately explicit; a robust explanation emerges only once the correct solution has been found. But how do we get to a point where we can solve the system and keep explaining? Take the explanation of the seasons: it became possible once enough observations had accumulated, including the tilt of the Earth's axis, and those observations were put together into a coherent account. In the case of the nervous system, we have already made a very large number of observations at several levels, such as biochemistry, cell biology, electrophysiology, systems neuroscience, behavior, psychology, consciousness studies, and imaging. Our priority now should be to put those observations together until we can make hard-to-vary explanations, an indication that we have arrived at a solution for the system. While undertaking this, we should be prepared to invoke an unknown factor (an unseen thing, a factor with an unseen property, or a biological feature explaining a property that cannot be sensed directly by our sensory systems) in the attempt to interconnect all the findings. The following examples give a sense of how this works.


A system of algebraic equations with a unique solution illustrates how the solution binds the equations together. In solving the system, we uncover the interconnections between the equations; this is the deep principle (and beauty) behind a solvable system (two methods here: Video1, Video2). Note that for a system with n variables, n independent equations are required to find the solution (details in Video2). Biological systems, however, are unlikely to behave like perfect equations, so we will need more than n observations, and the challenge is to select the relevant ones from a vast pool of data. So how do we approach this with biological observations? For each observation, we draw causal connections to others; the quantities involved are assigned as variables, and the causal relations become short equations that interconnect them. The next step is to compile all these equations and solve the system, which requires identifying every equation and assigning the variables appropriately. At this stage, we can include one unknown variable that represents a change at the correct level, one that can be formed during learning and can trigger inner sensations during memory retrieval. We can adjust this unknown variable until the system is solved. Knowing that a unique solution exists helps narrow down potential mechanisms, and the constraints imposed by the equations reveal the features of the unknown variable.
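The role of the unknown variable can be pictured with a toy system of equations. The sketch below is purely illustrative: the coefficients, the variable names, and the use of Python/numpy are assumptions chosen for demonstration, not derived from any neural data. With three independent equations, the three unknowns, including x3 standing in for the unknown factor, are fixed uniquely.

```python
# Toy illustration only: three independent "observations" (equations) in three
# variables; x3 stands in for the hypothesized unknown factor.
import numpy as np

#  x1 + 2*x2 +   x3 = 4
# 2*x1 -  x2 + 3*x3 = 7
#  x1 +  x2 -   x3 = 0
A = np.array([[1.0,  2.0,  1.0],
              [2.0, -1.0,  3.0],
              [1.0,  1.0, -1.0]])
b = np.array([4.0, 7.0, 0.0])

x = np.linalg.solve(A, b)   # a unique solution exists because the 3 rows are independent
print(x)                    # x1, x2 and the "unknown factor" x3
```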


In this process, redundant equations—no matter how informative—do not contribute to solving the system. Likewise, complex equations with many variables (from the total n variables) are not helpful. Given this, it’s reasonable to conclude that brain science experiments have reached a saturated phase, where many experimental results, explaining correlations or causations across different levels, only produce redundant equations that don’t advance solving the system.
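The same toy setting (again with arbitrary, illustrative numbers) makes the point about redundancy concrete: an equation that is a linear combination of existing equations leaves the rank of the system unchanged, so it brings us no closer to a unique solution.

```python
# Toy illustration: a redundant equation adds no new information.
import numpy as np

A = np.array([[1.0,  2.0, 1.0],
              [2.0, -1.0, 3.0]])           # 2 independent equations, 3 unknowns
A_redundant = np.vstack([A, A[0] + A[1]])  # "new" equation = sum of the first two

print(np.linalg.matrix_rank(A))            # 2 -> system still underdetermined
print(np.linalg.matrix_rank(A_redundant))  # still 2 -> the extra equation did not help
```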


At this stage, we should focus on the non-redundant findings from various levels to solve the system. If there aren’t enough equations (observations) to incorporate certain variables, attempts to solve the system will reveal this. If successful, the unknown factor—whose value can be derived through a combination of induction and trial-and-error—will interconnect the findings across levels, providing the solution. This solution can be verified by: a) examining all prior observations (retrodiction), and b) testing its predictions. Once all variables are identified, we can generate many new equations. Only then will we be able to answer the question raised by Professor Deutsch in the video: “How can we seek good explanations—the ones that can’t be easily varied while still explaining?” Now is the time to make this effort, as waiting will only lead to further accumulation of data, making it harder to attempt this integration.


References


Minsky M. (1980) K-lines: a theory of memory. Cognitive Science. 4:117–133 Article


McDonnell et al., (2014) Engineering Intelligent Electronic Systems Based on Computational Neuroscience. Proceedings of the IEEE | Vol. 102, No. 5, May 2014 Article


Vadakkan K.I (2014) An electronic circuit model of the inter-postsynaptic functional LINK designed to study the formation of internal sensations in the nervous system. Advances in Artificial Neural Systems. Article


Vadakkan K.I (2011) Processing semblances induced through inter-postsynaptic functional LINKs, presumed biological parallels of K-lines proposed for building artificial intelligence. Frontiers in Neuroengineering. 4:8 PubMed

 

2. Replication in an engineered system


Why should we be able to replicate the mechanism using electronic circuits?

The first-person internal sensations of higher brain functions (such as memories) can only be objectively understood by making their contents accessible to another person (Vadakkan, 2013). Achieving this would allow us to understand how the nervous system operates. Since it's not feasible to conduct experiments on biological systems to directly explore first-person sensations, we must replicate the mechanism in artificial systems. This approach holds the promise of deciphering how internal sensations arise. If we replicate the same operational units found in the nervous system, this effort should succeed. Moreover, as we are building these artificial systems, we can devise methods to expose their inner workings to a second person. The Semblance Hypothesis has examined various nervous system functions observed across different fields of brain science, offering a theoretically feasible mechanism for how these functions operate.

Why should we be able to test the semblance hypothesis in artificial systems?


Nerve conduction occurs along neuronal processes, but when it reaches the synapses, there is a synaptic delay of up to 2 milliseconds. While this may seem small, it is actually a significant delay. If the system responsible for internal sensations can be formed with units that have a substantial delay at the synapses (nodes), then the conduction speed along the inter-nodal segments becomes less important. For instance, consider traffic intersections—how quickly or by what method one arrives at the intersection is less relevant because there is always a delay at the intersection.


In big cities, it's possible to catch up with a car on a bicycle over several blocks—something many of us have likely experienced! This suggests that as long as there is a synaptic delay of up to 2 milliseconds, the method of conduction along the neuronal process doesn't significantly affect the emergence of system properties. If this argument holds, then we should be able to replicate the mechanism using electronic circuits.


There is a notable difference in depolarization spread between unmyelinated and myelinated neurons. Myelination causes depolarization to jump between nodes (nodes of Ranvier), bypassing the myelinated segments. This highlights why the method of conduction through inter-nodal areas is less important; what truly matters is its arrival at the synapses, where a synaptic delay occurs. These characteristics strongly support the idea of replicating the mechanism using electronic circuits.


Previously, we demonstrated how the system effects of internal sensation formation can be tested by assembling basic operational units (Vadakkan, 2011; 2012). Electronic circuits control the flow of electrons using various components; although individual electrons drift slowly, the electrical signal itself propagates almost instantaneously, since adding an electron at one end of a conductor effectively pushes an electron out at the other end. In contrast, depolarization spreads along unmyelinated nerve fibers at only about 2 meters per second. In myelinated neurons (in both the central and peripheral nervous systems), depolarization jumps between the nodes of Ranvier, increasing conduction velocity to roughly 120 meters per second, still far slower than the propagation of an electrical signal in a conductor (close to 3 × 10⁸ m/s). By slowing the flow of electricity at the nodes to match the 1-2 millisecond synaptic delay, an artificial system can be built. Using these principles, an electronic circuit was developed (Vadakkan, 2014). Ongoing research into electronic circuits for artificial intelligence (McDonnell et al., 2014) is expected to further advance these efforts.
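A back-of-the-envelope comparison makes the argument concrete. The sketch below assumes a 1 mm stretch of neuronal process between nodes, an illustrative figure chosen only for the order-of-magnitude comparison:

```python
# Order-of-magnitude comparison: time spent travelling between nodes versus
# the ~2 ms delay at a chemical synapse. The 1 mm path length is an assumption.
length_m = 1e-3            # 1 mm stretch between nodes (assumed figure)
synaptic_delay_s = 2e-3    # ~2 ms synaptic delay

speeds_m_per_s = {
    "unmyelinated axon (~2 m/s)":   2.0,
    "myelinated axon (~120 m/s)":   120.0,
    "electronic signal (~3e8 m/s)": 3e8,
}

for label, v in speeds_m_per_s.items():
    transit_s = length_m / v
    print(f"{label}: {transit_s:.1e} s in transit vs {synaptic_delay_s:.1e} s at the node")
```

Once conduction is reasonably fast, as in myelinated axons and even more so in an electronic conductor, nearly all of the per-node timing is set by the synaptic delay, which is why a circuit that deliberately reinserts a 1-2 millisecond delay at each node can preserve the relevant timing structure.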


While the previous argument seems reasonable, the role of the extracellular matrix (ECM) requires a modification of this view. Higher cognitive functions, such as learning and memory retrieval, occur only when extracellular potentials oscillate within a specific frequency range; any deviation from this range alters consciousness, as shown by EEG findings. Therefore, translating the function of the inter-postsynaptic functional LINK (IPL) into electron flow in electronic circuits is not by itself sufficient. The generation of internal sensations must be viewed as a system property that operates within a specific frequency range of oscillating extracellular potentials. As such, engineered systems designed to generate first-person internal sensations must be embedded in an ECM-like medium so that they can exhibit membrane properties and generate functions that mimic the nervous system. Can we build such a system? Initially, we should focus on creating stable systems with outputs that have certain limitations. Over time, this work will progress toward dynamic systems in which new units (analogous to newly formed neurons) are generated and incorporated into the circuitry. These challenges will drive advances in materials engineering.
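One way to picture this constraint in an engineered system is sketched below. This is purely hypothetical: the band limits, the function name ipl_step_allowed, and the gating rule are illustrative assumptions and are not part of any published circuit model.

```python
# Hypothetical sketch: permit an IPL-mediated step only while the surrounding
# oscillation stays inside an assumed "physiological" frequency band.
PHYSIOLOGICAL_BAND_HZ = (4.0, 80.0)   # assumed band edges, chosen for illustration

def ipl_step_allowed(oscillation_hz: float) -> bool:
    """True only when the ambient oscillation frequency lies inside the assumed band."""
    low, high = PHYSIOLOGICAL_BAND_HZ
    return low <= oscillation_hz <= high

for f_hz in (1.0, 10.0, 40.0, 200.0):
    print(f_hz, "->", "IPL step permitted" if ipl_step_allowed(f_hz) else "blocked")
```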


References

Vadakkan K.I. (2013) A supplementary circuit rule-set for neuronal wiring. Frontiers in Human Neuroscience. Article


Vadakkan K.I. (2011) Processing semblances induced through inter-postsynaptic functional LINKs, presumed biological parallels of K-lines proposed for building artificial intelligence. Frontiers in Neuroengineering 4:8 Article


Vadakkan K.I. (2012) The nature of "internal sensations" of higher brain functions may be derived from the design rules for artificial machines that can produce them. Journal of Biological Engineering. 5;6(1):21 Article


Vadakkan K.I. (2014) An electronic circuit model of the inter-postsynaptic functional LINK designed to study the formation of internal sensations in the nervous system. Advances in Artificial Neural Systems. Article


McDonnell et al., (2014) Engineering Intelligent Electronic Systems Based on Computational Neuroscience. Proceedings of the IEEE | Vol. 102, No. 5, May 2014 Article

 


3. Developing artificial intelligence requires understanding how first-person internal sensations are produced, and vice versa


The development of artificial intelligence (AI) is essential for the future, even though we still don’t fully understand the mechanisms of natural intelligence (NI). Current AI systems do not account for the creation of internal sensations during memory retrieval or action planning, as no mechanisms from the nervous system can be directly applied. Once these mechanisms are understood, transferring them to engineered systems (McDonnell et al., 2014) will be possible. However, understanding the formation of internal sensations in biological systems presents challenges. Since these are first-person properties (like perception, memory, and consciousness) beyond the reach of third-person experimentation, we must hypothesize a biological mechanism that explains nervous system functions and replicate it in engineered systems. Testing internal sensations in artificial systems will lead to AI development (Vadakkan, 2015a). The challenge is to create a working hypothesis that can explain findings across multiple levels, filtering irrelevant observations to build a comprehensive framework. Given the large number of functions to explain, the solution will be unique and simple. Once the theoretical solution is identified, we can begin exploring it through engineered systems.


Natural intelligence goes beyond memory storage and retrieval. Intelligent systems must make decisions for unique situations they haven't directly experienced. These decisions, like internal sensations related to memory retrieval, are accessible only to the owner of the nervous system. Intelligence, in this regard, shares similarities with the internal sensations tied to retrieved memories. Intelligent decisions in response to novel stimuli rely on mechanisms from previous associative learning events. When the cue is entirely new, and no direct association has been formed, the system must draw on past learning to predict an appropriate response. This internal formulation of a solution is a hypothesis, which the system will then verify through further evidence (Abbott, 2008). The idea generated in response to a novel cue depends on the number of previous associative learning events, often involving rare connections across diverse environments. This resulting idea, a first-person internal sensation, is accessible only within the nervous system. Once accessed, the system produces behavioral or communicative outputs (speech, writing) to convey the idea to others. Replicating this process in engineered systems, especially at large scale, is a potential way to advance. Since there are no direct experimental methods for studying first-person internal sensations, the work of researchers (Abbott, 2008; Edelman, 2012; Gallistel & Balsam, 2014; Grillner, 2014; Laughlin, 2014; Marder, 2015) serves as motivation to explore these sensations further.


To replicate human intelligence in engineered systems, we must first understand how internal sensations of higher brain functions are generated within the nervous system. Since third parties cannot access first-person internal sensations, we face a "frame of reference" challenge in our approach to studying nervous system functions. Hypotheses are needed to explain how first-person internal sensations arise within the nervous system, aligning with known neuronal circuits and findings across various neuroscience fields. As the specificity of a cue stimulus changes, so does the internal sensation of a retrieved memory. This suggests that a natural computational process of different internal sensation units is happening at physiological time scales. The mechanism for inducing these sensations should be simple, with slight modifications enabling the induction of different higher brain functions. Given the interconnected nature of these internal sensations with numerous findings at different levels, the solution is likely unique. Therefore, a theoretical examination that explains most major findings is probably correct.

Building on this, the theoretical investigation of internal sensation formation led to the development of the semblance hypothesis (Vadakkan, 2007, 2013, 2015c). This hypothesis has been extended to explore features observed across different branches of neuroscience and various disorders (Vadakkan, 2012; 2015b).

 

While explaining a wide range of findings from different levels supports the likelihood of a mechanism, it must ultimately be verified through replication in engineered systems. The goal is to induce internal sensations as first-person properties in these systems. Since third-person experimenters cannot access another system’s first-person sensations, these must be verified within the engineered systems. In addition to matching behavioral outputs, the computational results of internal sensations can be directly examined through the system's operations. This cycle of computation, experimentation, internal sensation readouts, and behavioral outputs will drive the development of AI.


Operating the system presents several challenges. First, a framework for perception is needed to enable the engineered system to process sensory information. Examining key aspects of visual perception, such as homogeneity above the flicker fusion frequency, percept formation in refraction, pressure phosphenes, and cortical column functions, has led to a feasible framework for the biological mechanism of internal sensation in perception (Vadakkan, 2015d). Once transferred to engineered systems, the next step is to integrate associative learning, enabling the system to retrieve memories when sensory cues are perceived (Vadakkan, 2011). An engineered system mimicking the human brain is expected to have consciousness (Minsky, 1986, 1991). Since consciousness involves a binding problem and is linked to oscillatory potentials, a framework for consciousness can be built using the integral of internal sensations during normal resting oscillations (Vadakkan, 2010). This framework can be applied and tested in artificial systems. Additionally, the system should incorporate behavioral motor outputs in response to retrieved memories triggered by a perceived cue (Vadakkan, 2015c), with provisions for unconscious behaviors, such as those in procedural learning.


Testing these frameworks in engineered systems requires optimizing segments that provide third-person outputs on the internal sensations being generated. These outputs will guide the system's optimization toward the desired internal sensations. As system complexity increases, the quality of internal sensations improves, though the computations needed to integrate them become more complex. A circuit was later designed to replicate the mechanism by utilizing the 1-2 millisecond synaptic delay at chemical synapses (Vadakkan, 2014). Even simple organisms with few neurons exhibit basic intentionality, suggesting that they generate internal sensations. Given the vast diversity of animal species on Earth, with more than 10 million predicted (Mora et al., 2011), there is reason for optimism that an engineered system can be built to mimic at least one of these nervous systems.


In summary, the development of true AI in engineered systems will align with the gold standard testing of the mechanism behind the formation of first-person internal sensations of higher brain functions (Vadakkan, 2015a). Given the potential benefits, every effort should be made to transfer natural intelligence (NI) to develop AI. As these systems will experience internal sensations similar to those of animals, ethical considerations must be addressed when building such machines. Additionally, regulations should be established to prevent any harmful consequences in the future. Ensuring these two elements are in place as we begin transferring NI to AI will signal that we are ready for this advancement. The establishment of a first-person neuroscience to complement third-person neuroscience is the crucial first step toward achieving these goals.


References

Abbott LF (2008) Theoretical neuroscience rising. Neuron 60:489 PubMed


Edelman S (2012) Six challenges to theoretical and philosophical psychology. Front Psychol. 3: 219 PubMed


Gallistel CR, Balsam PD (2014) Time to rethink the neural mechanisms of learning and memory. Neurobiol Learn Mem. 108:136 PubMed


Grillner S (2014) Megascience efforts and the brain. Neuron 82(6):1209 PubMed


Laughlin RB (2014) Physics, Emergence, and the Connectome. Neuron. 83(6):125 PubMed


Marder E (2015) Understanding brains: details, intuition, and big data. PLoS Biol. 13(5):e1002147. PubMed


McDonnell MD, Boahen K, Ijspeert A, Sejnowski TJ (2014) Engineering Intelligent Electronic Systems Based on Computational Neuroscience. Proc IEEE Inst Electr Electron Eng. 102(5):64 PubMed


Minsky M. (1986). The Society of Mind. New York: Simon and Schuster Inc Publishers. Book


Minsky M. (1991). “Machinery of Consciousness,” Proceedings National Research Council of Canada, 75th Anniversary Symposium on Science in Society. Article


Mora C, Tittensor DP, Adl S, Simpson AGB, Worm B (2011) How many species are there on Earth and in the ocean? PLoS Biology 9(8):e1001127. PubMed


Vadakkan K.I. (2007) Semblance of activity at the shared post-synapses and extracellular matrices - A structure function hypothesis of memory. ISBN:978-0-5954-7002-0. Book


Vadakkan KI (2010) Framework of consciousness from semblance of activity at functionally LINKed postsynaptic membranes. Frontiers in Psychology (Consciousness Research) 1:168. PubMed


Vadakkan KI (2011) Processing semblances induced through inter-postsynaptic functional LINKs, presumed biological parallels of K-lines proposed for building artificial intelligence. Frontiers in Neuroengineering. 4:8. PubMed


Vadakkan KI (2012) A structure-function mechanism for schizophrenia. Frontiers in Psychiatry 3:108. PubMed


Vadakkan KI (2013) A supplementary circuit rule-set for the neuronal wiring. Frontiers in Human Neuroscience. 7:170. PubMed


Vadakkan KI (2014) An electronic circuit model of the inter-postsynaptic functional LINK designed to study the formation of internal sensations in the nervous system. Advances in Artificial Neural Systems. doi:10.1155/2014/318390. Article


Vadakkan KI (2015a) The necessity of studying higher brain functions from a first-person frame of reference. F1000 Research doi: 10.12688/f1000research.6720.1. Article


Vadakkan KI (2015b) A pressure-reversible cellular mechanism of general anesthetics capable of altering a possible mechanism for consciousness. SpringerPlus 4:485. PubMed


Vadakkan KI (2015c) The functional role of all postsynaptic potentials examined from a first-person frame of reference. Reviews in the Neurosciences doi: 10.1515/revneuro-2015-0036. PubMed


Vadakkan KI (2015d) A framework for the first-person internal sensation of visual perception in mammals and a comparable circuitry for olfactory perception in Drosophila Article