-
Reimers, Felix Simon; Huse Ramstad, Ola; Sæbø, Solve; Sandvig, Axel; Sandvig, Ioanna & Nichele, Stefano
(2024).
Performance of C. elegans Connectomes as Computational Reservoirs.
-
Reimers, Felix Simon; Huse Ramstad, Ola; Sæbø, Solve; Sandvig, Axel; Sandvig, Ioanna & Nichele, Stefano
(2024).
Examination of Computational Performance of Biological Neural Networks with Reservoir Computing.
-
Nichele, Stefano
(2024).
What’s a Machine? From In-Silico to In-Vitro AI.
-
Ramstad, Ola Huse; Wijdeven, Rosanne Francisca van de; Heiney, Kristine; Nichele, Stefano; Sandvig, Axel & Sandvig, Ioanna
(2024).
Micro- and mesoscale aspects of neurodegeneration in multi-nodal human neural networks carrying the LRRK2 G2019S mutation.
-
Reimers, Felix Simon; Jain, Sanyam; Shrestha, Aarati & Nichele, Stefano
(2023).
Pathfinding Neural Cellular Automata with Local Self-Attention.
-
Farner, Jørgen Jensen; Huse Ramstad, Ola; Nichele, Stefano & Heiney, Kristine
(2023).
Beyond weight plasticity: Local learning with propagation delays in spiking neural networks.
-
Jain, Sanyam; Shrestha, Aarati & Nichele, Stefano
(2023).
Capturing Emerging Complexity in Lenia.
-
Heiney, Kristine Anne; Józsa, Mónika; Rule, Michael; Nichele, Stefano & O'Leary, Timothy
(2023).
Encoding of behaviour by pairwise neuronal interactions correlates with representational drift.
-
Heiney, Kristine Anne; Józsa, Mónika; Rule, Michael; Nichele, Stefano & O'Leary, Timothy
(2023).
Neural correlations, information, and representational drift.
-
Nadizar, Giorgia; Medvet, Eric; Huse Ramstad, Ola; Nichele, Stefano; Pellegrino, Felice Andrea & Zullich, Marco
(2022).
Erratum: Merging pruning and neuroevolution: Towards robust and efficient controllers for modular soft robots (The Knowledge Engineering Review DOI: 10.1017/S0269888921000151).
The Knowledge Engineering Review (Print).
ISSN 0269-8889.
37.
doi: 10.1017/S0269888922000017.
-
Heiney, Kristine Anne; Józsa, Mónika; Rule, Michael; Nichele, Stefano & O'Leary, Timothy
(2022).
Encoding of behaviour by pairwise neuronal interactions correlates with representational drift.
-
Heiney, Kristine Anne; Rule, Michael; Józsa, Mónika; Nichele, Stefano & O'Leary, Timothy
(2022).
Changing to stay the same.
-
Heiney, Kristine Anne; Józsa, Mónika; Rule, Michael; Nichele, Stefano & O'Leary, Timothy
(2022).
Encoding of behaviour by pairwise neuronal interactions correlates with representational drift.
Abstract:
In some brain areas, populations of neurons can change their behavioural tuning gradually over days. At the single cell level, the rate of this drift varies considerably, raising the question: Does the amount of information contained in interactions between cells predict how much they drift? We reanalysed data from Driscoll et al. (2017), in which the activity of mouse posterior parietal cortex (PPC) neurons was tracked over weeks as mice performed a learned navigation task.
We quantified the informativeness of pairwise interactions by measuring the mutual information between behaviour and pairwise neuronal responses, relative to a conditionally shuffled baseline. This conditional shuffling preserves the tuning of single cells while destroying noise correlations or higher-order dependencies in a pair. The amount of pairwise information relative to this baseline can be synergistic (positive) or redundant (negative).
After thresholding these measures, we found that neurons with high average pairwise redundancy typically also showed greater average pairwise synergy. Furthermore, the average redundancy was positively correlated with tuning stability, more strongly than was single-cell informativeness. This suggests that redundant subpopulations of neurons form a more stable ‘backbone’ that is less susceptible to drift and more informative about behaviour on average. Thus, redundant coupling is more predictive of stable tuning than the information conveyed about behaviour at the single cell level.
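The conditional-shuffle baseline described in this abstract can be sketched in a few lines of Python. This is an illustrative minimal version, not the authors' implementation: the function names, the plug-in entropy estimator, and the discrete coding of behaviour and responses are all assumptions.

```python
import math
import random

def mutual_information(xs, ys):
    """Plug-in estimate of discrete mutual information I(X;Y) in bits."""
    n = len(xs)
    px, py, pxy = {}, {}, {}
    for x, y in zip(xs, ys):
        px[x] = px.get(x, 0) + 1
        py[y] = py.get(y, 0) + 1
        pxy[(x, y)] = pxy.get((x, y), 0) + 1
    return sum((c / n) * math.log2((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in pxy.items())

def pairwise_excess_info(behaviour, cell_a, cell_b, n_shuffles=100, seed=0):
    """Pairwise information about behaviour relative to a conditionally
    shuffled baseline: positive values indicate synergy, negative redundancy."""
    rng = random.Random(seed)
    observed = mutual_information(list(zip(cell_a, cell_b)), behaviour)
    baseline = 0.0
    for _ in range(n_shuffles):
        shuffled = list(cell_b)
        # Shuffle cell_b's responses *within* each behavioural state:
        # single-cell tuning is preserved, noise correlations destroyed.
        for state in set(behaviour):
            idx = [i for i, s in enumerate(behaviour) if s == state]
            vals = [shuffled[i] for i in idx]
            rng.shuffle(vals)
            for i, v in zip(idx, vals):
                shuffled[i] = v
        baseline += mutual_information(list(zip(cell_a, shuffled)), behaviour)
    return observed - baseline / n_shuffles
```

Because the shuffle is performed within each behavioural state, any difference between the observed pairwise information and the baseline must come from dependencies between the two cells, not from their individual tuning.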
-
Heiney, Kristine Anne; Huse Ramstad, Ola; Fiskum, Vegard; Sandvig, Axel; Sandvig, Ioanna & Nichele, Stefano
(2022).
Neuronal avalanche dynamics and functional connectivity as indicators of the computational capacity of in vitro neuronal networks.
Abstract:
The brain has long been a source of inspiration for the development of novel computational methods and architectures. Most notably, modern machine learning (ML) and deep learning (DL) algorithms in the field of artificial intelligence (AI) are based on an artificial neuron model, and such approaches have enabled us to process data on a previously unimaginable scale. However, current ML and DL approaches are computationally expensive and task-specific, suggesting we have much yet to learn from the efficiency and learning capabilities of the brain.
In this work, we evaluated the dynamics and functional connectivity of networks of cortical neurons as they matured over approximately two months in vitro. The electrophysiological activity of the networks was captured daily using microelectrode arrays (MEAs). To assess the dynamics, we identified patterns of activity termed ‘neuronal avalanches’ and used this activity to compute the branching ratio and complexity of the observed activity. The branching ratio describes how activity is propagated through the network and allows the activity to be classified into different dynamical regimes, while the complexity provides a measure that is maximized when activity indicates a balance between integration and segregation in the network. Graphs representing the functional connectivity of the networks were extracted from their electrophysiological behavior, and these graphs were used to evaluate the relationship between the avalanche dynamics and the connectivity of the network.
This work serves as a starting point for the data-driven development of biologically plausible models to inform AI algorithms. Future work will involve information theoretical analysis of the network behavior as well as investigations into the role of plasticity in determining the dynamical state. By evaluating the interplay between computation and connectivity, we hope to drive the development of novel powerful AI models.
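The two dynamical measures named in this abstract can be illustrated with a toy sketch on binned spike counts. This is a deliberately naive estimator for exposition only, not the analysis pipeline used in the work:

```python
def avalanche_sizes(counts):
    """Segment a binned spike-count series into neuronal avalanches:
    maximal runs of non-empty time bins; size = total events in the run."""
    sizes, current = [], 0
    for c in counts:
        if c > 0:
            current += c
        elif current:
            sizes.append(current)
            current = 0
    if current:
        sizes.append(current)
    return sizes

def branching_ratio(counts):
    """Naive branching-ratio estimate: mean number of events in the next
    time bin per event in the current bin, over active bins. Values near 1
    suggest critical dynamics, <1 subcritical, >1 supercritical.
    (Published analyses use more robust, bias-corrected estimators.)"""
    ratios = [counts[t + 1] / counts[t]
              for t in range(len(counts) - 1) if counts[t] > 0]
    return sum(ratios) / len(ratios) if ratios else 0.0
```

For example, the series `[0, 2, 3, 0, 1, 0, 0, 5]` contains three avalanches of sizes 5, 1, and 5.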
-
Pontes Filho, Sidney; Walker, Kathryn; Najarro, Elias; Nichele, Stefano & Risi, Sebastian
(2022).
A Unified Substrate for Body-Brain Co-evolution.
Abstract:
The development of complex multicellular organisms took millions of years of evolution. The genome of such an organism guides the development of its body from a single cell, including its control system. Our goal is to imitate this natural process using a single neural cellular automaton (NCA) as a genome for modular robotic agents. In the introduced approach, called Neural Cellular Robot Substrate (NCRS), a single NCA guides the growth of a robot and the cellular activity which controls the robot during deployment. We also introduce three benchmark environments, which test the ability of the approach to grow different robot morphologies. We evolve the NCRS with the covariance matrix adaptation evolution strategy (CMA-ES) and covariance matrix adaptation MAP-Elites (CMA-ME) for quality diversity, and observe that CMA-ME generates more diverse robot morphologies with higher fitness scores. While the NCRS is able to solve the easier tasks in the benchmark, the success rate drops as task difficulty increases. We discuss directions for future work that may facilitate the use of the NCRS approach in more complex domains.
-
Jensen Farner, Jørgen; Weydahl, Håkon; Jahren, Ruben; Huse Ramstad, Ola; Nichele, Stefano & Heiney, Kristine Anne
(2021).
Evolving spiking neuron cellular automata and networks to emulate in vitro neuronal activity.
Abstract:
Neuro-inspired models and systems have great potential for applications in unconventional computing. Often, the mechanisms of biological neurons are modeled or mimicked in simulated or physical systems in an attempt to harness some of the computational power of the brain. However, the biological mechanisms at play in neural systems are complicated and challenging to capture and engineer; thus, it can be simpler to turn to a data-driven approach to transfer features of neural behavior to artificial substrates. In the present study, we used an evolutionary algorithm to produce spiking neural systems that emulate the patterns of behavior of biological neurons in vitro. The aim of this approach was to develop a method of producing models capable of exhibiting complex behavior that may be suitable for use as computational substrates. Our models were able to produce a level of network-wide synchrony and showed a range of behaviors depending on the target data used for their evolution, which was from a range of neuronal culture densities and maturities. The genomes of the top-performing models indicate the excitability and density of connections in the model play an important role in determining the complexity of the produced activity.
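The evolutionary loop at the heart of this data-driven approach can be sketched minimally. A hedged illustration only: the (1+1) scheme, the function names, and the toy fitness below are assumptions; in the study the fitness would compare a spiking model's simulated activity statistics against the in vitro target data.

```python
import random

def evolve(fitness, genome, mutate, generations=300, seed=0):
    """Minimal (1+1) evolutionary loop: propose a mutated child each
    generation and keep it whenever it matches or beats the parent's
    fitness. Returns the best genome and its fitness."""
    rng = random.Random(seed)
    best, best_fit = genome, fitness(genome)
    for _ in range(generations):
        child = mutate(best, rng)
        child_fit = fitness(child)
        if child_fit >= best_fit:
            best, best_fit = child, child_fit
    return best, best_fit

# Toy usage: fit a single scalar parameter (e.g. a firing-rate statistic)
# toward a hypothetical target value of 3.0.
best, best_fit = evolve(
    fitness=lambda g: -(g - 3.0) ** 2,
    genome=0.0,
    mutate=lambda g, rng: g + rng.gauss(0, 0.5),
)
```

Real neuroevolution setups (as in the paper) use population-based algorithms and genomes encoding excitability and connection density, but the accept-if-better loop above is the common core.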
-
Pontes Filho, Sidney & Nichele, Stefano
(2021).
A Conceptual Bio-Inspired Framework for the Evolution of Artificial General Intelligence.
Abstract:
In this work, a conceptual bio-inspired parallel and distributed learning framework for the emergence of general intelligence is proposed, where agents evolve through environmental rewards and learn throughout their lifetime without supervision, i.e., self-learning through embodiment. The chosen control mechanism for agents is a biologically plausible neuron model based on spiking neural networks. Network topologies become more complex through evolution, i.e., the topology is not fixed, while the synaptic weights of the networks cannot be inherited, i.e., newborn brains are not trained and have no innate knowledge of the environment. What is subject to the evolutionary process is the network topology, the type of neurons, and the type of learning. This process ensures that controllers that are passed through the generations have the intrinsic ability to learn and adapt during their lifetime in mutable environments. We envision that the described approach may lead to the emergence of the simplest form of artificial general intelligence.
-
Pontes Filho, Sidney; Gulden Dahl, Annelene; Nichele, Stefano & Mello, Gustavo
(2021).
A Deep Learning-Based Tool for Automatic Brain Extraction from Functional Magnetic Resonance Images of Rodents.
Abstract:
Removing skull artifacts from functional magnetic resonance images (fMRI) is a well-understood and frequently encountered problem. Because the fMRI field has grown mostly through human studies, many new tools were developed to handle human data. Nonetheless, these tools are not equally useful for data derived from animal studies, especially from rodents. This represents a major problem for the field, because rodent studies generate larger datasets from larger populations, which means that manually preprocessing these images to remove the skull becomes a bottleneck in the data analysis pipeline. In this study, we address this problem by implementing a neural network-based method that uses a U-Net architecture to segment the brain area into a mask and remove the skull and other tissues from the image. We demonstrate several strategies for speeding up the generation of the dataset's ground truth using watershedding, and several data augmentation strategies that allowed us to robustly train the U-Net to perform the segmentation. Finally, we made the trained network freely available.
-
Øye, Olav-Johan; Normann, Maria Storvig & Nichele, Stefano
(2021).
Slik kan biologien inspirere kunstig intelligens.
[Internet].
oslomet.no.
Abstract:
Humans solve complicated tasks with very little energy consumption, while artificial intelligence uses a great deal of electricity. What might happen if artificial intelligence learned from biology?
-
Heiney, Kristine; Huse Ramstad, Ola; Pontes-Filho, Sidney; Glover, Tom; Lindell, Trym & Jensen Farner, Jørgen
[See all 8 authors of this article]
(2021).
Bridging the computational gap: From biological to artificial substrates.
-
Heiney, Kristine; Huse Ramstad, Ola; Fiskum, Vegard; Sandvig, Axel; Sandvig, Ioanna & Nichele, Stefano
(2021).
From in vitro to computational models of the brain: What can criticality tell us about neural computation?
Abstract:
It has been hypothesized that some biological systems, including parts of the brain, may operate near the critical point of a phase transition, delicately balanced between ordered and disordered behavior. Criticality maximizes a number of properties that are favorable for computation, such as dynamic range, information transmission, and the number of input–output mappings; put more simply, systems at criticality are well-suited to take inputs for some computational problem, perform transformations on the inputs as they propagate through the system, and give meaningful outputs.
In this talk, I will present our ongoing work on evaluating the closeness of in vitro neuronal networks to criticality and discuss the implications of our findings for understanding neural computation and the development of bioinspired computing methods and hardware.
To determine the closeness of networks to criticality, we observe the spatiotemporal scaling behavior of neuronal avalanches and compare this behavior across networks with different seeding densities as they mature. We also obtain the functional connectivity of these networks and assess their small-worldness using the small-world propensity measure. Additionally, we chemically manipulate the excitation-to-inhibition ratio to evaluate its effect on the network dynamics.
Although none of our networks tended to mature toward the critical state, we observed consistent differences across the two considered densities in the avalanche scaling and branching ratio indicative of a greater dominance of network bursts in the higher-density networks. Additionally, the lower-density networks tended to show small-world organization in their functional connectivity. Chemical perturbation to increase inhibition brought the networks closer to criticality, and the higher-density networks showed greater disruption to both their functional activity and avalanche behavior under chemical perturbation.
From these findings, we aim to find connectivity and activity patterns correlated with closeness to criticality so that we may emulate these features in models for bioinspired computation. Future work will also involve evaluating measures of information processing in these networks as indicators of the suitability of these networks for computation.
-
Heiney, Kristine; Tufte, Gunnar & Nichele, Stefano
(2021).
On Artificial Life and Emergent Computation in Physical Substrates.
Abstract:
In living systems, we often see the emergence of the ingredients necessary for computation---the capacity for information transmission, storage, and modification---begging the question of how we may exploit or imitate such biological systems in unconventional computing applications. What can we gain from artificial life in the advancement of computing technology? Artificial life provides us with powerful tools for understanding the dynamic behavior of biological systems and capturing this behavior in manmade substrates. With this approach, we can move towards a new computing paradigm concerned with harnessing emergent computation in physical substrates not governed by the constraints of Moore's law and ultimately realize massively parallel and distributed computing technology. In this paper, we argue that the lens of artificial life offers valuable perspectives for the advancement of high-performance computing technology. We first present a brief foundational background on artificial life and some relevant tools that may be applicable to unconventional computing. Two specific substrates are then discussed in detail: biological neurons and ensembles of nanomagnets. These substrates are the focus of the authors' ongoing work, and they are illustrative of the two sides of the approach outlined here---the close study of living systems and the construction of artificial systems to produce life-like behaviors. We conclude with a philosophical discussion on what we can learn from approaching computation with the curiosity inherent to the study of artificial life. The main contribution of this paper is to present the great potential of using artificial life methodologies to uncover and harness the inherent computational power of physical substrates toward applications in unconventional high-performance computing.
-
Nichele, Stefano
(2021).
Towards a less artificial Artificial Intelligence.
-
Tapio, Hege; Bergaust, Kristin; Nichele, Stefano; Christensen-Scheel, Boel & Wettre, Mikkel
(2020).
FeLT - Futures of Living Technologies, Project Launch Symposium.
Abstract:
Launch of FeLT, an interdisciplinary artistic research project started in October 2020. Live and pre-recorded presentations by participants and collaborators:
Baltan Laboratories (NL), Olga Mink; Bioartsociety (FI), Erich Berger; Cesar & Lois (BR/US), Lucy HG Solomon; Coalesce Center for Biological Art (US), Paul Vanouse; MetaMorf (NO), Espen Gangvik; SEADS collective (INT), Angelo Vermeulen; Symbiotica (AU); RIXC (LV), Rasa Smite; Maria Castellanos (ES); Mikkel Wettre (NO); Hege Tapio (NO); Haakon H Roen (NO); Trym A. Eidsvik (NO).
-
Pontes-Filho, Sidney; Olsen, Kristoffer; Yazidi, Anis; Halvorsen, Pål; Riegler, Michael & Nichele, Stefano
(2020).
Towards the evolution of spiking neural networks for self-supervision with neuroplasticity.
-
Heiney, Kristine; Huse Ramstad, Ola; Fiskum, Vegard; Sandvig, Ioanna; Sandvig, Axel & Nichele, Stefano
(2020).
Computational behavior of biological neural networks through the lens of criticality.
Abstract:
The human brain is a powerful computational machine, and harnessing some of this power in manmade computing systems would enable advances in terms of computational speed, power consumption, and fault tolerance. One interesting phenomenon theorized to occur in the brain is self-organized criticality. The critical state is a dynamical regime poised between ordered behavior and chaotic behavior and recognized as good for computation. Systems in the critical state have been shown to optimize performance by many measures, including dynamic range, complexity, and information transmission. It has been proposed that the brain self-organizes into this state by striking a balance between excitation and inhibition to optimize its computational capacity. Networks of neurons in vitro can be classified critical if the size of network-wide cascades of activity, termed “neuronal avalanches,” follows a power-law distribution. This method of classification provides a path to identifying networks that are likely suited for computation.
Our lab has conducted experiments to assess the dynamical state of networks of dissociated cortical neurons as they mature in vitro. It was found that the networks progressed from a period of low activity to a subcritical state before settling into a supercritical state, characterized by tight network-wide synchronization. It has been reported that not all dissociated networks self-organize into the critical state, begging the question: what can be done to induce criticality in these cases? To adjust the excitation-to-inhibition ratio toward the balance expected at criticality, small volumes of an inhibitory neurotransmitter (GABA) were incrementally added to the networks. After perturbation with GABA, it was observed that the networks showed avalanche activity with power-law scaling, manifested in more complex patterns of activity in the network, indicating that it is possible to manipulate networks into the critical state.
The ability to identify the dynamical state of networks and manipulate them into the critical state may enable the comparison of the architectures and features of networks in different states via different information and network theoretical measures. Identifying features correlated with networks at criticality will provide a foundation for the behaviors we wish to replicate as we develop novel neuroscience-inspired artificial neural network models and biologically plausible learning algorithms.
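The power-law classification described in this abstract is typically done with a maximum-likelihood fit to the avalanche size distribution. The sketch below uses the standard continuous-approximation MLE (Clauset-style); it is illustrative, not the lab's exact procedure, and the function name is an assumption.

```python
import math

def powerlaw_alpha_mle(sizes, s_min=1):
    """Continuous-approximation MLE for the exponent alpha of a discrete
    power law p(s) ~ s**(-alpha), fitted to avalanche sizes s >= s_min.
    Critical branching processes predict alpha near 1.5 for sizes."""
    tail = [s for s in sizes if s >= s_min]
    if not tail:
        raise ValueError("no avalanches at or above s_min")
    return 1.0 + len(tail) / sum(math.log(s / (s_min - 0.5)) for s in tail)
```

A heavier tail (more large avalanches) yields a smaller fitted exponent; in practice the fit is combined with a goodness-of-fit test before a network is classified as critical.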
-
Heiney, Kristine; Huse Ramstad, Ola; Sandvig, Axel; Sandvig, Ioanna & Nichele, Stefano
(2020).
Manipulating the dynamic state of in vitro neuronal networks.
Abstract:
The dynamic state of neuronal networks is influenced by a wide range of functional and structural factors, including network connectivity, synaptic function, and the balance of excitation and inhibition in the network. Additionally, it has been shown that only some dissociated networks self-organize into the critical state as they are left to mature in vitro, with many ultimately showing the tightly synchronized behavior indicative of a supercritical state.
To explore the possibility of manipulating dissociated networks into the critical state by chemical intervention, we increased inhibition in the network by adding γ-aminobutyric acid (GABA) at day in vitro (DIV) 51. Prior to perturbation with GABA, the networks showed large network-wide bursts with close synchrony, and the size distribution of neuronal avalanches was bimodal. After perturbation, the synchrony was broken, and the avalanche distribution followed a power law, one of the hallmarks of criticality.
This finding indicates that it is possible to use chemical perturbation in dissociated networks to manipulate the dynamic state. This will allow for future comparative experimental studies on the characteristics and computational capabilities of networks in different dynamical regimes, shedding light on the mechanisms and benefits of critical and near-critical behavior.
-
Heiney, Kristine; Huse Ramstad, Ola; Fiskum, Vegard; Sandvig, Axel; Sandvig, Ioanna & Nichele, Stefano
(2020).
Manipulation of the scaling behavior of neuronal avalanches toward achieving criticality.
-
Weir, Janelle Shari; Christiansen, Nicholas; Nichele, Stefano; Lind, Pedro; Sandvig, Axel & Sandvig, Ioanna
(2020).
From topology to functionality: Does perturbation in in-vitro neural networks drive hub adaptation towards computational efficiency?
-
Vie, Knut Jørgen; Thorstensen, Erik; Gjefsen, Mads Dahl; Nichele, Stefano & Tufte, Gunnar
(2020).
Ethical Tissues.
-
Pontes-Filho, Sidney; Lind, Pedro; Yazidi, Anis; Zhang, Jianhua; Hammer, Hugo Lewi & Mello, Gustavo
[See all 9 authors of this article]
(2020).
EvoDynamic: A Framework for the Evolution of Generally Represented Dynamical Systems and Its Application to Criticality.
-
Nichele, Stefano
(2020).
Novel models and substrates for the emergence of intelligence.
-
Christiansen, Nicholas; Nichele, Stefano; Tufte, Gunnar; Sandvig, Ioanna & Sandvig, Axel
(2019).
Using computer models to infer disease pathology in in vitro models of amyotrophic lateral sclerosis.
-
Mello, Gustavo Borges Moreno E; Valderhaug, Vibeke Devold; Pontes-Filho, Sidney; Zouganeli, Evi; Huse Ramstad, Ola & Sandvig, Axel
[See all 8 authors of this article]
(2019).
Method to obtain neuromorphic reservoir networks from images of in vitro cortical networks.
Abstract:
In the brain, the structure of a network of neurons defines how these neurons implement the computations that underlie the mind and the behavior of animals and humans. Provided that we can describe the network of neurons as a graph, we can employ methods from graph theory to investigate its structure or use cellular automata to mathematically assess its function. Additionally, these graphs can provide biologically plausible designs for networks, which can be integrated as reservoirs to support computing. Although software for the analysis of graphs and cellular automata is widely available, graph extraction from images of networks of brain cells remains difficult. Nervous tissue is heterogeneous, and differences in anatomy may reflect relevant differences in function. Here we introduce a deep-learning-based toolbox that extracts graphs from images of brain tissue. This toolbox provides an easy-to-use framework allowing systems neuroscientists to generate graphs from images of brain tissue by combining methods from image processing, deep learning, and graph theory. The goals are to simplify the training and usage of deep learning methods for computer vision and to facilitate their integration into graph extraction pipelines. In this way, the toolbox provides an alternative to the laborious manual process of tracing, sorting, and classifying. We expect to democratize these machine learning methods to a wider community of users beyond computer vision experts and to improve the time-efficiency of graph extraction from large brain image datasets, which may lead to further understanding of the human mind.
-
Heiney, Kristine; Huse Ramstad, Ola; Sandvig, Ioanna; Sandvig, Axel & Nichele, Stefano
(2019).
Assessment and manipulation of the computational capacity of in vitro neuronal networks through criticality in neuronal avalanches.
-
Pontes-Filho, Sidney; Yazidi, Anis; Zhang, Jianhua; Hammer, Hugo Lewi; Mello, Gustavo & Sandvig, I.
[See all 8 authors of this article]
(2019).
A general representation of dynamical systems for reservoir computing.
-
Zhang, Jianhua; Li, Jianrong & Nichele, Stefano
(2019).
Instantaneous Mental Workload Recognition Using Wavelet-Packet Decomposition and Semi-Supervised Learning.
-
Zhang, Jianhua; Chen, Peng; Nichele, Stefano & Yazidi, Anis
(2019).
Emotion Recognition Using Time-frequency Analysis of EEG Signals and Machine Learning.
-
Weir, Janelle Shari; Sandvig, Axel; Sandvig, Ioanna & Nichele, Stefano
(2019).
In vitro neural networks as self-organizing computational substrates.
Abstract:
The structural organization of neural networks and the functional connectivity of synapses have been the basis for network neuroscience research for decades. Evolutionarily conserved mechanisms of neuroplasticity processes, such as activity-dependent Hebbian plasticity and homeostatic plasticity, work in tandem to coordinate network functionality and maintain normal neural network function, as well as regulate network responses to perturbation. The drive to understand how different forms of plasticity may result in adaptive or maladaptive neural network responses, demonstrated as changes in network structure and function, has led to the development of progressively more sophisticated empirical tools. With advanced protocols, some of the complex structural and functional properties of in vivo systems, including the characteristic self-organization and emergence of functional activity of neural networks, can be monitored and selectively manipulated in vitro using cultured neurons.
Furthermore, advanced technologies such as microelectrode arrays (MEAs) have been established as a platform that can be used to map, record, analyze, and model elements and interactions of emergent electrophysiological behaviour of maturing neural networks in vitro. The MEA platform also allows electrical and/or chemical modulation of the network, while being compatible with chemo-and optogenetic techniques, which can be applied to selectively modulate network plasticity. Interfaced with microfluidic chambers, they allow the structuring of multi-nodal neural networks with definable afferent and efferent connectivity, thus creating experimental paradigms that capture fundamental features and dynamics of the function-structure relationships observed in different interconnected brain regions.
-
Huse Ramstad, Ola; Wijdeven, Rosanne Francisca van de; Heiney, Kristine; Nichele, Stefano; Sandvig, Ioanna & Sandvig, Axel
(2019).
Mapping functional response to axotomy in a reductionist corticospinal circuit.
-
Heiney, Kristine; Huse Ramstad, Ola; Sandvig, Ioanna; Sandvig, Axel & Nichele, Stefano
(2019).
Assessment and manipulation of the computational capacity of in vitro neuronal networks through criticality in neuronal avalanches.
Abstract:
The brain is an effective and efficient computational machine, yet the precise mechanisms it uses to perform computations are poorly understood. As demand for technologies capable of storing and processing large amounts of data increases, it would be beneficial to harness the computational power of the brain in engineerable computing hardware; however, to recapitulate the desired behaviors, we must first grasp the dynamics underlying the communication within networks of neurons. To this end, a preliminary analysis of the electrophysiological behavior of in vitro neuronal networks of primary rat cortical neurons was performed to identify when the networks are in a critical state based on the size distribution of network-wide avalanches of activity. The critical state is defined as a transitional state between static or cyclical behavior and highly disordered or hyperactive behavior, and systems in the critical state are thought to be in the optimal conditions to perform computational tasks. The neuronal networks were observed as they matured from day in vitro 7 to 51 and were chemically perturbed with GABA on day 51 to determine if networks that do not reach the critical state during normal maturation can be manipulated into the critical state by reducing the excitation-to-inhibition ratio. The results presented here demonstrate the importance of selecting appropriate parameters in the evaluation of the size distribution and indicate that it is possible to perturb networks showing highly synchronized—or supercritical—behavior into the critical state by increasing the level of inhibition in the network. The classification of critical versus non-critical networks is valuable in identifying networks that can be expected to perform well on computational tasks, and it is expected that perturbed networks or disease models may show different behaviors with regard to criticality during the course of maturation.
This study is part of a larger research project, the overarching aim of which is to develop computational models that are able to reproduce target behaviors observed in in vitro neuronal networks. These models will ultimately be used to aid in the realization of these behaviors in nanomagnet arrays to be used in novel computing hardware.
-
Heiney, Kristine; Valderhaug, Vibeke Devold; Huse Ramstad, Ola; Sandvig, Ioanna; Sandvig, Axel & Tufte, Gunnar
[See all 8 authors of this article]
(2019).
Evaluation of the criticality of in vitro neuronal networks: Toward an assessment of computational capacity.
Abstract:
Novel computing hardware is necessary to keep up with today's increasing demand for data storage and processing power. In this research project, we turn to the brain for inspiration to develop novel computing substrates that are self-learning, scalable, energy-efficient, and fault-tolerant. The overarching aim of this work is to develop computational models that are able to reproduce target behaviors observed in in vitro neuronal networks. These models will ultimately be used to aid in the realization of these behaviors in a more engineerable substrate: an array of nanomagnets. The target behaviors will be identified by analyzing electrophysiological recordings of the neuronal networks. Preliminary analysis has been performed to identify when a network is in a critical state based on the size distribution of network-wide avalanches of activity, and the results of this analysis are reported here. This classification of critical versus non-critical networks is valuable in identifying networks that can be expected to perform well on computational tasks, as criticality is widely considered to be the state in which a system is best suited for computation. This type of analysis is expected to enable the identification of networks that are well-suited for computation and the classification of networks as perturbed or healthy.
-
Heiney, Kristine; Valderhaug, Vibeke Devold; Sandvig, Ioanna; Tufte, Gunnar; Hammer, Hugo Lewi & Nichele, Stefano
(2019).
Evaluating the criticality of neuronal networks toward the identification of pathological conditions.
-
Huse Ramstad, Ola; van de Wijdeven, Rosanne Francisca; Heiney, Kristine; Nichele, Stefano; Sandvig, Ioanna & Sandvig, Axel
(2019).
Mapping functional response to axotomy in a reductionist corticospinal network.
-
Christiansen, Nicholas; Nichele, Stefano; Tufte, Gunnar; Sandvig, Ioanna & Sandvig, Axel
(2019).
Using computer models to infer disease pathology in in vitro models of amyotrophic lateral sclerosis.
-
Fiskum, Vegard; Nichele, Stefano; Sandvig, Ioanna & Sandvig, Axel
(2018).
Modelling neural network dynamics in amyotrophic lateral sclerosis (ALS).
-
Heiney, Kristine; Nichele, Stefano; Sandvig, Ioanna & Tufte, Gunnar
(2023).
Deciphering and emulating neuronal communication: Population-level computation from elemental interactions.
NTNU.
-
Heiney, Kristine Anne & Nichele, Stefano
(2023).
Deciphering and emulating neuronal communication: Population-level computation from elemental interactions.
Norges teknisk-naturvitenskapelige universitet.
ISBN 978-82-326-7140-3.
-
Pontes Filho, Sidney; Nichele, Stefano; Yazidi, Anis; Zhang, Jianhua; Hammer, Hugo Lewi & Sandvig, Ioanna
[See all 7 authors of this article]
(2023).
Optimization of dynamical systems towards criticality and intelligent behavior.
Norges teknisk-naturvitenskapelige universitet.
ISBN 978-82-326-6953-0.
-
Variengien, Alexandre; Pontes Filho, Sidney; Glover, Tom Eivind & Nichele, Stefano
(2021).
Towards Self-organized Control: Using Neural Cellular Automata to Robustly Control a Cart-pole Agent.
Innovations in Machine Intelligence, Crosslabs, Vol. 1/2021.
1.
Abstract:
Neural cellular automata (Neural CA) are a recent framework used to model biological phenomena emerging from multicellular organisms. In these systems, artificial neural networks are used as update rules for cellular automata. Neural CA are end-to-end differentiable systems where the parameters of the neural network can be learned to achieve a particular task. In this work, we used neural CA to control a cart-pole agent. The observations of the environment are transmitted to input cells, while the values of output cells are used as a readout of the system. We trained the model using deep Q-learning, where the states of the output cells were used as the Q-value estimates to be optimized. We found that the computing abilities of the cellular automata were maintained over several hundreds of thousands of iterations, producing an emergent stable behavior in the environment it controls for thousands of steps. Moreover, the system demonstrated life-like phenomena such as a developmental phase, regeneration after damage, stability despite a noisy environment, and robustness to unseen disruption such as input deletion.
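The update scheme described in this abstract can be illustrated with a toy stand-in: a periodic 1-D grid where every cell applies the same tiny "network" to its neighbourhood. The tanh rule, 1-D layout, and function name here are illustrative assumptions, not the paper's architecture.

```python
import math

def nca_step(grid, weights, bias):
    """One synchronous neural-CA update on a periodic 1-D grid: each cell's
    new state is a squashed linear function of its 3-cell neighbourhood.
    In a control setting, input cells would be overwritten with environment
    observations before each step, and designated output cells would serve
    as the readout (e.g. Q-value estimates)."""
    n = len(grid)
    return [
        math.tanh(sum(w * x for w, x in zip(
            weights, (grid[(i - 1) % n], grid[i], grid[(i + 1) % n]))) + bias)
        for i in range(n)
    ]
```

Training then amounts to optimizing `weights` and `bias` (in the paper, a full neural network) so that the readout cells produce useful values after repeated application of the same local rule.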
-