Neuromorphic SpikingNeuralNetwork (Biorealistic Model)


Overview

This document provides a detailed description of the SpikingNeuralNetwork code, including directory structure, configuration files, key modules, and scripts for data generation, simulation, visualization, and testing. Each section contains relevant code snippets and explanations to facilitate understanding and usage.

Directory Structure

The project is organized into several directories, each serving a specific purpose. The directory structure is as follows:

project_root/
├── config/
│   ├── simulation_config.json
│   └── logging_config.json
├── data/
│   ├── neuron_sensitivity.npy
│   ├── initial_conditions.npy
│   ├── cognitive_model_weights.npy
│   ├── dopamine_levels.npy
│   ├── serotonin_levels.npy
│   ├── norepinephrine_levels.npy
│   ├── integration_levels.npy
│   └── attention_signals.npy
├── docs/
│   └── README.md
├── logs/
│   └── simulation.log
├── modules/
│   ├── baseline.py
│   ├── dehaene_changeux_modulation.py
│   ├── continuous_learning.py
│   ├── emotional_models.py
│   ├── plasticity.py
│   ├── topology.py
│   ├── behavior_monitoring.py
│   ├── self_model.py
│   ├── sensory_motor.py
│   ├── adex_neuron.py
│   └── ionic_channels.py
├── scripts/
│   ├── generate_data.py
│   ├── run_simulation.py
│   └── visualization.py
└── tests/
    └── test_simulation.py


config Directory

  • simulation_config.json: This file contains configuration parameters for the simulation, such as the number of neurons, time steps, and various model parameters.

  • logging_config.json: This file configures the logging setup for the entire project, detailing log levels and file locations.
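
A minimal sketch of what logging_config.json might contain, assuming the dictConfig JSON schema consumed by Python's logging.config.dictConfig; the handler name, format string, and levels here are illustrative, not prescribed by the project:

{
    "version": 1,
    "formatters": {
        "default": {
            "format": "%(asctime)s %(name)s %(levelname)s %(message)s"
        }
    },
    "handlers": {
        "file": {
            "class": "logging.FileHandler",
            "filename": "logs/simulation.log",
            "formatter": "default",
            "level": "DEBUG"
        }
    },
    "root": {
        "handlers": ["file"],
        "level": "INFO"
    }
}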

data Directory

  • Contains several .npy files: These files store data arrays used by the simulation, such as neuron sensitivity, initial conditions, cognitive model weights, and levels of various neuromodulators.

docs Directory

  • README.md: A markdown file providing an overview and instructions for the project.

logs Directory

  • simulation.log: The log file where runtime logs are stored.

modules Directory

This directory contains various Python modules implementing different aspects of the neuromorphic simulation:

  • baseline.py: Implements dynamic baseline adjustment for neurons.

  • dehaene_changeux_modulation.py: Implements the Dehaene-Changeux cognitive modulation model.

  • continuous_learning.py: Implements continuous learning mechanisms.

  • emotional_models.py: Implements models for simulating complex emotional states.

  • plasticity.py: Implements synaptic plasticity mechanisms.

  • topology.py: Implements network topology generation and dynamic reconfiguration.

  • behavior_monitoring.py: Implements functions to monitor and analyze emergent behaviors.

  • self_model.py: Implements a self-model for reflective processing and decision-making.

  • sensory_motor.py: Implements sensory and motor integration.

  • adex_neuron.py: Implements the Adaptive Exponential Integrate-and-Fire neuron model.

  • ionic_channels.py: Implements the dynamics of ionic channels in neurons.

scripts Directory

This directory contains scripts to generate data, run the simulation, and visualize results:

  • generate_data.py: Generates and saves initial data required for the simulation.

  • run_simulation.py: Initializes and runs the neuromorphic simulation.

  • visualization.py: Provides 2D and 3D visualization of the simulation results.

tests Directory

  • test_simulation.py: Contains unit tests for various modules in the simulation.

Configuration Files

config/simulation_config.json

This JSON file contains configuration parameters for the simulation, such as the number of neurons, time steps, and various model parameters.

{
    "num_neurons": 100,
    "time_steps": 1000,
    "baseline_mu": 1.0,
    "baseline_sigma": 0.005,
    "baseline_tau": 100.0,
    "kp": 0.1,
    "ki": 0.01,
    "kd": 0.01,
    "dopamine_effect": 0.1,
    "serotonin_effect": 0.1,
    "norepinephrine_effect": 0.1,
    "learning_rate": 0.01,
    "tau_eligibility": 20.0,
    "topology_type": "small_world",
    "p_rewire": 0.1,
    "k": 4,
    "cognitive_model_params": {
        "integration_levels": 1.0,
        "attention_signals": 1.0,
        "interregional_connectivity": 1.0
    }
}

Explanation

  • num_neurons: The total number of neurons in the simulation.

  • time_steps: The number of time steps for the simulation to run.

  • baseline_mu, baseline_sigma, baseline_tau: Parameters for the baseline model.

  • kp, ki, kd: Proportional, integral, and derivative gains for the PID controller used in baseline adjustment.

  • dopamine_effect, serotonin_effect, norepinephrine_effect: Parameters defining the effects of neuromodulators.

  • learning_rate: The rate at which the model learns from experience.

  • tau_eligibility: The time constant for eligibility traces used in learning.

  • topology_type: The type of network topology, e.g., "small_world".

  • p_rewire: The probability of rewiring connections in the network topology.

  • k: The number of initial connections per neuron.

  • cognitive_model_params: Parameters specific to the cognitive model, including integration levels, attention signals, and interregional connectivity.
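
As a quick usage sketch (assuming the script is run from the project root), the configuration can be loaded and accessed like any other JSON file:

import json

# Load the simulation configuration into a plain dictionary
with open('config/simulation_config.json') as f:
    config = json.load(f)

print(config["num_neurons"])                                    # 100
print(config["cognitive_model_params"]["integration_levels"])   # 1.0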

Data Generation Script

scripts/generate_data.py

This script generates and saves initial data required for the simulation, such as neuron sensitivity and initial conditions.

import numpy as np
import os

# Ensure data directory exists
if not os.path.exists('data'):
    os.makedirs('data')

# Generate neuron sensitivity data based on empirical ranges
neuron_sensitivity = np.random.uniform(0.8, 1.2, (100, 3))
np.save('data/neuron_sensitivity.npy', neuron_sensitivity)

Explanation

  • Imports: The script uses numpy for numerical operations and os for file system operations.

  • Directory Check: Ensures the data directory exists, creating it if necessary.

  • Neuron Sensitivity Data: Generates a random array of neuron sensitivity values, uniformly distributed between 0.8 and 1.2 for 100 neurons across 3 dimensions, and saves it as neuron_sensitivity.npy.

# Generate initial conditions based on empirical data
initial_conditions = {
    'neuron_excitability': np.random.uniform(0.5, 1.5, 100),
    'synaptic_weights': np.random.rand(100, 100),
    'neuron_potentials': np.random.uniform(-65, -55, 100),
    'recovery_variables': np.random.uniform(-15, -10, 100)
}
np.save('data/initial_conditions.npy', initial_conditions)

Explanation

  • Initial Conditions: Generates initial conditions for the simulation, including neuron excitability, synaptic weights, neuron potentials, and recovery variables. Each parameter is randomly initialized within specified ranges and saved as initial_conditions.npy.

# Generate cognitive model weights (example)
cognitive_model_weights = np.random.rand(100, 100)
np.save('data/cognitive_model_weights.npy', cognitive_model_weights)

Explanation

  • Cognitive Model Weights: Generates a random matrix of cognitive model weights for 100 neurons and saves it as cognitive_model_weights.npy.

# Generate neuromodulator levels
dopamine_levels = np.random.uniform(0.5, 1.5, 1000)
serotonin_levels = np.random.uniform(0.5, 1.5, 1000)
norepinephrine_levels = np.random.uniform(0.5, 1.5, 1000)
integration_levels = np.random.uniform(0.5, 1.5, 1000)
attention_signals = np.random.uniform(0.5, 1.5, 1000)

np.save('data/dopamine_levels.npy', dopamine_levels)
np.save('data/serotonin_levels.npy', serotonin_levels)
np.save('data/norepinephrine_levels.npy', norepinephrine_levels)
np.save('data/integration_levels.npy', integration_levels)
np.save('data/attention_signals.npy', attention_signals)

print("Data generated and saved.")

Explanation

  • Neuromodulator Levels: Generates random arrays for dopamine, serotonin, norepinephrine levels, integration levels, and attention signals over 1000 time steps, and saves each array in respective .npy files.

  • Print Statement: Indicates completion of data generation and saving.

Baseline Module

modules/baseline.py

This module implements the dynamic baseline adjustment mechanism for neurons based on the model by Turrigiano et al. (1998).

import numpy as np
import logging

logger = logging.getLogger(__name__)

def dynamic_baseline(t, mu=1.0, sigma=0.005, tau=100.0, kp=0.1, ki=0.01, kd=0.01, network_state=None):
    """
    Implements the dynamic baseline adjustment mechanism for neurons based on the model by Turrigiano et al. (1998).
    Reference: Turrigiano GG, Leslie KR, Desai NS, Rutherford LC, Nelson SB. Activity-dependent scaling of quantal amplitude in neocortical neurons. Nature. 1998.
    """
    try:
        if not (0 < sigma < 0.1):
            raise ValueError("Sigma value out of expected range.")

Explanation

  • Imports: Imports numpy for numerical operations and logging for logging errors and information.

  • Logger: Configures a logger for the module.

  • Function Definition: Defines the dynamic_baseline function, which adjusts the baseline activity of neurons dynamically based on specified parameters and network state.

  • Validation: Checks if the sigma value is within the expected range, raising a ValueError if not.

        error = sigma
        integral = 0
        derivative = 0
        last_error = 0

        baseline = mu + sigma * np.random.randn(len(t))

Explanation

  • PID Controller Initialization: Initializes the error, integral, derivative, and last error variables for the PID controller.

  • Baseline Initialization: Initializes the baseline using a Gaussian distribution with mean mu and standard deviation sigma.

        for i in range(1, len(t)):
            if network_state is not None:
                # Adjust sigma based on the network's firing rate variability
                firing_rate_variability = np.var(network_state)
                sigma = sigma + kp * firing_rate_variability
            error = sigma - baseline[i-1]
            integral += error
            derivative = error - last_error
            adjustment = kp * error + ki * integral + kd * derivative
            baseline[i] += adjustment
            last_error = error

        return baseline

Explanation

  • Baseline Adjustment Loop: Iterates over each time step to adjust the baseline:

    • Firing Rate Variability: If the network state is provided, updates sigma based on the firing rate variability.

    • Error Calculation: Calculates the error between the target sigma and the current baseline.

    • PID Controller: Updates the integral and derivative components and calculates the adjustment using the PID controller gains (kp, ki, kd).

    • Baseline Update: Applies the adjustment to the baseline.

    • Error Update: Updates the last error value for the next iteration.

    except ValueError as ve:
        logger.error(f"Value error in dynamic_baseline: {ve}")
    except Exception as e:
        logger.error(f"Unexpected error in dynamic_baseline: {e}")

Explanation

  • Error Handling: Catches and logs ValueError and other exceptions that might occur during baseline adjustment.
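
A minimal usage sketch of dynamic_baseline with dummy data; the network_state array here is a hypothetical firing-rate snapshot, not output from the simulation:

import numpy as np
from modules.baseline import dynamic_baseline

t = np.arange(1000)                   # one entry per time step
network_state = np.random.rand(100)   # hypothetical firing rates
baseline = dynamic_baseline(t, mu=1.0, sigma=0.005, tau=100.0, network_state=network_state)
print(baseline[:5])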


Dehaene-Changeux Modulation Module

modules/dehaene_changeux_modulation.py

This module implements the Dehaene-Changeux model for cognitive modulation.

import numpy as np
from modules.continuous_learning import ContinuousLearning

class DehaeneChangeuxModulation:
    """
    Implements the Dehaene-Changeux model for cognitive modulation.
    Reference: Dehaene S, Kerszberg M, Changeux JP. A neuronal model of a global workspace in effortful cognitive tasks. PNAS. 1998.
    """

Explanation

  • Imports: Imports numpy for numerical operations and the ContinuousLearning class from the continuous learning module.

  • Class Definition: Defines the DehaeneChangeuxModulation class to implement the Dehaene-Changeux cognitive modulation model.

    def __init__(self, neuron_count, layer_count, noise_level=0.05):
        self.neuron_count = neuron_count
        self.layer_count = layer_count
        self.noise_level = noise_level
        self.cognitive_model_weights = np.random.rand(neuron_count, neuron_count)
        self.integration_levels = np.random.rand(neuron_count)
        self.attention_signals = np.random.rand(neuron_count)
        self.neural_noise = np.random.randn(neuron_count) * noise_level
        self.continuous_learning = ContinuousLearning()

Explanation

  • Constructor: Initializes the instance variables:

    • neuron_count: The number of neurons.

    • layer_count: The number of layers in the cognitive model.

    • noise_level: The level of noise to be added to the neural activity.

    • cognitive_model_weights: A random matrix representing the weights of the cognitive model.

    • integration_levels: Randomly initialized integration levels for the neurons.

    • attention_signals: Randomly initialized attention signals for the neurons.

    • neural_noise: Random noise added to the neural activity.

    • continuous_learning: An instance of the ContinuousLearning class for updating model weights.

    def set_parameters(self, cognitive_model_weights=None, integration_levels=None, attention_signals=None):
        if cognitive_model_weights is not None:
            self.cognitive_model_weights = cognitive_model_weights
        if integration_levels is not None:
            self.integration_levels = integration_levels
        if attention_signals is not None:
            self.attention_signals = attention_signals

Explanation

  • Parameter Setter: Updates the cognitive model weights, integration levels, and attention signals if new values are provided.

    def normalize(self, array):
        return (array - np.mean(array)) / np.std(array)

Explanation

  • Normalization Function: Normalizes the input array to have zero mean and unit standard deviation.

    def multi_layer_integration(self, activity):
        layer_activities = []
        for _ in range(self.layer_count):
            activity = np.dot(self.cognitive_model_weights, activity)
            activity = self.normalize(activity) * self.integration_levels
            activity = np.tanh(activity)  # Non-linear activation
            activity = activity * self.attention_signals
            layer_activities.append(activity)
        return layer_activities[-1]  # Return the last layer's activity

Explanation

  • Multi-layer Integration: Integrates neural activity across multiple layers:

    • Activity Update: Multiplies the activity by the cognitive model weights.

    • Normalization and Scaling: Normalizes the activity and scales it by integration levels.

    • Non-linear Activation: Applies a hyperbolic tangent function for non-linear activation.

    • Attention Modulation: Modulates the activity by attention signals.

    • Layer Activities: Stores the activity of each layer and returns the final layer's activity.

    def empirical_feedback(self, integrated_activity, neuron_activity):
        feedback_strength_base = 0.1
        feedback_strength_dynamic = feedback_strength_base * np.std(neuron_activity)
        return feedback_strength_dynamic * np.tanh(integrated_activity - neuron_activity)

Explanation

  • Empirical Feedback: Calculates feedback based on the difference between integrated and current neural activity:

    • Feedback Strength: Computes the base and dynamic feedback strength.

    • Feedback Calculation: Applies a hyperbolic tangent function to the difference between integrated and neuron activity to determine feedback.

    def add_neural_noise(self, integrated_activity):
        return integrated_activity + self.neural_noise

Explanation

  • Neural Noise Addition: Adds predefined neural noise to the integrated activity.

    def update_weights(self, integrated_activity):
        self.cognitive_model_weights += self.continuous_learning.learn_from_experience(integrated_activity)
        self.cognitive_model_weights = np.clip(self.cognitive_model_weights, -1, 1)

Explanation

  • Weight Update: Updates the cognitive model weights using continuous learning based on integrated activity and clips the weights to stay within the range [-1, 1].

    def modulate_activity(self, neuron_activity):
        normalized_activity = self.normalize(neuron_activity)
        integrated_activity = self.multi_layer_integration(normalized_activity)
        hierarchical_feedback = self.empirical_feedback(integrated_activity, normalized_activity)
        integrated_activity += hierarchical_feedback
        integrated_activity = self.add_neural_noise(integrated_activity)
        self.update_weights(integrated_activity)
        return integrated_activity

Explanation

  • Activity Modulation: Modulates the neural activity:

  • Normalization: Normalizes the neuron activity.

  • Integration: Integrates the normalized activity across multiple layers.

  • Feedback: Adds hierarchical feedback to the integrated activity.

  • Noise Addition: Adds neural noise to the integrated activity.

  • Weight Update: Updates the cognitive model weights based on the integrated activity.

  • Return: Returns the modulated integrated activity.

# Example usage with dummy data
neuron_count = 100
layer_count = 3
neuron_activity = np.random.rand(neuron_count)

modulator = DehaeneChangeuxModulation(neuron_count, layer_count)
integrated_activity = modulator.modulate_activity(neuron_activity)

print(integrated_activity)

Explanation

  • Example Usage: Demonstrates how to use the DehaeneChangeuxModulation class with dummy data:

    • Initialization: Creates an instance of the DehaeneChangeuxModulation class.

    • Activity Modulation: Modulates the neural activity using the modulate_activity method.

    • Print: Outputs the integrated activity.

Continuous Learning Module

modules/continuous_learning.py

This module implements continuous learning mechanisms for neural networks.

import numpy as np

class ContinuousLearning:
    """
    Implements continuous learning mechanisms for neural networks.
    Reference: Izhikevich EM. Dynamical Systems in Neuroscience: The Geometry of Excitability and Bursting. MIT Press. 2007.
    ""

Explanation

  • Imports: Imports numpy for numerical operations.

  • Class Definition: Defines the ContinuousLearning class to implement continuous learning mechanisms for neural networks.

    def __init__(self, memory_capacity=1000, learning_rate=0.01):
        self.memory_capacity = memory_capacity
        self.memory = []
        self.learning_rate = learning_rate

Explanation

  • Constructor: Initializes the instance variables:

  • memory_capacity: The maximum number of experiences to store in memory.

  • memory: A list to store experiences.

  • learning_rate: The rate at which the model learns from experience.

    def update_memory(self, experience):
        self.memory.append(experience)
        if len(self.memory) > self.memory_capacity:
            self.memory.pop(0)

Explanation

  • Memory Update: Adds a new experience to the memory and ensures the memory does not exceed its capacity by removing the oldest experience if necessary.

    def consolidate_memory(self):
        for experience in self.memory:
            self.learn_from_experience(experience)

Explanation

  • Memory Consolidation: Iterates through stored experiences and applies learning to each one.

    def advanced_learning_algorithm(self, experience):
        # Implement a sophisticated algorithm based on Izhikevich (2007)
        weights_update = self.learning_rate * np.outer(experience, experience)
        return weights_update

Explanation

  • Advanced Learning Algorithm: Implements an algorithm to update synaptic weights based on experience, using the outer product of the experience vector scaled by the learning rate.

    def learn_from_experience(self, experience):
        weights_update = self.advanced_learning_algorithm(experience)
        return weights_update

Explanation

  • Learn from Experience: Updates synaptic weights using the advanced learning algorithm.

# Example usage
continuous_learning = ContinuousLearning()

for t in range(100):
    experience = np.random.rand(100)  # Dummy experience data
    continuous_learning.update_memory(experience)

continuous_learning.consolidate_memory()

Explanation

  • Example Usage: Demonstrates how to use the ContinuousLearning class:

    • Initialization: Creates an instance of the ContinuousLearning class.

    • Memory Update: Adds dummy experiences to the memory.

    • Memory Consolidation: Consolidates the experiences stored in memory.

Emotional Models Module

modules/emotional_models.py

This module implements an emotional model for simulating complex emotional states.

import numpy as np

class EmotionalModel:
    """
    Implements an emotional model for simulating complex emotional states.
    Reference: Dayan P, Huys QJ. Serotonin in affective control. Annual Review of Neuroscience. 2009.
    """
    
    def __init__(self, initial_state):
        self.state = initial_state
        self.history = []

Explanation

  • Imports: Imports numpy for numerical operations.

  • Class Definition: Defines the EmotionalModel class to simulate complex emotional states.

  • Constructor: Initializes the emotional state and history:

  • initial_state: The initial emotional state.

  • state: Stores the current emotional state.

  • history: Stores the history of emotional states.

    def update_state_dynamic(self, external_factors, internal_feedback):
        # Implement dynamic interaction between emotional states
        self.state['happiness'] += external_factors['positive_events'] - internal_feedback['stress']
        self.state['stress'] += external_factors['negative_events'] - internal_feedback['calmness']
        self.state['motivation'] += external_factors['goals_achieved'] - internal_feedback['frustration']

Explanation

  • State Update: Dynamically updates the emotional state based on external factors and internal feedback:

  • happiness: Increases with positive events and decreases with stress.

  • stress: Increases with negative events and decreases with calmness.

  • motivation: Increases with goals achieved and decreases with frustration.

    def simulate_complex_emotional_states(self, dopamine_levels, serotonin_levels, norepinephrine_levels, t):
        self.state['happiness'] = np.mean(dopamine_levels[t])
        self.state['calmness'] = np.mean(serotonin_levels[t])
        self.state['alertness'] = np.mean(norepinephrine_levels[t])
        self.state['stress'] = 1.0 / (np.mean(serotonin_levels[t]) + np.mean(dopamine_levels[t]))
        self.state['motivation'] = np.mean(dopamine_levels[t]) * 0.5 + np.mean(norepinephrine_levels[t]) * 0.5
        self.state['frustration'] = 1.0 / (np.mean(dopamine_levels[t]) * np.mean(norepinephrine_levels[t]))
        self.state['satisfaction'] = np.mean(dopamine_levels[t]) - np.std(dopamine_levels[t])
        self.history.append(self.state.copy())

Explanation

  • Complex Emotional State Simulation: Simulates emotional states based on neuromodulator levels:

  • happiness: Based on mean dopamine levels.

  • calmness: Based on mean serotonin levels.

  • alertness: Based on mean norepinephrine levels.

  • stress: Inversely related to the sum of mean serotonin and dopamine levels.

  • motivation: Based on a combination of mean dopamine and norepinephrine levels.

  • frustration: Inversely related to the product of mean dopamine and norepinephrine levels.

  • satisfaction: Difference between mean and standard deviation of dopamine levels.

  • History Update: Appends the current state to the history.

    def get_emotional_state(self):
        return self.state

Explanation

  • Get Emotional State: Returns the current emotional state.

    def get_emotional_history(self):
        return self.history

Explanation

  • Get Emotional History: Returns the history of emotional states.
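
A brief usage sketch with dummy neuromodulator traces; the initial state keys mirror those used in update_state_dynamic, and all values are illustrative:

import numpy as np
from modules.emotional_models import EmotionalModel

# Dummy neuromodulator traces, one value per time step
dopamine = np.random.uniform(0.5, 1.5, 1000)
serotonin = np.random.uniform(0.5, 1.5, 1000)
norepinephrine = np.random.uniform(0.5, 1.5, 1000)

model = EmotionalModel({'happiness': 0.0, 'stress': 0.0, 'motivation': 0.0})
for t in range(10):
    model.simulate_complex_emotional_states(dopamine, serotonin, norepinephrine, t)

print(model.get_emotional_state())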

AdEx Neuron Model Module

modules/adex_neuron.py

This module implements the Adaptive Exponential Integrate-and-Fire (AdEx) neuron model.

import numpy as np

class AdExNeuron:
    """
    Implements the Adaptive Exponential Integrate-and-Fire (AdEx) neuron model.
    Reference: Brette R, Gerstner W. Adaptive exponential integrate-and-fire model as an effective description of neuronal activity. Journal of Neurophysiology. 2005.
    ""

Explanation

  • Imports: Imports numpy for numerical operations.

  • Class Definition: Defines the AdExNeuron class to implement the Adaptive Exponential Integrate-and-Fire neuron model.

    def __init__(self, C, gL, EL, VT, DeltaT, a, tau_w, b, Vr, Vpeak, dt):
        self.C = C
        self.gL = gL
        self.EL = EL
        self.VT = VT
        self.DeltaT = DeltaT
        self.a = a
        self.tau_w = tau_w
        self.b = b
        self.Vr = Vr
        self.Vpeak = Vpeak
        self.dt = dt
        self.V = EL  # Membrane potential
        self.w = 0  # Adaptation variable

Explanation

  • Constructor: Initializes the instance variables:

    • C: Capacitance.

    • gL: Leak conductance.

    • EL: Resting potential.

    • VT: Threshold potential.

    • DeltaT: Sharpness of the exponential approach to threshold.

    • a: Subthreshold adaptation.

    • tau_w: Adaptation time constant.

    • b: Spike-triggered adaptation.

    • Vr: Reset potential.

    • Vpeak: Peak potential.

    • dt: Time step.

    • V: Initial membrane potential.

    • w: Initial adaptation variable.

    def validate_parameters(self):
        if not (50 <= self.C <= 500):
            raise ValueError("Capacitance out of range.")
        if not (1 <= self.gL <= 100):
            raise ValueError("Leak conductance out of range.")
        if not (-80 <= self.EL <= -50):
            raise ValueError("Resting potential out of range.")
        if not (-60 <= self.VT <= -40):
            raise ValueError("Threshold potential out of range.")
        if not (0.5 <= self.DeltaT <= 5):
            raise ValueError("Sharpness of exponential approach to threshold out of range.")
        if not (1 <= self.a <= 10):
            raise ValueError("Subthreshold adaptation out of range.")
        if not (10 <= self.tau_w <= 200):
            raise ValueError("Adaptation time constant out of range.")
        if not (0.1 <= self.b <= 50):
            raise ValueError("Spike-triggered adaptation out of range.")
        if not (-70 <= self.Vr <= -40):
            raise ValueError("Reset potential out of range.")
        if not (10 <= self.Vpeak <= 50):
            raise ValueError("Peak potential out of range.")
        if not (0.01 <= self.dt <= 1):
            raise ValueError("Time step out of range.")

Explanation

  • Parameter Validation: Ensures that the parameters of the AdEx neuron model are within valid ranges, raising ValueError if any parameter is out of range.

    def step(self, I):
        self.validate_parameters()
        dV = (self.gL * (self.EL - self.V) + self.gL * self.DeltaT * np.exp((self.V - self.VT) / self.DeltaT) - self.w + I) / self.C
        dw = (self.a * (self.V - self.EL) - self.w) / self.tau_w
        self.V += dV * self.dt
        self.w += dw * self.dt
        if self.V >= self.Vpeak:
            self.V = self.Vr
            self.w += self.b
        return self.V, self.w

Explanation

  • Simulation Step: Updates the membrane potential and adaptation variable for a single time step:

    • Validation: Validates the parameters before proceeding.

    • Membrane Potential Update: Calculates the change in membrane potential (dV) using the AdEx model equation.

    • Adaptation Variable Update: Calculates the change in the adaptation variable (dw).

    • Update Variables: Updates the membrane potential and adaptation variable using Euler integration.

    • Spike Handling: Resets the membrane potential to Vr and increments the adaptation variable by b if the membrane potential exceeds Vpeak.

    • Return: Returns the updated membrane potential and adaptation variable.
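
A short usage sketch; the parameter values below are illustrative choices that fall inside the ranges enforced by validate_parameters, not constants fitted to a particular cell type:

from modules.adex_neuron import AdExNeuron

neuron = AdExNeuron(C=200.0, gL=10.0, EL=-70.0, VT=-50.0, DeltaT=2.0,
                    a=2.0, tau_w=100.0, b=5.0, Vr=-58.0, Vpeak=20.0, dt=0.1)

voltages = []
for _ in range(1000):            # 100 ms at dt = 0.1 ms
    V, w = neuron.step(I=300.0)  # constant injected current
    voltages.append(V)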

Ionic Channels Module

modules/ionic_channels.py

This module implements the dynamics of ionic channels in neurons.

import numpy as np

class IonicChannel:
    """
    Implements the dynamics of ionic channels in neurons.
    Reference: Hille B. Ion Channels of Excitable Membranes. Sinauer Associates. 2001.
    """
    
    def __init__(self, g_max, E_rev, dynamics_params):
        self.g_max = g_max
        self.E_rev = E_rev
        self.dynamics_params = dynamics_params
        self.state = self.initialize_state()

Explanation

  • Imports: Imports numpy for numerical operations.

  • Class Definition: Defines the IonicChannel class to model the dynamics of ionic channels.

  • Constructor: Initializes the instance variables:

    • g_max: Maximum conductance of the ionic channel.

    • E_rev: Reversal potential of the ionic channel.

    • dynamics_params: Parameters governing the dynamics of the channel.

    • state: Initial state of the channel, initialized using initialize_state method.

    def initialize_state(self):
        # Initialize channel state variables based on dynamics parameters
        return {param: np.random.uniform(0, 1) for param in self.dynamics_params}

Explanation

  • Initialize State: Initializes the state variables of the ionic channel based on its dynamics parameters.

    def update_state(self, voltage, dt):
        # Update channel state variables based on voltage and dynamics equations
        for param, dynamics in self.dynamics_params.items():
            alpha = dynamics['alpha'](voltage)
            beta = dynamics['beta'](voltage)
            self.state[param] += (alpha * (1 - self.state[param]) - beta * self.state[param]) * dt

Explanation

  • State Update: Updates the state variables of the ionic channel based on the membrane potential (voltage) and the dynamics equations:

    • Alpha and Beta: Calculates the alpha and beta rate constants for each state variable.

    • State Variable Update: Updates each state variable using the alpha and beta values and the time step (dt).

    def complex_dynamics(self, voltage, dt):
        # Implement complex interactions between different ionic channels based on Hille (2001)
        pass

Explanation

  • Complex Dynamics: Placeholder method for implementing complex interactions between different ionic channels based on their dynamics.

    def compute_current(self, voltage):
        # Compute the ionic current based on the channel state and voltage
        g = self.g_max * np.prod([self.state[param] for param in self.dynamics_params])
        return g * (voltage - self.E_rev)

Explanation

  • Compute Current: Calculates the ionic current based on the channel state and membrane potential:

    • Conductance: Computes the conductance as the product of the state variables scaled by the maximum conductance.

    • Current: Calculates the ionic current as the product of the conductance and the difference between the membrane potential and the reversal potential.
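
A usage sketch assuming a single-gate channel with Hodgkin-Huxley-style alpha/beta rate functions; the rate expressions (the classic HH sodium m-gate forms) and the conductance values are illustrative, not taken from the project:

import numpy as np
from modules.ionic_channels import IonicChannel

# Voltage-dependent rate functions for one activation gate
dynamics = {
    'm': {
        'alpha': lambda V: 0.1 * (V + 40.0) / (1.0 - np.exp(-(V + 40.0) / 10.0)),
        'beta':  lambda V: 4.0 * np.exp(-(V + 65.0) / 18.0),
    }
}

channel = IonicChannel(g_max=120.0, E_rev=50.0, dynamics_params=dynamics)
for _ in range(100):
    channel.update_state(voltage=-60.0, dt=0.01)
print(channel.compute_current(voltage=-60.0))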

Plasticity Module

modules/plasticity.py

This module implements synaptic weight updates using spike-timing-dependent plasticity (STDP) and deep Q-learning.

import logging
import numpy as np

logger = logging.getLogger(__name__)

def update_synaptic_weights(weights, spikes_pre, spikes_post, eligibility_traces, learning_rate, tau_eligibility):
    """
    Implements synaptic weight updates using STDP.
    Reference: Martin SJ, Grimwood PD, Morris RG. Synaptic plasticity and memory: an evaluation of the hypothesis. Annual Review of Neuroscience. 2000.
    ""

Explanation

  • Imports: Imports numpy for numerical operations and logging for logging.

  • Logger: Configures a logger for the module.

  • Function Definition: Defines the update_synaptic_weights function to update synaptic weights using spike-timing-dependent plasticity (STDP).

    A_plus = 0.005
    A_minus = 0.005
    tau_plus = 20.0
    tau_minus = 20.0

    pre_indices = np.where(spikes_pre)[0]
    post_indices = np.where(spikes_post)[0]
    if pre_indices.size > 0 and post_indices.size > 0:
        # Pairwise index differences between spiking post- and pre-synaptic neurons
        delta_t = post_indices[:, np.newaxis] - pre_indices
        ltp = (delta_t >= 0).astype(float) * A_plus * np.exp(-delta_t / tau_plus)
        ltd = (delta_t < 0).astype(float) * A_minus * np.exp(delta_t / tau_minus)
        # Accumulate each presynaptic neuron's contribution into its trace
        eligibility_traces[pre_indices] += np.sum(ltp, axis=0) - np.sum(ltd, axis=0)
    eligibility_traces *= np.exp(-1 / tau_eligibility)
    weights += learning_rate * eligibility_traces
    weights = np.clip(weights, 0, 1 - 1e-6)
    
    logger.debug(f"Updated weights: {weights}")
    logger.debug(f"Eligibility traces: {eligibility_traces}")

    return weights, eligibility_traces

Explanation

  • STDP Parameters: Defines parameters for long-term potentiation (LTP) and long-term depression (LTD):

    • A_plus, A_minus: Amplitude constants for LTP and LTD.

    • tau_plus, tau_minus: Time constants for LTP and LTD.

  • Spike Time Difference: Computes the difference in spike times between pre- and post-synaptic neurons.

  • LTP and LTD Calculation: Computes the LTP and LTD contributions to the eligibility traces.

  • Eligibility Traces Update: Updates the eligibility traces based on LTP and LTD contributions.

  • Weight Update: Updates the synaptic weights using the eligibility traces and learning rate, and clips the weights to be within [0, 1).

  • Logging: Logs the updated weights and eligibility traces for debugging.
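
A minimal usage sketch with random spike vectors (one boolean entry per neuron for a single time step; all values illustrative):

import numpy as np
from modules.plasticity import update_synaptic_weights

num_neurons = 100
weights = np.random.rand(num_neurons, num_neurons) * 0.5
traces = np.zeros(num_neurons)
spikes_pre = np.random.rand(num_neurons) > 0.9    # boolean spike vector
spikes_post = np.random.rand(num_neurons) > 0.9

weights, traces = update_synaptic_weights(weights, spikes_pre, spikes_post,
                                          traces, learning_rate=0.01,
                                          tau_eligibility=20.0)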

def deep_q_learning_update(weights, rewards, eligibility_traces, learning_rate, discount_factor, state, action):
    """
    Implements synaptic weight updates using Deep Q-Learning.
    Reference: Mnih V, Kavukcuoglu K, Silver D, et al. Human-level control through deep reinforcement learning. Nature. 2015.
    """
    q_values = np.zeros(rewards.shape[0])  # one Q-value estimate per time step
    for t in range(rewards.shape[0] - 1):
        td_error = rewards[t] + discount_factor * q_values[t + 1] - q_values[t]
        eligibility_traces *= td_error
        weights[state, action] += learning_rate * td_error
        weights = np.clip(weights, 0, 1 - 1e-6)
        q_values[t] += learning_rate * td_error
    
    logger.debug(f"Deep Q-learning updated weights: {weights}")
    return weights

Explanation

  • Function Definition: Defines the deep_q_learning_update function to update synaptic weights using Deep Q-Learning.

  • Q-values Initialization: Initializes Q-values array.

  • TD Error Calculation: Computes the temporal difference (TD) error for each time step.

  • Eligibility Traces Update: Updates eligibility traces using the TD error.

  • Weight Update: Updates the synaptic weights using the TD error and learning rate, and clips the weights to be within [0, 1).

  • Q-values Update: Updates the Q-values using the learning rate and TD error.

  • Logging: Logs the updated weights for debugging.
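
A usage sketch with dummy rewards; the state and action indices are arbitrary placeholders:

import numpy as np
from modules.plasticity import deep_q_learning_update

weights = np.random.rand(100, 100) * 0.5
rewards = np.random.rand(50)        # one reward per time step
traces = np.zeros(100)

weights = deep_q_learning_update(weights, rewards, traces,
                                 learning_rate=0.01, discount_factor=0.9,
                                 state=0, action=1)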

Topology Module

modules/topology.py

This module creates neural network topologies and implements dynamic network reconfiguration.

import numpy as np

def create_network_topology(num_neurons, topology_type="small_world", p_rewire=0.1, k=4):
    """
    Creates a neural network topology.
    Reference: Sporns O, Chialvo DR, Kaiser M, Hilgetag CC. Organization, development and function of complex brain networks. Trends in Cognitive Sciences. 2004.
    """
    if (topology_type == "small_world"):
        synaptic_weights = np.zeros((num_neurons, num_neurons))
        for i in range(num_neurons):
            for j in range(1, k // 2 + 1):
                synaptic_weights[i, (i + j) % num_neurons] = 1
                synaptic_weights[i, (i - j) % num_neurons] = 1
        for i in range(num_neurons):
            for j in range(i + 1, num_neurons):
                if (synaptic_weights[i, j] == 1 and np.random.rand() < p_rewire):
                    new_connection = np.random.randint(0, num_neurons)
                    while new_connection == i or synaptic_weights[i, new_connection] == 1:
                        new_connection = np.random.randint(0, num_neurons)
                    synaptic_weights[i, j] = 0
                    synaptic_weights[i, new_connection] = 1
    return synaptic_weights

Explanation

  • Imports: Imports numpy for numerical operations.

  • Function Definition: Defines the create_network_topology function to create a neural network topology.

  • Small-world Topology: Implements the creation of a small-world topology:

    • Initial Connections: Connects each neuron to its k nearest neighbors.

    • Rewiring: Rewires connections with probability p_rewire.
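
A quick usage sketch that generates a small-world topology and checks the mean out-degree against the configured k:

import numpy as np
from modules.topology import create_network_topology

weights = create_network_topology(100, topology_type="small_world",
                                  p_rewire=0.1, k=4)
# Each neuron starts with k neighbours before rewiring
print(np.mean(np.sum(weights > 0, axis=1)))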

def dynamic_topology_switching(synaptic_weights, spikes, target_degree=4, adjustment_rate=0.01):
    """
    Implements dynamic network reconfiguration.
    Reference: Sporns O, Chialvo DR, Kaiser M, Hilgetag CC. Organization, development and function of complex brain networks. Trends in Cognitive Sciences. 2004.
    """
    current_degrees = np.sum(synaptic_weights > 0, axis=1)
    for i in range(len(current_degrees)):
        if (current_degrees[i] < target_degree):
            candidates = np.where(synaptic_weights[i] == 0)[0]
            if (candidates.size > 0):
                j = np.random.choice(candidates)
                synaptic_weights[i, j] = np.random.rand() * adjustment_rate
        elif (current_degrees[i] > target_degree):
            connected = np.where(synaptic_weights[i] > 0)[0]
            if (connected.size > 0):
                j = np.random.choice(connected)
                synaptic_weights[i, j] *= 1 - adjustment_rate
    return synaptic_weights

Explanation

  • Function Definition: Defines the dynamic_topology_switching function to dynamically reconfigure the network topology based on neuron activity.

  • Current Degrees: Calculates the current degree (number of connections) of each neuron.

  • Target Degree Adjustment: Adjusts connections to match the target degree:

    • Add Connections: Adds new connections for neurons with fewer than the target degree.

    • Remove Connections: Reduces connections for neurons with more than the target degree.

Behavior Monitoring Module

modules/behavior_monitoring.py

This module monitors emergent behaviors in neural activity and analyzes complex behavioral patterns.

import numpy as np
import logging

logger = logging.getLogger(__name__)

def monitor_emergent_behaviors(neuron_activity, threshold=0.9):
    """
    Monitors emergent behaviors in neural activity.
    Reference: O'Reilly RC, Frank MJ. Making working memory work: a computational model of learning in the prefrontal cortex and basal ganglia. Neural Computation. 2006.
    """
    high_activity_neurons = np.where(neuron_activity > threshold)[0]
    logger.info(f"High activity neurons: {high_activity_neurons}")
    return high_activity_neurons

Explanation

  • Imports: Imports numpy for numerical operations and logging for logging.

  • Logger: Configures a logger for the module.

  • Function Definition: Defines the monitor_emergent_behaviors function to identify neurons with activity levels above a specified threshold.

def analyze_complex_behaviors(neuron_activity, pattern_length=10):
    """
    Analyzes complex behavioral patterns in neural activity.
    Reference: O'Reilly RC, Frank MJ. Making working memory work: a computational model of learning in the prefrontal cortex and basal ganglia. Neural Computation. 2006.
    """
    complex_patterns = []
    for i in range(len(neuron_activity) - pattern_length):
        pattern = neuron_activity[i:i + pattern_length]
        if is_complex_pattern(pattern):
            complex_patterns.append(pattern)
            logger.debug(f"Complex behavior pattern: {pattern}")
    return complex_patterns

Explanation

  • Function Definition: Defines the analyze_complex_behaviors function to identify complex patterns in neural activity.

  • Pattern Analysis: Analyzes sequential patterns of specified length (pattern_length) to detect complex behaviors.

def is_complex_pattern(pattern):
    # Example of a complex pattern detection algorithm
    return (np.std(pattern) > 0.5) and (np.mean(pattern) > 0.75)  # Example conditions

Explanation

  • Complex Pattern Detection: Defines criteria for identifying complex patterns based on the standard deviation and mean of the pattern.
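
A short usage sketch on a dummy activity trace:

import numpy as np
from modules.behavior_monitoring import monitor_emergent_behaviors, analyze_complex_behaviors

activity = np.random.rand(1000)   # dummy activity trace
high = monitor_emergent_behaviors(activity, threshold=0.9)
patterns = analyze_complex_behaviors(activity, pattern_length=10)
print(len(high), len(patterns))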

Self Model Module

modules/self_model.py

This module implements a self-model for reflective processing and decision-making.

import numpy as np

class SelfModel:
    """
    Implements a self-model for reflective processing and decision-making.
    Reference: Metzinger T. The Ego Tunnel: The Science of the Mind and the Myth of the Self. Basic Books. 2009.
    """
    
    def __init__(self, num_neurons):
        self.num_neurons = num_neurons
        self.neuron_activity = np.zeros(num_neurons)
        self.synaptic_weights = np.zeros((num_neurons, num_neurons))
        self.self_history = []

Explanation

  • Imports: Imports numpy for numerical operations.

  • Class Definition: Defines the SelfModel class to implement a self-model for reflective processing and decision-making.

  • Constructor: Initializes the instance variables:

    • num_neurons: The number of neurons.

    • neuron_activity: Array to store current neuron activity.

    • synaptic_weights: Matrix to store current synaptic weights.

    • self_history: List to store history of self-states.

    def update_self_model(self, neuron_activity, synaptic_weights):
        self.neuron_activity = neuron_activity
        self.synaptic_weights = synaptic_weights
        self.self_history.append((neuron_activity.copy(), synaptic_weights.copy()))
        if len(self.self_history) > 100:
            self.self_history.pop(0)

Explanation

  • Self Model Update: Updates the self-model with current neuron activity and synaptic weights:

    • Activity and Weights Update: Updates the current neuron activity and synaptic weights.

    • History Update: Adds the current state to the history and removes the oldest state if the history exceeds 100 entries.

    def complex_reflective_processing(self):
        """
        Implements complex reflective processing.
        Reference: Metzinger T. The Ego Tunnel: The Science of the Mind and the Myth of the Self. Basic Books. 2009.
        """
        self_awareness = np.mean(self.neuron_activity)
        decision_making = self.synaptic_weights.mean(axis=0) * self_awareness
        # Reflect on past states to improve current state
        if len(self.self_history) > 1:
            past_neuron_activity, past_synaptic_weights = self.self_history[-2]
            delta_activity = self.neuron_activity - past_neuron_activity
            decision_making += 0.01 * delta_activity
        return self_awareness, decision_making

Explanation

  • Reflective Processing: Implements reflective processing based on current and past states:

    • Self-awareness: Computes self-awareness as the mean of current neuron activity.

    • Decision Making: Computes decision-making based on mean synaptic weights and self-awareness.

    • Historical Reflection: Adjusts decision-making based on changes in neuron activity from the previous state.
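
A usage sketch with random activity and weights (values illustrative):

import numpy as np
from modules.self_model import SelfModel

model = SelfModel(num_neurons=100)
for _ in range(5):
    activity = np.random.rand(100)
    weights = np.random.rand(100, 100)
    model.update_self_model(activity, weights)

self_awareness, decision_making = model.complex_reflective_processing()
print(self_awareness, decision_making.shape)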

Sensory and Motor Integration Module

modules/sensory_motor.py

This module implements sensory and motor integration in neurons.

import numpy as np

def sensory_motor_integration(sensory_input, motor_output, integration_params):
    """
    Implements sensory and motor integration in neurons.
    Reference: Evarts EV. Relation of pyramidal tract activity to force exerted during voluntary movement. Journal of Neurophysiology. 1968.
    """
    integrated_response = np.zeros_like(sensory_input)
    for i in range(len(sensory_input)):
        integrated_response[i] = integration_params['gain'] * sensory_input[i] + integration_params['bias'] * motor_output[i]
    return integrated_response

Explanation

  • Imports: Imports numpy for numerical operations.

  • Function Definition: Defines the sensory_motor_integration function to integrate sensory input and motor output based on specified parameters:

    • Integration Parameters: Uses gain and bias parameters for integration.

    • Integrated Response: Computes the integrated response for each sensory input and motor output pair.

def nonlinear_integration(sensory_input, motor_output, integration_params):
    """
    Implements non-linear dynamics in sensory-motor integration.
    Reference: Evarts EV. Relation of pyramidal tract activity to force exerted during voluntary movement. Journal of Neurophysiology. 1968.
    """
    integrated_response = np.tanh(integration_params['gain'] * sensory_input + integration_params['bias'] * motor_output)
    return integrated_response

Explanation

  • Function Definition: Defines the nonlinear_integration function to implement non-linear sensory-motor integration:

    • Non-linear Activation: Applies a hyperbolic tangent function to the integrated sensory input and motor output.
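
A usage sketch comparing the linear and non-linear integrators; the gain and bias values are illustrative:

import numpy as np
from modules.sensory_motor import sensory_motor_integration, nonlinear_integration

params = {'gain': 0.8, 'bias': 0.2}   # illustrative integration parameters
sensory = np.random.rand(100)
motor = np.random.rand(100)

linear = sensory_motor_integration(sensory, motor, params)
nonlinear = nonlinear_integration(sensory, motor, params)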

Main Simulation Script

scripts/run_simulation.py

This script initializes and runs the neuromorphic simulation, utilizing various modules and configuration parameters.

import json
import numpy as np
import logging
import logging.config
import traceback
import cProfile
import pstats
from multiprocessing import Pool
from lava.magma.core.process.process import Process
from lava.magma.core.model.py.model import PyLoihiProcessModel
from lava.magma.core.run_conditions import RunSteps
from lava.magma.core.run_configs import Loihi1SimCfg
from lava.proc.dense.process import Dense
from lava.proc.io.source import RingBuffer
from lava.proc.io.sink import RingBufferSink
from dask import delayed, compute
from dask.distributed import Client
from modules.baseline import dynamic_baseline
from modules.dehaene_changeux_modulation import DehaeneChangeuxModulation
from modules.emotional_models import EmotionalModel
from modules.plasticity import update_synaptic_weights, deep_q_learning_update
from modules.topology import create_network_topology, dynamic_topology_switching
from modules.behavior_monitoring import monitor_emergent_behaviors, analyze_complex_behaviors
from modules.self_model import SelfModel
from modules.sensory_motor import sensory_motor_integration
from modules.adex_neuron import AdExNeuron
from modules.ionic_channels import IonicChannel

# Setup logging (logging_config.json is assumed to hold a dictConfig-style dictionary)
with open('config/logging_config.json') as f:
    logging.config.dictConfig(json.load(f))
logger = logging.getLogger(__name__)

# Load configuration
with open('config/simulation_config.json') as f:
    config = json.load(f)

Explanation

  • Imports: Imports various modules and functions needed for the simulation.

  • Logging Setup: Configures logging based on the provided logging configuration file.

  • Configuration Loading: Loads simulation configuration from the simulation_config.json file.

# Load neuron sensitivity data
neuron_sensitivity = np.load('data/neuron_sensitivity.npy')

# Load initial conditions
initial_conditions = np.load('data/initial_conditions.npy', allow_pickle=True).item()

# Load cognitive model weights
cognitive_model_weights = np.load('data/cognitive_model_weights.npy')

# Load neuromodulator levels
dopamine_levels = np.load('data/dopamine_levels.npy')
serotonin_levels = np.load('data/serotonin_levels.npy')
norepinephrine_levels = np.load('data/norepinephrine_levels.npy')
integration_levels = np.load('data/integration_levels.npy')
attention_signals = np.load('data/attention_signals.npy')

client = Client()

Explanation

  • Data Loading: Loads various data arrays needed for the simulation:

    • neuron_sensitivity: Sensitivity data for neurons.

    • initial_conditions: Initial conditions for the simulation.

    • cognitive_model_weights: Weights for the cognitive model.

    • neuromodulator levels: Levels of dopamine, serotonin, norepinephrine, integration, and attention signals.

  • Dask Client: Initializes a Dask client for parallel computation.

def initialize_simulation():
    try:
        neurons, input_current, output_sink, synaptic_weights = setup_simulation_components(config)
        return neurons, input_current, output_sink, synaptic_weights
    except Exception as e:
        logger.error(f"Initialization error: {str(e)}")
        raise

Explanation

  • Function Definition: Defines the initialize_simulation function to set up and initialize the simulation components.

  • Error Handling: Catches and logs initialization errors, raising the exception if it occurs.

def setup_simulation_components(config):
    # Separate function for setting up components
    try:
        # Instantiate and configure neurons
        neurons = Process(shape=(config["num_neurons"],))
        neurons.in_ports.input.connect(neurons.out_ports.output)

        # Create input current generator
        input_current = RingBuffer(data=np.random.rand(config["num_neurons"], config["time_steps"]).astype(np.float32))

        # Create a sink to store output spikes
        output_sink = RingBufferSink(shape=(config["num_neurons"],), buffer_len=config["time_steps"])

        # Connect the input current to the neuron input
        input_current.out_ports.output.connect(neurons.in_ports.input)

        # Connect the neuron output to the output sink
        neurons.out_ports.output.connect(output_sink.in_ports.input)

        # Create a realistic network topology
        synaptic_weights = create_network_topology(config["num_neurons"], topology_type=config["topology_type"], p_rewire=config["p_rewire"], k=config["k"])

        return neurons, input_current, output_sink, synaptic_weights
    except Exception as e:
        logger.error(f"Setup error: {str(e)}")
        raise

Explanation

  • Function Definition: Defines the setup_simulation_components function to instantiate and configure simulation components:

    • Neurons: Creates a Process representing neurons and connects input and output ports.

    • Input Current: Creates a RingBuffer to generate input current for the neurons.

    • Output Sink: Creates a RingBufferSink to store output spikes from the neurons.

    • Connections: Connects the input current generator to neuron inputs and neuron outputs to the output sink.

    • Network Topology: Creates a network topology using the specified configuration parameters.

  • Error Handling: Catches and logs setup errors, raising the exception if it occurs.

@delayed
def batch_update_synaptic_weights_dask(synaptic_weights, spikes_pre_batch, spikes_post_batch, eligibility_traces, learning_rate, tau_eligibility):
    for t in range(spikes_pre_batch.shape[1]):
        synaptic_weights, eligibility_traces = update_synaptic_weights(synaptic_weights, spikes_pre_batch[:, t], spikes_post_batch[:, t], eligibility_traces, learning_rate, tau_eligibility)
    return synaptic_weights, eligibility_traces

Explanation

  • Function Definition: Defines the batch_update_synaptic_weights_dask function for batch updating synaptic weights using Dask for parallel computation:

    • Batch Processing: Iterates through batches of spike data and updates synaptic weights and eligibility traces.

    • Delayed Execution: Decorates the function with @delayed for delayed execution with Dask.

def run_simulation(neurons, synaptic_weights, output_sink):
    run_config = Loihi1SimCfg(select_tag="floating_pt", select_sub_proc_model=True)
    run_condition = RunSteps(num_steps=config["time_steps"])

    self_model = SelfModel(config["num_neurons"])
    emotional_model = EmotionalModel({'happiness': 0, 'stress': 0, 'motivation': 0})
    modulator = DehaeneChangeuxModulation(config["num_neurons"], layer_count=3)

Explanation

  • Function Definition: Defines the run_simulation function to execute the neuromorphic simulation:

    • Run Configuration: Configures the simulation run settings using Loihi1 simulation configuration and run conditions.

    • Model Instances: Initializes instances of SelfModel, EmotionalModel, and DehaeneChangeuxModulation with the specified parameters.

    try:
        for t in range(config["time_steps"]):
            gradual_update_initial_conditions(t, alpha=0.1)
            synaptic_weights = dynamic_topology_switching(synaptic_weights, output_sink.data[:, t], target_degree=4, adjustment_rate=0.01)
            neurons.run(condition=run_condition, run_cfg=run_config)

            neuron_activity = output_sink.data[:, t]
            integrated_activity = modulator.modulate_activity(neuron_activity)
            
            spikes_pre = np.random.rand(config["num_neurons"], config["time_steps"]) > 0.9
            spikes_post = np.random.rand(config["num_neurons"], config["time_steps"]) > 0.9
            eligibility_traces = np.zeros(config["num_neurons"])
            
            (synaptic_weights, eligibility_traces), = compute(batch_update_synaptic_weights_dask(synaptic_weights, spikes_pre, spikes_post, eligibility_traces, config["learning_rate"], config["tau_eligibility"]))
            
            # Emotional state simulation
            emotional_model.simulate_complex_emotional_states(dopamine_levels, serotonin_levels, norepinephrine_levels, t)
            emotional_state = emotional_model.get_emotional_state()
            logger.info(f"Emotional state at step {t}: {emotional_state}")

            # Reflective processing
            self_model.update_self_model(neuron_activity, synaptic_weights)
            self_awareness, decision_making = self_model.complex_reflective_processing()
            logger.info(f"Self-awareness at step {t}: {self_awareness}, Decision making: {decision_making}")

            # Behavior monitoring
            high_activity_neurons = monitor_emergent_behaviors(neuron_activity)
            behavior_patterns = analyze_complex_behaviors(neuron_activity)

    except Exception as e:
        logger.error(f"Error during simulation: {str(e)}")
        logger.error(f"Neuron Model State: {neurons.__dict__}")
        logger.error(f"Neuron Potentials: {initial_conditions['neuron_potentials']}")
        logger.error(f"Recovery Variables: {initial_conditions['recovery_variables']}")
        logger.error(f"Synaptic Weights: {synaptic_weights}")
        logger.error(f"Dopamine Levels: {dynamic_baseline(t)}")
        logger.error(f"Serotonin Levels: {dynamic_baseline(t)}")
        logger.error(f"Norepinephrine Levels: {dynamic_baseline(t)}")
        logger.error(f"Eligibility Traces: {np.zeros(config['num_neurons'])}")
        logger.error(f"Network Topology: {create_network_topology(config['num_neurons'], topology_type=config['topology_type'])}")
        traceback.print_exc()

Explanation

  • Simulation Loop: Executes the simulation loop for the specified number of time steps:

    • Initial Conditions Update: Gradually updates initial conditions.

    • Topology Switching: Dynamically reconfigures the network topology.

    • Neuron Activity: Runs the neuron process and retrieves neuron activity.

    • Modulation and Updates: Modulates neuron activity, updates synaptic weights using batch processing, and computes eligibility traces.

    • Emotional State: Simulates and logs the current emotional state.

    • Reflective Processing: Updates the self-model and performs reflective processing.

    • Behavior Monitoring: Monitors and analyzes neuron activity for emergent behaviors and complex patterns.

  • Error Handling: Logs detailed information and traces errors if any occur during simulation.

# Profiling and optimization
profiler = cProfile.Profile()
profiler.enable()

# Initialize and run the simulation
neurons, input_current, output_sink, synaptic_weights = initialize_simulation()
run_simulation(neurons, synaptic_weights, output_sink)

profiler.disable()
stats = pstats.Stats(profiler).sort_stats('cumtime')
stats.print_stats()

Explanation

  • Profiling: Uses cProfile to profile the simulation for performance analysis.

  • Simulation Initialization: Initializes the simulation components.

  • Simulation Execution: Runs the simulation.

  • Profiling Results: Disables profiling and prints the profiling statistics.

Visualization Script

scripts/visualization.py

This script provides 2D and 3D visualization of the simulation results.

import numpy as np
import matplotlib.pyplot as plt
import matplotlib.animation as animation
from vispy import app, scene
from vispy.scene import visuals

# Load the output spikes
output_sink_data = np.load('data/output_sink_data.npy')

Explanation

  • Imports: Imports various libraries for 2D and 3D visualization.

  • Data Loading: Loads the output spike data from the simulation.

# 2D Visualization using matplotlib
fig, ax = plt.subplots()
ax.set_xlim(0, output_sink_data.shape[1])
ax.set_ylim(0, output_sink_data.shape[0])
line, = ax.plot([], [], 'r.')

def init():
    line.set_data([], [])
    return line,

def update(frame):
    # Indices of the neurons that spiked at this time step
    spike_times = np.where(output_sink_data[:, frame])[0]
    # Plot each spike as a point at (time step, neuron index);
    # setting only the y data would leave the x data stale
    line.set_data(np.full(spike_times.shape, frame), spike_times)
    return line,

ani = animation.FuncAnimation(fig, update, frames=output_sink_data.shape[1], init_func=init, blit=True, interval=20)
plt.show()

Explanation

  • 2D Visualization Setup: Sets up a 2D visualization using matplotlib:

    • Figure and Axes: Creates a figure and axes for plotting.

    • Initialization: Initializes the line plot.

    • Update Function: Updates the line plot with spike times for each frame.

    • Animation: Creates an animation to visualize the spikes over time.
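
To keep the animation rather than only display it, matplotlib can render it to a video file; the snippet below is optional and assumes an ffmpeg binary is available on the system (the filename is illustrative):

# Save the spike animation to disk (requires ffmpeg on the PATH)
ani.save('spike_raster.mp4', writer='ffmpeg', fps=50)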

# 3D Visualization using vispy
canvas = scene.SceneCanvas(keys='interactive', show=True)
view = canvas.central_widget.add_view()

scatter = visuals.Markers()
view.add(scatter)

view.camera = scene.cameras.TurntableCamera(fov=45)

# Generate random positions, one per recorded neuron
num_neurons = output_sink_data.shape[0]
pos = np.random.normal(size=(num_neurons, 3), scale=0.2)
scatter.set_data(pos, edge_color=None, face_color=(1, 1, 1, 0.5), size=5)

# Highlight the neurons spiking at the current frame
frame_index = 0

def update_vispy(event):
    global frame_index
    spike_times = np.where(output_sink_data[:, frame_index])[0]
    scatter.set_data(pos[spike_times], edge_color=None, face_color=(1, 0, 0, 0.5), size=10)
    frame_index = (frame_index + 1) % output_sink_data.shape[1]

# app.Timer creates and starts the timer; connecting to canvas.events.timer
# alone would neither create nor start one
timer = app.Timer(interval=0.1, connect=update_vispy, start=True)
canvas.app.run()

Explanation

  • 3D Visualization Setup: Sets up a 3D visualization using vispy:

    • Canvas and View: Creates a canvas and view for 3D rendering.

    • Scatter Plot: Creates a scatter plot for visualizing neuron positions.

    • Camera: Sets up a turntable camera for interactive viewing.

    • Neuron Positions: Generates random positions for neurons.

    • Update Function: Updates the scatter plot with spike times for each frame.

    • Timer: Drives the update function with a vispy app.Timer, advancing one frame per tick.

    • Run: Starts the canvas application.

Test Script

tests/test_simulation.py

This script contains unit tests for various modules in the neuromorphic simulation.

import unittest
import numpy as np
from modules.baseline import dynamic_baseline
from modules.dehaene_changeux_modulation import DehaeneChangeuxModulation
from modules.continuous_learning import ContinuousLearning
from modules.emotional_models import EmotionalModel
from modules.plasticity import update_synaptic_weights, deep_q_learning_update
from modules.topology import create_network_topology, dynamic_topology_switching
from modules.behavior_monitoring import monitor_emergent_behaviors, analyze_complex_behaviors
from modules.self_model import SelfModel
from modules.adex_neuron import AdExNeuron
from modules.ionic_channels import IonicChannel

class TestNeuromorphicSimulation(unittest.TestCase):
    
    def test_dynamic_baseline(self):
        t = np.arange(1000)
        baseline = dynamic_baseline(t, network_state=np.random.rand(100))
        self.assertEqual(len(baseline), 1000)
        self.assertTrue(np.all(baseline >= 0))

Explanation

  • Imports: Imports unittest for testing and various modules from the simulation for testing.

  • Test Class: Defines the TestNeuromorphicSimulation class to contain unit tests.

  • Dynamic Baseline Test: Tests the dynamic_baseline function:

    • Baseline Length: Asserts that the length of the baseline is correct.

    • Baseline Values: Asserts that all baseline values are non-negative.

    def test_dehaene_changeux_modulation(self):
        neuron_count = 100
        layer_count = 3
        neuron_activity = np.random.rand(neuron_count)
        modulator = DehaeneChangeuxModulation(neuron_count, layer_count)
        integrated_activity = modulator.modulate_activity(neuron_activity)
        self.assertEqual(len(integrated_activity), neuron_count)

Explanation

  • Cognitive Modulation Test: Tests the DehaeneChangeuxModulation class:

    • Integrated Activity Length: Asserts that the length of the integrated activity matches the neuron count.

    def test_continuous_learning(self):
        continuous_learning = ContinuousLearning()

        experience = np.random.rand(100)
        continuous_learning.update_memory(experience)
        continuous_learning.consolidate_memory()
        self.assertTrue(len(continuous_learning.memory) > 0)

Explanation

  • Continuous Learning Test: Tests the ContinuousLearning class:

    • Memory Update: Asserts that the memory is updated with experiences.

    def test_emotional_models(self):
        initial_state = {'happiness': 0, 'stress': 0, 'motivation': 0}
        emotional_model = EmotionalModel(initial_state)
        dopamine_levels = np.random.rand(1000)
        serotonin_levels = np.random.rand(1000)
        norepinephrine_levels = np.random.rand(1000)
        t = 0
        emotional_model.simulate_complex_emotional_states(dopamine_levels, serotonin_levels, norepinephrine_levels, t)
        emotional_state = emotional_model.get_emotional_state()
        self.assertTrue('happiness' in emotional_state)
        self.assertTrue('stress' in emotional_state)
        self.assertTrue('motivation' in emotional_state)

Explanation

  • Emotional Models Test: Tests the EmotionalModel class:

    • Emotional State Simulation: Asserts that the emotional state contains the expected keys after simulation.

    def test_update_synaptic_weights(self):
        weights = np.random.rand(100, 100)
        spikes_pre = np.random.randint(2, size=100)
        spikes_post = np.random.randint(2, size=100)
        eligibility_traces = np.zeros(100)
        weights, eligibility_traces = update_synaptic_weights(weights, spikes_pre, spikes_post, eligibility_traces, 0.01, 20.0)
        self.assertEqual(weights.shape, (100, 100))
        self.assertEqual(eligibility_traces.shape, (100,))

Explanation

  • Synaptic Weights Update Test: Tests the update_synaptic_weights function:

    • Weights Shape: Asserts that the shape of the updated weights matrix is correct.

    • Eligibility Traces Shape: Asserts that the shape of the eligibility traces array is correct.

    def test_deep_q_learning_update(self):
        weights = np.random.rand(100, 100)
        rewards = np.random.rand(1000)
        eligibility_traces = np.zeros(100)
        state = np.random.randint(0, 100)
        action = np.random.randint(0, 100)
        updated_weights = deep_q_learning_update(weights, rewards, eligibility_traces, 0.01, 0.99, state, action)
        self.assertEqual(updated_weights.shape, (100, 100))

Explanation

  • Deep Q-Learning Update Test: Tests the deep_q_learning_update function:

    • Weights Shape: Asserts that the shape of the updated weights matrix is correct.

    def test_create_network_topology(self):
        synaptic_weights = create_network_topology(100)
        self.assertEqual(synaptic_weights.shape, (100, 100))
        self.assertTrue(np.all(synaptic_weights >= 0))
        self.assertTrue(np.all(synaptic_weights <= 1))

Explanation

  • Network Topology Creation Test: Tests the create_network_topology function:

    • Weights Shape: Asserts that the shape of the synaptic weights matrix is correct.

    • Weights Values: Asserts that all weights are within the range [0, 1].

    def test_dynamic_topology_switching(self):
        synaptic_weights = create_network_topology(100)
        spikes = np.random.randint(2, size=100)
        synaptic_weights = dynamic_topology_switching(synaptic_weights, spikes)
        self.assertEqual(synaptic_weights.shape, (100, 100))
        self.assertTrue(np.all(synaptic_weights >= 0))
        self.assertTrue(np.all(synaptic_weights <= 1))

Explanation

  • Dynamic Topology Switching Test: Tests the dynamic_topology_switching function:

    • Weights Shape: Asserts that the shape of the synaptic weights matrix is correct.

    • Weights Values: Asserts that all weights are within the range [0, 1].

    def test_hierarchical_cognitive_model(self):
        neuron_activity = np.random.rand(100)
        modulator = DehaeneChangeuxModulation(100, 3)
        integrated_activity = modulator.modulate_activity(neuron_activity)
        self.assertEqual(len(integrated_activity), 100)

Explanation

  • Hierarchical Cognitive Model Test: Tests the DehaeneChangeuxModulation class:

    • Integrated Activity Length: Asserts that the length of the integrated activity matches the neuron count.

    def test_monitor_emergent_behaviors(self):
        neuron_activity = np.random.rand(100)
        high_activity_neurons = monitor_emergent_behaviors(neuron_activity)
        self.assertTrue(len(high_activity_neurons) > 0)

Explanation

  • Emergent Behaviors Monitoring Test: Tests the monitor_emergent_behaviors function:

    • High Activity Neurons: Asserts that there are neurons with high activity levels.

    def test_analyze_complex_behaviors(self):
        neuron_activity = np.random.rand(1000)
        patterns = analyze_complex_behaviors(neuron_activity)
        self.assertEqual(len(patterns), 990)  # assuming pattern_length=10

Explanation

  • Complex Behaviors Analysis Test: Tests the analyze_complex_behaviors function:

    • Patterns Length: Asserts that the number of detected patterns matches the expected length.

    def test_adex_neuron(self):
        neuron = AdExNeuron(C=200, gL=10, EL=-70, VT=-50, DeltaT=2, a=2, tau_w=100, b=50, Vr=-58, Vpeak=20, dt=0.1)
        I = np.random.rand(1000)
        for current in I:
            V, w = neuron.step(current)
        self.assertTrue(V <= 20)
        self.assertTrue(w >= 0)

Explanation

  • AdEx Neuron Test: Tests the AdExNeuron class:

    • Membrane Potential: Asserts that the membrane potential does not exceed the peak potential.

    • Adaptation Variable: Asserts that the adaptation variable is non-negative.

    def test_ionic_channel(self):
        dynamics_params = {
            'm': {
                'alpha': lambda V: 0.1 * (25 - V) / (np.exp((25 - V) / 10) - 1),
                'beta': lambda V: 4 * np.exp(-V / 18)
            },
            'h': {
                'alpha': lambda V: 0.07 * np.exp(-V / 20),
                'beta': lambda V: 1 / (np.exp((30 - V) / 10) + 1)
            }
        }
        channel = IonicChannel(g_max=120, E_rev=50, dynamics_params=dynamics_params)
        for t in range(1000):
            channel.update_state(np.random.uniform(-100, 50), 0.01)
            current = channel.compute_current(np.random.uniform(-100, 50))
        self.assertTrue(current <= 120)

Explanation

  • Ionic Channel Test: Tests the IonicChannel class:

    • Current Value: Asserts that the computed current does not exceed the maximum conductance value. This is a loose sanity bound: the test voltages never exceed the reversal potential (50), so the driving force, and hence the current, stays non-positive.

    def test_self_model(self):
        num_neurons = 100
        self_model = SelfModel(num_neurons)
        neuron_activity = np.random.rand(num_neurons)
        synaptic_weights = np.random.rand(num_neurons, num_neurons)
        self_model.update_self_model(neuron_activity, synaptic_weights)
        self_awareness, decision_making = self_model.reflective_processing()
        self.assertTrue(self_awareness >= 0)
        self.assertEqual(len(decision_making), num_neurons)

if __name__ == '__main__':
    unittest.main()

Explanation

  • Self Model Test: Tests the SelfModel class:

    • Self-awareness: Asserts that self-awareness is non-negative.

    • Decision Making: Asserts that the length of the decision-making array matches the neuron count.

The Enhanced Neuromorphic Simulation Code is a comprehensive framework designed to simulate complex neural networks and their behaviors using state-of-the-art models and algorithms. This document has provided an in-depth overview of the directory structure, configuration files, key modules, and scripts necessary to run and analyze the neuromorphic simulation.

Detailed Functionality

  1. Directory Structure:

    • Organized into logical sections including config, data, docs, logs, modules, scripts, and tests.

    • Ensures modularity and ease of navigation, facilitating efficient management and extension of the codebase.

  2. Configuration Files:

    • simulation_config.json: Defines parameters such as the number of neurons, time steps, and specific parameters for various models. This file is crucial for customizing the simulation to specific research needs.

    • logging_config.json: Configures the logging setup, detailing log levels and file locations to ensure comprehensive monitoring and debugging.

  3. Data Generation:

    • The generate_data.py script initializes essential data such as neuron sensitivity, initial conditions, cognitive model weights, and neuromodulator levels. This script ensures that the simulation starts with empirically grounded and randomly generated initial states.

  4. Modules:

    • Baseline Adjustment: baseline.py implements dynamic baseline adjustment for neurons using PID control, based on the model by Turrigiano et al. (1998), which addresses homeostatic plasticity in cortical neurons.

    • Cognitive Modulation: dehaene_changeux_modulation.py models cognitive processes using the Dehaene-Changeux framework, simulating hierarchical processing in neural networks.

    • Continuous Learning: continuous_learning.py facilitates adaptive learning through experience storage and processing, inspired by principles outlined by Izhikevich (2007).

    • Emotional Models: emotional_models.py simulates emotional states influenced by neuromodulator levels, based on the work of Dayan and Huys (2009).

    • Synaptic Plasticity: plasticity.py updates synaptic weights using spike-timing-dependent plasticity (STDP) and deep Q-learning, incorporating methods from Martin et al. (2000) and Mnih et al. (2015); a minimal illustrative sketch of the eligibility-trace update appears after this list.

    • Network Topology: topology.py creates and dynamically reconfigures neural network topologies, implementing models such as the small-world network described by Sporns et al. (2004).

    • Behavior Monitoring: behavior_monitoring.py tracks and analyzes emergent and complex neural behaviors, providing insights into neural activity patterns.

    • Self Model: self_model.py enables reflective processing and decision-making, drawing on concepts from Metzinger (2009).

    • Neuron Models: adex_neuron.py and ionic_channels.py implement detailed neuron and ionic channel dynamics based on models by Brette and Gerstner (2005) and Hille (2001).

    • Sensory-Motor Integration: sensory_motor.py models interactions between sensory inputs and motor outputs, incorporating nonlinear dynamics as described by Evarts (1968).

  5. Simulation Execution:

    • run_simulation.py orchestrates the initialization, execution, and management of the neuromorphic simulation, utilizing components such as neuron processes, input generators, output sinks, and dynamic topology adjustments. The script ensures the seamless integration of all modules and manages the flow of the simulation.

  6. Visualization:

    • visualization.py provides tools for 2D and 3D visualization of simulation results using matplotlib and vispy. These visualizations enhance the understanding of neural activity patterns and emergent behaviors, offering both static and dynamic views of the simulation.

  7. Testing:

    • test_simulation.py includes comprehensive unit tests for various modules, ensuring the reliability and correctness of the simulation components. Regular testing is essential for maintaining the integrity of the simulation as it evolves.
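
For orientation, here is a minimal sketch of the eligibility-trace STDP update referenced in the Synaptic Plasticity item above. The project's actual implementation is the update_synaptic_weights function in modules/plasticity.py; the function name below, the per-neuron trace shape, and the [0, 1] weight clipping are illustrative assumptions chosen to be consistent with the unit tests.

import numpy as np

def stdp_eligibility_sketch(weights, spikes_pre, spikes_post,
                            eligibility_traces, learning_rate, tau_eligibility):
    # Illustrative only; not the project's plasticity.py implementation.
    # Hebbian coincidence: 1 where a neuron spiked both pre- and post-synaptically
    coincidence = spikes_pre * spikes_post
    # Traces decay exponentially (time constant tau_eligibility) and
    # accumulate the new coincidences
    eligibility_traces = eligibility_traces * np.exp(-1.0 / tau_eligibility) + coincidence
    # Nudge each neuron's outgoing weights along its trace; keep weights in [0, 1]
    weights = np.clip(weights + learning_rate * eligibility_traces[:, None], 0.0, 1.0)
    return weights, eligibility_traces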

Running the Simulation: Tips and Best Practices

  1. Preparation:

    • Ensure all dependencies are installed, including numpy, matplotlib, vispy, and dask (a consolidated command sequence is sketched after this list).

    • Review and modify configuration files (simulation_config.json and logging_config.json) as per your simulation requirements. Ensure that parameters are set according to the specific goals and constraints of your study.

  2. Data Generation:

    • Run scripts/generate_data.py to create and save necessary data files in the data directory. This step ensures that the simulation starts with empirically grounded initial states.

    • Verify the generated files to ensure they contain valid data, checking for any inconsistencies or anomalies.

  3. Simulation Execution:

    • Initialize and run the simulation using scripts/run_simulation.py. This script manages the entire simulation process, from initializing components to running the neural network and collecting results.

    • Monitor the log file (logs/simulation.log) for real-time updates and debugging information. This file provides valuable insights into the simulation's progress and any issues that arise.

    • Profile the simulation using built-in profiling tools to optimize performance. This step helps identify bottlenecks and improve the efficiency of the simulation.

  4. Visualization:

    • Use scripts/visualization.py to visualize the results in both 2D and 3D formats. These visualizations provide a deeper understanding of the neural activity and behavior patterns within the simulation.

    • Analyze the visualizations to gain insights into neural dynamics, emergent behaviors, and the impact of various parameters. Use these insights to refine the simulation and guide further experiments.

  5. Testing:

    • Regularly run tests/test_simulation.py to validate changes and ensure the integrity of the simulation. This step is crucial for maintaining the reliability of the simulation as new features are added or existing ones are modified.

    • Add new tests as you develop additional features or modify existing ones. Comprehensive testing helps catch issues early and ensures that the simulation remains robust and accurate.
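
As a quick reference, a typical end-to-end session looks like this, assuming the project root is the working directory and is on PYTHONPATH (exact commands may differ for your environment):

pip install numpy matplotlib vispy dask    # install dependencies
python scripts/generate_data.py            # create the .npy inputs in data/
python scripts/run_simulation.py           # run the simulation; watch logs/simulation.log
python scripts/visualization.py            # inspect the results in 2D and 3D
python -m unittest discover tests          # run the unit test suite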

License held by Matthew Drabek from Digital Trans4orMation Team ®

By Matthew Drabek & Digital Trans4orMation Team

© 2024
