neural_noise: Random noise added to the neural activity.
continuous_learning: An instance of the ContinuousLearning class for updating model weights.
Explanation
Parameter Setter: Updates the cognitive model weights, integration levels, and attention signals if new values are provided.
Explanation
Normalization Function: Normalizes the input array to have zero mean and unit standard deviation.
Explanation
Multi-layer Integration: Integrates neural activity across multiple layers:
Activity Update: Multiplies the activity by the cognitive model weights.
Normalization and Scaling: Normalizes the activity and scales it by integration levels.
Non-linear Activation: Applies a hyperbolic tangent function for non-linear activation.
Attention Modulation: Modulates the activity by attention signals.
Layer Activities: Stores the activity of each layer and returns the final layer's activity.
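As a concrete illustration, here is a minimal sketch of this integration step; the function names, the number of layers, and the exact signatures are assumptions rather than the module's actual API:

```python
import numpy as np

def normalize(x):
    # Zero mean, unit standard deviation (guarding against a zero std).
    std = np.std(x)
    return (x - np.mean(x)) / (std if std > 0 else 1.0)

def integrate_layers(activity, weights, integration_levels, attention_signals, num_layers=3):
    """Hypothetical multi-layer integration: weight, normalize, scale, squash, attend."""
    layer_activities = []
    for _ in range(num_layers):
        activity = weights @ activity                         # Activity update
        activity = normalize(activity) * integration_levels   # Normalization and scaling
        activity = np.tanh(activity)                          # Non-linear activation
        activity = activity * attention_signals               # Attention modulation
        layer_activities.append(activity)
    return layer_activities[-1]                               # Final layer's activity
```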
Explanation
Empirical Feedback: Calculates feedback based on the difference between integrated and current neural activity:
Feedback Strength: Computes the base and dynamic feedback strength.
Feedback Calculation: Applies a hyperbolic tangent function to the difference between integrated and neuron activity to determine feedback.
Explanation
Neural Noise Addition: Adds predefined neural noise to the integrated activity.
Explanation
Weight Update: Updates the cognitive model weights using continuous learning based on integrated activity and clips the weights to stay within the range [-1, 1].
Explanation
Activity Modulation: Modulates the neural activity:
Normalization: Normalizes the neuron activity.
Integration: Integrates the normalized activity across multiple layers.
Feedback: Adds hierarchical feedback to the integrated activity.
Noise Addition: Adds neural noise to the integrated activity.
Weight Update: Updates the cognitive model weights based on the integrated activity.
Return: Returns the modulated integrated activity.
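A hedged sketch of how these steps might be composed inside a modulate_activity method; the helper names are assumptions based on the descriptions above:

```python
def modulate_activity(self, neuron_activity):
    """Hypothetical end-to-end modulation pipeline following the steps above."""
    normalized = self.normalize(neuron_activity)                        # Normalization
    integrated = self.integrate_layers(normalized)                      # Multi-layer integration
    integrated += self.empirical_feedback(integrated, neuron_activity)  # Hierarchical feedback
    integrated += self.neural_noise                                     # Noise addition
    self.update_weights(integrated)                                     # Weight update via continuous learning
    return integrated                                                   # Modulated integrated activity
```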
Explanation
Example Usage: Demonstrates how to use the DehaeneChangeuxModulation class with dummy data:
Initialization: Creates an instance of the DehaeneChangeuxModulation class.
Activity Modulation: Modulates the neural activity using the modulate_activity method.
Print: Outputs the integrated activity.
Continuous Learning Module
modules/continuous_learning.py
This module implements continuous learning mechanisms for neural networks.
Explanation
Imports: Imports numpy for numerical operations.
Class Definition: Defines the ContinuousLearning class to implement continuous learning mechanisms for neural networks.
Explanation
Constructor: Initializes the instance variables:
memory_capacity: The maximum number of experiences to store in memory.
memory: A list to store experiences.
learning_rate: The rate at which the model learns from experience.
Explanation
Memory Update: Adds a new experience to the memory and ensures the memory does not exceed its capacity by removing the oldest experience if necessary.
Explanation
Memory Consolidation: Iterates through stored experiences and applies learning to each one.
Explanation
Advanced Learning Algorithm: Implements an algorithm to update synaptic weights based on experience, using the outer product of the experience vector scaled by the learning rate.
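Pulling the pieces above together, a minimal sketch of the ContinuousLearning class might look like this; the default values and exact method names are assumptions:

```python
import numpy as np

class ContinuousLearning:
    def __init__(self, memory_capacity=100, learning_rate=0.01):
        self.memory_capacity = memory_capacity
        self.memory = []
        self.learning_rate = learning_rate

    def update_memory(self, experience):
        # Keep only the most recent `memory_capacity` experiences.
        self.memory.append(experience)
        if len(self.memory) > self.memory_capacity:
            self.memory.pop(0)

    def advanced_learning_algorithm(self, synaptic_weights, experience):
        # Outer product of the experience vector, scaled by the learning rate.
        return synaptic_weights + self.learning_rate * np.outer(experience, experience)

    def learn_from_experience(self, synaptic_weights, experience):
        # Thin wrapper that applies the advanced learning algorithm.
        return self.advanced_learning_algorithm(synaptic_weights, experience)

    def consolidate_memory(self, synaptic_weights):
        # Replay stored experiences and apply learning to each one.
        for experience in self.memory:
            synaptic_weights = self.learn_from_experience(synaptic_weights, experience)
        return synaptic_weights
```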
Explanation
Learn from Experience: Updates synaptic weights using the advanced learning algorithm.
Explanation
Example Usage: Demonstrates how to use the ContinuousLearning class:
Initialization: Creates an instance of the ContinuousLearning class.
Memory Update: Adds dummy experiences to the memory.
Memory Consolidation: Consolidates the experiences stored in memory.
Emotional Models Module
modules/emotional_models.py
This module implements an emotional model for simulating complex emotional states.
Explanation
Imports: Imports numpy for numerical operations.
Class Definition: Defines the EmotionalModel class to simulate complex emotional states.
Constructor: Initializes the emotional state and history:
initial_state: The initial emotional state.
state: Stores the current emotional state.
history: Stores the history of emotional states.
Explanation
State Update: Dynamically updates the emotional state based on external factors and internal feedback:
happiness: Increases with positive events and decreases with stress.
stress: Increases with negative events and decreases with calmness.
motivation: Increases with goals achieved and decreases with frustration.
Explanation
Complex Emotional State Simulation: Simulates emotional states based on neuromodulator levels:
happiness: Based on mean dopamine levels.
calmness: Based on mean serotonin levels.
alertness: Based on mean norepinephrine levels.
stress: Inversely related to the sum of mean serotonin and dopamine levels.
motivation: Based on a combination of mean dopamine and norepinephrine levels.
frustration: Inversely related to the product of mean dopamine and norepinephrine levels.
satisfaction: Difference between mean and standard deviation of dopamine levels.
History Update: Appends the current state to the history.
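A sketch of how the described mapping could be written; the exact formulas (especially the inverse relationships) are assumptions guided by the bullet points above, with small constants added to avoid division by zero:

```python
import numpy as np

def simulate_complex_emotional_states(self, dopamine, serotonin, norepinephrine):
    """Hypothetical mapping from neuromodulator levels to an emotional state."""
    state = {
        "happiness": np.mean(dopamine),
        "calmness": np.mean(serotonin),
        "alertness": np.mean(norepinephrine),
        # Inversely related to the sum of mean serotonin and dopamine levels.
        "stress": 1.0 / (np.mean(serotonin) + np.mean(dopamine) + 1e-6),
        "motivation": np.mean(dopamine) + np.mean(norepinephrine),
        # Inversely related to the product of mean dopamine and norepinephrine levels.
        "frustration": 1.0 / (np.mean(dopamine) * np.mean(norepinephrine) + 1e-6),
        "satisfaction": np.mean(dopamine) - np.std(dopamine),
    }
    self.state = state
    self.history.append(state)   # History update
    return state
```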
Explanation
Get Emotional State: Returns the current emotional state.
Explanation
Get Emotional History: Returns the history of emotional states.
AdEx Neuron Model Module
modules/adex_neuron.py
This module implements the Adaptive Exponential Integrate-and-Fire (AdEx) neuron model.
Explanation
Imports: Imports numpy for numerical operations.
Class Definition: Defines the AdExNeuron class to implement the Adaptive Exponential Integrate-and-Fire neuron model.
Explanation
Constructor: Initializes the instance variables:
C: Capacitance.
gL: Leak conductance.
EL: Resting potential.
VT: Threshold potential.
DeltaT: Sharpness of the exponential approach to threshold.
a: Subthreshold adaptation.
tau_w: Adaptation time constant.
b: Spike-triggered adaptation.
Vr: Reset potential.
Vpeak: Peak potential.
dt: Time step.
V: Initial membrane potential.
w: Initial adaptation variable.
Explanation
Parameter Validation: Ensures that the parameters of the AdEx neuron model are within valid ranges, raising ValueError if any parameter is out of range.
Explanation
Simulation Step: Updates the membrane potential and adaptation variable for a single time step:
Validation: Validates the parameters before proceeding.
Membrane Potential Update: Calculates the change in membrane potential (dV) using the AdEx model equation.
Adaptation Variable Update: Calculates the change in the adaptation variable (dw).
Update Variables: Updates the membrane potential and adaptation variable using Euler integration.
Spike Handling: Resets the membrane potential to Vr and increments the adaptation variable by b if the membrane potential exceeds Vpeak.
Return: Returns the updated membrane potential and adaptation variable.
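A minimal sketch of this step, assuming the attribute names from the constructor above and an external input current argument I_ext (the argument name is an assumption):

```python
import numpy as np

def step(self, I_ext):
    """One Euler step of the AdEx model (sketch; names follow the constructor above)."""
    # Membrane potential: C dV/dt = -gL (V - EL) + gL DeltaT exp((V - VT)/DeltaT) - w + I
    dV = (-self.gL * (self.V - self.EL)
          + self.gL * self.DeltaT * np.exp((self.V - self.VT) / self.DeltaT)
          - self.w + I_ext) / self.C
    # Adaptation variable: tau_w dw/dt = a (V - EL) - w
    dw = (self.a * (self.V - self.EL) - self.w) / self.tau_w

    # Euler integration of both variables.
    self.V += self.dt * dV
    self.w += self.dt * dw

    # Spike handling: reset V to Vr and increment w by b when V exceeds Vpeak.
    if self.V >= self.Vpeak:
        self.V = self.Vr
        self.w += self.b
    return self.V, self.w
```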
Ionic Channels Module
modules/ionic_channels.py
This module implements the dynamics of ionic channels in neurons.
Explanation
Imports: Imports numpy for numerical operations.
Class Definition: Defines the IonicChannel class to model the dynamics of ionic channels.
Constructor: Initializes the instance variables:
g_max: Maximum conductance of the ionic channel.
E_rev: Reversal potential of the ionic channel.
dynamics_params: Parameters governing the dynamics of the channel.
state: Initial state of the channel, initialized using the initialize_state method.
Explanation
Initialize State: Initializes the state variables of the ionic channel based on its dynamics parameters.
Explanation
State Update: Updates the state variables of the ionic channel based on the membrane potential (voltage) and the dynamics equations:
Alpha and Beta: Calculates the alpha and beta rate constants for each state variable.
State Variable Update: Updates each state variable using the alpha and beta values and the time step (dt).
Explanation
Complex Dynamics: Placeholder method for implementing complex interactions between different ionic channels based on their dynamics.
Explanation
Compute Current: Calculates the ionic current based on the channel state and membrane potential:
Conductance: Computes the conductance as the product of the state variables scaled by the maximum conductance.
Current: Calculates the ionic current as the product of the conductance and the difference between the membrane potential and the reversal potential.
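A compact sketch of such a channel, assuming dynamics_params maps each gating variable to alpha/beta rate functions (the data layout and initial gating values are assumptions):

```python
import numpy as np

class IonicChannel:
    def __init__(self, g_max, E_rev, dynamics_params):
        self.g_max = g_max
        self.E_rev = E_rev
        self.dynamics_params = dynamics_params           # e.g. {"m": {"alpha": f, "beta": g}, ...}
        # Placeholder initial values; initialize_state in the module may differ.
        self.state = {name: 0.0 for name in dynamics_params}

    def update_state(self, voltage, dt):
        # Classic rate-equation update: dx/dt = alpha(V) (1 - x) - beta(V) x
        for name, params in self.dynamics_params.items():
            alpha = params["alpha"](voltage)
            beta = params["beta"](voltage)
            x = self.state[name]
            self.state[name] = x + dt * (alpha * (1.0 - x) - beta * x)

    def compute_current(self, voltage):
        # Conductance is the product of the gating variables scaled by g_max.
        conductance = self.g_max * np.prod(list(self.state.values()))
        return conductance * (voltage - self.E_rev)
```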
Plasticity Module
modules/plasticity.py
This module implements synaptic weight updates using spike-timing-dependent plasticity (STDP) and deep Q-learning.
Explanation
Imports: Imports numpy for numerical operations and logging for logging.
Logger: Configures a logger for the module.
Function Definition: Defines the update_synaptic_weights function to update synaptic weights using spike-timing-dependent plasticity (STDP).
Explanation
STDP Parameters: Defines parameters for long-term potentiation (LTP) and long-term depression (LTD):
A_plus, A_minus: Amplitude constants for LTP and LTD.
tau_plus, tau_minus: Time constants for LTP and LTD.
Spike Time Difference: Computes the difference in spike times between pre- and post-synaptic neurons.
LTP and LTD Calculation: Computes the LTP and LTD contributions to the eligibility traces.
Eligibility Traces Update: Updates the eligibility traces based on LTP and LTD contributions.
Weight Update: Updates the synaptic weights using the eligibility traces and learning rate, and clips the weights to be within [0, 1).
Logging: Logs the updated weights and eligibility traces for debugging.
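A sketch of a pairwise STDP update along these lines; the argument list, the per-neuron spike-time representation, and the exact trace handling are assumptions:

```python
import numpy as np

def update_synaptic_weights(weights, eligibility_traces, t_pre, t_post, learning_rate=0.01,
                            A_plus=0.005, A_minus=0.005, tau_plus=20.0, tau_minus=20.0):
    """Sketch of an STDP update; t_pre/t_post are per-neuron last spike times."""
    # Spike time difference for every post/pre pair (post minus pre).
    delta_t = t_post[:, None] - t_pre[None, :]
    # LTP when the post spike follows the pre spike, LTD otherwise.
    ltp = A_plus * np.exp(-np.abs(delta_t) / tau_plus) * (delta_t > 0)
    ltd = -A_minus * np.exp(-np.abs(delta_t) / tau_minus) * (delta_t <= 0)
    eligibility_traces += ltp + ltd
    # Apply the traces and keep weights within [0, 1).
    weights = np.clip(weights + learning_rate * eligibility_traces, 0.0, 1.0 - 1e-9)
    return weights, eligibility_traces
```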
Explanation
Function Definition: Defines the deep_q_learning_update function to update synaptic weights using deep Q-learning.
Q-values Initialization: Initializes the Q-values array.
TD Error Calculation: Computes the temporal difference (TD) error for each time step.
Eligibility Traces Update: Updates eligibility traces using the TD error.
Weight Update: Updates the synaptic weights using the TD error and learning rate, and clips the weights to be within [0, 1).
Q-values Update: Updates the Q-values using the learning rate and TD error.
Logging: Logs the updated weights for debugging.
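A hedged sketch of a TD-error-driven update consistent with the steps above; the reward handling, discount factor, and the way the traces feed the weight change are all assumptions:

```python
import numpy as np

def deep_q_learning_update(weights, eligibility_traces, rewards, learning_rate=0.01, gamma=0.9):
    """Sketch of a TD-error-driven weight update over a sequence of rewards."""
    num_steps = len(rewards)
    q_values = np.zeros(num_steps)                       # Q-values initialization
    for t in range(num_steps - 1):
        # Temporal-difference (TD) error for this step.
        td_error = rewards[t] + gamma * q_values[t + 1] - q_values[t]
        eligibility_traces = eligibility_traces + td_error                # Traces update
        # Weight update driven by the TD error, clipped to [0, 1).
        weights = np.clip(weights + learning_rate * td_error * eligibility_traces, 0.0, 1.0 - 1e-9)
        q_values[t] += learning_rate * td_error                           # Q-value update
    return weights, q_values, eligibility_traces
```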
Topology Module
modules/topology.py
This module creates neural network topologies and implements dynamic network reconfiguration.
Explanation
Imports: Imports numpy for numerical operations.
Function Definition: Defines the create_network_topology function to create a neural network topology.
Small-world Topology: Implements the creation of a small-world topology:
Initial Connections: Connects each neuron to its k nearest neighbors.
Rewiring: Rewires connections with probability p_rewire.
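A sketch of a Watts-Strogatz-style construction matching this description; the randomized initial weights and the exact rewiring rule are assumptions:

```python
import numpy as np

def create_network_topology(num_neurons, topology_type="small_world", k=4, p_rewire=0.1):
    """Sketch of a small-world (Watts-Strogatz-style) synaptic weight matrix."""
    weights = np.zeros((num_neurons, num_neurons))
    if topology_type == "small_world":
        # Initial connections: each neuron connects to its k nearest neighbors (k/2 per side).
        for i in range(num_neurons):
            for offset in range(1, k // 2 + 1):
                j = (i + offset) % num_neurons
                weights[i, j] = weights[j, i] = np.random.rand()
        # Rewiring: with probability p_rewire, move an edge to a random target.
        for i in range(num_neurons):
            for j in range(num_neurons):
                if weights[i, j] > 0 and np.random.rand() < p_rewire:
                    new_j = np.random.randint(num_neurons)
                    if new_j != i:
                        weights[i, j] = 0.0
                        weights[i, new_j] = np.random.rand()
    return weights
```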
Explanation
Function Definition: Defines the dynamic_topology_switching function to dynamically reconfigure the network topology based on neuron activity.
Current Degrees: Calculates the current degree (number of connections) of each neuron.
Target Degree Adjustment: Adjusts connections to match the target degree:
Add Connections: Adds new connections for neurons with fewer than the target degree.
Remove Connections: Reduces connections for neurons with more than the target degree.
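A possible sketch of this reconfiguration; the target degree, the preference for active neurons when adding connections, and the pruning rule are assumptions:

```python
import numpy as np

def dynamic_topology_switching(weights, spikes, target_degree=4):
    """Sketch: nudge each neuron's degree toward target_degree, guided by recent activity."""
    num_neurons = weights.shape[0]
    degrees = np.count_nonzero(weights, axis=1)            # Current degrees
    for i in range(num_neurons):
        if degrees[i] < target_degree:
            # Add a connection to the most active, currently unconnected neuron.
            candidates = np.where(weights[i] == 0)[0]
            candidates = candidates[candidates != i]
            if candidates.size:
                j = candidates[np.argmax(spikes[candidates])]
                weights[i, j] = np.random.rand()
        elif degrees[i] > target_degree:
            # Remove the weakest existing connection.
            connected = np.where(weights[i] > 0)[0]
            weakest = connected[np.argmin(weights[i, connected])]
            weights[i, weakest] = 0.0
    return weights
```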
Behavior Monitoring Module
modules/behavior_monitoring.py
This module monitors emergent behaviors in neural activity and analyzes complex behavioral patterns.
Explanation
Imports: Imports numpy for numerical operations and logging for logging.
Logger: Configures a logger for the module.
Function Definition: Defines the monitor_emergent_behaviors function to identify neurons with activity levels above a specified threshold.
Explanation
Function Definition: Defines the analyze_complex_behaviors function to identify complex patterns in neural activity.
Pattern Analysis: Analyzes sequential patterns of specified length (pattern_length) to detect complex behaviors.
Explanation
Complex Pattern Detection: Defines criteria for identifying complex patterns based on the standard deviation and mean of the pattern.
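A minimal sketch of both functions; the default threshold and the specific complexity criterion are assumptions:

```python
import numpy as np

def monitor_emergent_behaviors(neuron_activity, threshold=0.8):
    """Return indices of neurons whose activity exceeds the threshold."""
    return np.where(neuron_activity > threshold)[0]

def analyze_complex_behaviors(neuron_activity, pattern_length=5):
    """Slide a window over the activity and flag 'complex' patterns.

    The complexity criterion (high variability relative to the mean) is an assumption.
    """
    complex_patterns = []
    for start in range(len(neuron_activity) - pattern_length + 1):
        pattern = neuron_activity[start:start + pattern_length]
        if np.std(pattern) > 0.5 * np.mean(np.abs(pattern)) + 1e-9:
            complex_patterns.append(pattern)
    return complex_patterns
```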
Self Model Module
modules/self_model.py
This module implements a self-model for reflective processing and decision-making.
Explanation
Imports: Imports numpy for numerical operations.
Class Definition: Defines the SelfModel class to implement a self-model for reflective processing and decision-making.
Constructor: Initializes the instance variables:
num_neurons: The number of neurons.
neuron_activity: Array to store current neuron activity.
synaptic_weights: Matrix to store current synaptic weights.
self_history: List to store the history of self-states.
Explanation
Self Model Update: Updates the self-model with current neuron activity and synaptic weights:
Activity and Weights Update: Updates the current neuron activity and synaptic weights.
History Update: Adds the current state to the history and removes the oldest state if the history exceeds 100 entries.
Explanation
Reflective Processing: Implements reflective processing based on current and past states:
Self-awareness: Computes self-awareness as the mean of current neuron activity.
Decision Making: Computes decision-making based on mean synaptic weights and self-awareness.
Historical Reflection: Adjusts decision-making based on changes in neuron activity from the previous state.
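A sketch of reflective processing along these lines; the history entry format and the scaling of the historical adjustment are assumptions:

```python
import numpy as np

def reflective_processing(self):
    """Sketch of reflective processing; the exact combination rule is an assumption."""
    # Self-awareness as the mean of current neuron activity.
    self_awareness = np.mean(self.neuron_activity)
    # Decision signal from mean synaptic weights scaled by self-awareness (one value per neuron).
    decision = np.mean(self.synaptic_weights, axis=1) * self_awareness
    # Historical reflection: adjust by the change in activity since the previous state.
    if len(self.self_history) > 1:
        previous_activity = self.self_history[-2]["neuron_activity"]  # assumed dict layout
        decision += 0.1 * (self.neuron_activity - previous_activity)
    return self_awareness, decision
```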
Sensory and Motor Integration Module
modules/sensory_motor.py
This module implements sensory and motor integration in neurons.
Explanation
Imports: Imports numpy for numerical operations.
Function Definition: Defines the sensory_motor_integration function to integrate sensory input and motor output based on specified parameters:
Integration Parameters: Uses gain and bias parameters for integration.
Integrated Response: Computes the integrated response for each sensory input and motor output pair.
Explanation
Function Definition: Defines the nonlinear_integration function to implement non-linear sensory-motor integration:
Non-linear Activation: Applies a hyperbolic tangent function to the integrated sensory input and motor output.
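A short sketch of both integration functions, assuming a simple gain/bias form for the linear case:

```python
import numpy as np

def sensory_motor_integration(sensory_input, motor_output, gain=1.0, bias=0.0):
    """Linear integration of paired sensory and motor signals (sketch)."""
    return gain * (sensory_input + motor_output) + bias

def nonlinear_integration(sensory_input, motor_output, gain=1.0, bias=0.0):
    """Non-linear variant: squash the integrated response with a hyperbolic tangent."""
    return np.tanh(sensory_motor_integration(sensory_input, motor_output, gain, bias))
```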
Main Simulation Script
scripts/run_simulation.py
This script initializes and runs the neuromorphic simulation, utilizing various modules and configuration parameters.
Explanation
Imports: Imports various modules and functions needed for the simulation.
Logging Setup: Configures logging based on the provided logging configuration file.
Configuration Loading: Loads the simulation configuration from the simulation_config.json file.
Explanation
Data Loading: Loads various data arrays needed for the simulation:
neuron_sensitivity: Sensitivity data for neurons.
initial_conditions: Initial conditions for the simulation.
cognitive_model_weights: Weights for the cognitive model.
neuromodulator and signal levels: Levels of dopamine, serotonin, and norepinephrine, along with integration levels and attention signals.
Dask Client: Initializes a Dask client for parallel computation.
Explanation
Function Definition: Defines the initialize_simulation function to set up and initialize the simulation components.
Error Handling: Catches and logs initialization errors, re-raising the exception if one occurs.
Explanation
Function Definition: Defines the setup_simulation_components function to instantiate and configure the simulation components:
Neurons: Creates a Process representing neurons and connects input and output ports.
Input Current: Creates a RingBuffer to generate input current for the neurons.
Output Sink: Creates a RingBufferSink to store output spikes from the neurons.
Connections: Connects the input current generator to the neuron inputs and the neuron outputs to the output sink.
Network Topology: Creates a network topology using the specified configuration parameters.
Error Handling: Catches and logs setup errors, re-raising the exception if one occurs.
Explanation
Function Definition: Defines the batch_update_synaptic_weights_dask function for batch updating synaptic weights using Dask for parallel computation:
Batch Processing: Iterates through batches of spike data and updates synaptic weights and eligibility traces.
Delayed Execution: Decorates the function with @delayed for delayed execution with Dask.
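A sketch of how such a Dask-based batch update could be structured; the Hebbian-style placeholder inside each batch and the averaging reduction are assumptions standing in for the real STDP call:

```python
import numpy as np
from dask import delayed, compute

@delayed
def update_batch(weights, spike_batch, learning_rate=0.01):
    # Hebbian-style placeholder for the per-batch synaptic weight update.
    co_activity = np.outer(spike_batch.mean(axis=0), spike_batch.mean(axis=0))
    return np.clip(weights + learning_rate * co_activity, 0.0, 1.0)

def batch_update_synaptic_weights_dask(weights, spikes, batch_size=100):
    """Build one delayed task per batch of spike data, then compute them in parallel."""
    tasks = [update_batch(weights, spikes[i:i + batch_size])
             for i in range(0, len(spikes), batch_size)]
    results = compute(*tasks)           # Run the delayed tasks in parallel
    return np.mean(results, axis=0)     # Reduction strategy (averaging) is an assumption
```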
Explanation
Function Definition: Defines the run_simulation function to execute the neuromorphic simulation:
Run Configuration: Configures the simulation run settings using the Loihi1 simulation configuration and run conditions.
Model Instances: Initializes instances of SelfModel, EmotionalModel, and DehaeneChangeuxModulation with the specified parameters.
Explanation
Simulation Loop: Executes the simulation loop for the specified number of time steps:
Initial Conditions Update: Gradually updates initial conditions.
Topology Switching: Dynamically reconfigures the network topology.
Neuron Activity: Runs the neuron process and retrieves neuron activity.
Modulation and Updates: Modulates neuron activity, updates synaptic weights using batch processing, and computes eligibility traces.
Emotional State: Simulates and logs the current emotional state.
Reflective Processing: Updates the self-model and performs reflective processing.
Behavior Monitoring: Monitors and analyzes neuron activity for emergent behaviors and complex patterns.
Error Handling: Logs detailed information and traces errors if any occur during simulation.
Explanation
Profiling: Uses cProfile to profile the simulation for performance analysis.
Simulation Initialization: Initializes the simulation components.
Simulation Execution: Runs the simulation.
Profiling Results: Disables profiling and prints the profiling statistics.
Visualization Script
scripts/visualization.py
This script provides 2D and 3D visualization of the simulation results.
Explanation
Imports: Imports various libraries for 2D and 3D visualization.
Data Loading: Loads the output spike data from the simulation.
Explanation
2D Visualization Setup: Sets up a 2D visualization using matplotlib:
Figure and Axes: Creates a figure and axes for plotting.
Initialization: Initializes the line plot.
Update Function: Updates the line plot with spike times for each frame.
Animation: Creates an animation to visualize the spikes over time.
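A minimal sketch of such an animation with matplotlib's FuncAnimation; the file path and the (neurons × time steps) layout of the spike array are assumptions:

```python
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.animation import FuncAnimation

spikes = np.load("data/output_spikes.npy")      # assumed shape: (num_neurons, num_steps)
num_neurons, num_steps = spikes.shape

fig, ax = plt.subplots()
ax.set_xlim(0, num_steps)
ax.set_ylim(-1, num_neurons)
line, = ax.plot([], [], ".", markersize=2)

def init():
    line.set_data([], [])
    return line,

def update(frame):
    # Show all spikes that occurred up to the current frame.
    times, neurons = np.nonzero(spikes[:, :frame].T)
    line.set_data(times, neurons)
    return line,

anim = FuncAnimation(fig, update, frames=num_steps, init_func=init, blit=True, interval=30)
plt.show()
```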
Explanation
3D Visualization Setup: Sets up a 3D visualization using vispy:
Canvas and View: Creates a canvas and view for 3D rendering.
Scatter Plot: Creates a scatter plot for visualizing neuron positions.
Camera: Sets up a turntable camera for interactive viewing.
Neuron Positions: Generates random positions for neurons.
Update Function: Updates the scatter plot with spike times for each frame.
Timer: Connects the update function to a timer to update the visualization periodically.
Run: Starts the canvas application.
Test Script
tests/test_simulation.py
This script contains unit tests for various modules in the neuromorphic simulation.
Explanation
Imports: Imports unittest along with the simulation modules under test.
Test Class: Defines the TestNeuromorphicSimulation class to contain the unit tests.
Dynamic Baseline Test: Tests the dynamic_baseline function:
Baseline Length: Asserts that the length of the baseline is correct.
Baseline Values: Asserts that all baseline values are non-negative.
Explanation
Cognitive Modulation Test: Tests the DehaeneChangeuxModulation class:
Integrated Activity Length: Asserts that the length of the integrated activity matches the neuron count.
Explanation
Continuous Learning Test: Tests the ContinuousLearning class:
Memory Update: Asserts that the memory is updated with experiences.
Explanation
Emotional Models Test: Tests the EmotionalModel class:
Emotional State Simulation: Asserts that the emotional state contains the expected keys after simulation.
Explanation
Synaptic Weights Update Test: Tests the update_synaptic_weights function:
Weights Shape: Asserts that the shape of the updated weights matrix is correct.
Eligibility Traces Shape: Asserts that the shape of the eligibility traces array is correct.
Explanation
Deep Q-Learning Update Test: Tests the deep_q_learning_update function:
Weights Shape: Asserts that the shape of the updated weights matrix is correct.
Explanation
Network Topology Creation Test: Tests the create_network_topology function:
Weights Shape: Asserts that the shape of the synaptic weights matrix is correct.
Weights Values: Asserts that all weights are within the range [0, 1].
Explanation
Dynamic Topology Switching Test: Tests the dynamic_topology_switching function:
Weights Shape: Asserts that the shape of the synaptic weights matrix is correct.
Weights Values: Asserts that all weights are within the range [0, 1].
Explanation
Hierarchical Cognitive Model Test: Tests the DehaeneChangeuxModulation class:
Integrated Activity Length: Asserts that the length of the integrated activity matches the neuron count.
Explanation
Emergent Behaviors Monitoring Test: Tests the monitor_emergent_behaviors function:
High Activity Neurons: Asserts that there are neurons with high activity levels.
Explanation
Complex Behaviors Analysis Test: Tests the analyze_complex_behaviors function:
Patterns Length: Asserts that the number of detected patterns matches the expected length.
Explanation
AdEx Neuron Test: Tests the AdExNeuron class:
Membrane Potential: Asserts that the membrane potential does not exceed the peak potential.
Adaptation Variable: Asserts that the adaptation variable is non-negative.
Explanation
Ionic Channel Test: Tests the IonicChannel class:
Current Value: Asserts that the computed current does not exceed the maximum conductance.
Explanation
Self Model Test: Tests the SelfModel class:
Self-awareness: Asserts that self-awareness is non-negative.
Decision Making: Asserts that the length of the decision-making array matches the neuron count.
The Enhanced Neuromorphic Simulation Code is a comprehensive framework designed to simulate complex neural networks and their behaviors using state-of-the-art models and algorithms. This document has provided an in-depth overview of the directory structure, configuration files, key modules, and scripts necessary to run and analyze the neuromorphic simulation.
Detailed Functionality
Directory Structure:
Organized into logical sections including config, data, docs, logs, modules, scripts, and tests.
Ensures modularity and ease of navigation, facilitating efficient management and extension of the codebase.
Configuration Files:
simulation_config.json: Defines parameters such as the number of neurons, time steps, and specific parameters for various models. This file is crucial for customizing the simulation to specific research needs.
logging_config.json: Configures the logging setup, detailing log levels and file locations to ensure comprehensive monitoring and debugging.
Data Generation:
The generate_data.py script initializes essential data such as neuron sensitivity, initial conditions, cognitive model weights, and neuromodulator levels. This script ensures that the simulation starts with empirically grounded and randomly generated initial states.
Modules:
Baseline Adjustment: baseline.py implements dynamic baseline adjustment for neurons using PID control, based on the model by Turrigiano et al. (1998), which addresses homeostatic plasticity in cortical neurons.
Cognitive Modulation: dehaene_changeux_modulation.py models cognitive processes using the Dehaene-Changeux framework, simulating hierarchical processing in neural networks.
Continuous Learning: continuous_learning.py facilitates adaptive learning through experience storage and processing, inspired by principles outlined by Izhikevich (2007).
Emotional Models: emotional_models.py simulates emotional states influenced by neuromodulator levels, based on the work of Dayan and Huys (2009).
Synaptic Plasticity: plasticity.py updates synaptic weights using spike-timing-dependent plasticity (STDP) and deep Q-learning, incorporating methods from Martin et al. (2000) and Mnih et al. (2015).
Network Topology: topology.py creates and dynamically reconfigures neural network topologies, implementing models such as the small-world network described by Sporns et al. (2004).
Behavior Monitoring: behavior_monitoring.py tracks and analyzes emergent and complex neural behaviors, providing insights into neural activity patterns.
Self Model: self_model.py enables reflective processing and decision-making, drawing on concepts from Metzinger (2009).
Neuron Models: adex_neuron.py and ionic_channels.py implement detailed neuron and ionic channel dynamics based on models by Brette and Gerstner (2005) and Hille (2001).
Sensory-Motor Integration: sensory_motor.py models interactions between sensory inputs and motor outputs, incorporating nonlinear dynamics as described by Evarts (1968).
Simulation Execution:
run_simulation.py orchestrates the initialization, execution, and management of the neuromorphic simulation, utilizing components such as neuron processes, input generators, output sinks, and dynamic topology adjustments. The script ensures the seamless integration of all modules and manages the flow of the simulation.
Visualization:
visualization.py provides tools for 2D and 3D visualization of simulation results using matplotlib and vispy. These visualizations enhance the understanding of neural activity patterns and emergent behaviors, offering both static and dynamic views of the simulation.
Testing:
test_simulation.py includes comprehensive unit tests for the various modules, ensuring the reliability and correctness of the simulation components. Regular testing is essential for maintaining the integrity of the simulation as it evolves.
Running the Simulation: Tips and Best Practices
Preparation:
Ensure all dependencies are installed, including numpy, matplotlib, vispy, and dask.
Review and modify the configuration files (simulation_config.json and logging_config.json) as per your simulation requirements. Ensure that parameters are set according to the specific goals and constraints of your study.
Data Generation:
Run scripts/generate_data.py to create and save the necessary data files in the data directory. This step ensures that the simulation starts with empirically grounded initial states.
Verify the generated files to ensure they contain valid data, checking for any inconsistencies or anomalies.
Simulation Execution:
Initialize and run the simulation using scripts/run_simulation.py. This script manages the entire simulation process, from initializing components to running the neural network and collecting results.
Monitor the log file (logs/simulation.log) for real-time updates and debugging information. This file provides valuable insights into the simulation's progress and any issues that arise.
Profile the simulation using the built-in profiling tools to optimize performance. This step helps identify bottlenecks and improve the efficiency of the simulation.
Visualization:
Use scripts/visualization.py to visualize the results in both 2D and 3D formats. These visualizations provide a deeper understanding of the neural activity and behavior patterns within the simulation.
Analyze the visualizations to gain insights into neural dynamics, emergent behaviors, and the impact of various parameters. Use these insights to refine the simulation and guide further experiments.
Testing:
Regularly run tests/test_simulation.py to validate changes and ensure the integrity of the simulation. This step is crucial for maintaining the reliability of the simulation as new features are added or existing ones are modified.
Add new tests as you develop additional features or modify existing ones. Comprehensive testing helps catch issues early and ensures that the simulation remains robust and accurate.