Neuron Intelligence in Cyber Security Software - Part One

by James Griffin, Achim D. Brucker, Brett J. Kagan, Alon Loeffler

In this age of "artificial intelligence," it is sometimes tempting to ask where the "real intelligence" may lie.  Since life is normally considered "intelligent," this raises the question of whether the substrates that provide the core functionality of intelligent life - biological neurons - could be used to help make intelligent decisions.

In 2022, Cortical Labs in Melbourne, Australia integrated cultured neurons into a version of Pong.  We incorporated cultured neurons into an agent-based cybersecurity simulation environment called YAWNING-TITAN (github.com/dstl/YAWNING-TITAN), using the Cortical Labs Application Programming Interface (API) as the bridge between the two.

Our results indicate that such neurons do display rudimentary learning in such a simulation.  Plausibly, more stable iterations of this behavior could be operationalized to take responsive defensive actions, i.e., to protect a network against the offensive actions of cyber criminals.  In more detail, we use YAWNING-TITAN to simulate a network of servers (nodes) that are attacked by red agents.  Blue agents must choose suitable defensive actions (e.g., patching a specific server) to protect the network against attacks.

This article seeks to explore two core questions:

  1. Can cultured neurons learn to influence digital decision-making in a cybersecurity environment?
  2. Can this influence be measured through increased response speed and improved action selection over time?

An example version of our neuron version of YAWNING-TITAN can be found at: git.logicalhacking.com/BiologicalAI/YAWNING-TITAN_Neuron-Edition

Interfacing With Neurons

A Microelectrode Array (MEA) is a platform to which neuron cultures adhere, enabling the recording of their electrical activity as well as the delivery of stimulation back to the neurons.  MEAs have a certain number of channels which will often align to the electrodes present.

For example, the MEA system we used had 60 channels - that is, 59 individual electrodes, each capable of recording electrical signals from nearby neurons and delivering stimulation, plus one reference electrode.  The MEA prototype system under development by Cortical Labs delivers a rapid stream of electrophysiological data reflecting the action potentials ("spikes") generated by biological neurons.
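To make the spike stream concrete, here is a minimal sketch of threshold-based spike detection on a single channel.  Real MEA systems do this in dedicated hardware or firmware; the sample trace, threshold, and refractory window below are invented for illustration and are not part of the Cortical Labs system.

```python
# Hypothetical sketch: threshold-based spike detection on one MEA channel.
# The voltage trace, threshold, and refractory period are made up.

def detect_spikes(samples, threshold=-50.0, refractory=30):
    """Return sample indices where the voltage dips below `threshold`.

    `refractory` is the number of samples skipped after each detection,
    so a single action potential is not counted several times.
    """
    spikes = []
    i = 0
    while i < len(samples):
        if samples[i] < threshold:
            spikes.append(i)
            i += refractory
        else:
            i += 1
    return spikes

# A toy trace: flat baseline with two sharp negative dips ("spikes").
trace = [0.0] * 100
trace[20] = -80.0
trace[70] = -65.0

print(detect_spikes(trace))  # -> [20, 70]
```

In practice the stream arrives continuously, so the same logic would run over a sliding window rather than a fixed list.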

Correspondingly, stimuli can be delivered via the MEA to the neural cells as a way to input information into the system.  It is possible to stimulate a single channel or a group of channels, and spikes can be monitored continuously.  Cortical Labs also provided us with a visualizer so that we could observe what the neurons are up to.  Together, these provided a means to explore software that responds quickly to spikes for the purposes of training and prediction.

The training that takes place typically revolves around monitoring spikes (the characteristic electrical activity of neurons) and sending electrical stimuli in response to detected spikes - ideally, spikes that occur after a given input.

For example, the cybersecurity software we used would only act to isolate a network node if requested or permitted to do so by the neurons.  When a spike corresponding to that action was detected, a stimulus could be delivered to reinforce the association between the spike and the action, encouraging the same activity on future attempts; similar reinforcement can be applied to other situations.
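The closed loop described above can be sketched as follows.  `MEAClient` here is a stand-in stub, not the real Cortical Labs API - its method names, the stimulation parameters, and the action-to-channel mapping are all invented for illustration.

```python
# Hypothetical closed-loop reinforcement sketch. `MEAClient` is a stub
# standing in for whatever the real Cortical Labs API exposes; every
# name and parameter here is illustrative only.

class MEAClient:
    """Stub that records which channels were stimulated."""
    def __init__(self):
        self.stim_log = []

    def stimulate(self, channel, voltage_mv=500, pulses=1):
        self.stim_log.append((channel, voltage_mv, pulses))

ACTION_CHANNEL = {"isolate_node": 12, "reconnect_node": 13}  # made-up mapping

def reinforce(mea, action, spike_channels):
    """If the channel tied to `action` spiked in the last window,
    stimulate it to reinforce the association; otherwise do nothing."""
    ch = ACTION_CHANNEL[action]
    if ch in spike_channels:
        mea.stimulate(ch)
        return True
    return False

mea = MEAClient()
print(reinforce(mea, "isolate_node", {5, 12, 40}))  # -> True
print(mea.stim_log)                                 # -> [(12, 500, 1)]
```

The real loop would run this on every simulation step, replacing the stub with live API calls.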

However, despite substantial work in neuroscience, there remains a lot of uncertainty over the behavior of neurons and what underpins biological intelligence.  Related work by Cortical Labs indicates that learning is possible, and we know that stimulations have an impact on the development and reactions of those neurons.  In this sense, neurons can provide some responsive learning and development when linked to a digital computer.

The difficulty comes in thinking about how to encourage neurons to act in different ways when operating in a digital environment.  To this end, we will now consider the use of Cortical Labs API as a means to achieve this.

Software API

The reader might be interested to know that, at the time of writing, Cortical Labs is producing their new API software (github.com/cortical-labs); the system we used predates this launch.  Example code can be found on Cortical Labs' GitHub pages.  The API allows, for example, alteration of the voltage of a stimulation, its frequency, and its burst pattern.

While it has been outlined that different types of neurons have different behaviors, what is less often noted is that neurons can also show different sorts of activity on different channels.  It can therefore be desirable to have a means of selecting the neuron channels that are more active in terms of spikes, either to obtain faster runs or to favor certain YAWNING-TITAN actions.  It is also possible to group neuron channels together and select those that are busiest, or adjacent to the busiest channel, which can be a means to encourage learning.  All of this can be done with reference to activity within the digital software - in our instance, YAWNING-TITAN.
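Selecting the busiest channels reduces to counting spikes per channel over a window and keeping the top few.  A minimal sketch, with invented channel numbers and counts:

```python
# Sketch of "favor the busiest channels": count spikes per channel over
# a recording window and keep the most active ones. The event data is
# illustrative only.
from collections import Counter

def busiest_channels(spike_events, k=4):
    """`spike_events` is a sequence of channel numbers, one entry per
    detected spike. Return the `k` channels with the most spikes,
    busiest first."""
    counts = Counter(spike_events)
    return [ch for ch, _ in counts.most_common(k)]

events = [7, 7, 23, 7, 23, 41, 5, 23, 7]
print(busiest_channels(events, k=2))  # -> [7, 23]
```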

YAWNING-TITAN

YAWNING-TITAN's cybersecurity simulation is already driven by agents powered by traditional AI.  The primary aim of our work is to replace this artificial intelligence with biological neurons - both during training and inference - by interfacing with Cortical Labs' API.

When it comes to neuron integration, the neurons initially have no training and are unable to act meaningfully without structured input.  To avoid a simplistic "whack-a-mole" model (where neurons merely react without learning), we introduced closed-loop stimulation to provide context.  We embedded this process within the operation of the existing blue agent models, allowing us to monitor how the neurons performed across training runs.  This then enabled more dynamic responses when making decisions.

In YAWNING-TITAN, red agents attack and blue agents respond.  Red agents have a set of actions such as basic_attack and spread; blue agents have actions such as isolate_node and reconnect_node.  We link the actions in YAWNING-TITAN through to the neuron soup - initially merely to permit actions, but later in our experimentation to choose them.
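The two modes of linkage - permitting an action versus choosing one - might look like the following.  The blue action names come from YAWNING-TITAN, but the channel assignments and the decision rules are our own illustrative assumptions, not part of either API.

```python
# Hypothetical action-to-channel linkage. Action names are from
# YAWNING-TITAN; channel numbers and decision rules are invented.

BLUE_ACTION_CHANNELS = {
    "isolate_node": [10, 11],
    "reconnect_node": [20, 21],
}

def permit_action(action, spike_counts, min_spikes=1):
    """Early mode: permit a requested blue action if and only if its
    channels show at least `min_spikes` spikes in the last window."""
    total = sum(spike_counts.get(ch, 0) for ch in BLUE_ACTION_CHANNELS[action])
    return total >= min_spikes

def choose_action(spike_counts):
    """Later mode: pick the blue action whose channels are most active."""
    return max(BLUE_ACTION_CHANNELS,
               key=lambda a: sum(spike_counts.get(ch, 0)
                                 for ch in BLUE_ACTION_CHANNELS[a]))

counts = {10: 3, 11: 0, 20: 1, 21: 1}
print(permit_action("reconnect_node", counts))  # -> True
print(choose_action(counts))                    # -> isolate_node
```

The permission mode keeps the digital agent in control; the choice mode hands the selection itself to the culture's activity.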

The visualizer provided by Cortical Labs shows the activity of the neural cultures (i.e., voltage spikes over time).  While this is useful for development and debugging, the main aim of our work is directly driving decisions in YAWNING-TITAN.  Given that the novelty of the project lies in the integration of neurons, and neurons recognize patterns, the starting point was to get the neural culture to recognize and learn the patterns of the existing blue agents responding to red, rather than starting from scratch without any prior learning.

That said, by the project end it was possible for the neurons to show differential activity that could be interpreted as responsive decisions for the selection of blue agent action.

The initial approach to integration was to take YAWNING-TITAN as a sounding board for the neurons to act.  After examining various files, we found that the blue action set file provided a means to communicate with the neurons.  Every action taken by blue has to pass through this file, and so each specific action could be linked to particular channels on the MEA holding the neurons.

This means, for example, that if the blue agent wishes to isolate a node, then it is possible for this request to be interpreted through the activity of the neural culture.  Of course, at this juncture, the issue was that the neurons could not decide for themselves whether to allow the action, so the underlying digital code still needed to make that decision.

Our purpose was to assess whether or not the neural culture could potentially be trained to respond in a manner consistent with the underlying logic of what would be considered a "correct" decision.  If so, we could then seek to understand what logic drove this behavior and later leverage the approach further with revised code.  To begin with, we timed runs to see if the neurons would permit a blue agent action more quickly on subsequent runs.
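The timing measurement itself is simple: poll the spike stream and record how long it takes for the relevant channel to fire.  A minimal sketch, where `poll_spikes` is a made-up stand-in for the live spike stream (here it just replays a scripted sequence):

```python
# Hypothetical sketch of timing how long a run waits before the neurons
# "permit" an action. The spike stream is simulated by a scripted
# sequence; in our setup it would come from the MEA.
import time

def time_until_permitted(poll_spikes, target_channel, timeout_s=5.0):
    """Poll the spike stream until `target_channel` fires; return the
    elapsed seconds, or None on timeout."""
    start = time.monotonic()
    while time.monotonic() - start < timeout_s:
        if target_channel in poll_spikes():
            return time.monotonic() - start
        time.sleep(0.01)
    return None

# Scripted stream: channel 12 fires on the third poll.
script = iter([set(), {3}, {12, 3}])
elapsed = time_until_permitted(lambda: next(script, {12}), target_channel=12)
print(elapsed is not None)  # -> True
```

Comparing these elapsed times across successive runs gives the "increased response speed" measure from our second core question.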

YAWNING-TITAN has shown itself to be a useful test bed for assessing how to incorporate neurons into a software package.

To be continued.
