Influence


Screenshot of the Influence software environment with 50 remote agents connected.

Participants: Joseph Malloch, Stephen Sinclair
Time period: 2012–2013
Repository: Influence on GitHub

In addition to data analysis, an important component of the E[MERGE] project is the use of dynamic systems to react to sensor data and provide feedback by means of media control. We have developed two systems which can work together or independently to provide a dynamic response to sensor input.

The first is a library for agent-based behaviour, written by Sofian Audry, called Qualia. The other is the shared environment Influence, which uses a pixel-based, GPU-driven 2-D convolution process to iteratively transmit information between agents that inhabit the space. Using libmapper as a communication protocol, Qualia can control agents which inhabit the Influence environment, but Influence can also be inhabited by agents reacting with modelled physical behaviour (virtual particles), or by agents controlled externally by human input.
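
This division of roles can be sketched roughly as follows. The class and method names below are hypothetical, and in the real system the values are exchanged as libmapper signals rather than direct function calls; only the data flow is illustrated.

```python
# Illustrative sketch only: hypothetical names, with the libmapper
# transport abstracted away into direct method calls.
from dataclasses import dataclass
import random

@dataclass
class Agent:
    """An agent inhabiting the shared environment."""
    x: float
    y: float

    def act(self, observation):
        """Return a (dx, dy) action for the current observation.

        A Qualia-controlled agent would replace this with a learned policy,
        a virtual particle would integrate physical forces, and a
        human-controlled agent would read external input instead.
        """
        return (random.uniform(-1, 1), random.uniform(-1, 1))

class Environment:
    """Stand-in for the Influence shared space."""

    def observe(self, agent):
        # Influence would sample the pixel field around the agent here.
        return [0.0] * 8

    def apply(self, agent, action):
        dx, dy = action
        agent.x += dx
        agent.y += dy

env = Environment()
agents = [Agent(random.random(), random.random()) for _ in range(5)]
for _ in range(10):
    for a in agents:
        env.apply(a, a.act(env.observe(a)))
```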

An immediate idea for involving multiple agents was to have them connect to the Processing environment just like the original Qualia agent, and have the Processing physics engine apply forces between them. However, with an eye towards generalization and scalability, we wanted an environment in which the motion of agents was determined mostly by the agents themselves rather than by a central process, so that no single process was in charge of integrating all of the physics. In the setup just described, the physics engine inside the Processing applet would be responsible for an N-body physical problem, which may not scale well to large numbers of agents.
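
To make the scaling concern concrete, a minimal sketch of such a centralized update is shown below. The force law and constants are placeholders rather than the Processing implementation, but the structure shows why each step requires work for every pair of agents.

```python
# Minimal sketch (not the actual Processing code) of a centralized
# N-body update: every pair of agents interacts, so one step costs
# O(N^2) work inside the single process that owns the physics.
import numpy as np

def nbody_step(pos, vel, dt=0.01, k=0.001, eps=1e-3):
    """Advance all agents one step using pairwise repulsive forces."""
    n = len(pos)
    forces = np.zeros_like(pos)
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            d = pos[i] - pos[j]
            r2 = d @ d + eps            # softened squared distance
            forces[i] += k * d / r2     # simple inverse-square repulsion
    vel += forces * dt
    pos += vel * dt
    return pos, vel

pos = np.random.rand(50, 2)
vel = np.zeros_like(pos)
for _ in range(100):                    # cost per step grows with N^2
    pos, vel = nbody_step(pos, vel)
```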

Additionally, although the physical motion was an interesting interaction, we wanted an environment that could be used in a more general way: to inform agents of their surroundings and allow them to decide how to act, without needing to implement global laws such as a physical simulation. The reason is that the locations of agents in this space will not necessarily represent physical positions, but may instead represent characteristic analyses of sensor data.

Previously in the IDMIL, we had designed an interactive table that used a pixel-based convolution method to transmit information about objects on the table to objects in a physical simulation. At each iteration, the convolution spread active pixels further and further, while a physics engine read the pixel data and used it to derive forces; this allowed simulated objects to be attracted to or repelled by real objects seen through a camera.
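
The technique can be sketched roughly as follows, assuming a simple blur kernel and gradient-following forces; the original table used its own kernel, constants, and camera input rather than the placeholder values here.

```python
# Sketch of the table technique as described: active pixels are spread
# by repeated convolution, and a physics step reads the resulting field
# to attract or repel simulated objects. Kernel and constants are
# illustrative, not the original implementation's values.
import numpy as np
from scipy.signal import convolve2d

H, W = 64, 64
field = np.zeros((H, W))

blur = np.array([[1, 2, 1],
                 [2, 4, 2],
                 [1, 2, 1]], dtype=float)
blur /= blur.sum()

obj = np.array([40.0, 40.0])                # a simulated object in the space

for _ in range(200):
    field[32, 16] += 1.0                    # a "real object" keeps injecting influence
    field = convolve2d(field, blur, mode="same")   # influence spreads outward
    gy, gx = np.gradient(field)
    ix, iy = int(obj[0]), int(obj[1])
    # Follow the gradient toward the source; negate the force for repulsion.
    obj += 25.0 * np.array([gx[iy, ix], gy[iy, ix]])
    obj = np.clip(obj, 0, [W - 1, H - 1])

print("simulated object position after 200 steps:", obj)
```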

We decided to use this idea to propagate information between agents. This has several advantages for our application. Firstly, the N-body problem of allowing every agent to “see” every other agent is spread across time, so the per-step cost becomes linear in the number of pixels rather than growing with the number of agents. Although this can still represent a considerable amount of processing, we implemented the convolution in a GPU shader, off-loading the work from the CPU; since the cost does not increase with the number of agents, computational requirements remain constant as long as the GPU can handle the workload. Secondly, since interaction takes place in a 2-D bitmap, it lends itself to other methods of interaction, such as drawing directly on the surface, placing virtual walls or objects in the space, or taking input from a video or depth camera such as the Microsoft Kinect.
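
A CPU-side sketch of this trade-off is shown below; in the actual system the convolution runs in a GPU shader, and the class and constants here are illustrative assumptions only.

```python
# Field-mediated interaction: each agent writes to and reads from a few
# pixels, so per-agent work is constant, and the diffusion step costs
# O(width * height) regardless of how many agents are present. In the
# real system this convolution runs in a GPU shader; numpy stands in here.
import numpy as np
from scipy.signal import convolve2d

class InfluenceField:
    def __init__(self, w=128, h=128, decay=0.99):
        self.grid = np.zeros((h, w))
        self.decay = decay
        self.kernel = np.full((3, 3), 1.0 / 9.0)   # simple box blur

    def deposit(self, x, y, amount=1.0):
        """An agent leaves a mark at its position (O(1) per agent)."""
        self.grid[int(y) % self.grid.shape[0],
                  int(x) % self.grid.shape[1]] += amount

    def sense(self, x, y):
        """An agent reads the field value at its position (O(1) per agent)."""
        return self.grid[int(y) % self.grid.shape[0],
                         int(x) % self.grid.shape[1]]

    def step(self):
        """Spread and fade the field: cost depends only on the grid size."""
        self.grid = convolve2d(self.grid, self.kernel, mode="same") * self.decay

field = InfluenceField()
rng = np.random.default_rng(0)
agents = rng.uniform(0, 128, size=(1000, 2))       # many agents, same field cost
for _ in range(50):
    for ax, ay in agents:
        field.deposit(ax, ay)
    field.step()                                    # one convolution per frame
```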

The current implementation features 2-D vector fields that support directionality, enabling effects such as spin and flow. The user can draw flows with the mouse that pull agents along a path. As can be seen in the figure above, the agents leave trails: as they move, they write values into the vector field, and these values decay over time.
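
A rough sketch of this mechanism is given below, with illustrative constants rather than the values used in Influence.

```python
# Sketch of the directional vector field described above: agents write
# their velocity into the field as they move (leaving decaying trails),
# a drawn path deposits flow vectors, and agents are pulled along by the
# flow they sense at their position. Constants are illustrative only.
import numpy as np

H, W = 64, 64
flow = np.zeros((H, W, 2))          # a 2-D vector at every pixel
DECAY = 0.97

def deposit(x, y, vec, strength=1.0):
    flow[int(y) % H, int(x) % W] += strength * np.asarray(vec, dtype=float)

# A "mouse-drawn" flow: a horizontal stroke whose vectors point right,
# so agents crossing it are pulled along the path.
for x in range(10, 50):
    deposit(x, 32, (1.0, 0.0))

pos = np.random.uniform(0, 64, size=(20, 2))   # agent positions (x, y)
vel = np.zeros_like(pos)

for _ in range(200):
    for i, (x, y) in enumerate(pos):
        vel[i] = 0.9 * vel[i] + 0.1 * flow[int(y) % H, int(x) % W]
        deposit(x, y, vel[i], strength=0.2)    # agents leave their own trails
    pos = (pos + vel) % (W, H)                 # wrap around the field edges
    flow *= DECAY                              # trails fade over time
```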


During planning, we implemented some test environments with large numbers of local agents, which can be seen above. These show that interesting emergent behaviour can arise from one or two types of agents following simple sets of rules. Influence presents a proof-of-concept dynamic environment in which agents of different types can inhabit the same space and observe each other, and it reduces each agent’s observations to a constant-sized vector that can be processed by a decision engine such as Qualia’s reinforcement learning. We have shown that a dynamic system can be fed information from sensors, simulations, and intelligent agents, and that the agents can observe and react to the behaviour of their peers.
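
One simple way to produce such a constant-sized observation is sketched below, purely as an illustration: the window size and pooling used here are assumptions, not Influence’s actual reduction.

```python
# Sketch of reducing an agent's view of the field to a constant-sized
# observation vector, e.g. as input to a reinforcement-learning policy.
import numpy as np

def observe(field, x, y, radius=4, bins=3):
    """Return a fixed-length vector summarizing the field around (x, y)."""
    h, w = field.shape[:2]
    ys = np.arange(y - radius, y + radius + 1).astype(int) % h
    xs = np.arange(x - radius, x + radius + 1).astype(int) % w
    patch = field[np.ix_(ys, xs)]                  # local window, wrapped edges
    # Pool the window into a bins x bins grid so the output length never
    # depends on the field size or the number of nearby agents.
    row_groups = np.array_split(np.arange(patch.shape[0]), bins)
    col_groups = np.array_split(np.arange(patch.shape[1]), bins)
    pooled = [patch[np.ix_(r, c)].mean() for r in row_groups for c in col_groups]
    return np.array(pooled)                        # always bins * bins values

field = np.random.rand(64, 64)
obs = observe(field, 20.3, 41.7)
print(obs.shape)                                   # (9,) regardless of field size
```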