Communication among agents is of vital importance for their effective collaboration, and the development of language probably played a key role in human technological advancement. This work aspires to develop a self-organized communication system between two agents in a computer simulation, in a way that is theoretically grounded in neuroscientific research. Language is understood as a communication protocol that couples the participants of a conversation into a larger system whose optimal operation is attained through the optimization of its individual participants. Agents in the simulation are designed on the principle of predictive processing, which postulates that in order to understand their environment, agents must minimize the prediction error between their predictions of future states and the actual sensory input. In this way, they minimize the surprisal of their sensory input and consequently improve their generative model of the world. A simple simulation is set up with two agents who are presented with the same image and are equipped with the ability to form messages about it. Each agent is modeled as a pair of perceptual and generative neural networks for each of the two modalities at hand, visual and verbal, with a shared internal representation of concepts. The emergence of a communication protocol is achieved by playing so-called naming games, in which agents alternately name the object in the image and learn the words of their co-speaker. After 50,000 iterations of naming games, a stable vocabulary for object naming is achieved, but the agents' predictions of visual input based on the received messages are not completely reliable. While imperfect, the results show promise for future research on applying the principle of predictive processing and indicate that modeling the emergence of language through the self-organization of individual participants is a viable approach.
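
To make the naming-game dynamic concrete, the following is a minimal Python sketch of an alternating naming game between two agents. It is an illustration under assumptions, not the thesis implementation: it omits the predictive-processing networks and the visual modality, reduces each agent to a simple word inventory, and all names (Agent, play_round, OBJECTS, N_ROUNDS) are hypothetical.

    import random

    # Minimal naming-game sketch (illustrative only). The actual work couples this
    # interaction with perceptual/generative networks over visual and verbal
    # modalities; here agents are reduced to word inventories so the alternating
    # speaker/listener dynamic is easy to follow.
    OBJECTS = ["circle", "square", "triangle"]  # stand-ins for the presented images
    N_ROUNDS = 50_000                           # matches the iteration count in the text

    class Agent:
        def __init__(self):
            # per-object inventory of candidate words
            self.lexicon = {obj: set() for obj in OBJECTS}

        def speak(self, obj):
            # Name the object: reuse a known word or invent a new one.
            if not self.lexicon[obj]:
                self.lexicon[obj].add(f"w{random.randrange(10_000)}")
            return random.choice(sorted(self.lexicon[obj]))

        def listen(self, obj, word):
            # On success, prune competing synonyms; otherwise adopt the heard word.
            if word in self.lexicon[obj]:
                self.lexicon[obj] = {word}
                return True
            self.lexicon[obj].add(word)
            return False

    def play_round(speaker, listener):
        obj = random.choice(OBJECTS)
        return listener.listen(obj, speaker.speak(obj))

    if __name__ == "__main__":
        a, b = Agent(), Agent()
        for i in range(N_ROUNDS):
            speaker, listener = (a, b) if i % 2 == 0 else (b, a)  # alternate roles
            play_round(speaker, listener)
        print({obj: a.lexicon[obj] & b.lexicon[obj] for obj in OBJECTS})

Repeated rounds of this kind drive the two inventories toward a shared word per object, which is the stable vocabulary the abstract refers to; the reliability of message-based visual prediction depends on the generative networks that this sketch leaves out.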