Laboratory for Brain and Machine Intelligence @ KAIST

Brain2AI

Our research aims to understand how cognitive control is implemented in the human brain ("AI2Brain") and, building on that understanding, to design brain-inspired artificial intelligence systems capable of performing a wide range of tasks at a high level ("Brain2AI").

In particular, we study the neural computations by which the human prefrontal cortex allocates control over behavior to multiple types of learning and inference systems. We approach this through a combination of computational learning theory, control theory, and experimental techniques including model-based functional magnetic resonance imaging (fMRI), electroencephalography (EEG), transcranial magnetic stimulation (TMS), and transcranial direct current stimulation (tDCS). Topics of interest include, but are not limited to, the following:



1. Gregorian Cognitive Machines: Infinite Mixture of AI Systems

This is a long-term research goal aimed at creating brain-inspired artificial intelligence systems equipped with mind tools and communication skills (Dennett's “Gregorian creature”). The project extends over two stages. The first stage focuses on understanding the brain mechanisms underlying collective intelligence, asking two key questions: under which conditions are two brains better than one, and how do multiple brains form a functional hierarchy when working toward a common goal? The second stage shifts toward the artificial intelligence problem: devising a system of systems capable of self-organizing a functional hierarchy the same way a group of human brains does.

(1) Multidimensional human intelligence
We aim to categorize human brains according to a variety of biological parameters associated with prefrontal hierarchical control algorithms [Lee, IEEE TKDE 2011; Lee, IEEE Computer 2013]. This would make it possible to quantify individual differences in the biological parameters of the prefrontal system, and to gain a deeper appreciation of how a homogeneous or heterogeneous group of prefrontal systems emerges as a unitary intelligent agent.
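As an illustration of what this categorization step could look like computationally, the sketch below clusters subjects in a space of fitted model parameters. Both parameters shown (a learning rate and a hypothetical arbitration weight) and the two-cluster structure are illustrative placeholders, not the lab's actual parameter set:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Hypothetical per-subject parameters recovered by model fitting,
# e.g., a learning rate and a model-based/model-free arbitration weight.
rng = np.random.default_rng(0)
params = np.vstack([
    rng.normal([0.2, 0.8], 0.05, size=(30, 2)),   # one putative subtype
    rng.normal([0.6, 0.3], 0.05, size=(30, 2)),   # another putative subtype
])

# Cluster subjects in parameter space; in practice the number of
# components would be selected by model comparison (e.g., BIC),
# not fixed in advance.
gmm = GaussianMixture(n_components=2, random_state=0).fit(params)
labels = gmm.predict(params)
print(np.bincount(labels))  # group sizes of the recovered "brain types"
```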

(2) Infinite-dimensional artificial intelligence
The ultimate goal of this research is to devise artificial collective intelligence systems that surpass the performance of human groups making group decisions. First, we aim to design an algorithm for the automatic organization of a functional hierarchy, in which multiple artificial intelligence agents interact with each other and group together to make group decisions. Second, competence will be assessed empirically and theoretically for homogeneous and heterogeneous groups, respectively. This speaks to the long-standing problem of collective decision making first examined in Bahrami’s Science paper, which demonstrated that, albeit in simple visual perception tasks, two agents can make smarter choices than a single agent provided they communicate effectively. Finally, empirical evaluation includes deriving a parameterized version of the system of systems with N agents and then exploring the limit as N goes to infinity; a group of humans and the proposed system will compete on a diverse array of tasks. This approach may have advantages over other types of collective intelligence systems: (i) it is a parameter-free system whose performance does not vary across task types, and (ii) theoretically, system performance will increase without bound, provided that agents group together to make group choices through effective communication.
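Bahrami's finding can be reproduced in simulation with the weighted confidence sharing model from that paper: each agent reports a signed confidence (its noisy percept scaled by its own sensory noise), and the dyad follows the more confident agent. The sketch below is a minimal illustration with made-up noise levels; it shows the dyad beating a lone agent when sensitivities are similar, and falling short when they differ widely:

```python
import numpy as np

rng = np.random.default_rng(1)

def solo_accuracy(sigma, n_trials=100_000):
    """Accuracy of a single agent observing signal +/-1 plus Gaussian noise."""
    signal = rng.choice([-1.0, 1.0], size=n_trials)
    conf = (signal + rng.normal(0, sigma, n_trials)) / sigma
    return np.mean(np.sign(conf) == signal)

def dyad_accuracy(s1, s2, n_trials=100_000):
    """Dyadic accuracy under weighted confidence sharing: each agent
    reports percept/sigma; the dyad follows the more confident agent,
    which for two agents equals the sign of the summed confidences."""
    signal = rng.choice([-1.0, 1.0], size=n_trials)
    c1 = (signal + rng.normal(0, s1, n_trials)) / s1
    c2 = (signal + rng.normal(0, s2, n_trials)) / s2
    return np.mean(np.sign(c1 + c2) == signal)

print(f"solo:  {solo_accuracy(1.0):.3f}")        # ~0.84
print(f"dyad:  {dyad_accuracy(1.0, 1.0):.3f}")   # ~0.92, two heads beat one
print(f"mixed: {dyad_accuracy(1.0, 4.0):.3f}")   # below the better solo agent
```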


2. Hierarchical Cognitive Control Using Scalable Deep Learning

Decades of studies support the view that the visual cortex has a hierarchical structure that guides perception in both a bottom-up and a top-down manner. This view and the above-mentioned notion of hierarchical control for learning and decision making converge on the idea that two hierarchical models, one in the visual cortex and another in the prefrontal cortex, interact with each other. Two questions arise: (i) what are the key variables (e.g., object identity and context information) facilitating communication between these two brain modules, and (ii) does the brain deploy different functional networks in each case? We will use deep learning as a means to formulate and test these hypotheses.

(1) Popperian cognitive model for paired associate learning
This research aims to establish a computational principle of how predictive functional specificity in perception tasks arises from a nonlinear combination of neural populations. We will use deep learning algorithms to (i) test hypotheses about how conditioned stimulus-stimulus associations and stimulus-response associations interact in LIP, striatum, and PFC, and (ii) put forward the idea of overlapping multiple selectivity maps. These problems stem from kernel-based paired associate learning [Lee, IEEE TNN 2011]. Notably, under certain assumptions a deep learning network can be viewed as a particular type of hierarchical kernel function [Anselmi, arXiv 2015], as sketched below.
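To make the kernel view concrete, the sketch below composes the arc-cosine kernel of Cho and Saul layer by layer, which corresponds to an infinitely wide ReLU network of the given depth. This is one illustrative instance of a hierarchical kernel, not the specific construction of the cited work:

```python
import numpy as np

def arccos1(kxx, kxy, kyy):
    """Degree-1 arc-cosine kernel (Cho & Saul, 2009) written purely in
    terms of inner products, so it can be composed layer by layer."""
    cos_t = np.clip(kxy / np.sqrt(kxx * kyy), -1.0, 1.0)
    theta = np.arccos(cos_t)
    return np.sqrt(kxx * kyy) / np.pi * (np.sin(theta) + (np.pi - theta) * cos_t)

def deep_kernel(x, y, depth=3):
    """Hierarchical kernel: composing the arc-cosine kernel `depth` times
    mirrors an infinitely wide ReLU network with `depth` hidden layers."""
    kxx, kxy, kyy = x @ x, x @ y, y @ y
    for _ in range(depth):
        kxx, kxy, kyy = (arccos1(kxx, kxx, kxx),
                         arccos1(kxx, kxy, kyy),
                         arccos1(kyy, kyy, kyy))
    return kxy

x, y = np.array([1.0, 0.0]), np.array([0.6, 0.8])
print(deep_kernel(x, y, depth=1), deep_kernel(x, y, depth=3))
```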

(2) Interaction between visual cortex and lateral prefrontal cortex
Object identity and context are two basic types of information known to be processed across brain regions, including the visual cortex, striatum, and prefrontal cortex. We hypothesize that these two variables play an important role in tuning the communication channels that ultimately guide our responses to stimuli as we learn about the world. The project aims to understand how object identity and context information translate into a value signal, and how the corresponding functional networks form in the brain.
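A minimal sketch of this hypothesis in computational terms: if value is learned jointly over (context, object identity), the same object can acquire different values in different contexts, something an identity-only learner cannot represent. All quantities below are toy placeholders:

```python
import numpy as np

rng = np.random.default_rng(2)
n_contexts, n_objects, alpha = 2, 3, 0.2
Q = np.zeros((n_contexts, n_objects))   # value indexed by (context, identity)

# Toy ground truth: object 0 is rewarded in context 0 but not in context 1,
# so value must be conditioned on context, not on identity alone.
reward_prob = np.array([[0.9, 0.1, 0.5],
                        [0.1, 0.9, 0.5]])

for _ in range(2000):
    c, o = rng.integers(n_contexts), rng.integers(n_objects)
    r = rng.random() < reward_prob[c, o]
    Q[c, o] += alpha * (r - Q[c, o])    # delta-rule value update

print(np.round(Q, 2))  # object 0: high value in context 0, low in context 1
```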


3. Simulation-based Learning of Deep Neural Networks

Even cutting-edge deep neural networks require large amounts of data, whereas humans often learn quickly from only a handful of examples. Here we aim to design a computational framework that enables a deep neural network to gain new knowledge after making only a few observations. This involves generating virtual training samples ("pseudo-decoding"), followed by running simulations on the same neural network ("thought experiment"). The extreme case is known as one-shot learning.
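A minimal sketch of the two-step idea, with all names and parameters as illustrative placeholders: a generative step produces virtual samples around the few real observations (standing in for "pseudo-decoding"), and the network is then trained on the simulated data alone (the "thought experiment"):

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(3)

def pseudo_decode(x_few, y_few, n_virtual=200, jitter=0.1):
    """Generate virtual training samples by resampling and perturbing
    the few real observations (a toy stand-in for "pseudo-decoding")."""
    idx = rng.integers(len(x_few), size=n_virtual)
    x_virtual = x_few[idx] + rng.normal(0, jitter, size=(n_virtual, x_few.shape[1]))
    return x_virtual, y_few[idx]

# Two real examples per class -- the few-shot regime.
x_few = np.array([[0.0, 0.0], [0.2, 0.1], [1.0, 1.0], [0.9, 1.1]])
y_few = np.array([0, 0, 1, 1])

# "Thought experiment": the network never sees the real data directly,
# only the simulated samples derived from it.
x_sim, y_sim = pseudo_decode(x_few, y_few)
net = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000).fit(x_sim, y_sim)
print(net.predict([[0.1, 0.0], [1.0, 0.9]]))   # -> [0 1]
```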


4. CovGram: Subspace Analysis for Real-Time Brain-Machine Interface

While fMRI helps us understand the structure of the prefrontal meta-control network, its poor temporal resolution leaves the temporal dynamics driving the computations unobserved. To resolve this, we use EEG (electroencephalography), which offers much higher temporal resolution at the expense of spatial resolution. In parallel, we have been developing a covariance-Gram matrix-based eigenvalue decomposition method (CovGram), intended to make subspace analysis applicable to large-scale data sets by circumventing the computational complexity of matrix inversion. Preliminary results from a social interaction task with subjects with autism suggest that the proposed method can process data from 128 EEG channels in under 100 ms while successfully distinguishing the neural patterns of autistic subjects from those of control subjects.
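The computational idea underlying such a method is, as we understand it, the classic duality between the channel covariance and the Gram matrix: whichever of the two is smaller can be eigendecomposed directly, and the leading subspace of the other recovered from it without forming or inverting the large matrix. A generic sketch of that duality, not the lab's implementation:

```python
import numpy as np

def subspace_via_gram(X, k):
    """Leading k-dim subspace of the channel covariance X @ X.T (d x d),
    computed from the smaller Gram matrix X.T @ X (n x n) when n < d.
    Snapshot-PCA duality: if (X.T X) v = lam * v, then X v / sqrt(lam)
    is an eigenvector of X X.T with the same eigenvalue."""
    G = X.T @ X                                    # n x n Gram matrix
    lam, V = np.linalg.eigh(G)                     # ascending eigenvalues
    lam, V = lam[::-1][:k], V[:, ::-1][:, :k]      # keep the top k
    U = X @ V / np.sqrt(np.maximum(lam, 1e-12))    # map back to channel space
    return U                                       # d x k orthonormal basis

# A 128-channel EEG segment over a short window (n < d), echoing the
# real-time setting described above; data here are random placeholders.
rng = np.random.default_rng(4)
X = rng.normal(size=(128, 64))
U = subspace_via_gram(X - X.mean(axis=1, keepdims=True), k=8)
print(U.shape, np.allclose(U.T @ U, np.eye(8), atol=1e-6))
```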