Laboratory for Brain and Machine Intelligence, KAIST

Seminars (hosted by BML)

Joel Z Leibo (Google DeepMind), Autocurricula and the emergence of innovation from social interaction, May 16, 2019. (Bio-IT/BBE/BCE seminar series)

Zeb Kurth-Nelson (Google DeepMind), Distributions from dopamine and factorized replay, May 1, 2019. (Bio-IT/BBE/BCE seminar series)

YoungGyun Park (MIT), Toward integrative brain mapping via intact tissue processing and phenotyping techniques, May 1, 2019. (BCE seminar series)

Xavier Boix (MIT), Making a science from the computer vision zoo, Nov 15, 2018. (MIR-MSREP seminar series)

Hiroyuki Nakahara (RIKEN), Neural mechanism and computations for social decision-making, Oct 24, 2018. (Bio-IT half-day workshop)

Ales Leonardis (University of Birmingham), Combining vision and physics to explore synergies in scene understanding, Aug 14, 2018. (KI for AI seminar series)

Daeyeol Lee (Yale University), Future of AI: Is the brain a computer?, Aug 1, 2018. (Bio-IT inspiring talk series)

Minjoon Kouh (Drew University), Trade-offs in neural computation, Jun 27, 2018. (Bio-IT inspiring talk series)

Joel Z Leibo (Google DeepMind), The interplay of competition and cooperation in shaping intelligence, Mar 28, 2018. (BBE seminar series)

Ben Seymour (University of Cambridge; CiNet/ATR/Osaka Univ.), Pain and aversive learning: from computational neuroscience to clinical neuroengineering, Nov 16, 2017. (BML computational psychiatry seminar series)

Rongjun Yu (National University of Singapore), The neural basis of decision making under uncertainty, Sep 28, 2017. (BML computational psychiatry seminar series)

Christopher Summerfield (University of Oxford), Neural and computational mechanisms of human decision-making, Sep 13, 2017. (BML computational psychiatry seminar series)

Kóczy T. László (Budapest University of Technology and Economics), Fuzzy signature, Jul 3, 2017.

Erie D. Boorman (University of California at Davis), Computational and representational approaches to associative learning, Jun 21, 2017. (BML computational psychiatry seminar series)

Seung-Tae Lee (Yonsei University College of Medicine), Next-generation sequencing, Mar 24, 2017.

Hyun Kook Lim (Catholic University Saint Vincent Hospital), Alzheimer's disease, Mar 16, 2017.

Minlie Huang (Tsinghua University), New Approaches for Representing Text and Knowledge, Nov 22, 2016.

JeeHang Lee (Yonsei University; University of Bath), Normative decision making, Oct 19, 2016.

Heyeon Park (Seoul National University Bundang Hospital), Multiple effects of stress on reinforcement learning in a changing environment, Sep 22, 2016.

Yongsek Yoo (Hongik University), A computational model of the medial temporal lobe, Aug 11, 2016.

Demis Hassabis (Google DeepMind - Founder & CEO), Artificial Intelligence and the Future, Mar 11, 2016. (Bio-IT seminar series)

Workshops (hosted by BML)

2017 Computational Psychiatry Seminar Series

- Erie Boorman (University of California at Davis) – Jun 21
- Christopher Summerfield (University of Oxford) – Sep 13
- Rongjun Yu (National University of Singapore) – Sep 28
- Ben Seymour (University of Cambridge; CiNet/Osaka Univ.) – Nov 16

2016 International Workshop on Computational Psychiatry

Date: Wed, Oct 5, 2016 (10:30-17:30)
Venue: Dream Hall, CHUNG Moon Soul building (E16)

- Benedetto De Martino (University of Cambridge)
- Robb Rutledge (University College London)
- Shinsuke Suzuki (Tohoku University)
- Sukbin Lim (NYU Shanghai)
- Sang Wan Lee (KAIST)

2016 Neural Computation Workshop

Date: Wed, Nov 2, 2016 (14:00-17:30)
Venue: Dream Hall, CHUNG Moon Soul building (E16)

Speakers:
- Mattia Rigotti (IBM TJ Watson)
- Jinseop Kim (KBRI)
- Se-bum Paik (KAIST)
- Sang Wan Lee (KAIST)

Lab workshops

2018 Model-based deep reinforcement learning (PDF flyer)

Date: Mon, Feb 12, 2018 (15:00-18:00)
Venue: #205 (E16-1 YBS Bldg.)

Reinforcement learning + deep learning + Bayesian game theory. This half-day workshop reviews recent studies on model-based deep reinforcement learning (RL). Model-based RL refers to the class of RL algorithms that learn a model of the environment, which allows an agent to adapt rapidly when the structure of the environment changes and makes it well suited to Bayesian game problems. Imagine playing Tic-Tac-Toe, chess, or Go against a model-based RL agent: because it learns a model of your play, it can exploit your strategy and dominate the game. A conventional model-free RL agent (e.g., DQN, SARSA, or TD learning), by contrast, can be fooled by a sudden change of goal or a deliberate change in your strategy. This approach offers enormous potential for solving general problems.
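To make the model-learning idea concrete, the classic Dyna-Q algorithm interleaves direct Q-learning with planning steps that replay transitions from a learned environment model. The sketch below is a minimal, self-contained example on a toy 5-state chain task; the environment, hyperparameters, and variable names are illustrative choices, not from the workshop material.

```python
import random

random.seed(0)

# Toy environment: a 5-state chain. Actions: 0 = left, 1 = right.
# Reaching state 4 (GOAL) yields reward 1 and ends the episode.
N_STATES, GOAL = 5, 4
ALPHA, GAMMA, EPS, PLANNING_STEPS = 0.5, 0.95, 0.1, 20

def step(s, a):
    s2 = max(0, s - 1) if a == 0 else min(N_STATES - 1, s + 1)
    return s2, (1.0 if s2 == GOAL else 0.0)

Q = {(s, a): 0.0 for s in range(N_STATES) for a in (0, 1)}
model = {}  # learned model of the environment: (s, a) -> (s', r)

def greedy(s):
    # Break ties randomly so the untrained agent still tries both actions.
    return max((0, 1), key=lambda a: (Q[(s, a)], random.random()))

for episode in range(50):
    s = 0
    while s != GOAL:
        a = random.choice((0, 1)) if random.random() < EPS else greedy(s)
        s2, r = step(s, a)
        # Direct RL: one Q-learning update from real experience.
        Q[(s, a)] += ALPHA * (r + GAMMA * max(Q[(s2, 0)], Q[(s2, 1)]) - Q[(s, a)])
        # Model learning: remember the observed transition.
        model[(s, a)] = (s2, r)
        # Planning: replay simulated transitions drawn from the learned model.
        for _ in range(PLANNING_STEPS):
            (ps, pa), (ps2, pr) = random.choice(list(model.items()))
            Q[(ps, pa)] += ALPHA * (pr + GAMMA * max(Q[(ps2, 0)], Q[(ps2, 1)]) - Q[(ps, pa)])
        s = s2

# Greedy policy after training: the agent should move right in every non-goal state.
policy = [max((0, 1), key=lambda a: Q[(s, a)]) for s in range(GOAL)]
print(policy)  # expected: [1, 1, 1, 1]
```

The planning loop is what makes this model-based in the workshop's sense: because the agent stores (s, a) -> (s', r) transitions, it can re-propagate value through the stored model after the environment changes, instead of waiting to re-experience every transition as a purely model-free learner must.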