Nowadays, deep learning models such as BERT have driven much of the recent progress in Natural Language Processing. The core idea is to learn a language model from a large corpus. Despite their success on many NLP tasks, such as question answering, deep learning models still fail to truly understand human language; the "Chinese Room" thought experiment illustrates why.
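As a small illustration of what "learning a language model from a large corpus" buys you, here is a sketch assuming the Hugging Face `transformers` package and the public `bert-base-uncased` checkpoint (weights download on first use). The model completes masked words purely from distributional statistics, with no access to the things the words refer to.

```python
# Sketch: BERT as a masked language model. It predicts plausible tokens from
# co-occurrence statistics learned on a large corpus -- pattern completion,
# which the Chinese Room argument says is not the same as understanding.
# Assumes `pip install transformers torch`.
from transformers import pipeline

unmasker = pipeline("fill-mask", model="bert-base-uncased")
for pred in unmasker("The capital of France is [MASK]."):
    # Each prediction is a token string with a probability score.
    print(f"{pred['token_str']!r}: {pred['score']:.3f}")
```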
In my opinion, people learn language by understanding the world: the human mind holds a model of language and also a model of the physical and mental world. This echoes Google's introduction of the Knowledge Graph: "things, not strings".
Generally speaking, learning a language amounts to developing an understanding of the world. Grounding symbols in real-world objects is a prerequisite for natural language understanding, and a step toward human-level intelligence, or AGI.
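To make "grounding symbols in real-world objects" concrete, here is a minimal toy sketch: words and percepts live in a shared feature space, and a symbol resolves to whichever observed object it matches best. All names and feature vectors below are hypothetical, invented for illustration.

```python
# Minimal toy sketch of symbol grounding: map a word to the observed object
# whose perceptual features best match the word's prototype. Toy data only.
import numpy as np

# "Perceptual" side: objects observed as RGB-like feature vectors.
objects = {
    "apple": np.array([0.9, 0.1, 0.1]),  # reddish
    "leaf":  np.array([0.1, 0.8, 0.2]),  # greenish
    "ocean": np.array([0.1, 0.2, 0.9]),  # bluish
}

# "Symbolic" side: word meanings as prototypes in the same feature space.
word_prototypes = {
    "red":   np.array([1.0, 0.0, 0.0]),
    "green": np.array([0.0, 1.0, 0.0]),
    "blue":  np.array([0.0, 0.0, 1.0]),
}

def ground(word: str) -> str:
    """Resolve a word to the observed object closest to its prototype."""
    proto = word_prototypes[word]
    return min(objects, key=lambda name: np.linalg.norm(objects[name] - proto))

print(ground("red"))   # -> apple
print(ground("blue"))  # -> ocean
```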
Keywords: Symbol Grounding, Natural Language Grounding, World Model, Embodied Learning, Cognitive Linguistics
Papers:
Language Grounding in Virtual Worlds
- Why Build an Assistant in Minecraft?
- BabyAI: First Steps Towards Grounded Language Learning With a Human In the Loop
- A Computational Theory of Grounding in Natural Language Conversation
- Extending Machine Language Models toward Human-Level Language Understanding
Visual Question Answering / Multimodal ML
- Grounded Semantic Role Labeling
- Neural-Symbolic VQA: Disentangling Reasoning from Vision and Language Understanding
- TVQA+: Spatio-Temporal Grounding for Video Question Answering
- Visual Genome
Neural-Symbolic Methods
Compositional and disentangled representations
- Reconciling deep learning with symbolic artificial intelligence: representing objects and relations
- β-VAE: Learning Basic Visual Concepts with a Constrained Variational Framework
Cognitive Science
- The Symbol Grounding Problem
- Modularity of Mind
- Is the mind really modular?
- Building Machines That Learn and Think Like People
Courses:
- Language Grounding to Vision and Control
- Grounded Natural Language Processing
- Grounded Language for Robotics
- Ten Lectures on Cognitive Linguistics by George Lakoff
Books:
- Metaphors We Live By
Workshops:
Web:
- For AI to make a breakthrough, it needs a marriage of cognitive science and engineering
- High-quality resources for computational cognitive science (IMPORTANT!)
- Neureality
- Yann LeCun - How does the brain learn so much so quickly?
- What is wrong with convolutional neural nets
- A systems neuroscience approach to building AGI - Demis Hassabis
- Sam Gershman's advice for beginners in computational neuroscience
Tutorial:
Tasks:
Active Researchers:
- George Lakoff (UCB)
- Mohit Bansal (UNC)
- Igor Labutov (LAER AI)
- Bishan Yang (LAER AI)
- Yoshua Bengio (MILA)
- Joyce Y. Chai (UMich)
- David Traum (USC)
- Raymond J. Mooney (UT Austin)
- Yonatan Bisk (CMU)
- Marta Garnelo (DeepMind)
- Mark B. Ring
- Jiayuan Mao