#97 SREEJAN KUMAR - Human Inductive Biases in Machines from Language

EPISODE · Jan 28, 2023 · 24 MIN

from Machine Learning Street Talk (MLST)

Research has shown that humans possess strong inductive biases which enable them to learn quickly and generalize. To instill these useful human inductive biases into machines, Sreejan Kumar presented a paper at the NeurIPS conference which won an Outstanding Paper award: Using Natural Language and Program Abstractions to Instill Human Inductive Biases in Machines. The paper uses a controlled stimulus space of two-dimensional binary grids to define the space of abstract concepts that humans have, and a feedback loop of collaboration between humans and machines to understand the differences between human and machine inductive biases.

Making machines more human-like matters for collaborating with them and understanding their behaviour. Synthesizing discrete programs that run on a Turing-machine computational model, rather than on a neural-network substrate, offers promise for the future of artificial intelligence. Neural networks and program induction should both be explored to get a well-rounded view of intelligence: one that works across multiple domains and computational substrates, and that can acquire a diverse set of capabilities. Natural language understanding in models can also be improved by instilling human language biases and programs into AI models.

Sreejan used an experimental framework consisting of two dual task distributions, one generated from human priors and one from machine priors, to understand the differences between human and machine inductive biases. Furthermore, he demonstrated that compressive abstractions can capture the essential structure of the environment, supporting more human-like behaviour. This means that emergent language-based inductive priors can be distilled into artificial neural networks, and AI models can be aligned to us, the world, and indeed our values. Humans possess strong inductive biases which enable them to quickly learn to perform various tasks.
This is in contrast to neural networks, which lack these inductive biases and struggle to learn them empirically from observational data; as a result, they have difficulty generalizing to novel environments due to their lack of prior knowledge. Sreejan's results showed that when guided with representations from language and programs, the meta-learning agent not only improved performance on task distributions humans are adept at, but also decreased performance on control task distributions where humans perform poorly. This indicates that the abstraction supported by these representations, whether in the substrate of language or of a program, is key to the development of aligned artificial agents with human-like generalization, capabilities, aligned values and behaviour.

References

- Using Natural Language and Program Abstractions to Instill Human Inductive Biases in Machines [Kumar et al / NeurIPS] https://openreview.net/pdf?id=buXZ7nIqiwE
- Core Knowledge [Elizabeth S. Spelke / Harvard] https://www.harvardlds.org/wp-content/uploads/2017/01/SpelkeKinzler07-1.pdf
- The Debate Over Understanding in AI's Large Language Models [Melanie Mitchell] https://arxiv.org/abs/2210.13966
- On the Measure of Intelligence [Francois Chollet] https://arxiv.org/abs/1911.01547
- ARC challenge [Chollet] https://github.com/fchollet/ARC
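To make the grid-based setup concrete, here is a minimal toy sketch of the idea discussed in the episode: concepts as two-dimensional binary grids, with a human-prior distribution that has compressible abstract structure and a control distribution that does not. All names and the choice of left-right symmetry as the "abstraction" are our own illustrative assumptions, not the paper's actual task design or code.

```python
import numpy as np

rng = np.random.default_rng(0)

def human_prior_concept(size=4):
    """Sample a left-right symmetric binary grid: a stand-in for the
    compressible, abstract structure that human priors favour.
    (Hypothetical construction, not the paper's generator.)"""
    half = rng.integers(0, 2, size=(size, size // 2))
    return np.concatenate([half, half[:, ::-1]], axis=1)

def machine_prior_concept(size=4):
    """Sample an unstructured random binary grid: a stand-in for a
    control distribution lacking human-friendly abstractions."""
    return rng.integers(0, 2, size=(size, size))

def description_length(grid):
    """Crude compressibility proxy: number of free cells needed to
    describe the grid, exploiting mirror symmetry when it holds."""
    if np.array_equal(grid, grid[:, ::-1]):
        return grid.size // 2  # only half the cells are free
    return grid.size

# Human-prior concepts compress; unstructured ones usually do not.
print(description_length(human_prior_concept()))   # 8 for a 4x4 grid
print(description_length(machine_prior_concept()))
```

A compressive abstraction like the symmetry check above is one way to see why human-prior tasks are "easier": fewer bits describe each concept, so fewer examples are needed to infer it.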
