Pieter Abbeel is one of the rock stars of modern AI in robotics and reinforcement learning
(RL). He's a professor in the EECS Department at UC Berkeley, and much of the most
influential recent research in robotics and RL has come out of his lab. He also runs a very
interesting podcast called The Robot Brains. In the most recent episode (Aug. 3), he talks
with John Schulman, who was his Ph.D. student several years ago and then went on to co-
found OpenAI, which everyone knows today thanks to the mind-boggling capabilities of
ChatGPT and GPT-4, which have dazzled hundreds of millions of users around the world within
a few months. Schulman was a pioneer of many modern advances in RL and meta-learning,
and his team was centrally involved in designing ChatGPT.


In this episode, Abbeel and Schulman have a very interesting conversation about the
technology behind ChatGPT and where it is heading. They discuss both the
amazing capabilities of the models and their limitations. Generalization, for example, is
both a strength and a limitation: we have all seen how these models can generalize to write
surprisingly impressive novel text, yet they also show very limited ability to reason and do
math, and sometimes repeat the same joke over and over again! One of the fascinating
possibilities they discuss concerns the human input in reinforcement learning from human
feedback (RLHF), which is currently central to training these models. Could it be automated?
Perhaps a model could generate many candidate outputs, as AlphaGo does, and then rate
them itself using a GAN-like architecture?
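
To make the idea concrete, here is a minimal sketch of the reward-modeling step at the
heart of RLHF, written in PyTorch. All names, sizes, and shapes are illustrative
assumptions for this post, not OpenAI's actual implementation; the point is just that a
small scoring network trained on preference rankings is what the "rater", human or
automated, would feed.

```python
import torch
import torch.nn as nn

class RewardModel(nn.Module):
    """Scores an embedding of a prompt+response pair with a single scalar."""
    def __init__(self, embed_dim: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(embed_dim, 64),
            nn.ReLU(),
            nn.Linear(64, 1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x).squeeze(-1)

def preference_loss(preferred_scores, rejected_scores):
    # Pairwise (Bradley-Terry style) loss used in reward modeling:
    # push the preferred response's score above the rejected one's.
    return -torch.log(torch.sigmoid(preferred_scores - rejected_scores)).mean()

# Toy usage: random vectors stand in for encoded prompt+response pairs.
reward_model = RewardModel()
preferred = torch.randn(8, 128)  # responses the rater ranked higher
rejected = torch.randn(8, 128)   # responses the rater ranked lower
loss = preference_loss(reward_model(preferred), reward_model(rejected))
loss.backward()
```

In full RLHF, the reward model's scores then drive a policy-gradient update of the
language model (e.g. with PPO, an algorithm Schulman himself introduced); the speculation
above amounts to replacing the human rankings with machine-generated ones.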


Another interesting point they discuss is the role of academia versus industry. Abbeel asks
the all-important question of whether research requiring such staggering amounts of resources
can still be done in academia. Schulman answers that the two could be complementary: while
industry focuses on commercial products, basic and fundamental research is better done in
academia, e.g., gaining a better understanding of how such monster models work. He says
that, given a second chance, he would still do a Ph.D. in academia to understand things at a
fundamental level before moving on to creating products in industry.


Schulman thinks that the absolute top-performing models may remain in industry because of
the strong commercial drive, but that open-sourcing models such as Meta's Llama 2
could create great opportunities. They also discuss the possibility of future compressed
versions of these models, trained with small amounts of data, running on small devices. This
is the future of bringing the full potential of LLMs to everyday life, and it is where Embedl's
technology is leading the way!
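
One established route to such compressed models is knowledge distillation, where a small
"student" network is trained to mimic a large "teacher". The sketch below is a deliberately
tiny, self-contained illustration of that idea in PyTorch; the toy models, layer sizes, and
names are assumptions made for this example, not anyone's production pipeline.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Toy stand-ins: a larger "teacher" and a much smaller "student".
teacher = nn.Sequential(nn.Linear(32, 256), nn.ReLU(), nn.Linear(256, 10))
student = nn.Sequential(nn.Linear(32, 16), nn.ReLU(), nn.Linear(16, 10))

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    # Match the student's softened output distribution to the teacher's
    # (Hinton et al.'s classic formulation, scaled by temperature squared).
    soft_targets = F.softmax(teacher_logits / temperature, dim=-1)
    log_probs = F.log_softmax(student_logits / temperature, dim=-1)
    return F.kl_div(log_probs, soft_targets, reduction="batchmean") * temperature ** 2

x = torch.randn(8, 32)            # a small batch of inputs
with torch.no_grad():
    teacher_logits = teacher(x)   # the teacher only provides targets
loss = distillation_loss(student(x), teacher_logits)
loss.backward()                   # gradients flow into the student alone
```

Techniques like this, combined with quantization and pruning, are what make it plausible
to run distilled LLMs on phones and embedded hardware.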
