This post is a summary of my notes from the Feb 11, 2021 discussion on Clubhouse titled Recent Breakthroughs in AI. The talk was moderated by Russell Kaplan (Scale AI) and the panel included Richard Socher (You.com, Salesforce Research), Justin Johnson (University of Michigan, Facebook), and Andrej Karpathy (Tesla). The discussion mostly looked at the novelty of transformer-based multimodal models such as CLIP, which have shown interesting results.
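CLIP's core idea is to train an image encoder and a text encoder so that embeddings of matching image–caption pairs are close (high cosine similarity) while mismatched pairs are far apart; at inference time, classification becomes picking the caption most similar to the image. The toy sketch below illustrates only that retrieval step with hand-made embeddings standing in for real encoder outputs (the vectors and captions are hypothetical, not from CLIP):

```python
import math

def cosine_similarity(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Toy stand-ins for CLIP encoder outputs (illustrative values only).
image_emb = [0.9, 0.1, 0.0]
caption_embs = {
    "a photo of a dog": [0.8, 0.2, 0.1],
    "a photo of a cat": [0.1, 0.9, 0.2],
}

# Zero-shot "classification": pick the caption nearest the image embedding.
best = max(caption_embs, key=lambda c: cosine_similarity(image_emb, caption_embs[c]))
```

In the real model the encoders are trained jointly with a contrastive loss over large batches of image–text pairs; this sketch only shows why shared embedding spaces make multimodal retrieval simple.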
Below are some of the ideas I found really interesting.
- Data is king! Getting better data might be the single biggest bang for buck in terms of performance improvement.
- Data curation toolkits and MLOps (and companies in this space) will be increasingly important.
- Transformers are unifying the deep learning problem/solution space, i.e. transformer-based model architectures can be effectively applied to multiple domains, e.g. image, text, and speech.
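One way to see this unification: a Vision-Transformer-style model treats an image as a sequence of patch "tokens", so the same attention machinery built for text sequences applies unchanged. The `patchify` helper below is a hypothetical illustration of that tokenization step, not code from any of the models discussed:

```python
def patchify(image, patch_size):
    """Split a 2-D image (a list of rows) into flattened patch vectors.

    Each patch becomes one token, so a transformer designed for
    token sequences can consume images without architectural changes.
    Assumes image dimensions are divisible by patch_size.
    """
    height, width = len(image), len(image[0])
    tokens = []
    for top in range(0, height, patch_size):
        for left in range(0, width, patch_size):
            # Flatten the patch_size x patch_size patch into one vector.
            patch = [image[top + i][left + j]
                     for i in range(patch_size)
                     for j in range(patch_size)]
            tokens.append(patch)
    return tokens

# A 4x4 "image" becomes four 2x2 patch tokens, each of length 4.
image = [[r * 4 + c for c in range(4)] for r in range(4)]
tokens = patchify(image, 2)
```

Real implementations project each patch through a learned linear embedding and add position encodings, but the sequence-of-tokens framing is the key idea.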
- Models that can be parallelized and optimized for today’s hardware will have more impact.
- New research frontiers? Models that learn continuously; models capable of logic/reasoning; new objective functions and application areas; new approaches to data labeling; a model-first approach to benchmark design; new approaches to creating massive datasets (e.g. via simulations).