Top 10 Machine Learning and Design Insights from Google IO - Issue #3
Google IO is Google's annual developer conference, where they announce new products, projects, and developer tools. I attended several of the sessions, and this newsletter contains my top 10 insights at the intersection of Machine Learning and HCI/Design.
LaMDA - Open Domain Conversational Agent
Google is experimenting with language models for open domain conversation - the Language Model for Dialogue Applications, aka LaMDA. This is interesting because open domain conversation is notoriously hard. LaMDA draws on existing data corpora across multiple domains and generates responses. Once the rough edges are smoothed out (it's still in the research phase and has quirks), Google hopes to apply it to products like Google Assistant, Search, and Workspace, making them more accessible and easier to use.
Building open domain conversational agents - agents that can hold human-level conversations for extended turns, on any topic, without degrading - **is notoriously hard**. Such an agent must continually learn, provide engaging content, and stay well-behaved. As Google mentions, it is still early days, and LaMDA still makes mistakes in its generated responses - logical errors (e.g. about playing with the moon), factually incorrect responses, etc. Still, the approach, and especially its applications, appear promising.
Multi Modal Search
With neural networks, an interesting trend is the use of multi modal models that can embed data across multiple modalities (text in multiple languages, images, speech, videos, routes, etc.) into a shared semantic space. If we can do this, we can enable interesting use cases such as cross modal retrieval (e.g. show me all parts of a video that are related to this text query). Or, as Google mentions, multi modal models allow new types of interesting queries - "show me a route with beautiful mountain views" or "show me the part in the video where the lion roars at sunset".
Google talked about an implementation - the Multitask Unified Model (MUM) - based on a transformer architecture but, per Google, 1000x more powerful than BERT. MUM can acquire deep knowledge of the world, generate language, is trained on 75+ languages, and understands multiple modalities.
I really believe explorations in this sort of multi modal representation learning will become the standard for building intelligent recommendation systems. In theory, it lets you learn about the world by modelling joint spaces (e.g. two brands might be equivalent, two parks might be comparable, some things are changeable/moveable while others aren't, etc.), and it enables you to use learned representations for new use cases (e.g. conversational retrieval across multi modal datasets).
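To make the cross modal retrieval idea concrete, here is a minimal sketch (my own illustration, not Google's implementation). It assumes a multimodal model has already embedded video segments and a text query into the same semantic space - retrieval then reduces to nearest-neighbor search by cosine similarity. All the vectors below are made up for illustration:

```python
import numpy as np

# Toy example: assume a multimodal model has already embedded items
# (video segments, images, routes) AND a text query into the SAME
# semantic space. These vectors are random stand-ins for real embeddings.
rng = np.random.default_rng(42)
segment_embeddings = rng.normal(size=(5, 8))  # 5 video segments, 8-dim space

# Simulate a text query whose embedding lands near segment 3
query_embedding = segment_embeddings[3] + rng.normal(scale=0.1, size=8)

def cosine_similarity(query, items):
    """Cosine similarity between a query vector and each row of items."""
    query = query / np.linalg.norm(query)
    items = items / np.linalg.norm(items, axis=1, keepdims=True)
    return items @ query

scores = cosine_similarity(query_embedding, segment_embeddings)
best = int(np.argmax(scores))  # index of the most relevant segment
```

In practice, the embeddings would come from a trained multimodal encoder, and the brute-force scan would be replaced with an approximate nearest neighbor index - but the core retrieval step is just this similarity search in the shared space.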
TensorFlow.js and TensorFlow Lite - Improved Performance, TFLite Integration
I have worked with TensorFlow.js extensively in the past and I am quite excited about the recent updates.
TensorFlow Lite models will now run directly in the browser. With this option, you can unify your mobile and web ML development with a single stack.
TensorFlow Lite is bundled with Google Play Services, which means your Android app's APK file is smaller.
Profiling tools. TensorFlow Lite includes built-in support for Systrace, integrating seamlessly with Perfetto for Android 10.
Model Optimization Toolkit. The TensorFlow Model Optimization Toolkit is a suite of tools for optimizing ML models for deployment and execution.
ML Kit. ML Kit brings Google's machine learning expertise to mobile developers in a powerful and easy-to-use package, with solutions optimized to run on device, making your iOS and Android apps more engaging, personalized, and helpful. ML Kit (for Android and iOS) now allows you to host your models so that they can be updated without updating your app!
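As a side note, the core trick behind post-training quantization - one of the techniques the Model Optimization Toolkit automates - is mapping float32 weights to 8-bit integers plus a scale and zero point. Here is a minimal numpy sketch of that idea (illustrative only; in practice the toolkit applies this for you via the TFLite converter):

```python
import numpy as np

# Post-training quantization maps float32 weights onto 8-bit integers.
# Affine scheme: q = round(w / scale) + zero_point
weights = np.array([-1.2, -0.3, 0.0, 0.5, 2.1], dtype=np.float32)

w_min, w_max = weights.min(), weights.max()
scale = (w_max - w_min) / 255.0          # spread the range over 256 levels
zero_point = int(round(-w_min / scale))  # the integer that represents 0.0

quantized = np.clip(np.round(weights / scale) + zero_point, 0, 255).astype(np.uint8)

# Dequantize to see the (small) error the compression introduces
dequantized = (quantized.astype(np.float32) - zero_point) * scale
max_error = float(np.abs(weights - dequantized).max())
```

The payoff is roughly a 4x smaller model (1 byte per weight instead of 4) at a small accuracy cost, which is why this kind of optimization matters so much for on-device deployment.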
ML and Health
There were several interesting ML and Health Applications.
Using AI to improve breast cancer screening. Google, in collaboration with clinicians, is exploring how artificial intelligence could support radiologists in spotting the signs of breast cancer more accurately.
Improving tuberculosis screening using AI. To help catch the disease early and work toward eventually eradicating it, Google researchers developed an AI-based tool that builds on their existing work in medical imaging to identify potential TB patients for follow-up testing.
Using the Android camera to identify skin conditions. The user uploads 3 photos of the affected area, and a model provides a list of possible conditions with links to learn more. The app has been CE-marked as a Class I medical device in the EU.
Vertex AI - Managed ML Platform
Google introduced a new product - Vertex AI - and describes it as a managed machine learning (ML) platform that allows companies to accelerate the deployment and maintenance of artificial intelligence (AI) models. The goal seems to be streamlining tasks within the AI product lifecycle (which were previously scattered across multiple GCP services), with a special focus on ML operations (MLOps).
They claim that Vertex AI requires nearly 80% fewer lines of code to train a model versus competing platforms, enabling data scientists and ML engineers across all levels of expertise to deploy more useful AI applications, faster.
Given that the MLOps space is still fairly young, I am curious to see how it gets standardized. As it stands, there are many different ways to run ML pipeline tasks even within GCP (e.g. bare metal Compute Engine instances, managed Kubernetes, TensorFlow Extended, Kubeflow Pipelines, the GCP AI Platform, etc.), and it can be confusing for new users.
And more ...
There were more announcements that touched on:
Automated Design Palettes/Elements for Android
Giving users ML and Making it Controllable
Privacy - Android Privacy, Privacy Sandbox, Differential Privacy Library
Strides in Accessibility, Inclusivity and Responsible AI
Wearables - Google WearOS + Samsung Tizen + Fitbit
See the full blog post for more: https://victordibia.com/blog/google-io-2021/