
Deep Learning Models that Write Code - Issue #6

Victor Dibia
Consider the following scenario: you write a comment, “create a function that fetches all machine learning tweets and saves them to a file”, and a deep learning model generates a full Python function that achieves this goal (see screenshot below). It imports a suitable library and creates the required boilerplate authentication code and call structure, giving you decent functionality with minimal effort!
This research area, frequently referred to as deep learning for program synthesis, has recently become quite active, with useful applications in improving developer productivity (e.g., reducing the time to implement features), improving the experience of developers exploring a new language or framework, and even helping novice developers understand existing code.

A screenshot of GitHub CoPilot in action. Given a short description of the task, CoPilot generates a full function that is useful for solving the task.
GitHub Copilot · Your AI pair programmer
Large Language Models and Program Synthesis
Simply put, a language model is a statistical model that learns the distribution, or probabilities, of words in a sequence. It turns out that if we can build such a model with high fidelity, we can solve a few interesting tasks. For example, if we know which word is likely to occur after some sequence of words, we can implement useful functionality like email autocomplete (e.g., given the sequence “Have a great ”, we can predict that the next likely word is “day”).
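The autocomplete idea above can be sketched with the simplest possible language model, a bigram model that counts which word follows which. This is only a toy illustration of the statistical intuition (the toy corpus and function names are mine, not from any real system); LLMs learn vastly richer distributions with neural networks.

```python
from collections import Counter, defaultdict

def train_bigram_model(corpus):
    """Count, for each word, how often each other word follows it."""
    counts = defaultdict(Counter)
    for sentence in corpus:
        tokens = sentence.lower().split()
        for prev, nxt in zip(tokens, tokens[1:]):
            counts[prev][nxt] += 1
    return counts

def predict_next(counts, word):
    """Return the most frequently observed word after `word`, or None."""
    if word not in counts:
        return None
    return counts[word].most_common(1)[0][0]

# Toy corpus standing in for a large text dataset (illustrative only).
corpus = [
    "have a great day",
    "have a great day",
    "have a great weekend",
]
model = train_bigram_model(corpus)
print(predict_next(model, "great"))  # "day" is seen more often than "weekend"
```

Replacing these frequency counts with a neural network trained on billions of tokens is, at a high level, what turns this toy into a large language model.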
When these statistical models are derived using large neural networks with billions of parameters (hence the term large language models, or LLMs), the results and application areas are even more impressive. Results from transformer-based model architectures like BERT, GPT, etc., show that these models excel at several complex tasks: they can mimic creative writing, predict sentiment, identify topics within sentences from just a few examples, meaningfully summarize lengthy documents, translate between languages, and more.
GPT-3 Powers the Next Generation of Apps
Given that code is similar to natural language, recent research has shown that LLMs can also be trained to learn a distribution over code tokens and applied to several code understanding and code generation tasks.
Code Understanding and Generation Tasks: Datasets and Metrics
In their paper, Lu et al. [1] identify ten general tasks in the area of code understanding and generation. They also highlight a set of datasets useful for addressing these tasks (including some datasets that they curate themselves) and discuss the metrics used to evaluate each task.
A list of tasks:
  1. Clone detection (semantic similarity)
  2. Defect detection (predict if code contains bugs)
  3. Cloze test (predict masked token given code snippet)
  4. Code completion (predict next tokens given some context)
  5. Code repair (produce bug free version of code)
  6. Code-to-code translation (translate from one programming language to another)
  7. Natural language code search (retrieve code based on text query, predict if code answers a NL text query)
  8. Text-to-code generation (explicitly generate code given some natural language query)
  9. Code summarization (generate natural language summaries or comments given some code)
  10. Documentation translation (translate code documentation from one natural language to another).
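Several of these tasks reduce to token prediction over code. As a toy illustration of the cloze test (task 3), the sketch below fills a masked token using bigram statistics over a tiny tokenized code corpus. This is only a minimal sketch of the task format (the corpus and helper names are mine); real benchmarks use pretrained models over large code corpora.

```python
from collections import Counter, defaultdict

def train_code_bigrams(snippets):
    """Learn which token tends to follow each token in the code corpus."""
    counts = defaultdict(Counter)
    for snippet in snippets:
        tokens = snippet.split()
        for prev, nxt in zip(tokens, tokens[1:]):
            counts[prev][nxt] += 1
    return counts

def fill_mask(counts, tokens, mask="<mask>"):
    """Replace each masked token with the likeliest token given its left neighbor."""
    filled = list(tokens)
    for i, tok in enumerate(filled):
        if tok == mask and i > 0 and filled[i - 1] in counts:
            filled[i] = counts[filled[i - 1]].most_common(1)[0][0]
    return filled

# Tiny "training corpus" of pre-tokenized Python (illustrative only).
corpus = [
    "def add ( a , b ) : return a + b",
    "def sub ( a , b ) : return a - b",
]
model = train_code_bigrams(corpus)
print(fill_mask(model, "def mul ( a , b ) : <mask> a * b".split()))
```

Here the model fills the mask with `return`, since that is the only token it has ever seen after `:`. The same next-token framing, scaled up to an LLM, underlies code completion (task 4) as well.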
I have written a review/summary of the Lu et al. [1] paper, which can be found here -
CodeXGLUE: A Machine Learning Benchmark Dataset for Code Understanding and Generation | Paper Review | Victor Dibia
Code Generation/Completion - Codex, AlphaCode
Perhaps the most interesting task in the list above relates to the opening use case of this newsletter post - code generation/completion. There has been very interesting progress in this area, starting with the Codex model from OpenAI [2], which powers GitHub Copilot, and more recently the AlphaCode model from DeepMind [3], a transformer-based model for solving competitive programming problems.
Codex [2] is the program synthesis model from OpenAI that powers GitHub Copilot. While Codex focuses on general-purpose program completion, AlphaCode focuses specifically on solving competitive programming contest problems. While Codex is based on a multilayer decoder-only architecture (GPT-3), AlphaCode is based on an encoder-decoder architecture. A recent paper [4] from Google evaluating LLMs for code generation also uses a decoder-only setup.
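The decoder-only vs. encoder-decoder distinction can be made concrete with attention masks. In a decoder-only model like GPT-3/Codex, every position may attend only to earlier positions (a causal mask), while an encoder attends bidirectionally. A minimal sketch of the two mask patterns (plain Python, no framework; the function names are mine):

```python
def causal_mask(n):
    """Decoder-style mask: position i may attend to positions 0..i only."""
    return [[1 if j <= i else 0 for j in range(n)] for i in range(n)]

def bidirectional_mask(n):
    """Encoder-style mask: every position may attend to every position."""
    return [[1] * n for _ in range(n)]

# A decoder-only model applies the lower-triangular causal mask everywhere.
for row in causal_mask(4):
    print(row)

# In an encoder-decoder model like AlphaCode, the encoder reads the problem
# description under a bidirectional mask, while the decoder generates the
# solution under a causal mask (plus cross-attention to the encoder outputs).
```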
Architecture overview for AlphaCode. Source: DeepMind
I have written a review of the AlphaCode paper, which can be found here -
AlphaCode: Competition-Level Code Generation with Transformer Based Architectures | Paper Review | Victor Dibia
The space is increasingly interesting! I have used Copilot over the last four months and can attest that it actually improves my productivity (hint: it works better than having to consult Stack Overflow from time to time). There are still many open issues, e.g., understanding when these models fail and why (e.g., due to misalignment), improved metrics for evaluating these models, the impact of these tools on developer learning/education, and the general societal impacts of LLMs.
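On the metrics point: one measure already in use is pass@k from the Codex paper [2], which scores functional correctness rather than text similarity. Generate n samples per problem, count the c samples that pass the unit tests, and estimate the probability that at least one of k randomly drawn samples is correct. A sketch of the unbiased estimator from that paper:

```python
from math import comb

def pass_at_k(n, c, k):
    """Unbiased pass@k estimator (Codex paper [2]).
    n: total samples generated, c: samples passing the unit tests,
    k: evaluation budget. Returns P(at least one of k draws is correct)."""
    if n - c < k:
        # Fewer incorrect samples than the budget: a correct one is guaranteed.
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)

# e.g., 200 samples per problem, 10 of which pass the tests, evaluated at k=1
print(pass_at_k(200, 10, 1))  # 1 - 190/200 = 0.05
```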
There is an ICLR 2022 workshop track focused on Deep Learning for Code which should be really insightful!
[1] Lu, S., Guo, D., Ren, S., Huang, J., Svyatkovskiy, A., Blanco, A., Clement, C., Drain, D., Jiang, D., Tang, D. and Li, G., 2021. Codexglue: A machine learning benchmark dataset for code understanding and generation. arXiv preprint arXiv:2102.04664.
[2] Chen, M., Tworek, J., Jun, H., Yuan, Q., Pinto, H.P.D.O., Kaplan, J., Edwards, H., Burda, Y., Joseph, N., Brockman, G. and Ray, A., 2021. Evaluating large language models trained on code. arXiv preprint arXiv:2107.03374.
[3] Li, Y., et al., 2022. Competition-level code generation with AlphaCode. arXiv preprint arXiv:2203.07814.
[4] Austin, J., Odena, A., Nye, M., Bosma, M., Michalewski, H., Dohan, D., ... and Sutton, C., 2021. Program synthesis with large language models. arXiv preprint arXiv:2108.07732.
Victor Dibia (@vykthur). Some organized thoughts on research, code, and topics at the intersection of applied machine learning and human computer interaction.