The BigCode initiative’s aim is to develop state-of-the-art large language models (LLMs) for code in an open and responsible way.
Code LLMs enable the completion and synthesis of code from other code and natural-language descriptions, and let users work across a wide range of domains, tasks, and programming languages.
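For illustration, here is a minimal sketch of what code completion with such a model looks like, using the Hugging Face transformers library and an openly available code model (CodeGen is used here purely as a stand-in, since BigCode’s own model had not yet been released):

```python
# Minimal sketch: completing code from a partial definition with a code LLM.
# The model checkpoint below is a stand-in, not BigCode's model.
from transformers import pipeline

generator = pipeline("text-generation", model="Salesforce/codegen-350M-mono")

# A docstring-style prompt; the model continues the function body.
prompt = 'def fib(n):\n    """Return the n-th Fibonacci number."""\n'
completion = generator(prompt, max_new_tokens=64)[0]["generated_text"]
print(completion)
```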
The initiative is led by ServiceNow Research, which conducts research to future-proof AI-powered experiences, and Hugging Face, a community and data platform that provides tools for building, training, and deploying ML models based on open-source code and technologies.
BigCode is inviting AI researchers to collaborate on a representative evaluation suite for code LLMs covering a diverse set of tasks and programming languages, on responsible development and governance of data sets for code LLMs, and on faster training and inference methods for LLMs.
“The first goal of BigCode is to develop and release a data set large enough to train a state-of-the-art language model for code. We’ll ensure that only files from repositories with permissive licenses go into the data set,” ServiceNow Research wrote in a blog post.
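As a rough illustration of the kind of license-based filter the quote describes, the sketch below keeps only files from repositories on a permissive allow-list; the license identifiers and the repository structure are illustrative assumptions, not BigCode’s actual pipeline:

```python
# Hedged sketch: keep only files from permissively licensed repositories.
# The allow-list and data layout are assumptions for illustration.
PERMISSIVE_LICENSES = {"MIT", "Apache-2.0", "BSD-3-Clause"}

def permissively_licensed(repos):
    """Yield files only from repos whose license is on the allow-list."""
    for repo in repos:
        if repo.get("license") in PERMISSIVE_LICENSES:
            yield from repo.get("files", [])

# Example: only the MIT-licensed repository's file survives the filter.
repos = [
    {"license": "MIT", "files": ["a.py"]},
    {"license": "GPL-3.0", "files": ["b.py"]},
]
print(list(permissively_licensed(repos)))  # ['a.py']
```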
“With that data set, we’ll train a 15-billion-parameter language model for code using ServiceNow’s in-house GPU cluster. With an adapted version of Megatron-LM, we’ll train the LLM on the distributed infrastructure.”
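Megatron-LM builds on PyTorch’s distributed primitives, adding tensor and pipeline parallelism for models at this scale. As a toy-scale sketch of the data-parallel substrate only (not Megatron-LM itself, and omitting its model sharding), training on a GPU cluster looks roughly like this:

```python
# Hedged, toy-scale sketch of distributed data-parallel training in PyTorch.
# Launch with: torchrun --nproc_per_node=<num_gpus> train_sketch.py
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def main():
    # torchrun sets LOCAL_RANK; one process drives each GPU.
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    # Stand-in model: a single linear layer in place of a 15B-parameter LLM.
    model = DDP(torch.nn.Linear(1024, 1024).cuda(), device_ids=[local_rank])
    optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

    for _ in range(100):
        batch = torch.randn(8, 1024, device=f"cuda:{local_rank}")
        loss = model(batch).pow(2).mean()  # placeholder objective
        optimizer.zero_grad()
        loss.backward()                    # DDP all-reduces gradients here
        optimizer.step()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```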
Additional details about the project are available here.