Training a 70B model from scratch: open-source tools, evaluation datasets, and learnings

June 25, 2024

Introduction

Earlier this year, we pre-trained and fine-tuned a 70B-parameter model that outperforms zero-shot GPT-4o on a range of reasoning- and coding-related benchmarks and datasets. Our fine-tuned model, pre-trained on 2T tokens, approaches the performance of a fine-tuned Llama 3 70B, which was pre-trained on more than seven times as much data.

Figure: Accuracy of our model (Imbue 70B), Llama 3 70B, and GPT-4o zero-shot, without chain-of-thought. For visualization, the x-axis starts at 0.5. Error bars indicate standard error of the mean across questions.

Because we evaluated GPT-4o zero-shot without chain-of-thought, its performance above does not reflect the best possible scores it can achieve on these datasets. However, this is the most faithful comparison to the fine-tuned 70B model evaluations, which also do not include chain-of-thought.
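For reference, here is a minimal sketch of how per-dataset accuracy and its standard error of the mean across questions can be computed, assuming binary per-question correctness scores (the values below are made up, not our results):

```python
import numpy as np

# Hypothetical per-question correctness for one dataset (1 = correct, 0 = incorrect).
correct = np.array([1, 1, 0, 1, 1, 0, 1, 1, 1, 0])

accuracy = correct.mean()
# Standard error of the mean across questions: sample std / sqrt(number of questions).
sem = correct.std(ddof=1) / np.sqrt(len(correct))

print(f"accuracy = {accuracy:.3f} +/- {sem:.3f}")
```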

Using our hyperparameter optimizer, CARBS, we scaled this system up to 70B parameters on our first attempt, with minimal training instability and no loss spikes. This involved training thousands of dense transformer models at a range of smaller sizes, each with grouped-query attention, SwiGLU activations, RMS normalization, and a custom tokenizer.
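As a hedged illustration of these components (a minimal PyTorch sketch with placeholder dimensions, not our actual architecture or hyperparameters), a pre-norm transformer block combining RMS normalization, SwiGLU, and grouped-query attention can be written as follows:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class RMSNorm(nn.Module):
    """RMS normalization: rescales activations by their root mean square."""

    def __init__(self, dim: int, eps: float = 1e-6):
        super().__init__()
        self.eps = eps
        self.weight = nn.Parameter(torch.ones(dim))

    def forward(self, x):
        return self.weight * x * torch.rsqrt(x.pow(2).mean(-1, keepdim=True) + self.eps)


class SwiGLU(nn.Module):
    """SwiGLU feed-forward: a SiLU-gated linear unit followed by a down projection."""

    def __init__(self, dim: int, hidden: int):
        super().__init__()
        self.gate = nn.Linear(dim, hidden, bias=False)
        self.up = nn.Linear(dim, hidden, bias=False)
        self.down = nn.Linear(hidden, dim, bias=False)

    def forward(self, x):
        return self.down(F.silu(self.gate(x)) * self.up(x))


class GroupedQueryAttention(nn.Module):
    """Grouped-query attention: several query heads share each key/value head.

    Positional embeddings (e.g., rotary) are omitted for brevity.
    """

    def __init__(self, dim: int, n_heads: int, n_kv_heads: int):
        super().__init__()
        self.n_heads, self.n_kv_heads = n_heads, n_kv_heads
        self.head_dim = dim // n_heads
        self.wq = nn.Linear(dim, n_heads * self.head_dim, bias=False)
        self.wk = nn.Linear(dim, n_kv_heads * self.head_dim, bias=False)
        self.wv = nn.Linear(dim, n_kv_heads * self.head_dim, bias=False)
        self.wo = nn.Linear(n_heads * self.head_dim, dim, bias=False)

    def forward(self, x):
        b, t, _ = x.shape
        q = self.wq(x).view(b, t, self.n_heads, self.head_dim).transpose(1, 2)
        k = self.wk(x).view(b, t, self.n_kv_heads, self.head_dim).transpose(1, 2)
        v = self.wv(x).view(b, t, self.n_kv_heads, self.head_dim).transpose(1, 2)
        # Replicate each key/value head across its group of query heads.
        rep = self.n_heads // self.n_kv_heads
        k, v = k.repeat_interleave(rep, dim=1), v.repeat_interleave(rep, dim=1)
        out = F.scaled_dot_product_attention(q, k, v, is_causal=True)
        return self.wo(out.transpose(1, 2).reshape(b, t, -1))


class TransformerBlock(nn.Module):
    """Pre-norm transformer block: attention and SwiGLU MLP, each with a residual."""

    def __init__(self, dim: int = 512, n_heads: int = 8, n_kv_heads: int = 2, hidden: int = 1536):
        super().__init__()
        self.attn_norm, self.mlp_norm = RMSNorm(dim), RMSNorm(dim)
        self.attn = GroupedQueryAttention(dim, n_heads, n_kv_heads)
        self.mlp = SwiGLU(dim, hidden)

    def forward(self, x):
        x = x + self.attn(self.attn_norm(x))
        return x + self.mlp(self.mlp_norm(x))


# Example forward pass on placeholder shapes (batch=2, sequence=16, dim=512).
y = TransformerBlock()(torch.randn(2, 16, 512))
```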

To help other teams train, scale, and evaluate models tailored to their own research and product goals, we’re releasing the tools that facilitated this work. The toolkit includes:

  - Clean, open-source evaluation datasets for models that reason and code
  - Infrastructure scripts and best practices for going from bare metal to high-performance training
  - CARBS, our cost-effective hyperparameter optimizer for scaling small experiments to large language models

For each of these tools, we expand on our process for creating and using them in the following blog posts:

1. Ensuring accurate model evaluations: Clean, open-source datasets for models that reason and code

We found that both open-source and closed models achieved nearly 100% accuracy on some datasets when evaluated only on good, unambiguous questions. We are sharing our updated datasets to enable other teams to easily evaluate their own models on code- and reasoning-related tasks. For more on why we selected these particular datasets, how we created the data, and the datasets themselves, see our detailed write-up on evaluations.
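To make the intended usage concrete, the sketch below shows one way a cleaned evaluation file could be scored zero-shot without chain-of-thought. The JSONL layout, field names, and the `answer_question` callable are hypothetical illustrations, not the released dataset format or an actual API:

```python
import json


def evaluate(dataset_path: str, answer_question) -> float:
    """Score a model zero-shot on a cleaned multiple-choice dataset.

    `answer_question(question, choices)` is a hypothetical callable that returns
    the model's chosen answer without any chain-of-thought prompting.
    """
    n_correct, n_total = 0, 0
    with open(dataset_path) as f:
        for line in f:
            example = json.loads(line)  # assumed JSONL with question/choices/answer fields
            if example.get("is_ambiguous"):  # keep only good, unambiguous questions
                continue
            prediction = answer_question(example["question"], example["choices"])
            n_correct += int(prediction == example["answer"])
            n_total += 1
    return n_correct / n_total
```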

2. From bare metal to high-performance training: Infrastructure scripts and best practices

These scripts are a critical (and often undisclosed) piece of training very large language models. We hope that our efforts will make it easier for others to experiment at larger scales without needing to reproduce this infrastructure code and knowledge. For more details, see our write-up of our training process and infrastructure bring-up.
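As one illustrative example of the kind of check such scripts run (a sketch, not one of the released scripts), a small torch.distributed all-reduce timing test can flag hosts with slow or unhealthy GPU interconnects before a long run begins:

```python
import os
import time

import torch
import torch.distributed as dist


def allreduce_health_check(num_elements: int = 256 * 1024 * 1024, iters: int = 10) -> None:
    """Rough all-reduce timing check; launch with torchrun so RANK/LOCAL_RANK are set."""
    dist.init_process_group(backend="nccl")
    torch.cuda.set_device(int(os.environ["LOCAL_RANK"]))

    tensor = torch.ones(num_elements, dtype=torch.float32, device="cuda")
    # Warm up once so lazy NCCL initialization does not skew the timing.
    dist.all_reduce(tensor)
    torch.cuda.synchronize()

    start = time.time()
    for _ in range(iters):
        dist.all_reduce(tensor)
    torch.cuda.synchronize()
    elapsed = (time.time() - start) / iters

    gbytes = tensor.numel() * tensor.element_size() / 1e9
    if dist.get_rank() == 0:
        print(f"all-reduce of {gbytes:.1f} GB took {elapsed * 1e3:.1f} ms per iteration")
    dist.destroy_process_group()


if __name__ == "__main__":
    allreduce_health_check()
```

An unexpectedly slow result on one node relative to its peers is a useful signal to pull that node from the pool before it stalls the whole job.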

3. Open-sourcing CARBS: A cost-effective hyperparameter optimizer that helps scale small experiments to large language models

CARBS allowed us to scale to our large training run with minimal training instability and no loss spikes on the first attempt — eliminating a huge source of risk for smaller teams experimenting with novel model architectures. We published an extended write-up on how we used CARBS to scale up to our 70B model.
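At a high level, a cost-aware optimizer like CARBS runs a suggest/observe loop in which every observation reports both a performance metric and its compute cost. The toy sketch below illustrates only that pattern; the class, objective, and cost model are made up, and CARBS’s actual API differs (see its repository):

```python
import random

# Hypothetical stand-ins for illustration only; this is not the CARBS API.


def train_and_evaluate(learning_rate: float, model_width: int) -> tuple[float, float]:
    """Pretend training run: returns (validation loss, GPU-hours spent)."""
    loss = 1.0 / model_width + abs(learning_rate - 3e-4) * 100  # toy objective
    cost = model_width / 256  # toy cost model: wider models cost more to train
    return loss, cost


class ToyCostAwareOptimizer:
    """Random search that records (config, loss, cost) observations.

    A real cost-aware optimizer such as CARBS fits a model to these observations
    and proposes configurations along the cost/performance Pareto frontier.
    """

    def __init__(self):
        self.observations = []

    def suggest(self) -> dict:
        return {
            "learning_rate": 10 ** random.uniform(-4.5, -3.0),
            "model_width": random.choice([128, 256, 512, 1024]),
        }

    def observe(self, config: dict, output: float, cost: float) -> None:
        self.observations.append((config, output, cost))


budget_gpu_hours, spent = 20.0, 0.0
optimizer = ToyCostAwareOptimizer()
while spent < budget_gpu_hours:
    config = optimizer.suggest()
    loss, gpu_hours = train_and_evaluate(**config)
    optimizer.observe(config, output=loss, cost=gpu_hours)
    spent += gpu_hours

best = min(optimizer.observations, key=lambda obs: obs[1])
print("best config:", best[0], "loss:", round(best[1], 4))
```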

Takeaways

We trained our model from scratch as an experiment to help answer a few critical questions:

  1. At a practical level, what does it take to build a proof of concept for robust agents that can reliably write correct, extensible code?
  2. What kinds of performance improvements can pre-training provide (vs. fine-tuning, reinforcement learning, and other post-training techniques)?
  3. What do engineering optimizations in infrastructure, hardware, data, and evaluations contribute to building a robust and correct model?

Some key learnings from this experience:

  1. Clean evaluation datasets are key to properly assessing model accuracy. Identifying ambiguity and refining task specifications lay the foundations for coding agents and other tools that can reliably take actions in the real world.
  2. Automated processes to diagnose and fix infrastructure problems are critical for efficient training, high cluster utilization, and strong model performance.
  3. It is possible to run resource-efficient pre-training experiments that scale effectively to a large model. Using CARBS, we could reliably predict the performance of a model with a given number of parameters according to well-defined scaling laws (sketched below), lowering the barrier to entry for building large models.
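As a concrete sketch of the scaling-law step referenced in point 3 (with made-up numbers rather than our actual measurements), one can fit a power law of the form L(N) = a * N^(-b) + c to losses from small runs and extrapolate to a larger parameter count:

```python
import numpy as np
from scipy.optimize import curve_fit

# Made-up (parameter count, final validation loss) pairs from hypothetical small runs.
params = np.array([1e8, 3e8, 1e9, 3e9, 1e10])
losses = np.array([3.09, 2.92, 2.76, 2.63, 2.50])


def power_law(n, a, b, c):
    # L(N) = a * N^(-b) + c, a common functional form for parameter scaling laws.
    return a * n ** (-b) + c


(a, b, c), _ = curve_fit(power_law, params, losses, p0=(8.0, 0.12, 1.4), maxfev=10000)
predicted = power_law(7e10, a, b, c)
print(f"predicted loss at 70B parameters: {predicted:.2f}")
```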

This model training — including all of the above work on infrastructure, evaluations, and hyperparameter optimization — was completed by about a dozen of our engineers and researchers. It is one of many projects we are working on at Imbue. Our other focus areas include reinforcement learning, agent and reasoning architectures, data generation techniques, and experience design to make these powerful capabilities accessible and intuitive to users.

We’re excited to share more about these projects when they are ready. If you’re interested in learning more and getting involved, we’re hiring!