Citing papers: 1
Field: cs.CL
Year: 2023
Code Llama: Open Foundation Models for Code
Code Llama models achieve state-of-the-art performance among open models on HumanEval (up to 67%) and MBPP (up to 65%); the 7B Python variant outperforms Llama 2 70B, and all variants outperform every other publicly available model on MultiPL-E.