πŸ† EvalPlus Leaderboard πŸ†

EvalPlus evaluates AI Coders with rigorous tests.

File a request to add your models to our leaderboard!

πŸ“ Notes

  1. All samples are generated from scratch and uniformly post-processed by our sanitizer script. Syntactical checkers are used to ensure that trivial syntax errors (e.g., broken Python indentation) do not contribute to failing tests (see the first sketch after this list).
  2. By default, models are ranked by pass@1 using greedy decoding (see the second sketch after this list). Model setup details can be found here.
  3. Models labelled with πŸ—’οΈ are evaluated using an instruction/chat setting, while others perform direct code generation given the prompt.
  4. For MBPP/MBPP+, we only use a subset of well-formed problems (399 tasks) from MBPP-sanitized (427 tasks).
  5. It is the model providers' responsibility to avoid data contamination as much as possible. In other words, we cannot guarantee that the evaluated models are free of contamination.
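
To illustrate the syntax check mentioned in note 1, here is a minimal sketch (not the actual EvalPlus sanitizer) of how a trivially malformed completion can be rejected before any test runs, using Python's built-in `ast` module:

```python
# A minimal sketch, not the actual EvalPlus sanitizer: it only checks that a
# generated completion parses as valid Python, so trivially malformed samples
# (e.g., broken indentation) are caught before any test is executed.
import ast

def is_syntactically_valid(code: str) -> bool:
    """Return True if `code` parses as valid Python source."""
    try:
        ast.parse(code)
        return True
    except SyntaxError:  # IndentationError is a subclass of SyntaxError
        return False

# A broken indent fails the check; a well-formed completion passes.
assert is_syntactically_valid("def add(a, b):\n    return a + b")
assert not is_syntactically_valid("def add(a, b):\nreturn a + b")
```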
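
For note 2, the sketch below shows how pass@1 reduces to a simple fraction under greedy decoding: with one deterministic sample per task, pass@1 is the share of tasks whose sample passes all tests. The task ids and outcomes are illustrative only, not leaderboard data:

```python
# A minimal sketch of pass@1 under greedy decoding: one deterministic sample
# per task, so pass@1 is just the fraction of tasks whose sample passes all
# tests. The task ids and outcomes below are illustrative, not real results.

def pass_at_1(results: dict) -> float:
    """Fraction of tasks solved by the single greedy sample."""
    return sum(results.values()) / len(results)

example = {"HumanEval/0": True, "HumanEval/1": False, "HumanEval/2": True}
print(f"pass@1 = {pass_at_1(example):.1%}")  # pass@1 = 66.7%
```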

πŸ€— More Leaderboards

In addition to the EvalPlus leaderboard, we recommend forming a comprehensive view of LLM coding ability through a diverse set of benchmarks and leaderboards, such as:

  1. Big Code Models Leaderboard
  2. InfiCoder-Eval
  3. TabbyML Leaderboard