
The Standings · Round 01 · April 2026

▌ At a glance
2,538 votes  ·  20 models  ·  469 voters
R01: 1,857 · R02: 507 in flight
75% catch-pair · CC-BY 4.0
▌ Composite · how it works

rp-benchmark's composite score is the canonical 'best model overall' answer. Each model's normalized z-score on each of five axes is weighted, and the blend is mapped onto 0–100 for readability.

How it's scored
Multi-turn arena ELO (35%), LLM-judge Likert mean (25%), rubric overall (20%), flaw hunter (15%), behavioral metrics (5%). Models missing one of the five axes are still ranked; the missing component is treated as the pool mean so a partial entry doesn't get an artificial bump or drop.
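The mean-imputation rule can be made concrete. Below is a minimal sketch, not the benchmark's actual code: axis names and weights come from the text above, the z-scoring is per-axis against the pool, and the final min-max map onto 0–100 is an assumed choice (the page does not publish its exact mapping).

```python
from statistics import mean, pstdev

# Axis weights as stated in the methodology above.
WEIGHTS = {"elo": 0.35, "judge": 0.25, "rubric": 0.20, "flaw": 0.15, "behavior": 0.05}

def composite_scores(raw):
    """raw: {model: {axis: value or None}}.
    A missing axis is imputed with the pool mean, which lands the model
    at z = 0 on that axis -- no artificial bump or drop."""
    models = list(raw)
    zscores = {m: {} for m in models}
    for axis in WEIGHTS:
        present = [raw[m][axis] for m in models if raw[m].get(axis) is not None]
        mu, sigma = mean(present), pstdev(present) or 1.0
        for m in models:
            v = raw[m].get(axis)
            zscores[m][axis] = 0.0 if v is None else (v - mu) / sigma
    # Weighted z blend, then mapped onto 0-100 for readability.
    blend = {m: sum(w * zscores[m][a] for a, w in WEIGHTS.items()) for m in models}
    lo, hi = min(blend.values()), max(blend.values())
    span = (hi - lo) or 1.0
    return {m: 100 * (b - lo) / span for m, b in blend.items()}
```

A model that tops every axis maps to 100; one that trails on every axis maps to 0; a partial entry is scored on its real axes plus a neutral z = 0 for the missing one.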
How to read the table
Higher = better all-axis performance. The score isn't an ELO; it's a normalized blend, so a model at 50 sits at the pool's average across all five axes. Compare each row's composite rank against its multi-turn-only rank in the cross-test grid: big gaps surface 'great vibe, dirty prose' models, or the reverse.
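That comparison is just a rank delta. A sketch (the model names and ranks below are illustrative, not the live table):

```python
def rank_gaps(comp_rank, mt_rank):
    """Sort models by how far their composite rank diverges from their
    multi-turn-only rank; a positive gap means the composite blend
    flatters the model relative to its arena showing."""
    gaps = {m: mt_rank[m] - comp_rank[m] for m in comp_rank}
    return sorted(gaps.items(), key=lambda kv: -abs(kv[1]))

# A model sitting at multi-turn #12 but composite #4 shows a +8 gap.
print(rank_gaps({"model-x": 4, "model-y": 1}, {"model-x": 12, "model-y": 1}))
```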
▌ The Pick · for all models

Claude Opus 4.7

Anthropic · proprietary
▌ Why this pick

Default ranking: rp-benchmark's composite score, a weighted blend of multi-turn arena ELO (35%), LLM-judge Likert (25%), rubric overall (20%), flaw hunter (15%), and behavioral metrics (5%). Claude Opus 4.7 leads at 97.6/100, with Claude Sonnet 4.5 second at 92.9; Sonnet climbs from #6 on multi-turn ELO to #2 composite because it scores well on every axis, not just engagement. DeepSeek v3.2 jumps from multi-turn #12 to composite #4 (83.3), making it the strongest open-weight option once cross-axis reliability is weighted in. The engagement column was re-derived from a Sonnet-4 judge proxy on 2026-05-02.

Top of the multi-turn pool. Top-1 on agency respect and instruction drift. Runner-up: Claude Opus 4.6.

Multi-Turn ELO (R2): 1627 ±109 · Round 02 in flight · n=26
SFW Win Rate: n/a · wins on safe-scene votes
NSFW Win Rate: n/a · wins on explicit-scene votes
Reliability Rank: #1 · avg 2.81, most reliable in the 20-model pool
Cost / 1M tokens: $39.00 · blended 60% input / 40% output
Response Time: 10,174 ms · median generation time per response
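The blended cost figure follows mechanically from the stated 60/40 split. A sketch; the $15 / $75 per-1M-token prices are hypothetical inputs for illustration, not published rates:

```python
def blended_cost(input_per_m, output_per_m, w_in=0.60, w_out=0.40):
    """Blend per-1M-token prices at the cost column's
    60% input / 40% output weighting."""
    return w_in * input_per_m + w_out * output_per_m

# Hypothetical $15 in / $75 out: 0.6 * 15 + 0.4 * 75 = 39.0
print(blended_cost(15.0, 75.0))
```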
Column legend: Score = composite /100 · SFW/NSFW = win rate on safe/explicit-scene votes · Judge = LLM-judge engagement /5 · Ranks: ★ comp, E elo, MT m-turn, RU rub, AD adv, $ cost · Spread = best–worst rank across those axes · Votes = R01+R02. Rows with no Round 01 votes show no SFW/NSFW rates and no E rank (n/a).
I · Claude Opus 4.7 · Champion
Anthropic · proprietary · 200K
Top of the multi-turn pool. Top-1 on agency respect and instruction drift.
Score 97.6/100 · Judge 4.05/5 · Spread 1–19
Ranks: ★ 1 · E n/a · MT 1 · RU 1 · AD 1 · $ 19
Votes 26 · R1 0 · R2 26
II · Claude Sonnet 4.5 · Reliable
Anthropic · proprietary · 200K
Round 01 reliability leader. Tied #1 on context attention.
Score 92.9/100 · SFW 51% · NSFW 51% · Judge 4.06/5 · Spread 2–17
Ranks: ★ 2 · E 5 · MT 6 · RU 4 · AD 3 · $ 17
Votes 242 · R1 194 · R2 48
III · Claude Opus 4.6 · Reliable
Anthropic · proprietary · 200K
Reliability runner-up. Top-2 on agency, complete failure-mode coverage.
Score 88.1/100 · Judge 4.10/5 · Spread 2–20
Ranks: ★ 3 · E n/a · MT 2 · RU 2 · AD 2 · $ 20
Votes 40 · R1 0 · R2 40
IV · DeepSeek v3.2
DeepSeek · open · 128K
Reliable, NSFW-shy at 30%. Strong lore retention.
Score 83.3/100 · SFW 51% · NSFW 30% · Judge 4.05/5 · Spread 3–12
Ranks: ★ 4 · E 7 · MT 12 · RU 7 · AD 4 · $ 3
Votes 291 · R1 241 · R2 50
V · GPT-4.1
OpenAI · proprietary · 1M
Community last in Round 01, top-5 in Round 02 multi-turn. The great inversion.
Score 78.6/100 · SFW 43% · NSFW 46% · Judge 4.06/5 · Spread 5–16
Ranks: ★ 5 · E 11 · MT 5 · RU 10 · AD 6 · $ 16
Votes 267 · R1 215 · R2 52
VI · GLM 4.7
Z.AI · open · 128K
Mid-pack across the board. No standout strength.
Score 73.8/100 · SFW 46% · NSFW 49% · Judge 4.07/5 · Spread 6–11
Ranks: ★ 6 · E 9 · MT 10 · RU 9 · AD 7 · $ 11
Votes 335 · R1 285 · R2 50
VII · DeepSeek v4 Pro
DeepSeek · open · 128K
Strong tone consistency at a fraction of Opus pricing.
Score 69.0/100 · Judge 4.02/5 · Spread 3–15
Ranks: ★ 7 · E n/a · MT 3 · RU 3 · AD 5 · $ 15
Votes 31 · R1 0 · R2 31
VIII · Gemma 4 26B · Round 01 #1
Google · open · local-friendly · 8K
Round 01 champion, mid-pack on multi-turn. The cheap local-friendly hold.
Score 64.3/100 · SFW 55% · NSFW 51% · Judge 3.79/5 · Spread 1–14
Ranks: ★ 8 · E 1 · MT 9 · RU 14 · AD 12 · $ 7
Votes 344 · R1 302 · R2 42
IX · Kimi K2.5
Moonshot · open · 128K
Strong on tone consistency. Slow generation.
Score 59.5/100 · Judge 4.23/5 · Spread 5–13
Ranks: ★ 9 · E n/a · MT 11 · RU 5 · AD 8 · $ 13
Votes 37 · R1 0 · R2 37
X · Kimi K2.6 · ⚠ Floor
Moonshot · open · 128K
Top-2 on flaw hunter. Catastrophic agency floor on bait scenes.
Score 54.8/100 · Judge 4.19/5 · Spread 8–17
Ranks: ★ 10 · E n/a · MT 8 · RU 17 · AD 16 · $ 12
Votes 35 · R1 0 · R2 35
XI · Mistral SC · NSFW
Mistral · open · local-friendly · 32K
NSFW specialist. Fastest in the field. Drifts on long sessions.
Score 50.0/100 · SFW 51% · NSFW 67% · Judge 4.14/5 · Spread 2–15
Ranks: ★ 11 · E 2 · MT 7 · RU 15 · AD 14 · $ 8
Votes 707 · R1 646 · R2 61
XII · DeepSeek v4 Flash · Cheap
DeepSeek · open · 128K
Cheapest tier, top flaw-hunter score. Multi-turn ELO drags it down.
Score 45.2/100 · Judge 3.94/5 · Spread 2–16
Ranks: ★ 12 · E n/a · MT 16 · RU 8 · AD 11 · $ 2
Votes 26 · R1 0 · R2 26
XIII · Gemini 3.1 Flash Lite · Cheap
Google · proprietary · 1M
Cheapest tier with Round 02 top-5 multi-turn ELO.
Score 40.5/100 · Judge 4.02/5 · Spread 1–15
Ranks: ★ 13 · E n/a · MT 4 · RU 13 · AD 15 · $ 1
Votes 28 · R1 0 · R2 28
XIV · MiniMax M2.7
MiniMax · proprietary · 200K
Strong narrative push. Fragile under adversarial pressure.
Score 35.7/100 · SFW 54% · NSFW 45% · Judge 3.62/5 · Spread 4–17
Ranks: ★ 14 · E 4 · MT 17 · RU 11 · AD 10 · $ 9
Votes 436 · R1 393 · R2 43
XV · Grok 4.1
xAI · proprietary · 128K
Personality up front. Drifts fast under pressure.
Score 26.2/100 · SFW 50% · NSFW 52% · Judge 4.04/5 · Spread 4–19
Ranks: ★ 15 · E 6 · MT 13 · RU 16 · AD 19 · $ 4
Votes 373 · R1 322 · R2 51
XVI · Gemini 3.1 Pro
Google · proprietary · 1M
Deep context window, brittle on adversarial probes.
Score 21.4/100 · Judge 4.11/5 · Spread 12–18
Ranks: ★ 16 · E n/a · MT 14 · RU 12 · AD 13 · $ 18
Votes 41 · R1 0 · R2 41
XVII · GLM 5.1
Z.AI · open · 128K
Strong on tone consistency, weak on multi-turn engagement.
Score 16.7/100 · Judge 4.12/5 · Spread 6–19
Ranks: ★ 17 · E n/a · MT 19 · RU 6 · AD 9 · $ 14
Votes 30 · R1 0 · R2 30
XVIII · Gemini 2.5 Flash
Google · proprietary · 1M
Round 01 top-3, dropped to bottom of Round 02 multi-turn.
Score 11.9/100 · SFW 53% · NSFW 54% · Judge 3.84/5 · Spread 3–20
Ranks: ★ 18 · E 3 · MT 20 · RU 18 · AD 17 · $ 10
Votes 294 · R1 241 · R2 53
XIX · Qwen 3.5 Flash · ⚠ Floor
Alibaba · open · local-friendly · 128K
Floor on agency and instruction drift. Caveat emptor.
Score 7.1/100 · SFW 48% · NSFW 42% · Judge 3.92/5 · Spread 6–19
Ranks: ★ 19 · E 8 · MT 18 · RU 19 · AD 18 · $ 6
Votes 460 · R1 401 · R2 59
XX · Llama 4 Maverick
Meta · open · 128K
Last on every reliability mode. Open-source completist only.
Score 2.4/100 · SFW 47% · NSFW 34% · Judge 3.59/5 · Spread 5–20
Ranks: ★ 20 · E 10 · MT 15 · RU 20 · AD 20 · $ 5
Votes 539 · R1 474 · R2 65
▌ Movers This Round
Climber · +8 → composite #4
DeepSeek v3.2
"Multi-turn ELO #12 by engagement alone — tops the open-weight pool when rubric, flaw-hunter, and behavior get weighted in"
Held · ═ #1
Claude Opus 4.7
"Top of both multi-turn ELO and the composite blend"
Diver · −9 → composite #13
Gemini 3.1 Flash Lite
"Multi-turn ELO #4 (cheap + fast voters loved it) but bottom-quartile on flaw-hunter + behavioral"
▌ Coverage
1,857 total votes
271 pairs · median 7 votes/pair
75% catch-pair · n=335
47% judge–human disagreement
Methodology · Raw votes (CSV) · GitHub · HF dataset
Next issue · 05-15-2026