GLM-4.5
Unifying agentic capabilities in one open model
About
GLM-4.5 is a large-scale language model built for strong reasoning. Its architecture uses Grouped-Query Attention with partial RoPE and 2.5 times more attention heads than is typical for its size, QK-Norm for stable attention logits, and the Muon optimizer for faster convergence. A Multi-Token Prediction (MTP) layer supports speculative decoding at inference time. Training proceeds in stages: 15T tokens of general pre-training, 7T tokens of a code-and-reasoning corpus, and domain-specific fine-tuning on instruction data. For efficient reinforcement learning (RL), GLM-4.5 uses slime, an open-source RL infrastructure designed for flexibility, efficiency, and scalability, featuring a hybrid training architecture, a decoupled agent-oriented design, and accelerated, mixed-precision data generation. Post-training RL, driven by a difficulty-based curriculum and verifiable tasks, further strengthens agentic capabilities (coding, deep search, general tool use) and reasoning. The model also incorporates an optimized user simulator for TAU-Bench.
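The Grouped-Query Attention mentioned above lets many query heads share a smaller set of key/value heads, shrinking the KV cache at inference time. A minimal NumPy sketch of the mechanism follows; shapes, head counts, and weight names here are illustrative assumptions, not GLM-4.5's actual configuration (and partial RoPE / QK-Norm are omitted for brevity):

```python
import numpy as np

def grouped_query_attention(x, wq, wk, wv, n_q_heads, n_kv_heads):
    """Single-sequence GQA sketch: n_q_heads query heads share
    n_kv_heads key/value heads (group size = n_q_heads // n_kv_heads)."""
    seq, _ = x.shape
    head_dim = wq.shape[1] // n_q_heads
    group = n_q_heads // n_kv_heads

    q = (x @ wq).reshape(seq, n_q_heads, head_dim)
    k = (x @ wk).reshape(seq, n_kv_heads, head_dim)
    v = (x @ wv).reshape(seq, n_kv_heads, head_dim)

    # Each group of query heads reuses one KV head.
    k = np.repeat(k, group, axis=1)  # -> (seq, n_q_heads, head_dim)
    v = np.repeat(v, group, axis=1)

    scores = np.einsum('qhd,khd->hqk', q, k) / np.sqrt(head_dim)
    # Causal mask: position q may only attend to positions <= q.
    mask = np.triu(np.ones((seq, seq), dtype=bool), k=1)
    scores = np.where(mask, -1e9, scores)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    out = np.einsum('hqk,khd->qhd', weights, v)
    return out.reshape(seq, n_q_heads * head_dim)

# Hypothetical tiny configuration: 8 query heads sharing 2 KV heads.
rng = np.random.default_rng(0)
x = rng.standard_normal((4, 16))
wq = rng.standard_normal((16, 32))  # 8 heads x head_dim 4
wk = rng.standard_normal((16, 8))   # 2 KV heads x head_dim 4
wv = rng.standard_normal((16, 8))
out = grouped_query_attention(x, wq, wk, wv, n_q_heads=8, n_kv_heads=2)
```

With 8 query heads and 2 KV heads the KV cache is 4x smaller than standard multi-head attention, which is the practical payoff of the design.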
Color Palette
Background White: #FFFFFF
Text Dark Gray: #333333
Accent Blue: #007BFF
Light Gray: #E0E0E0
Typography
Inter (Body and Headings)
Similar Products
Clear for Slack
Clear messages get answered quicker
Griply 2026
Achieve your goals with a goal-oriented task manager
vibecoder.date
Find who you vibe with, git commit to love
HappyMail
We made email simple again
Blober.io
The easiest way to transfer files between cloud providers.
Supaguard
Scan, Detect & Protect Your Supabase Data
Timelines Time Tracking 4
Track your time to achieve your New Year resolutions.
SoftReveal — Reveal less. Engage more.
Hide Content, Reveal on Click
CalPal
The notebook calculator that thinks for you (now with AI).
Reword
Rewrite messages without leaving your workflow
Radial
Your shortcuts, one gesture away
MoovAI
Launch viral AI ads & pro social content in minutes