Tianyu Liu is a researcher at Kimi working on coding and agents, where he leads an effort to develop foundation models for coding agents and broader agentic experiences. Before joining Kimi, he was a staff researcher on the early Qwen team at Alibaba, focusing on reasoning and coding, and prior to that a senior researcher and founding member of Tencent Hunyuan. He received his PhD from Peking University in 2021, advised by Zhifang Sui and Baobao Chang. During his PhD, he also interned at or visited Microsoft Research (Beijing and Redmond) and the Toyota Technological Institute at Chicago (TTIC).
Before the rise of LLMs, Tianyu's research centered on natural language generation, information extraction, and robustness in NLP. The arrival of large language models reshaped his trajectory: starting in mid-2022 at Tencent, he co-led the development of coding-oriented models and internal Copilot-style systems, and later became a founding member of Hunyuan. He then joined the early Qwen team at Alibaba, where he contributed across pretraining, mid-training, and post-training for reasoning and coding, serving as a core contributor to Qwen-Math and Qwen-Coder. He also led a team that won a gold medal at AIMO-2.
Now at Kimi, his work spans the full stack of coding and agent capabilities, from pretraining data and long-CoT reasoning to RL-based training, coding environments, and the supporting infrastructure. These efforts have contributed to a series of Kimi models such as K2.5. More broadly, he is passionate about building foundation models that can see, reason, code, and act, combining multimodal perception with long-horizon planning to deliver truly reliable and practical agent experiences.
* indicates equal contribution, † indicates project lead or corresponding author