AI-Insight

non-profit

AI & ML interests

None defined yet.

Recent Activity

AdinaY 
posted an update 2 days ago
AI for science is moving fast🚀

Intern-S1-Pro 🔬 a MoE multimodal scientific reasoning model from Shanghai AI Lab

internlm/Intern-S1-Pro

✨ 1T total / 22B active
✨ Apache 2.0
✨ SoTA scientific reasoning performance
✨ FoPE enables scalable modeling of long physical time series (10⁰–10⁶)
AdinaY 
posted an update 3 days ago
✨ China’s open source AI ecosystem has entered a new phase

https://huggingface.co/blog/huggingface/one-year-since-the-deepseek-moment-blog-3

One year after the “DeepSeek Moment,” open source has become the default. Models, research, infrastructure, and deployment are increasingly shared to support large-scale, system-level integration.

This final blog post examines how leading Chinese AI organizations are evolving, and what this implies for the future of open source.
AdinaY 
posted an update 4 days ago
GLM just entered the OCR field🔥

zai-org/GLM-OCR

✨ 0.9B
✨ MIT licensed
✨ Multimodal GLM-V architecture
✨ #1 on OmniDocBench v1.5 (94.62)
AdinaY 
posted an update 8 days ago
What a week 🤯

Following DeepSeek, Kimi, Qwen, Baidu, and Ant Group, Unitree Robotics has now released a VLA model on the Hub too!

unitreerobotics/UnifoLM-VLA-Base
AdinaY 
posted an update 9 days ago
LongCat-Flash-Lite🔥 a non-thinking MoE model released by the Meituan LongCat team.

meituan-longcat/LongCat-Flash-Lite

✨ 68.5B total / 3B active - MIT license
✨ 256k context
✨ Faster inference with N-gram embeddings
AdinaY 
posted an update 9 days ago
Ant Group is going big on robotics 🤖

They just dropped their first VLA and depth perception foundation models on Hugging Face.

✨ LingBot-VLA :
- Trained on 20k hours of real-world robot data
- 9 robot embodiments
- Clear no-saturation scaling laws
- Apache 2.0

Model: https://huggingface.co/collections/robbyant/lingbot-vla
Paper:
A Pragmatic VLA Foundation Model (2601.18692)

✨ LingBot-Depth:
- Metric-accurate 3D from noisy, incomplete depth
- Masked Depth Modeling (self-supervised)
- RGB–depth alignment, works with <5% sparse depth
- Apache 2.0

Model: https://huggingface.co/collections/robbyant/lingbot-depth
Paper:
Masked Depth Modeling for Spatial Perception (2601.17895)
Kimi K2.5 from Moonshot AI is more than just another large model🤯

https://huggingface.co/collections/moonshotai/kimi-k25

✨ Native multimodality : image + video + language + agents 💥
✨ 1T MoE / 32B active
✨ 256K context
✨ Modified MIT license
✨ Agent Swarm execution
✨ Open weights + open infra mindset
AdinaY 
posted an update 16 days ago
AgentCPM-Report 🔥 a local DeepResearch agent released by OpenBMB

openbmb/AgentCPM-Report

✨ 8B - Apache 2.0
✨ Gemini-2.5-Pro level DeepResearch report generation
✨ Fully offline, privacy-first local deployment
✨ + GGUF version
AdinaY 
posted an update 18 days ago
Z.ai just released a powerful lightweight version of GLM 4.7

✨ 30B total / 3B active - MoE

zai-org/GLM-4.7-Flash
AdinaY 
posted an update 18 days ago
Another Chinese model fully trained on domestic chips, released by China Telecom 👀

Tele-AI/TeleChat3-36B-Thinking

TeleChat3-36B-Thinking:
✨ Native support for the Ascend + MindSpore ecosystem
✨ Inspired by DeepSeek’s architecture design, bringing training stability and efficiency gains.
AdinaY 
posted an update 21 days ago
After a VLM, StepFun dropped a new audio model: Step-Audio-R1.1, enabling thinking while speaking 🔥

stepfun-ai/Step-Audio-R1.1

✨ Apache 2.0
✨ Combines dual-brain architecture and acoustic-grounded reasoning to enable real-time dialogue with SOTA-level reasoning
AdinaY 
posted an update 22 days ago
We have a new heatmap live on Hugging Face now🔥

woojun-jung/open-source-release-heatmap-ko

The Korean community built their own version to track labs that actively publish open work, inspired by the Chinese open source heatmap!

This is the open source community at its best ♥️
AdinaY 
posted an update 23 days ago
More lightweight multimodal models are coming 👀

StepFun has been focused on multimodal AI from the very beginning. Their latest release is a new foundation model: STEP3-VL🔥
https://huggingface.co/collections/stepfun-ai/step3-vl-10b
✨ 10B - Apache 2.0
✨ Leads in the 10B class and competes with models 10–20× larger
AdinaY 
posted an update 23 days ago
Agentic capability is the new battleground🔥

LongCat-Flash-Thinking-2601, the latest reasoning model from Meituan LongCat

✨ MoE - 560B total / 27B active
✨ MIT license
✨ Agentic tool use
✨ Multi-environment RL
✨ Parallel + iterative reasoning

meituan-longcat/LongCat-Flash-Thinking-2601
AdinaY 
posted an update 24 days ago
GLM-Image from Z.ai is out 🔥

It was fully trained on Ascend Atlas 800T A2 with MindSpore, probably the first SOTA multimodal model fully trained on domestic chips 👀

zai-org/GLM-Image

✨ Hybrid Architecture: combined autoregressive + diffusion design delivers strong semantic alignment with high-fidelity details
✨ Strong performance in long, dense, and multilingual text rendering
✨ MIT licensed (VQ tokenizer & ViT weights under Apache 2.0)
✨ Now live on Hugging Face inference provider 🤗