Quick Summary
- GLM 4.7, a Chinese open-source AI model, is closing the gap with proprietary AI, especially in coding and agent workflows.
- Manus Design View offers a new approach to AI image editing, allowing targeted edits without regenerating the entire image.
- Both releases signal a shift towards AI tools that prioritize usability, stability, and cost-efficiency for developers and creators.
Table of Contents
- GLM 4.7: Built for Coding Agents, Not Chat Demos
- Coding Performance: The Numbers That Matter
- Terminal & Agent Workflows: Where GLM 4.7 Shines
- Smarter Reasoning With Tools
- Three Thinking Modes That Improve Stability
- Real-World Adoption & Availability
- Manus Design View: Fixing AI Image Editing for Real
- From Prompt Roulette to Real Editing
- Powered by Nano Banana Pro
- Editable Text & Slides (Finally)
- Designed for Real Workflows
- Limitations (Let's Be Honest)
- Final Thoughts: The Shift from Generation to Iteration
The first month of 2026 has made one thing clear: the gap between "closed" and "open-source" AI is officially closing. While the world was watching major US labs, China has been shipping serious updates that challenge the status quo.
Two releases in particular caught my eye this week: Zhipu's GLM-4.7 and Manus 1.6 (with its new Design View). I've been digging into the documentation and testing these models to see if they live up to the hype. Here is the breakdown of why they matter for creators and developers.
GLM 4.7: Built for Coding Agents, Not Chat Demos
GLM 4.7 isn't designed to impress with clever one-line answers.
It's built for long, complex workflows — where an AI has to plan, execute, use tools, and stay consistent over time.
This is exactly where many AI models struggle.
Most models don't fail because they can't write code.
They fail because they lose context, forget earlier decisions, or contradict themselves during long sessions.
GLM 4.7 focuses on fixing that.
Coding Performance: The Numbers That Matter
GLM 4.7 shows major gains across real-world coding benchmarks:
- SWE-Bench Verified: 73.8% (a huge milestone for an open-source model)
- LiveCodeBench v6: 84.9% (closer to real coding tasks, with constraints and edge cases)
- Multilingual SWE-Bench: 66.7% (a big jump over GLM 4.6)
These benchmarks test more than just writing functions.
They check if a model can understand unfamiliar codebases, apply correct changes, and pass tests — exactly what developers care about.
Terminal & Agent Workflows: Where GLM 4.7 Shines
Terminal tasks are brutal for AI models.
They require correct sequencing, state awareness, and recovery when things go wrong.
GLM 4.7 shows strong improvement here:
- Terminal Bench 2.0: 41% (massive jump)
- Better performance across hard terminal benchmarks
This matters because terminal workflows are the backbone of AI coding agents.
Better terminal handling = fewer broken command chains and less workflow collapse.
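To make "state awareness" concrete, here is a minimal sketch of the loop a terminal agent runs. The propose_next_command helper stands in for a GLM 4.7 call and is purely illustrative:

```python
import subprocess

def propose_next_command(history: list[dict]) -> str | None:
    """Stand-in for a GLM 4.7 call: given the transcript of commands,
    exit codes, and outputs so far, return the next shell command,
    or None when the task is done. (Hypothetical helper.)"""
    return None if history else "echo hello"

def run_command(cmd: str) -> tuple[int, str]:
    """Run a shell command, capturing its exit code and combined output."""
    result = subprocess.run(cmd, shell=True, capture_output=True, text=True)
    return result.returncode, result.stdout + result.stderr

history: list[dict] = []
for _ in range(10):  # hard cap so a confused agent can't loop forever
    cmd = propose_next_command(history)
    if cmd is None:  # the model decided the task is complete
        break
    exit_code, output = run_command(cmd)
    # Feeding exit codes back is what lets the model recover from failures
    # instead of blindly chaining broken commands.
    history.append({"cmd": cmd, "exit_code": exit_code, "output": output})
```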
Smarter Reasoning With Tools
GLM 4.7 performs much better when tools are enabled — which is exactly how modern AI systems are used.
Key results:
- Humanity's Last Exam (with tools): 42.8%
- Strong gains on GPQA Diamond, MMLU, and math-heavy benchmarks
- BrowseComp: jumps to 67.5% with context management
- τ²-Bench (tool interaction): 87.4%
The message is clear:
GLM 4.7 is designed to work with tools, not pretend everything lives inside the model.
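Here is a rough sketch of what that looks like from the caller's side, using the OpenAI-compatible chat API. The base URL, model id, and the run_tests tool are assumptions for illustration, so check the official Z.ai docs for exact values:

```python
from openai import OpenAI

# Assumed endpoint and model id; verify against the Z.ai documentation.
client = OpenAI(base_url="https://api.z.ai/api/paas/v4", api_key="YOUR_KEY")

# A hypothetical tool the agent can call instead of guessing at results.
tools = [{
    "type": "function",
    "function": {
        "name": "run_tests",
        "description": "Run the project's test suite and return the report",
        "parameters": {
            "type": "object",
            "properties": {"path": {"type": "string"}},
            "required": ["path"],
        },
    },
}]

response = client.chat.completions.create(
    model="glm-4.7",  # assumed model id
    messages=[{"role": "user", "content": "Fix the failing test in src/parser.py"}],
    tools=tools,
)

# When the model needs ground truth, it returns a structured tool call
# rather than hallucinating the test output in prose.
for call in response.choices[0].message.tool_calls or []:
    print(call.function.name, call.function.arguments)
```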
Three Thinking Modes That Improve Stability
GLM 4.7 introduces smarter control over reasoning:
- Interleaved Thinking: thinks before every action
- Preserved Thinking: maintains reasoning across multiple turns
- Turn-Level Control: scales thinking up or down based on task complexity
Preserved thinking is the real upgrade here.
It reduces long-session drift, keeps plans consistent, and lowers cost by avoiding repeated rethinking.
This makes GLM 4.7 a strong backend for Claude-style coding agents and other multi-step automation tools.
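As a sketch, turn-level control could look like this from the caller's side. The thinking parameter below follows the shape Zhipu exposed in earlier GLM releases and is an assumption for 4.7:

```python
from openai import OpenAI

client = OpenAI(base_url="https://api.z.ai/api/paas/v4", api_key="YOUR_KEY")  # assumed endpoint

# Hard planning step: let the model think deeply before answering.
plan = client.chat.completions.create(
    model="glm-4.7",  # assumed model id
    messages=[{"role": "user", "content": "Plan a refactor of the auth module."}],
    extra_body={"thinking": {"type": "enabled"}},  # parameter shape assumed from earlier GLM releases
)

# Trivial follow-up: skip deep reasoning to save tokens and latency.
rename = client.chat.completions.create(
    model="glm-4.7",
    messages=[{"role": "user", "content": "Rename the variable tmp to token."}],
    extra_body={"thinking": {"type": "disabled"}},
)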
Real-World Adoption & Availability
GLM 4.7 isn't just a research demo:
- Available via the Z.ai API
- Integrated with OpenRouter for global access
- Compatible with existing agent workflows
- Designed with practical deployment in mind
It's also cheaper than premium proprietary models, making large-scale agent workflows more affordable.
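Because OpenRouter speaks the standard OpenAI API, pointing an existing agent at GLM 4.7 is mostly a config change. A minimal sketch (the model slug is an assumption based on OpenRouter's naming for earlier GLM models):

```python
from openai import OpenAI

client = OpenAI(
    base_url="https://openrouter.ai/api/v1",
    api_key="YOUR_OPENROUTER_KEY",
)

resp = client.chat.completions.create(
    model="z-ai/glm-4.7",  # assumed slug; check openrouter.ai for the live id
    messages=[{"role": "user", "content": "Write a one-liner that counts TODOs in a repo."}],
)
print(resp.choices[0].message.content)
```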
Manus Design View: Fixing AI Image Editing for Real
Now let's switch gears.
Manus tackled one of the biggest problems in AI visuals:
you couldn't fix small mistakes without regenerating everything.
Design View changes that.
From Prompt Roulette to Real Editing
Instead of re-prompting from scratch, Manus lets you:
- Select a specific area
- Edit only that part
- Preserve lighting, layout, and style
This turns AI image generation into an editable workflow, not a lottery.
Small change, huge impact.
Powered by Nano Banana Pro
Manus uses Google's Nano Banana Pro for high-fidelity image generation.
The key advantage isn't just realism — it's consistency during edits.
Local changes don't break the rest of the image, which is surprisingly hard to do well.
Editable Text & Slides (Finally)
AI image generators notoriously struggle with text. Manus handles this smartly:
- Clean, editable text overlays
- Element-level slide editing
- Bulk edits across multiple slides
- Before/after comparisons
This solves a massive usability problem for presentations and marketing assets.
Designed for Real Workflows
Manus Design View offers:
- One canvas for generation + editing
- Fewer exports and tool switches
- Web and mobile editing support
- Clear ownership and commercial usage rights
It's moving AI design from generation-first to editor-first — exactly what professionals need.
Limitations (Let's Be Honest)
- Top proprietary models still win in some zero-shot tasks
- Full GLM 4.7 deployment requires serious hardware
- Manus's consistency at scale will be the real test
But most users don't need perfection — they need control, stability, and cost efficiency.
Final Thoughts: The Shift from Generation to Iteration
If 2025 was about the world being amazed by what AI could start, 2026 is about how AI can finish the job.
- For Developers: GLM-4.7 is a signal that open-source is no longer just a "budget" alternative; it is becoming a specialized tool for high-stakes agentic workflows where context and stability are more valuable than a wide range of general knowledge.
- For Creators: Manus Design View solves the single biggest friction point in AI artistry—the lack of an "undo" button for specific details. By turning generation into an iterative canvas, it bridges the gap between AI imagination and professional design standards.
The most significant takeaway is that the "AI gap" is no longer about which model is smarter, but which model is more useful. China's focus on solving these "last-mile" problems—like terminal state awareness and non-destructive image editing—suggests that the industry is finally building tools for the people who actually use them for work.
📝 Frequently Asked Questions (FAQs)
Q: Is GLM-4.7 really free to use?
A: Yes and no. GLM-4.7 is open-source, meaning you can download the model weights from platforms like Hugging Face and run it on your own hardware for free. However, if you use the cloud-based Z.ai API, there are usage costs, though they are significantly lower (often 1/5th the price) than premium competitors like GPT-5.1.
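For the self-hosted route, here is a minimal sketch using Hugging Face's transformers library. The repo id is an assumption, modeled on how the GLM-4.6 weights were published:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed repo id (GLM-4.6 lives at zai-org/GLM-4.6); running the full
# model locally requires serious GPU memory.
repo = "zai-org/GLM-4.7"

tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, device_map="auto")

inputs = tokenizer("Explain what SWE-Bench Verified measures.", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```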
Q: How does GLM-4.7 compare to GPT-5.1 in coding?
A: In testing, GLM-4.7 excels in tool use and terminal stability, scoring a massive 73.8% on SWE-Bench Verified. While GPT-5.1 may still have a slight edge in complex "creative" reasoning, GLM-4.7 is arguably better for autonomous coding agents that need to execute commands and manage a codebase without human intervention.
Q: What makes Manus "Design View" different from Canva or Photoshop?
A: Unlike traditional editors, Design View is AI-native and non-destructive. It allows you to "mark" an area and use natural language to make edits (e.g., "change this shirt to blue silk") without regenerating the entire image. It bridges the gap between the creative power of AI and the precision of professional design software.
Q: Do I need a powerful computer to run Manus 1.6?
A: No. Manus runs in the cloud. You can use Design View in a standard web browser or even on your mobile device. The heavy lifting is done server-side by Google's Nano Banana Pro model, so you get high-fidelity 4K results regardless of your local hardware.
Q: Can GLM-4.7 handle multi-step coding projects?
A: Yes, that's where it shines. With features like Preserved Thinking and Turn-Level Control, GLM-4.7 maintains context across long coding sessions, making it ideal for complex projects that require multiple steps and tool integrations.
Q: Is Manus Design View available worldwide?
A: Currently, Manus 1.6 with Design View is primarily available in Asian markets but is expanding globally through partnerships. The web interface makes it accessible from anywhere with an internet connection.
📱 More from MadTech
Check out these related articles from our blog:
- Exynos vs Snapdragon: Real Performance, Heating, Camera & Battery Explained - Complete breakdown of Samsung's dual chipset strategy and which one delivers better real-world performance.
- Gemini 3.0 vs ChatGPT 5.1 vs Grok 4.1: Which AI Should You Use in 2026? - Detailed comparison of the top AI models and their best use cases for productivity and creativity.
- Android XR AI Glasses: The Future of Wearable Tech - Explore how AI-powered smart glasses are changing how we interact with digital content.