Cognitive Control is live: visible memory, sharper context, steadier execution
Codex Agent is stronger on complex work
- Optimized the long-context compression pipeline to cut wasted tokens and improve the speed-cost balance
- Rebuilt the event adapter layer for more consistent cross-source event semantics and ordering
- Thinking Levels are now supported, so reasoning depth can match task complexity
Cognitive Control and Cognitive Context are now live
- Added a Cognitive Control panel to manage Active Context, Core Memory, and Cold Storage directly
- Added Cognitive Context visibility so you can see what key signals informed each response
- Memory now runs on lane-based selection for more task-focused recall across agents
- Memory hit sources, weights, and details are surfaced more clearly, so recall is easier to trust
Queueing and command control are more predictable
- Added four interaction modes: collect, followup, steer, and interrupt
- Command routing and rejection feedback are now more consistent
- Approved plans can run directly with less manual handoff
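The four interaction modes above can be pictured as a small routing table. This is a hypothetical sketch for illustration only; the mode names come from the notes, but the routing behavior and function names are assumptions, not the product's actual implementation.

```python
from enum import Enum

class QueueMode(Enum):
    # Mode names from the release notes; behaviors are illustrative guesses.
    COLLECT = "collect"      # hold the message until the current run finishes
    FOLLOWUP = "followup"    # enqueue as the next task after the current run
    STEER = "steer"          # inject guidance into the active run's context
    INTERRUPT = "interrupt"  # stop the current run and start the new task

def route_command(mode: QueueMode, run_active: bool) -> str:
    """Decide what happens to an incoming command given the run state."""
    if not run_active:
        return "start"  # nothing is running, so execute immediately
    return {
        QueueMode.COLLECT: "hold",
        QueueMode.FOLLOWUP: "enqueue",
        QueueMode.STEER: "inject",
        QueueMode.INTERRUPT: "cancel_and_start",
    }[mode]
```

The key property is that every mode resolves to exactly one deterministic action, which is what makes routing and rejection feedback predictable.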
Quick Prompt management is now customizable
- Added a dedicated Quick Prompts manager with grouped views for quick start, skills, and automations
- Custom prompts now support create, edit, duplicate, and delete for faster workflow reuse
- Filtering by id, icon, and text is supported with live validation; empty or invalid entries are skipped
- Custom prompt capacity is effectively unlimited for both personal and team playbooks
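The filtering-with-validation behavior described above can be sketched as follows. This is an assumed data shape (dicts with `id`, `icon`, and `text` keys) chosen for illustration; the actual prompt schema is not stated in the notes.

```python
def filter_prompts(prompts, query):
    """Filter quick prompts by id, icon, or text; skip empty/invalid entries."""
    q = query.strip().lower()
    results = []
    for p in prompts:
        # Validation pass: drop entries that are not dicts
        # or that lack a non-empty id and text.
        if not isinstance(p, dict):
            continue
        if not str(p.get("id", "")).strip() or not str(p.get("text", "")).strip():
            continue
        # Match the query against id, icon, and text together.
        haystack = " ".join(str(p.get(k, "")) for k in ("id", "icon", "text")).lower()
        if q in haystack:
            results.append(p)
    return results
```

Skipping invalid entries during filtering (rather than erroring) is what lets the list stay live while the user is still typing or editing.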
Polished connectivity and day-to-day reliability
- Deep links now standardize on tentarc:// while staying compatible with legacy dev schemes
- Event handling and run state transitions are more reliable in long sessions
- Rich text icon rendering is safer and more robust
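Standardizing on one scheme while accepting legacy ones typically means normalizing at the entry point. A minimal sketch: the `tentarc://` scheme is from the notes, but the legacy scheme names below are placeholders, since the notes do not name the legacy dev schemes.

```python
from urllib.parse import urlsplit, urlunsplit

# Placeholder names: the actual legacy dev schemes are not documented here.
LEGACY_SCHEMES = {"tentarc-dev", "tentarc-beta"}

def normalize_deep_link(url: str) -> str:
    """Rewrite a legacy-scheme deep link onto the standard tentarc:// scheme."""
    parts = urlsplit(url)
    if parts.scheme in LEGACY_SCHEMES:
        parts = parts._replace(scheme="tentarc")
    return urlunsplit(parts)
```

Normalizing once, before routing, keeps the rest of the app oblivious to which scheme the link arrived on.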
Natural follow-ups, faster feedback loops, and a smoother Codex experience
Ask and answer inline, without breaking flow
- When a task needs confirmation, the agent now asks directly instead of guessing
- You can answer several inline questions quickly without leaving the conversation
- Follow-ups stay in one continuous context, so collaboration feels smoother
Clearer request states for long-running work
- Pending actions are easier to track, with clearer status visibility
- Timed-out and expired requests are cleaned up automatically to reduce stuck states
- Critical user responses are handled faster, even in long-running workflows
More reliable sequencing, more trustworthy status
- Task progress appears in a more stable and consistent order
- In-progress updates are easier to follow while work is running
- Status updates are more accurate when runs are interrupted or fail
More flexible Codex integration, sharper UI
- Codex setup is more flexible across different environments
- Default behavior feels more consistent out of the box
- Search and UI interactions are faster and cleaner in daily use
More control for multi-task runs, safer planning, and easier secure skill growth
Multi-instance execution is now more reliable
- Multiple agent instances can run in parallel with less cross-task interference
- Repeated triggers now resolve more predictably to reduce duplicate execution
- Cancel and retry flows are clearer, making long-running operations easier to control
Plan Mode adds a stronger execution checkpoint
- Plan Mode lets teams submit and review a plan before execution starts
- After plan submission, runs pause automatically at a checkpoint for safer handoffs
- Temporary clarifications can be injected directly into the active run context
- Restricted actions are blocked earlier with clearer, traceable error feedback
Skill imports are easier and safer
- Skills can now be imported from .zip, .skill packages, local folders, and GitHub repositories
- Import validation now runs multi-dimensional security checks before activation
- Skills UI and copy updates make import status and risk outcomes easier to understand
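The four import sources above would typically be classified before validation runs. A sketch under assumptions: the source types come from the notes, but the detection rules and function name are illustrative, not the product's actual logic.

```python
from pathlib import Path

def classify_skill_source(source: str) -> str:
    """Classify a skill import source as github, zip, skill-package, or folder."""
    # Remote GitHub sources are recognized by URL shape.
    if source.startswith(("https://github.com/", "git@github.com:")):
        return "github"
    suffix = Path(source).suffix.lower()
    if suffix == ".zip":
        return "zip"
    if suffix == ".skill":
        return "skill-package"
    # Anything else is treated as a local folder path.
    return "folder"
```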
Default experience and platform stability improved
- Workspace settings now remember active model and default permission mode
- Context and event handling are more consistent, reducing state drift in longer sessions
- Release quality gates are tighter, lowering regression risk
Faster setup, steadier replies, smoother first-run experience
Ready from the first second
- Startup now feels more reliable from launch to first conversation
- Cold boots are more predictable with fewer early-state hiccups
Cleaner model setup across providers
- Model configuration and discovery are now aligned into one consistent workflow
- Available models show up faster with cleaner, more useful candidate lists
- Less repeated probing means lower latency and less setup noise
More natural session feedback
- Session titles better match language and context, making history easier to scan
- Error messages are clearer and more consistent across flows
- Stale error carry-over is reduced for cleaner follow-up interactions
Smoother streaming behavior
- Streaming status now settles correctly after resyncs and cross-window handoffs
- No more long-lived "processing" states when the reply is already done
Cleaner chat presentation
- Welcome and core chat presentation are now separated for a clearer flow
- Draft creation and send timing feel smoother when entering a new conversation
Smarter agent, more automation, richer messaging
Advanced capabilities are now live
- Added Codex as an additional agent option alongside Claude
- Skills now detect and load capabilities from the .agents/skills directory
- Introduced long-term memory so the agent can retain context and preferences across longer cycles
- Added advanced automation to run repeatable workflows with less manual intervention
- Complex tasks can now run as sustained flows instead of one-off manual steps
More flexible model and API connectivity
- Added support for OpenAI-compatible custom API connections
- Onboarding new models or private gateways is now faster with lower migration overhead
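An OpenAI-compatible connection means any gateway exposing the standard `/v1/chat/completions` endpoint can be targeted just by swapping the base URL. A minimal sketch of building such a request (the gateway URL, key, and model name below are placeholders):

```python
import json
from urllib.request import Request

def build_chat_request(base_url: str, api_key: str, model: str, prompt: str) -> Request:
    """Build a request against the OpenAI-compatible chat completions endpoint."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return Request(
        url=base_url.rstrip("/") + "/v1/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )
```

Because only `base_url` varies, onboarding a private gateway is a configuration change rather than a code migration.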
Messaging upgrades in Apps
- Apps now support sending images, voice messages, and files
- Cross-platform conversations are richer and more expressive
- Multimodal message delivery recovers more gracefully when sends partially fail
Smoother, more reliable daily use
- Long sessions and high-concurrency workflows are now more stable
- Deep-link and multi-window routing are more consistent in complex paths
- Queued messages are easier to control with update, steer, remove, and retry actions
Stronger background reliability
- Scheduler recovery and catch-up behavior were improved to reduce missed runs
- Retry/backoff paths were tuned to lower cascading failure risk
- Critical delivery and persistence paths now include stronger safeguards and observability
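The tuned retry/backoff pattern mentioned above is commonly implemented as capped exponential backoff with jitter, which spreads retries out so failing clients do not retry in lockstep. A generic sketch; the product's actual parameters are not public.

```python
import random

def backoff_delays(attempts: int, base: float = 0.5, cap: float = 30.0, rng=None):
    """Capped exponential backoff with full jitter.

    Each attempt waits a random delay in [0, min(cap, base * 2**attempt)].
    """
    rng = rng or random.Random()
    delays = []
    for attempt in range(attempts):
        ceiling = min(cap, base * (2 ** attempt))
        delays.append(rng.uniform(0, ceiling))  # full jitter defuses retry storms
    return delays
```

The cap bounds worst-case wait, and the jitter is what lowers cascading-failure risk: without it, many clients that failed together would all retry together.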
What we are improving next
- Skills will support more import paths (local folders, repository URLs, and packaged distributions)
- Introducing skill security scanning, review, and verification flows, plus service validation
- Security checks will expand to six dimensions: remote execution, data exfiltration, secret access, persistence, destructive ops, and privilege escalation
- Apps support for WeChat and iMessage is planned
- API Connect will add Codex official sign-in support
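The six planned security dimensions can be pictured as labeled signal checks over skill content. This is a toy illustration only: the dimension names come from the roadmap above, but the regex signals and scanner shape are invented for this sketch and bear no relation to the planned implementation.

```python
import re

# Toy signals per dimension, purely illustrative.
DIMENSION_SIGNALS = {
    "remote_execution": r"curl .*\| *(ba)?sh|eval\(",
    "data_exfiltration": r"requests\.post\(|scp ",
    "secret_access": r"\.env\b|AWS_SECRET|id_rsa",
    "persistence": r"crontab|launchctl|systemctl enable",
    "destructive_ops": r"rm -rf|DROP TABLE",
    "privilege_escalation": r"\bsudo\b|setuid",
}

def scan_skill_text(text: str):
    """Return the set of dimensions whose toy signals match the skill text."""
    return {dim for dim, pattern in DIMENSION_SIGNALS.items()
            if re.search(pattern, text)}
```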
First public release
Core Experience
- Multi-session inbox with workflow states (Todo / In Progress / Needs Review / Done)
- Persistent session history, workspace isolation, and streaming interaction
- Drag-and-drop attachments for images, PDFs, and Office files
Integrations
- Unified source model across MCP, REST APIs, and local tools
- Apps support: Lark, Telegram, Slack, and Discord
- Configurable model providers (Anthropic, OpenRouter, Ollama, and more)
Control & Collaboration
- Three permission modes: Explore / Ask to Edit / Auto
- Background tasks with progress tracking for long operations
- Multi-file diff viewer for change review