Why the terminal could become the AI workbench product


For four decades, the command line has looked and behaved almost exactly the same. Developers still stare at a blank black box, hoping they can remember the commands that will build a project, clone a repo, or restart a service.

Zach Lloyd, former Google Sheets Principal Engineer and now Founder‑CEO of Warp, thinks that is no longer good enough. After closing a US$50 million Series B led by Sequoia, he's betting that the next leap in developer productivity will come from giving the terminal a brain powered by large language models (LLMs).

From the 1980s command line to 2025 conversation

Lloyd’s core insight is simple: the terminal is “stuck in 40 years ago from a usability perspective.” Warp keeps the speed and composability developers love, but layers on a modern, user‑friendly UI and, more importantly, an AI that understands natural language.

Installation is as friction‑free as any other terminal: download the native Mac, Linux, or Windows build and open it in place of the stock shell, or run it as a pane inside VS Code. Because Warp sits above your preferred shell, 98 per cent of your aliases, dotfiles, and plugins work unchanged.
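As an illustration of that compatibility claim, a definition from a typical dotfile needs no changes to load under Warp. The minimal sketch below (the alias name is arbitrary, and the `shopt` line is only needed because non-interactive bash disables alias expansion) mimics what a ~/.zshrc or ~/.bashrc might contain:

```shell
shopt -s expand_aliases          # non-interactive bash disables alias expansion
alias gs='git status --short'    # a typical dotfile alias, loaded unchanged
type gs                          # confirms the shell resolved the definition
```

Because Warp launches your existing shell underneath its UI, lines like these are sourced exactly as they would be in any other terminal.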

Talk, don’t type

Instead of memorising dozens of commands, you can now just ask. Typing “Help me set up a new Python tool‑chain, clone the repo, make a branch and ensure it runs” triggers the LLM to probe the environment, generate SSH keys when it hits an auth error, and iterate until the project builds cleanly.
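For readers who want the manual equivalent, that prompt maps to a handful of ordinary shell steps. The sketch below simulates the flow end to end using a throwaway local repository standing in for a hosted one (the repo name, branch name, and entry point are all invented); a real Warp session would target a remote URL and, as described above, handle auth errors along the way.

```shell
set -euo pipefail

# Hypothetical "remote": a local bare repo standing in for a hosted one.
work=$(mktemp -d)
git init -q --bare "$work/tool.git"

# Seed it with a trivial entry point so the clone has something to run.
git clone -q "$work/tool.git" "$work/seed"
cd "$work/seed"
printf 'print("build ok")\n' > main.py
git add main.py
git -c user.email=demo@example.com -c user.name=demo commit -qm "init"
git push -q origin HEAD

# The steps the prompt describes: clone the repo, make a branch,
# set up a Python tool-chain, and ensure the project runs.
git clone -q "$work/tool.git" "$work/tool"
cd "$work/tool"
git checkout -qb feature/setup        # make a branch
python3 -m venv .venv                 # set up an isolated Python tool-chain
./.venv/bin/python main.py            # ensure it runs; prints "build ok"
```

The point of delegating this to an agent is not that any single step is hard, but that the loop — run, read the error, adjust, rerun — is tedious to do by hand.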

The same principle applies to everyday frustrations: run your tests, hit a compiler error, and Warp automatically drafts the fix inline; all you do is accept or reject it. Lloyd calls this shift “code by hand to code by prompt”.

What’s under the hood?

Today Warp lets users choose between Anthropic's Claude 3.5 and 3.7 Sonnet models or a two‑step pipeline that runs a smaller model first before delegating raw code generation to OpenAI. The team exposes that choice because tinkerers love knobs, but long term it plans to pick the best model automatically so developers can focus on outcomes rather than tokens.

Also Read: Open source: The secret to boosting Singapore’s startup ecosystem

Keeping costs predictable is non‑trivial. Warp's individual plans sit at US$15 and US$40 per month, each bundling a quota of AI requests. Even with volatile model pricing, Lloyd says margins stay in a healthy 30‑60 per cent range, helped by careful prompt engineering that avoids context‑hungry models like Claude 3.7. Hosting open‑source model weights themselves could widen margins further, but for now quality of experience trumps raw cost savings.

Adoption: Bottom‑up and outcome driven

Warp’s go‑to‑market mirrors early GitHub: release a delightful free tier, win mindshare, and count on developers to drag the tool into work. Hundreds of thousands already use it every month, split between people who simply want a nicer terminal and early adopters chasing hot AI workflows​.

Roughly 10 per cent of all actions executed in Warp are now either natural‑language prompts or autonomous AI commands, a metric the company tracks closely because higher AI engagement reliably precedes upgrades to paid plans. Most growth is organic: word‑of‑mouth, tweeted demos, and even copy‑pasted Warp “notebooks” that embed terminal output in a link‑shareable format.

A glimpse of the future workbench

Lloyd argues that neither the IDE nor the classic CLI will remain the centre of gravity. As models improve, developers will increasingly fire off long‑running agents that gather context, draft code, run tests, and even deploy — tasks that can take minutes, not seconds.

Also Read: Engineering the future: IMI’s 3-prong strategy to building new ventures in transformative sectors

The terminal’s multiplexer heritage (backgrounded jobs, REPL‑style streams, and session logs) already provides the right mental model for monitoring those agents while they work in parallel. In other words, Warp wants to turn your prompt into a workflow manager, surfacing progress and asking for human guidance only when the AI gets stuck.
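Those primitives already exist in any POSIX-style shell. The minimal sketch below (the agent names and result messages are invented, with `sleep` standing in for long-running work) shows the job-control model the paragraph describes: fan tasks out in parallel, monitor them, then wait and collect results.

```shell
# Two long-running "agents" (stand-ins: sleeps) fanned out in parallel.
out=$(mktemp -d)
(sleep 1; echo "tests passed") > "$out/agent_a" &
(sleep 2; echo "deploy done")  > "$out/agent_b" &

jobs                        # monitor: list the backgrounded jobs
wait                        # block until every parallel job finishes

# Collect each agent's result, as a workflow manager would surface it.
cat "$out/agent_a" "$out/agent_b"
```

Warp's bet is that the same loop — launch, watch, join — scales naturally from background processes to autonomous AI agents.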

If this vision lands, the traditional boundary between “write code in the editor” and “run it in the shell” blurs. Developers will open Warp, describe the feature they need, and grab a coffee while the agent assembles the skeleton. When it finishes, they’ll drop into the editor for the last 10‑20 per cent of polish that still benefits from human taste. Over time, Lloyd hopes most interactions will start in English and only fall back to Bash when necessary.

Lloyd and his team spent years rewriting the terminal core in Rust before shipping because he wanted foundations that would outlast UI fads, a risky path he says only works when you swing big and hire slowly but exceptionally well. The payoff is a deep technical moat just as the AI wave breaks over developer tooling.

In 2025 we are drowning in coding copilots, but Warp is betting that the interface — not just the model — is the real unlock. By melding the terminal’s proven ergonomics with conversational intelligence, it turns a place historically reserved for cryptic one‑liners into a collaborative teammate.

If Lloyd is right, the next time you open a shell you won’t type git clone; you’ll simply ask — and then watch the agent get to work. That feels like progress worth another 40 years.

Editor’s note: e27 aims to foster thought leadership by publishing views from the community. Share your opinion by submitting an article, video, podcast, or infographic.



The post Why the terminal could become the AI workbench product appeared first on e27.
