Q&A: Writer CEO May Habib on competing with ChatGPT, AI opportunities, and regulation

AI startup Writer is looking to win in what's probably the most obvious place to make money in artificial intelligence right now: enterprise.

The generative AI company made headlines in September when, in a still-slow VC environment, it raised $100 million in its Series B funding round from ICONIQ Growth, WndrCo, Balderton Capital, Insight Partners, and Aspect Ventures. Vanguard and Accenture — both Writer customers — also participated in the round.

AI valuations have been a bright spot in the VC landscape, which has been under pressure amid waves of macroeconomic uncertainty and high interest rates.

But Writer had a particularly compelling proposition: taking on ChatGPT, the chatbot that kicked off the AI boom and that its creator, OpenAI, has since sought to monetize as an enterprise application.

Yahoo Finance sat down with Writer CEO May Habib to talk about the company, how big the AI market really is, and the difference between the AI rules that governments will impose and those that companies will need to spearhead themselves.

This interview has been lightly edited for clarity and length.

The co-founders of Writer, May Habib and Waseem Alshikh, are building generative AI models for enterprises. (Writer)

When the news of your Series B fundraising came out, headlines said that the startup is "taking on ChatGPT." What does Writer do?

Right now, 99% of the air in the room is about raw large language models, or LLMs. But the reality is that the bulk of use cases in enterprise requires the large language model plus a bunch of tooling and software to be useful. So what we have done is take a large language model of our own and actually put all of the tooling that you need to build useful stuff adjacent to the model.

We are still in such early parts of this platform shift, and it's really only early adopters who have tried to build something useful. ... The value proposition is building these customized applications at scale that are secure, customizable, and inside a virtual private cloud. It's a holistic solution.
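To make the idea concrete, here is a minimal, hypothetical sketch in Python of what "tooling adjacent to the model" can mean: a prompt template, a retrieval hook for internal context, and a guardrail check wrapped around a stubbed model call. The function names and rules are invented for illustration; this is not Writer's actual product or APIs.

```python
# Hypothetical sketch only: the model call, retrieval hook, and guardrail rule
# are stand-ins invented for illustration, not Writer's actual stack.

from dataclasses import dataclass


@dataclass
class PromptTemplate:
    name: str
    template: str  # brand- or task-specific instructions with placeholders


def retrieve_company_context(query: str) -> str:
    # Stand-in for a lookup against internal, access-controlled data sources.
    return "Approved product messaging and style guidance: ..."


def call_model(prompt: str) -> str:
    # Stand-in for the raw LLM call.
    return f"[model output for a {len(prompt)}-character prompt]"


def passes_guardrails(text: str) -> bool:
    # Stand-in check against a company's own style and safety rules.
    return "confidential" not in text.lower()


def run_task(template: PromptTemplate, user_input: str) -> str:
    context = retrieve_company_context(user_input)
    prompt = template.template.format(context=context, input=user_input)
    output = call_model(prompt)
    return output if passes_guardrails(output) else "[blocked by guardrails]"


if __name__ == "__main__":
    tmpl = PromptTemplate(
        name="press-release-draft",
        template="Using this context:\n{context}\n\nDraft copy for: {input}",
    )
    print(run_task(tmpl, "our Q3 product launch"))
```

The point of the sketch is that the differentiation Habib describes lives in the layers around the model, not in the model call itself.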

What I'm hearing is that you do compete with the enterprise version of ChatGPT, but what you're doing is getting the edge by specifically building for organizations. Is that correct?

Absolutely. For example, the prompt library is separated out by the team at Uber that uses it, the task it's for, and the content type.

I think that advantage and the moat that the collaboration layer provides are really critical to differentiate from the raw LLM companies. I would put Anthropic in the raw LLM category — excellent company. OpenAI has got a bit more structure and user interface, but I think they're going to have a little bit of an identity crisis: Are they an application company or an infrastructure company? The pricing says the latter, but the positioning says the former.

At Writer, we only care about enterprise. We only care about enabling entire teams with workflows and with AI applications for internal use cases, and we solve that problem soup to nuts.

An illustration of Writer's software tailored for businesses. (Writer)

So the distinction between raw LLMs and enterprise LLMs is that they need to be different from start to finish.

I would say so. The frontier models — the models that are so good that they've got a pretty decent answer for just about any question — they're still not going to have information about your business.

The ability to take a model and pair it with actual information about your business securely is huge and is going to be a huge differentiator for us. It's one thing to build a generative AI feature in your product. It's completely another thing to say: How do I enable 10,000 people to use AI internally?
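As an illustration of what "pairing a model with actual information about your business" can look like, here is a toy retrieval sketch in Python: internal documents are searched first, and only the matching snippets are placed into the prompt. The documents, scoring, and stubbed model call are invented for the example; a real deployment would use permissions-aware search over governed data sources.

```python
# Toy retrieval example: the documents and scoring are invented for
# illustration, and the model call is a stub.

INTERNAL_DOCS = {
    "expense-policy": "Employees may expense travel booked through the approved portal.",
    "brand-voice": "Write in plain language and avoid jargon and superlatives.",
    "q3-roadmap": "Q3 priorities are reliability work and two enterprise integrations.",
}


def search_internal_docs(question: str, top_k: int = 2) -> list:
    # Naive keyword-overlap scoring; a production system would use
    # permissions-aware semantic search over governed data sources.
    q_words = set(question.lower().split())
    ranked = sorted(
        INTERNAL_DOCS.values(),
        key=lambda text: len(q_words & set(text.lower().split())),
        reverse=True,
    )
    return ranked[:top_k]


def answer_with_context(question: str) -> str:
    context = "\n".join(search_internal_docs(question))
    prompt = (
        "Answer using only this company context:\n"
        f"{context}\n\nQuestion: {question}"
    )
    # Stand-in for the actual LLM call.
    return f"[model answer grounded in {len(prompt)} characters of prompt]"


if __name__ == "__main__":
    print(answer_with_context("What are the Q3 priorities for enterprise integrations?"))
```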

What's the market opportunity you're looking at here?

I think the revenue is in enterprise on an enduring basis. The consumer utility for models goes to zero because of competition.

Big Tech will [go] scorched earth on this. It's the race to zero, and so the market that remains, I think, is the raw LLM access to frontier models. That's going to be OpenAI and Anthropic eating that up. Then it's going to be all the internal use cases that require last-mile data and access and integration, and I think that's going to be us.

Looking at the big picture, why did you decide to focus on AI?

Ever since grade school, my extracurriculars were writing-related. I edited The Harvard Crimson, and the number of edits you need to make an OK sentence into a real sentence that sings — it's a lot of work. ... We no longer need to do that.

Being so close to the actual labor of taking content at work and editing it so that you're not embarrassed when it goes live, I knew how much manual work that requires at a very high level, especially for non-writers. To see it done instantly in front of me, knowing I would be okay publishing it, was revelatory.

How are you thinking about safety and regulatory concerns? How do those concerns affect Writer?

The copyright issue is one thing, ethics is another, and bias, toxicity, and safety are a third realm altogether. You want to think about it across those three dimensions.

On copyright, we're very conservative, and our models don't have copyrighted information in them. ... I think there's a really low probability that we will have to completely rethink how we scrape and train. I think what will happen is default scraping laws will be "do not scrape" and folks will have to opt in, taking that very conservative route.

On ethics, the questions are for every company to answer themselves. What is the ethical amount of automation and augmentation that we want, knowing full well that it's going to impact the number of people we've got in any given role or function? ... I think we're going to start seeing published internal philosophies and companies making a commitment to reskilling and redeployment if and when processes get automated via AI.

Lastly, on bias and toxicity, the models are getting so much better. The bigger models are actually better at self-moderating, and that's coupled with AI guardrails and protocols that really ensure what's coming out doesn't violate a company's own set of values.

I think the government and Big Tech will land in the right place. ... But in the enterprise realm, it's definitely going to be up to every company.

A student at a high school sits in front of a laptop and uses an AI tool on July 26, 2023, in Karlsruhe, Baden-Württemberg. (Philipp von Ditfurth/picture alliance via Getty Images)

You've raised money in a tough VC environment. Do you have any favorite stories from the fundraising trail?

Honestly, this was the easiest fundraise I ever had. ... It's actually quite hard to find use cases in production in generative AI, and folks who did due diligence found dozens of them in Writer. So we had four different offers in the span of three weeks.

You must have been over the moon.

It was incredibly validating. I treat fundraising as a chance to bring the energy and the wherewithal for this next leg of the journey to all of us as a team. ... The team is locked and loaded, focused on our market, and bringing tremendous value to our customers.

We don't care about building the biggest model that's ever existed. I'm not excited about burning a billion dollars in server costs. There are incredible opportunities to transform work with purpose-built LLMs.

Allie Garfinkle is a Senior Tech Reporter at Yahoo Finance. Follow her on Twitter at @agarfinks and on LinkedIn.

