QuiverAI, a startup led by Juan (Joan) Rodríguez, launched its public beta on February 25 alongside an $8.3 million seed round led by Andreessen Horowitz. The product is Arrow 1.0, a closed-weight model that generates SVG files from text prompts and reference images. It is available now through a web interface and an API.
The pitch is straightforward: every major generative image model outputs pixels. Midjourney, DALL-E, Stable Diffusion, Flux. Pixels. And pixels are useless if you need a logo that works at billboard scale, an icon set you can recolor in Figma, or an animation that loads in under a kilobyte. Designers who use AI generation today still have to run the output through manual tracing or vectorization tools to get anything production-ready. QuiverAI wants to skip that step entirely.
The academic backstory
Rodríguez isn't coming at this cold. He's a PhD student at Mila and École de technologie supérieure in Montreal, and he's the lead author of StarVector, an open-source multimodal model for SVG generation that was published at CVPR 2025. The GitHub repo has accumulated over 4,000 stars. His follow-up work on Reinforcement Learning from Rendering Feedback (RLRF), which closes the loop between SVG code output and how it actually renders visually, was accepted at NeurIPS 2025.
So there's a real research lineage here, which is more than you can say for a lot of AI startups announcing seed rounds. StarVector was built on top of StarCoder and trained on SVG-Stack, a dataset of 2 million SVG samples. Arrow 1.0 is the commercial successor, trained from scratch and kept closed-weight.
Why not just use Gemini?
Here's the obvious question nobody in QuiverAI's launch materials addresses directly: general-purpose LLMs are getting better at SVG generation all the time. Gemini 2.5 Pro and Claude have both shown they can produce reasonable SVGs from text prompts. Simon Willison's "pelican riding a bicycle" test has become an informal benchmark, and recent Gemini models handle it surprisingly well. Google has been leaning into this capability explicitly with Gemini 3.1 Pro.
But "reasonable SVGs" and "production-ready SVGs" are different things. Ask Gemini to generate a logo and you'll get something that looks correct when rendered, but open the code and it's a mess of redundant paths and flattened layers. Try changing a color system or isolating an element and it falls apart. The a16z investment thesis puts it this way: a model can render plausible-looking pixels, but until it understands composition, its output can't be meaningfully edited, animated, or reused.
I'm somewhat sympathetic to this argument. SVG is code, and the structure of that code matters as much as the visual output. A 500-line SVG that looks identical to a 50-line SVG when rendered is categorically worse for anyone who needs to work with it.
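To make that concrete, here's an illustrative pair (hand-written for this article, not Arrow output). Both snippets render the same two-tone ring, but the first bakes everything into one opaque path, the kind of output pixel-first models tend to emit when coerced into SVG, while the second is named, layered, and recolorable:

```xml
<!-- Flattened: one anonymous path, absolute coordinates, evenodd hole. -->
<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 100 100">
  <path d="M50 2 A48 48 0 1 1 49.9 2 Z M50 20 A30 30 0 1 1 49.9 20 Z"
        fill="#1a1a2e" fill-rule="evenodd"/>
</svg>

<!-- Structured: same rendered ring, but a named group you can restyle. -->
<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 100 100">
  <g id="badge" stroke="#1a1a2e" fill="none">
    <circle id="ring" cx="50" cy="50" r="39" stroke-width="18"/>
  </g>
</svg>
```

Changing the badge color in the second file is a one-attribute edit; in the first, you have to reverse-engineer the path data.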
What Arrow 1.0 actually does
The model handles three tasks: text-to-SVG generation, image-to-SVG vectorization (converting raster images into vector format), and editing existing SVGs via natural-language prompts. According to QuiverAI's documentation, the API exposes these as separate endpoints. The model identifier in API calls is arrow-preview, which suggests they're being careful about the "1.0" branding relative to what the API actually serves.
QuiverAI claims Arrow is particularly strong on icons, logos, typography (including full font generation), technical drawings like floorplans, and layered illustrations. That's a broad set of claims for a first release, and I haven't seen independent benchmarks confirming any of them. The company's own examples look clean, but cherry-picked demos always do.
One capability that caught my attention: font generation. According to a16z's writeup, the model can generate entire typefaces where glyphs share consistent geometry and rhythm. QuiverAI demonstrated this by generating a custom font based on a16z's existing brand design. If that works reliably beyond demos, it's a genuinely novel capability. Font design is painstaking work even for specialists.
The investor list
The angel roster reads like a who's-who of design tool founders. Linda Tong, CEO of Webflow. Amjad Masad, CEO of Replit. Michele Catasta, Head of AI at Replit. Eric Zacariason from Cursor. Adrian Mato, Design Director at GitHub. Andrew Pouliot, ex-Figma. K Fund, JME, and Mission also participated.
That's a dense concentration of people who build tools for designers and developers. Whether that translates into distribution or just looks good in a press release is another question entirely.
What's missing
Arrow 1.0 is closed-weight, which is a departure from Rodríguez's academic work. StarVector was fully open. The commercial model is not. That's a perfectly rational business decision, but it means the community that formed around StarVector can't inspect, fine-tune, or build on Arrow directly.
There are no published benchmarks comparing Arrow 1.0 to StarVector, to Gemini's SVG output, or to traditional vectorization tools like Adobe's Image Trace. The company hasn't disclosed model architecture details or training data composition beyond "trained from scratch." And the pricing page, while it exists, just says "start free" without clear details on what happens after the beta.
The rate limit is 20 requests per minute per organization. For a beta, that's fine. For production use, it's a constraint worth watching.
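Twenty requests per minute is easy to exhaust from any batch job, so an integration will want client-side throttling. A minimal token-bucket sketch, where only the 20 req/min figure comes from QuiverAI and the rest is generic:

```python
import time

class TokenBucket:
    """Client-side throttle for a per-minute request cap (e.g. a
    20 req/min/org limit). Refills continuously rather than in fixed
    windows, which avoids bursts at window boundaries."""

    def __init__(self, rate_per_minute: int = 20):
        self.capacity = rate_per_minute
        self.tokens = float(rate_per_minute)
        self.refill_per_sec = rate_per_minute / 60.0
        self.last = time.monotonic()

    def try_acquire(self) -> bool:
        """Consume one token if available; False means back off."""
        now = time.monotonic()
        elapsed = now - self.last
        self.tokens = min(self.capacity,
                          self.tokens + elapsed * self.refill_per_sec)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False

bucket = TokenBucket(rate_per_minute=20)
# Fire 25 back-to-back attempts: the first 20 pass, the rest must wait.
allowed = sum(bucket.try_acquire() for _ in range(25))
print(allowed)  # 20
```

At 20 req/min the bucket refills one token every three seconds, so a rejected caller knows roughly how long to sleep before retrying.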
Does the market exist?
QuiverAI is betting that there's a large enough population of designers who (a) need vector output, (b) want AI generation, and (c) find current workarounds insufficient. The first two conditions seem clearly met. The third is where it gets interesting, because the workaround of generating a raster image and then auto-tracing it keeps getting better too.
The Node.js SDK and API-first approach suggest QuiverAI is also targeting developers building design tools, not just end-user designers. In the era of coding agents that can emit SVG markup, having a specialized model you can call from Cursor or similar environments is a plausible wedge.
The public beta is live at app.quiver.ai. Whether Arrow 1.0 is genuinely better than what a well-prompted Gemini 3.1 Pro can produce is something the design community will sort out faster than any benchmark could.