Building a Figma plugin with AI as a zero-coder
A UX designer and researcher's rollercoaster ride of building a JavaScript plugin.
As a non-developer with a vision for a Figma plugin, I turned to AI to bridge the gap between my ideas and reality, hoping to create a tool that would seamlessly integrate with Figma’s design environment. I expected AI to deliver working code with minimal effort, but the journey was a frustrating mix of successes and failures. From promising starts to maddening setbacks, my experience with various AI tools (Google Gemini Pro 2.5, Claude, ChatGPT, and local setups like Devstral and Gemma with Ollama and OpenWeb UI) revealed both the potential and the pitfalls of AI-driven development for someone without coding skills.
The hope
With no programming background, I saw AI as a lifeline to build my plugin. I wanted a simple, functional tool with a clean, modern interface, and I provided clear instructions to each AI tool. The promise was enticing: AI could generate code, explain technical steps in plain language, and adapt to my feedback. Initially, the responses were encouraging—structured code, detailed explanations, and beginner-friendly setup guides. I felt optimistic, believing I’d soon have a working plugin ready for Figma.
The first hurdle
My optimism quickly faded when I tested the AI-generated code in Figma. The plugin often failed to work as expected—sometimes the interface loaded but showed nothing, other times key features didn’t function, or I got cryptic error messages. As a non-developer, I was lost, unable to debug or understand why things broke. Each AI tool produced code that looked legitimate, but implementing it in Figma’s strict environment (limited to manifest.json, code.js, and ui.html, with no extra files or assets allowed) exposed flaws that required constant revisions.
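For context, that whole environment hangs off one small JSON file. A minimal manifest looks roughly like this (the name and id are placeholders, not from my actual plugin):

```json
{
  "name": "My Plugin",
  "id": "0000000000000000000",
  "api": "1.0.0",
  "main": "code.js",
  "ui": "ui.html",
  "editorType": ["figma"]
}
```

Everything else—styles, scripts, images—has to live inline inside code.js and ui.html, which is exactly the constraint some of the AI tools kept forgetting.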
Success
Among the AI tools, Google Gemini Pro 2.5 stood out as a success. It generated code that came closest to my vision, producing a functional plugin that could detect and manipulate elements in Figma with reasonable accuracy. Its responses were concise and tailored, and it seemed to grasp Figma’s API constraints better than others. For example, Gemini’s code correctly prioritised the current selection or page, and its UI suggestions aligned with my request for a minimalistic, Japanese-inspired design. While not perfect—occasional bugs still required tweaks—it provided a working prototype that I could test in Figma, a small victory that kept me hopeful.
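To illustrate the kind of “selection first, page second” fallback Gemini got right, here is a hedged sketch. The helper name and the plain-function shape are my own; in a real plugin this logic would read `figma.currentPage.selection` and `figma.currentPage.children`, which don’t exist outside Figma, so stand-in objects are used here:

```javascript
// Hypothetical sketch: prefer the user's selection, fall back to the page.
// In a real Figma plugin, `selection` would be figma.currentPage.selection
// and `pageChildren` would be figma.currentPage.children.
function nodesToProcess(selection, pageChildren) {
  // Use whatever the user has selected, if anything...
  if (selection && selection.length > 0) {
    return selection;
  }
  // ...otherwise process every top-level node on the current page.
  return pageChildren;
}

// Plain-object stand-ins so the sketch runs outside Figma.
const page = [{ name: "Frame 1" }, { name: "Frame 2" }];
console.log(nodesToProcess([], page).length); // falls back to the page: 2
console.log(nodesToProcess([page[0]], page).length); // uses the selection: 1
```

Small as it is, this fallback is the difference between a plugin that silently does nothing on an empty selection and one that still produces a sensible result.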
A close second
Claude was another bright spot, offering thoughtful and structured code. It excelled at explaining Figma’s limitations in plain terms, helping me understand why certain features were tricky. Its code was reliable for basic functionality, and it handled iterative feedback well, adjusting to my requests without losing context. However, Claude’s solutions sometimes felt overly cautious, missing the robustness needed for edge cases like empty selections or complex Figma files. Still, it was a dependable ally, getting me closer to a usable plugin than many others.
A disappointing letdown
ChatGPT, despite its popularity, was a major disappointment. Its initial code ignored Figma’s strict file restrictions, including extra files like styles.css or assets that Figma doesn’t allow. Even after I clarified the constraints, ChatGPT’s fixes were inconsistent, often reintroducing errors or failing to address core issues like font detection. The code frequently crashed in Figma, and its verbose explanations were overwhelming for a non-coder. ChatGPT felt like it was guessing rather than understanding Figma’s ecosystem, leaving me frustrated and stuck.
A complete mess
Trying to use X-Grok for insights was a non-starter. It didn’t even get to first base, offering irrelevant or generic responses that showed no understanding of Figma’s plugin system. X-Grok’s suggestions were too vague to be useful, and they failed to provide actionable code or guidance. It was quickly clear that X-Grok wasn’t equipped for this task, so I abandoned it early on, focusing on other tools that at least showed some promise.
A mixed bag
Running Devstral and Gemma locally with Ollama and OpenWeb UI was a surprising success in some ways, at least functionality-wise. The setup let me experiment with AI models offline, and the code they generated was decent, handling basic Figma API calls effectively. It felt empowering to have a local environment, free from reliance on cloud-based AIs. However, the UI the local models generated posed significant challenges. Tweaking it required manual adjustments I wasn’t equipped to handle (or maybe I got too bored), and the lack of a polished approach added to the struggle.
The constant grind
Testing was a nightmare across all tools. Each new code version meant reloading the plugin in Figma’s development menu (at times I had to restart my system twice), a tedious process of selecting manifest.json and hoping it worked. When it didn’t, I was left with error messages I couldn’t decipher, or blank interfaces whose only clue was a vague toast (a failing UX pattern: it tells you something is wrong, but never why). The AI tools suggested basic checks, like ensuring text layers existed, but as a non-coder I had no way to dig deeper. The cycle of copying code, testing, failing, and asking for fixes ate up hours, turning a simple project into a marathon.
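The “make sure text layers exist” advice the AIs kept giving boils down to something like the sketch below. The function name is my own and the node objects are stand-ins; real Figma nodes expose the same `type` field (for example "TEXT" or "FRAME"):

```javascript
// Hypothetical guard: collect only text layers before doing any font work,
// so an empty result can trigger a helpful message instead of a blank UI.
function findTextLayers(nodes) {
  return nodes.filter((node) => node.type === "TEXT");
}

// Stand-in nodes; real Figma nodes carry the same `type` field.
const nodes = [
  { type: "FRAME", name: "Card" },
  { type: "TEXT", name: "Title" },
];

const textLayers = findTextLayers(nodes);
if (textLayers.length === 0) {
  // This is the message I wish those silent toasts had shown.
  console.log("No text layers found - select a frame that contains text.");
}
console.log(textLayers.length); // 1
```

A guard like this would have replaced many of my blank-interface dead ends with an actionable message.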
The bigger picture
This journey highlighted AI’s strengths and weaknesses for non-developers. Gemini Pro 2.5 and Claude showed that AI can generate functional code and adapt to feedback, but they still missed edge cases, UI finesse, experience-building, and Figma’s nuances. ChatGPT and X-Grok fell short, either through inconsistency or irrelevance. Local setups like Devstral and Gemma offered control but stumbled on UI polish. The biggest frustration was AI’s inability to fully understand Figma’s constraints or to test code in a real Figma environment. As a non-developer, I needed a plug-and-play solution, not a puzzle requiring constant fixes.
What could make AI better?
AI could improve for non-coders like me by:
Retaining project context across iterations to avoid repeated errors.
Deeply understanding Figma’s API and file restrictions from the start.
Simulating Figma’s environment to test code before sharing.
Providing clear, non-technical error messages and fixes.
Offering minimal code with simple setup steps, skipping lengthy explanations.
The outcome
In the end, I got a partially working plugin thanks to Gemini Pro 2.5 and Claude, with a sleek UI that looked promising but faltered in real-world use. The Buy Me a Coffee link I had added was a nice touch, but without reliable functionality it felt premature. The process taught me that AI can get you close, but it can’t replace a developer’s intuition for navigating platforms like Figma. I’m stepping back for now, but the experience was a crash course in resilience.
Two cents for zero-coders
If you’re a non-developer using AI for a Figma plugin, brace for challenges:
Test with simple Figma files to catch issues early.
Study Figma’s plugin rules to avoid AI-generated missteps.
Try multiple AI tools; Gemini and Claude worked better than ChatGPT or X-Grok for me, but your mileage may vary.
Learn to write prompts that give the AI clear, specific instructions.
Be aware of AI hallucination: it is real.
Consider local setups like Ollama if you want control, but expect UI hurdles.
Be ready to hire a developer if AI falls short.
Building a Figma plugin with AI was a mix of small wins and big frustrations. While I saw glimpses of success, the constant setbacks showed me AI is not yet a magic wand for non-coders. Here’s hoping future tools close that gap.
Note: This reflects a personal experience and an opinion piece with AI tools and Figma’s plugin system as of June 2025.