Key Points
- Gemini 3 Pro delivers deeper reasoning and full‑code replacements after each tweak.
- Gemini 2.5 Flash is faster but often requires highly specific prompts and manual code swaps.
- The thinking model successfully integrated TMDB API data, while the fast model produced mostly incorrect poster matches.
- Both models eventually completed the movie‑listing web app, but Gemini 3 Pro needed fewer error‑fixing cycles.
- The experiment highlights a speed‑versus‑depth trade‑off inherent in Gemini’s model families.
Background
Vibe coding—using conversational AI to create functional code—has become a popular way for developers and hobbyists to build small projects without deep programming expertise. Google’s Gemini platform offers two distinct model families: a “thinking” model (Gemini 3 Pro) optimized for deep reasoning, and a “fast” model (Gemini 2.5 Flash) that balances speed with reasoning.
Testing Approach
The author selected a simple web‑app concept that would list horror movies, display posters, and provide additional information on click. Identical prompts were fed to both Gemini 3 Pro and Gemini 2.5 Flash, and the resulting code was iteratively refined. The process tracked how many iterations were needed, the nature of the model’s suggestions, and the completeness of the final output.
Findings with Gemini 3 Pro
Gemini 3 Pro consistently offered detailed explanations and full‑code replacements after each adjustment, so the user could copy‑paste the entire updated script without hunting for the sections that had changed. The model suggested using an API key from The Movie Database (TMDB) to pull posters and details automatically, and it integrated that data successfully. It ran into a few persistent layering issues but resolved them after repeated requests. The final product displayed movie posters, linked to YouTube trailers, and included optional design enhancements such as a 3D wheel effect. The author noted that the project took roughly twenty iterations but resulted in a functional, feature‑rich web page.
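The article does not reproduce the generated code, so as a rough illustration only, the sketch below shows what a TMDB poster lookup of this kind typically looks like. It assumes TMDB's standard /search/movie endpoint and image URL prefix, an API key read from a hypothetical TMDB_API_KEY environment variable, and an illustrative fetchMoviePoster helper name not taken from the article.

```typescript
// Illustrative sketch only; not the code generated in the article's experiment.
// Assumes a TMDB API key in the TMDB_API_KEY environment variable (hypothetical name).

interface TmdbMovie {
  id: number;
  title: string;
  overview: string;
  release_date: string;
  poster_path: string | null;
}

const TMDB_API_KEY = process.env.TMDB_API_KEY ?? "";
const SEARCH_URL = "https://api.themoviedb.org/3/search/movie";
const IMAGE_BASE = "https://image.tmdb.org/t/p/w500"; // standard TMDB poster size prefix

// Look up a movie by title and return its poster URL plus the matched details.
async function fetchMoviePoster(
  title: string
): Promise<{ posterUrl: string | null; details: TmdbMovie } | null> {
  const url = `${SEARCH_URL}?api_key=${TMDB_API_KEY}&query=${encodeURIComponent(title)}`;
  const response = await fetch(url);
  if (!response.ok) {
    throw new Error(`TMDB request failed: ${response.status}`);
  }
  const data = (await response.json()) as { results: TmdbMovie[] };
  const match = data.results[0]; // naively take the top search result
  if (!match) return null;
  return {
    posterUrl: match.poster_path ? `${IMAGE_BASE}${match.poster_path}` : null,
    details: match,
  };
}

// Example usage: populate a small horror-movie list with poster URLs.
const horrorTitles = ["The Shining", "Hereditary", "The Exorcist"];
Promise.all(horrorTitles.map(fetchMoviePoster)).then((results) => {
  for (const result of results) {
    if (result?.posterUrl) console.log(result.details.title, result.posterUrl);
  }
});
```

Note that naively trusting the top search result, as this sketch does, is exactly the kind of matching step that can go wrong; the article's observation that the fast model returned almost entirely incorrect posters suggests its approach to this lookup was far less reliable than Gemini 3 Pro's.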
Findings with Gemini 2.5 Flash
Gemini 2.5 Flash operated more quickly but often required the user to provide highly specific prompts. When asked to make changes, the model supplied only the modified code snippet and instructed the user to replace the original section manually. It suggested “acquiring” images rather than automatically retrieving them via an API, and its attempts to add an API key produced many incorrect poster matches—approximately ninety‑nine percent of the images were wrong. The model also declined to rewrite the entire codebase when requested, describing the ask as “huge.” Overall, the Flash model delivered a partially workable project but left the user to correct numerous errors and manually assemble many components.
Overall Assessment
The comparison underscores a clear trade‑off: Gemini 3 Pro provides deeper reasoning, more comprehensive code updates, and better integration of external data sources, at the cost of slower response times. Gemini 2.5 Flash offers faster interactions but demands greater user guidance, more manual code swaps, and frequent debugging. For users seeking a smoother, more autonomous coding experience, the higher‑tier model appears preferable, while those prioritizing speed and willing to handle additional manual steps may opt for the fast model.
Source: cnet.com