
Most conversations about AI flatten everything into a single bucket.
“AI did this.” “AI decided that.” “AI is thinking.”

In reality, modern AI-powered products are usually composites: deterministic systems doing structured work, paired with large language models doing human-facing work. I built ConnectAI to make that distinction visible using a game everyone already understands.

Instead of writing another explainer post about decision trees vs. LLMs, I wanted a show-don’t-tell demo. So I used Connect 4!

You can play the app here:
👉 https://connect-ai-v1.web.app

And if you want a quick walkthrough first, here’s the short demo video I posted on LinkedIn:

The idea: make the architecture impossible to miss

ConnectAI is an educational web app that looks like a normal Connect 4 game at first glance. You drop pieces into a 7×6 board. The computer responds. You win, lose, or draw.

But right under the board, the app exposes what’s actually happening:

  • An AI Engine labeled Decision Tree
  • An LLM Chat panel labeled Gemini 2.0 Flash-Lite
  • A “See prompt” link that reveals how the LLM call is constructed

That pairing is the lesson. A classic search-based AI doing structured reasoning, with an LLM layered on top for communication.

Three layers working together

The app naturally falls into three layers, each optimized for a different kind of intelligence.

1. Gameplay layer: the product surface

At the top is a clean, mobile-first Connect 4 experience:

  • Standard 7×6 grid
  • Turn-based play
  • Immediate visual feedback as pieces drop
  • Lightweight navigation (New, About, Author)

This layer is intentionally familiar. By removing cognitive overhead, the user can focus on how the AI behaves rather than learning new rules.
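
Before digging into the AI layers, here’s one way the board and the drop mechanic could be modeled in TypeScript. These types and helpers are illustrative, not ConnectAI’s actual source; the later sketches in this post reuse them:

```typescript
// Illustrative board model for a 7x6 Connect 4 grid (not ConnectAI's actual source).
type Player = "human" | "computer";
type Cell = Player | null; // null = empty slot
type Board = Cell[][];     // ROWS x COLS, row 0 at the top

const ROWS = 6;
const COLS = 7;

// Create an empty board.
function newBoard(): Board {
  return Array.from({ length: ROWS }, () => Array<Cell>(COLS).fill(null));
}

// Drop a piece into a column; it settles into the lowest empty row.
// Returns a new board (immutable update) or null if the column is full.
function drop(board: Board, col: number, player: Player): Board | null {
  for (let row = ROWS - 1; row >= 0; row--) {
    if (board[row][col] === null) {
      const next = board.map((r) => r.slice());
      next[row][col] = player;
      return next;
    }
  }
  return null;
}
```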

2. Decision layer: deterministic intelligence

When it’s the computer’s turn, the game state is handed to a deterministic decision engine.

Under the hood:

  • Minimax-style adversarial search
  • Alpha–beta pruning to cut off branches that can’t influence the final decision
  • A bounded search depth (8 plies) to balance playing strength against response time

The engine evaluates hundreds of thousands of possible future board states and selects a move assuming optimal opponent play. The UI even surfaces some metadata (depth searched, time taken, and scenarios evaluated) so users can see that this component is doing exhaustive computation rather than just guessing.
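
To make that concrete, here’s a minimal sketch of minimax with alpha–beta pruning in TypeScript, reusing the Board and Player types from the earlier sketch. The helper functions (legalMoves, applyMove, isTerminal, evaluate) are only declared here as assumptions; this illustrates the technique, not ConnectAI’s actual engine:

```typescript
// Minimal alpha-beta minimax sketch (illustrative, not ConnectAI's engine).
// These helpers are assumed to exist elsewhere and are only declared here:
declare function legalMoves(board: Board): number[];                          // playable columns
declare function applyMove(board: Board, col: number, player: Player): Board; // immutable move
declare function isTerminal(board: Board): boolean;                           // win, loss, or full board
declare function evaluate(board: Board): number;                              // score from the computer's view

const MAX_DEPTH = 8; // bounded search depth in plies

function minimax(board: Board, depth: number, alpha: number, beta: number, maximizing: boolean): number {
  if (depth === 0 || isTerminal(board)) return evaluate(board);

  if (maximizing) {
    let best = -Infinity;
    for (const col of legalMoves(board)) {
      best = Math.max(best, minimax(applyMove(board, col, "computer"), depth - 1, alpha, beta, false));
      alpha = Math.max(alpha, best);
      if (alpha >= beta) break; // prune: the opponent would never allow this branch
    }
    return best;
  } else {
    let best = Infinity;
    for (const col of legalMoves(board)) {
      best = Math.min(best, minimax(applyMove(board, col, "human"), depth - 1, alpha, beta, true));
      beta = Math.min(beta, best);
      if (alpha >= beta) break; // prune: the computer already has a better option
    }
    return best;
  }
}

// Pick the column with the best minimax score, assuming the human replies optimally.
function chooseMove(board: Board): number {
  let bestCol = legalMoves(board)[0];
  let bestScore = -Infinity;
  for (const col of legalMoves(board)) {
    const score = minimax(applyMove(board, col, "computer"), MAX_DEPTH - 1, -Infinity, Infinity, false);
    if (score > bestScore) {
      bestScore = score;
      bestCol = col;
    }
  }
  return bestCol;
}
```

The break statements are where pruning pays off: whole subtrees are skipped once it’s clear the opponent would never let the game reach them, which is how a bounded 8-ply search can stay responsive.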

This approach is well-established in classical AI research. For two-player, turn-based, zero-sum games, deterministic search remains one of the most reliable techniques available.

Connect 4 is a particularly effective teaching tool here. Despite its simple rules, it’s a solved game: with perfect play on the standard 7×6 board, the first player can force a win by starting in the center column. That fact reinforces the core message: when a problem is well-defined and fully observable, structured algorithms are often the right tool.

3. Communication layer: language, not logic

Once the move is chosen, a second system takes over.

This is where the LLM comes in.

The app sends the current board state and the deterministic engine’s analysis to a server-side endpoint. A lightweight language model generates short, personality-driven commentary based on that context: trash talk, reactions, confidence, or frustration.

Crucially, the LLM is not deciding the move.

Instead, it’s doing what large language models excel at:

  • Natural language generation
  • Tone and style variation
  • Human-like responses that would be tedious to hard-code

The “personality” slider modifies the prompt, and the “See prompt” link exposes that prompt directly. This turns prompting into a first-class engineering artifact rather than a hidden implementation detail.

That design choice aligns with modern prompt-engineering best practices: clear instructions, structured context, and explicit constraints dramatically improve LLM reliability and usefulness.
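
As a rough illustration, here’s how a prompt like that might be assembled. The field names and personality options are hypothetical; the app’s actual prompt is exactly what the “See prompt” link shows:

```typescript
// Illustrative prompt builder (hypothetical names; the app's real prompt is what "See prompt" shows).
interface EngineAnalysis {
  chosenColumn: number;       // the move the deterministic engine already picked
  score: number;              // its minimax evaluation
  depth: number;              // plies searched
  scenariosEvaluated: number; // board states examined
}

type Personality = "friendly" | "competitive" | "trash-talker";

function buildCommentaryPrompt(boardAscii: string, analysis: EngineAnalysis, personality: Personality): string {
  return [
    `You are the computer opponent in a Connect 4 game. Personality: ${personality}.`,
    `The move has ALREADY been chosen by a deterministic engine. Do not pick or suggest moves.`,
    `Engine analysis: column ${analysis.chosenColumn}, score ${analysis.score},`,
    `searched ${analysis.depth} plies, ${analysis.scenariosEvaluated} scenarios evaluated.`,
    `Current board:`,
    boardAscii,
    `Reply with 1-2 short sentences of in-character commentary about this move.`,
  ].join("\n");
}
```

In a sketch like this, the boundary lives in the prompt itself: the model is told the move has already been chosen and is asked only for commentary.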

A modern hybrid AI system you can interact with

Zooming out, ConnectAI is really a miniature example of a modern hybrid AI architecture:

  • Deterministic core
    Reliable, repeatable, optimized for structured reasoning.
  • Generative interface
    Flexible, expressive, and human-facing.
  • Explicit boundaries
    Each system is constrained to the job it’s best at.

By labeling these components in the UI and exposing the prompt, the app becomes an interactive system diagram. Users don’t have to trust an explanation; they can see the separation in action.

This mirrors a growing consensus in applied AI: the most effective systems combine classical algorithms with LLMs, rather than replacing one with the other.

Under the hood: the tech stack

ConnectAI is built on a modern, full-stack TypeScript foundation:

  • Next.js for frontend and backend
  • React for UI and state management
  • Tailwind CSS + shadcn/ui for styling and accessible components
  • Firebase Hosting for deployment and scalability

On the frontend:

  • Game state is managed with useReducer for predictable transitions
  • The board is an interactive SVG with animations
  • The AI move computation runs in a Web Worker, preventing UI freezes during deep searches

The deterministic engine implements Minimax with alpha–beta pruning and returns metadata that’s surfaced directly in the UI.
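
Here’s a simplified sketch of that wiring, reusing chooseMove and MAX_DEPTH from the search sketch above. The message shapes and the reducer action type are assumptions for illustration:

```typescript
// worker.ts (illustrative): compiled as a dedicated Web Worker, so the deep search
// never blocks the main thread. Reuses chooseMove/MAX_DEPTH from the search sketch.
self.onmessage = (event: MessageEvent<{ board: Board }>) => {
  const start = performance.now();
  const column = chooseMove(event.data.board);
  self.postMessage({
    column,
    depth: MAX_DEPTH,
    timeMs: Math.round(performance.now() - start), // metadata surfaced in the UI
  });
};

// On the main thread (illustrative): ask the worker for a move, then dispatch it
// to the useReducer-based game state. The action shape here is an assumption.
type GameAction = { type: "drop"; col: number; by: "computer" };

function requestComputerMove(board: Board, dispatch: (action: GameAction) => void): void {
  const worker = new Worker(new URL("./worker.ts", import.meta.url));
  worker.onmessage = (e: MessageEvent<{ column: number }>) => {
    dispatch({ type: "drop", col: e.data.column, by: "computer" });
    worker.terminate();
  };
  worker.postMessage({ board });
}
```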

On the backend:

  • Next.js API routes handle LLM calls
  • Structured game state and engine analysis are passed into the prompt
  • The response is streamed back as real-time commentary (see the sketch below)
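
For illustration, here’s a minimal sketch of what such a route could look like in a Next.js App Router project, with the actual Gemini call abstracted behind a hypothetical streamCommentary helper:

```typescript
// Illustrative Next.js App Router route handler (hypothetical path: app/api/commentary/route.ts).
// Both helpers are assumptions: buildCommentaryPrompt is the earlier sketch, and
// streamCommentary stands in for the actual Gemini 2.0 Flash-Lite call.
declare function buildCommentaryPrompt(boardAscii: string, analysis: unknown, personality: string): string;
declare function streamCommentary(prompt: string): AsyncIterable<string>;

export async function POST(req: Request): Promise<Response> {
  const { boardAscii, analysis, personality } = await req.json();
  const prompt = buildCommentaryPrompt(boardAscii, analysis, personality);

  // Wrap the model output in a web ReadableStream so the client can render
  // the commentary as it arrives instead of waiting for the full response.
  const encoder = new TextEncoder();
  const stream = new ReadableStream<Uint8Array>({
    async start(controller) {
      for await (const chunk of streamCommentary(prompt)) {
        controller.enqueue(encoder.encode(chunk));
      }
      controller.close();
    },
  });

  return new Response(stream, {
    headers: { "Content-Type": "text/plain; charset=utf-8" },
  });
}
```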

Firebase’s serverless hosting model keeps the deployment simple while allowing the app to scale without manual server management.

Where this started

This project actually traces back to coursework I completed during my Graduate Certificate in Applied Artificial Intelligence at UNC Charlotte. One of the early assignments involved building a Connect 4 AI using classical search techniques, which left a lasting impression on me.

That program does an excellent job of grounding modern AI discussions in fundamentals—search, optimization, and reasoning—before layering in machine learning and LLMs. I highly recommend it to anyone looking to deepen their understanding beyond surface-level tooling.

Program link here.

The bigger lesson

ConnectAI isn’t really about Connect 4.

It’s about demonstrating that:

  • LLMs shouldn’t always replace deterministic systems
  • Deterministic systems don’t need to be human-friendly
  • The best AI products combine both, with clear, intentional boundaries

Instead of saying “don’t use LLMs for everything,” the app shows why.
The move engine is optimized and predictable.
The LLM is creative, bounded, and expressive.

That mental model scales far beyond games. And it’s increasingly critical as AI systems move from demos into real, high-stakes products.


Sources & Further Reading

  1. Russell, S., & Norvig, P. Artificial Intelligence: A Modern Approach — Chapters on adversarial search and minimax
  2. Knuth, D. E., & Moore, R. W. (1975). An analysis of alpha-beta pruning
  3. Allis, L. V. (1988). A knowledge-based approach of Connect-Four
  4. Google DeepMind. Prompt engineering best practices
  5. OpenAI. Best practices for building LLM-powered applications
  6. Browne et al. (2012). A Survey of Monte Carlo Tree Search Methods (contrast with deterministic search)
