
Architecture Q: How do you handle "Stateless" AI for multi-step interviews?

  • December 22, 2025
  • 1 reply
  • 22 views

patrickb

Hi everyone, I’m trying to build a flow where the AI drives a multi-step interview with the user, but I'm hitting a wall with context.

The Issue: I found that standard direct integrations are stateless. They don't naturally remember the previous turn of the conversation. I tested direct ChatGPT and Claude setups, and both treat every message as an independent event.
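For illustration, here's a minimal sketch of what "stateless" means in practice, assuming the common chat-completions shape where a `messages` array is the only context the model ever sees (the `build_request` helper is hypothetical, not any vendor's API): context survives between turns only if you resend it yourself.

```python
# Sketch: chat APIs are stateless -- the model only sees what you send.
# Each request must carry the full conversation so far.

def build_request(history, user_message):
    """Append the new user turn and return the messages payload
    that would be sent with this call. (Hypothetical helper.)"""
    return history + [{"role": "user", "content": user_message}]

# Turn 1: the model sees only the system prompt and one question.
history = [{"role": "system", "content": "You are interviewing the user."}]
req1 = build_request(history, "I want to build a chatbot.")

# If turn 2 is sent WITHOUT the prior turns, the model treats it
# as an independent event -- this is the "stateless" wall.
req2_no_memory = build_request(
    [{"role": "system", "content": "You are interviewing the user."}],
    "What did I just say?",
)

# Context only survives if you resend it yourself:
history = req1 + [{"role": "assistant", "content": "Great, what platform?"}]
req2_with_memory = build_request(history, "Instagram.")

print(len(req2_no_memory))    # 2 -> no memory of turn 1
print(len(req2_with_memory))  # 4 -> full history resent
```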

What I've tried: I briefly tinkered with n8n over the weekend. It looks like "context-aware" is definitely possible there (using JSON I/O and loops), but it feels like a heavy lift to build from scratch. I'm not sure if the time investment is worth it just to solve the memory problem.

The Question: How are you solving this middle ground?

  1. Is n8n (or Make) the standard way to go, meaning I should just bite the bullet and build the infrastructure?

  2. Or is there a more efficient way/tool to keep the AI aligned with the topic over multiple messages without over-engineering it?

Thanks in advance!


1 reply

cata_rendon
  • ManyChat Community Moderator
  • December 22, 2025

Hi @patrickb, great question! This is a very common issue.

Yes, AI calls are stateless by default. The key is not to make the AI hold memory itself, but to let ManyChat manage the state.

What usually works best:

  • Treat each interview step as a structured question (not AI)

  • Save each answer in a Custom User Field

  • On every AI call, inject a short summary of the data collected so far and ask for the next question only

This keeps the AI aligned without needing full conversation memory or heavy tools like n8n.
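The pattern above can be sketched in a few lines. This is illustrative only, not ManyChat's actual API: the step names, the `next_prompt` helper, and the dict standing in for Custom User Fields are all assumptions. The point is that the bot owns the state, and each AI call receives just a compact summary plus one narrow task.

```python
# Sketch of the suggested pattern: the bot (not the AI) owns the state.
# Collected answers live in Custom User Fields; each AI call receives
# only a short summary of them plus one narrow task.
# All names here are illustrative, not ManyChat's actual API.

INTERVIEW_STEPS = ["goal", "audience", "budget"]

def next_prompt(custom_user_fields):
    """Build the prompt for the next AI call from saved CUF values."""
    answered = {k: v for k, v in custom_user_fields.items() if v}
    remaining = [s for s in INTERVIEW_STEPS if s not in answered]
    if not remaining:
        return None  # interview complete -- no AI call needed
    summary = "; ".join(f"{k}: {v}" for k, v in answered.items())
    return (
        f"Collected so far: {summary or 'nothing yet'}. "
        f"Ask the user one concise question about their '{remaining[0]}'. "
        f"Do not repeat earlier questions."
    )

# After two saved answers, the AI is asked only for the third step:
cufs = {"goal": "lead generation", "audience": "local gyms", "budget": ""}
print(next_prompt(cufs))
```

Because the full state is reconstructed from saved fields on every call, it doesn't matter that the AI itself forgets everything between turns.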

I only use external orchestration (n8n / Make) when I need long-term memory or complex looping logic.
For most multi-step interviews, ManyChat + CUFs + smart prompting is enough.

Hope this helps!

Cheers!

Catalina Rendon