
Hi guys, please help me.

Context / stack
Building an IG bot. Pipeline: ManyChat (Instagram) → External Request (JSON) → n8n (self-hosted v1.105.4) → OpenAI (gpt-4o-mini, Assistants API) → Respond to Webhook (JSON) → (ManyChat) Response Mapping → Send Message {ai_response}.
Logs and records are stored in Supabase.

Conversation memory design (how we keep context)

  • We use the OpenAI Assistants API with persistent threads per Instagram subscriber.

  • On the first message we create a thread and store the mapping in Supabase: subscriber_id → thread_id.

  • On subsequent messages we reuse the same thread (reading thread_id from Supabase), append the user’s text, create a run, then poll the run status in n8n (Check Run Status → If Completed? → Wait 3s & Retry).

  • When the run is completed, we fetch the latest assistant message, save it to Supabase, and return JSON to ManyChat via Respond to Webhook:

     

    { "ai_response": "..." }
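The mapping step above can be sketched as a small get-or-create function. This is only an illustration of the logic, not the actual workflow: `thread_store` stands in for the Supabase table, and `create_thread` stands in for the Assistants API call that creates a new thread.

```python
def get_or_create_thread(subscriber_id, thread_store, create_thread):
    """Return the existing thread_id for a subscriber, or create and store one.

    thread_store  -- dict-like mapping subscriber_id -> thread_id
                     (in the real flow: the Supabase table)
    create_thread -- zero-arg callable returning a new thread_id
                     (in the real flow: the Assistants API thread-create call)
    """
    thread_id = thread_store.get(subscriber_id)
    if thread_id is None:
        # First message from this subscriber: create a thread and persist it.
        thread_id = create_thread()
        thread_store[subscriber_id] = thread_id
    return thread_id
```

With this shape, repeat messages from the same subscriber always land on the first thread that was created, which is what keeps the conversation context.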

Expected behavior
ManyChat sends JSON to n8n:

 

{ "subscriber_id": "1482029640", "text": "Hi" }

n8n generates a reply and returns:

 

{ "ai_response": "Hello! How can I help you today?" }

Then ManyChat sends that ai_response back to the user.
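The request/response contract above can be sketched as a plain function (in the real flow this logic lives in the n8n workflow between the Webhook and Respond to Webhook nodes; `generate_reply` is a hypothetical stand-in for the thread+run step):

```python
import json

def handle_manychat_request(body, generate_reply):
    """Take the raw JSON body from ManyChat, produce the JSON reply for it.

    generate_reply -- callable (subscriber_id, text) -> reply string;
                      stands in for the Assistants thread+run processing.
    """
    payload = json.loads(body)
    reply = generate_reply(payload["subscriber_id"], payload["text"])
    # The only key ManyChat's Response Mapping reads is ai_response.
    return json.dumps({"ai_response": reply})
```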

Actual behavior

  • Test Request in External Request consistently fails with:
    Operation timed out after 10001 ms with 0 bytes received → Logs show “Response is null.”

  • In real conversations:

    • Short replies sometimes work.

    • For longer ones (processing takes > ~10 s because of thread creation and run polling), ManyChat shows “Response is null,” even though n8n reaches Respond to Webhook and the reply is written to Supabase.

  • After this happens, the next user message is often not processed at all (it doesn’t appear in ManyChat logs and never reaches n8n); only a second attempt from the user triggers the flow again.
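To illustrate the timing collision: the log’s “timed out after 10001 ms” suggests ManyChat’s External Request waits roughly 10 s, while the Wait 3s & Retry loop only observes completion on a 3 s grid. A tiny sketch of that arithmetic (assuming the first status check happens after the first 3 s wait):

```python
import math

def polling_latency(run_seconds, wait_seconds=3.0):
    """Approximate time until the workflow sees the run as completed,
    assuming status is checked once per wait interval (Wait 3s & Retry)."""
    return math.ceil(run_seconds / wait_seconds) * wait_seconds
```

So a run that takes 4 s is only seen as complete after ~6 s, and anything over ~9 s of model time cannot reach Respond to Webhook before a 10 s client timeout fires, which matches the “short replies work, long ones return null” pattern.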

Please, can anyone help me solve this problem? 🙏

 
