Introduction: Giving Your AI a “Memory”

Welcome to Chapter 7! So far, you’ve learned how to integrate AI models and agents into your React applications, consume streaming responses, and even trigger tool calls. But have you ever noticed that sometimes, AI seems to “forget” what you just said? It’s like having a conversation where the other person only remembers your very last sentence. Frustrating, right?

This chapter is all about solving that problem! We’ll explore how to give your AI-powered interfaces a true sense of “memory” and “context.” Most large language models (LLMs) are inherently stateless; each API request is treated as a brand new interaction. It’s up to your frontend application to manage the conversation history and other relevant information, sending it along with each new prompt to ensure the AI understands the ongoing dialogue.

By the end of this chapter, you’ll master different types of AI memory, learn how to construct context-rich prompts, and implement robust state management strategies in React to keep your AI interactions coherent, intelligent, and truly engaging. Get ready to transform your AI applications from one-off query tools into smart, conversational partners!

Prerequisites: Before we dive in, make sure you’re comfortable with:

  • Basic React concepts: components, props.
  • React Hooks: useState, useEffect, useRef.
  • Making API calls in React (e.g., using fetch or Axios), as covered in previous chapters.

Core Concepts: What is AI Context and Memory?

Let’s start by defining what we mean by “context” and “memory” in the world of AI, especially from a frontend perspective.

The Stateless Nature of Many AI Models

Imagine talking to a brilliant but forgetful assistant. Every time you ask a question, they respond perfectly, but if your next question relies on information from the previous one, they draw a blank. That’s essentially how many AI models, particularly large language models (LLMs), operate by default. Each request to their API is typically processed independently, without any inherent knowledge of prior interactions.

This is where your frontend application steps in. It becomes the “brain” that remembers the conversation, relevant user data, or any other pieces of information needed to make the AI’s responses intelligent and coherent.

Distinguishing AI Memory Types

When we talk about AI “memory” in a UI, we’re usually referring to different layers of information that we feed back into the AI’s prompt:

1. Ephemeral Memory (Current Turn)

This is the most basic form of memory, encompassing just the immediate user input and the AI’s direct response. It’s the “what just happened” in the conversation.

2. Short-Term Conversational Memory

This refers to a window of recent messages or interactions within a single conversation session. For a chatbot, this might be the last 5-10 turns of dialogue. This allows the AI to follow the thread of a conversation and answer follow-up questions accurately.

3. Long-Term User/Session Memory

This goes beyond the current conversation. It includes user preferences, historical summaries of past sessions, or specific knowledge about the user that persists across different interactions or even application launches. While the actual storage of this often lives on a backend database, the frontend is responsible for retrieving and injecting this relevant long-term context into prompts when needed.
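Since the storage backend for long-term context varies (localStorage for quick demos, a database behind an API in production), it helps to keep it injectable. The following sketch shows the idea; the key name, preference shape, and helper names are illustrative, not from any real API:

```javascript
// Sketch: persisting long-term user context on the frontend.
// The storage backend is injected, so this works with window.localStorage
// in the browser or with any object implementing getItem/setItem (e.g. in tests).
const PREFS_KEY = 'ai-user-prefs'; // illustrative key name

function saveUserPrefs(storage, prefs) {
  storage.setItem(PREFS_KEY, JSON.stringify(prefs));
}

function loadUserPrefs(storage) {
  const raw = storage.getItem(PREFS_KEY);
  return raw ? JSON.parse(raw) : {};
}

// Turn stored preferences into a context line we can prepend to a prompt.
function prefsToContext(prefs) {
  const parts = Object.entries(prefs).map(([key, value]) => `${key}: ${value}`);
  return parts.length ? `Known user preferences - ${parts.join(', ')}` : '';
}
```

In the browser you would pass `window.localStorage` as the `storage` argument; the resulting context line is then injected into the prompt alongside the conversation history.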

Why is Managing Context Crucial?

Without proper context management:

  • Irrelevant Responses: The AI might give generic or off-topic answers.
  • Repetitive Interactions: Users might have to repeat information.
  • Poor User Experience: The application feels unintelligent and frustrating.
  • Ineffective Agentic Behavior: Agents can’t chain tool calls or complex reasoning steps if they forget previous actions or observations.

Prompt Engineering for Context

The primary way we inject “memory” into a stateless AI model is through prompt engineering. This means carefully crafting the input prompt to include all necessary context.

A typical context-rich prompt might include:

  1. System Instructions: High-level directives about the AI’s role, tone, and constraints.
  2. Few-Shot Examples (Optional): Examples of desired input/output pairs to guide the AI’s behavior.
  3. Long-Term User Context: Specific user preferences, profile information, or summaries from past interactions.
  4. Short-Term Conversational History: The recent back-and-forth dialogue.
  5. Current User Query: The immediate question or command from the user.

Here’s a conceptual flow of how context builds up:

graph TD
  A[User Input] --> B{Frontend App}
  B --> C[Retrieve Long-Term Context]
  B --> D[Gather Short-Term Conversation History]
  B --> E[Apply System Instructions]
  C --> F(Construct AI Prompt)
  D --> F
  E --> F
  F --> G[Send to AI Model API]
  G --> H[AI Response]
  H --> B
  B --> I[Display to User]
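To make the layering concrete, here is a sketch of how those five pieces might be assembled into a single messages array. The `{ role, content }` shape follows the common OpenAI-style chat format; that is an assumption for illustration, so adapt the field names to whichever provider you actually call:

```javascript
// Sketch: assembling the context layers into one messages array.
// The { role, content } shape mirrors the common OpenAI-style chat format.
function buildPrompt({ systemPrompt, longTermContext, history, userQuery }) {
  const messages = [];

  // 1. System instructions, with any long-term user context folded in
  const system = [systemPrompt, longTermContext].filter(Boolean).join('\n\n');
  if (system) messages.push({ role: 'system', content: system });

  // 2. Short-term conversational history, oldest first
  for (const msg of history) {
    messages.push({
      role: msg.sender === 'user' ? 'user' : 'assistant',
      content: msg.text,
    });
  }

  // 3. The current user query always goes last
  messages.push({ role: 'user', content: userQuery });
  return messages;
}
```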

Token Limits: The Memory Constraint

A critical consideration for managing context is token limits. LLMs have a maximum number of “tokens” (words or sub-words) they can process in a single request, including both the input prompt and the expected output. Sending too much context will result in an API error or an incomplete response. Your frontend must intelligently manage the size of the conversation history to stay within these limits.
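As a rough illustration, a client-side estimate of about 4 characters per token (a common heuristic for English text, not an exact count) can be used to trim history before sending it. A real application should use the provider's own tokenizer for accurate numbers:

```javascript
// Sketch: keeping recent history inside a token budget.
// The chars/4 heuristic is a rough English-text approximation; use the
// provider's own tokenizer in production for accurate counts.
function estimateTokens(text) {
  return Math.ceil(text.length / 4);
}

function trimToTokenBudget(messages, maxTokens) {
  const kept = [];
  let used = 0;
  // Walk from the newest message backwards, keeping as many as fit.
  for (let i = messages.length - 1; i >= 0; i--) {
    const cost = estimateTokens(messages[i].text);
    if (used + cost > maxTokens) break;
    kept.unshift(messages[i]);
    used += cost;
  }
  return kept;
}
```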

Frontend State Management for AI Context

React provides powerful tools for managing the state of your application, which is exactly what we need for AI context.

useState for Localized Memory

For simple pieces of state, like the current user input or a loading indicator, useState is perfect. It’s straightforward and easy to use for component-specific data.

useReducer for Complex State Transitions

When your AI’s memory involves an array of messages with different types (user, AI, tool call), or requires more complex state updates, useReducer shines. It centralizes state logic, making it more predictable and testable, especially when multiple actions can modify the same piece of state.

React Context API for Global AI State

If certain AI configurations (e.g., the specific model being used, a system prompt that applies across multiple features, or user-specific agent settings) need to be accessible by many components without prop drilling, the React Context API is your friend. It allows you to create a “global” store that any component can subscribe to.

useRef for Non-Reactive Data

Sometimes, you need to store data that doesn’t trigger a re-render when it changes, or you need a stable reference to an object (like an AI API client instance). useRef is ideal for this. It provides a mutable object whose .current property can hold any value, and changes to it won’t cause your component to re-render.

Step-by-Step Implementation: Building a Conversational Memory

Let’s build a simple chat interface that remembers the last few messages to provide context to a simulated AI.

We’ll start with a basic React component and incrementally add features.

Project Setup

Assuming you have a basic React project set up (e.g., created with Create React App or Vite):

  1. Open your src/App.js (or src/App.tsx if using TypeScript) file.
  2. Clear out any boilerplate code, leaving a functional component structure.
// src/App.js
import React from 'react';

function App() {
  return (
    <div className="App">
      <h1>AI Chat with Memory</h1>
      {/* Our chat component will go here */}
    </div>
  );
}

export default App;

Step 1: Basic Chat Component with useState

First, let’s create a ChatInterface component that manages its own messages and user input using useState.

Create a new file src/components/ChatInterface.js:

// src/components/ChatInterface.js
import React, { useState } from 'react';

function ChatInterface() {
  // State to hold all messages in the conversation
  // Each message will be an object { id: string, text: string, sender: 'user' | 'ai' }
  const [messages, setMessages] = useState([]);
  // State to hold the current input from the user
  const [input, setInput] = useState('');

  // Function to handle sending a user message
  const handleSendMessage = () => {
    if (input.trim() === '') return; // Don't send empty messages

    const newUserMessage = {
      id: Date.now().toString(), // Simple unique ID
      text: input,
      sender: 'user',
    };

    // Add the user's message to the messages array
    setMessages((prevMessages) => [...prevMessages, newUserMessage]);
    setInput(''); // Clear the input field

    // In a real app, this is where we'd call our AI function
    console.log("User sent:", input);
    // We'll simulate an AI response in the next step
  };

  return (
    <div className="chat-container">
      <div className="message-list">
        {messages.map((msg) => (
          <div key={msg.id} className={`message ${msg.sender}`}>
            <strong>{msg.sender === 'user' ? 'You' : 'AI'}:</strong> {msg.text}
          </div>
        ))}
      </div>
      <div className="input-area">
        <input
          type="text"
          value={input}
          onChange={(e) => setInput(e.target.value)}
          onKeyDown={(e) => e.key === 'Enter' && handleSendMessage()}
          placeholder="Type your message..."
        />
        <button onClick={handleSendMessage}>Send</button>
      </div>
    </div>
  );
}

export default ChatInterface;

Now, import and render this component in src/App.js:

// src/App.js
import React from 'react';
import ChatInterface from './components/ChatInterface'; // Import our new component
import './App.css'; // We'll add some basic styles

function App() {
  return (
    <div className="App">
      <h1>AI Chat with Memory</h1>
      <ChatInterface /> {/* Render the chat component */}
    </div>
  );
}

export default App;

And add some basic CSS to src/App.css for readability:

/* src/App.css */
.App {
  font-family: sans-serif;
  max-width: 600px;
  margin: 20px auto;
  border: 1px solid #ddd;
  border-radius: 8px;
  padding: 20px;
  box-shadow: 0 2px 4px rgba(0,0,0,0.1);
}

.chat-container {
  display: flex;
  flex-direction: column;
  height: 400px;
  border: 1px solid #eee;
  border-radius: 4px;
  overflow: hidden;
}

.message-list {
  flex-grow: 1;
  overflow-y: auto;
  padding: 10px;
  background-color: #f9f9f9;
}

.message {
  margin-bottom: 8px;
  padding: 8px 12px;
  border-radius: 16px;
  max-width: 80%;
}

.message.user {
  background-color: #e0f7fa;
  align-self: flex-end;
  margin-left: auto;
}

.message.ai {
  background-color: #f0f0f0;
  align-self: flex-start;
  margin-right: auto;
}

.input-area {
  display: flex;
  padding: 10px;
  border-top: 1px solid #eee;
  background-color: #fff;
}

.input-area input {
  flex-grow: 1;
  padding: 10px;
  border: 1px solid #ddd;
  border-radius: 20px;
  margin-right: 10px;
  font-size: 1rem;
}

.input-area button {
  padding: 10px 20px;
  background-color: #007bff;
  color: white;
  border: none;
  border-radius: 20px;
  cursor: pointer;
  font-size: 1rem;
}

.input-area button:hover {
  background-color: #0056b3;
}

Run your app (npm start for Create React App, or npm run dev for Vite) and you should see a basic chat interface where you can type and send messages, but the AI won’t respond yet.

Step 2: Integrating AI Call with Context

Now, let’s simulate an AI response. We’ll create a mock AI service and demonstrate how to pass the conversation history as context.

First, let’s create a simple mock AI service in src/services/aiService.js:

// src/services/aiService.js

// This function simulates an AI API call.
// In a real application, you'd replace this with a fetch() call to your backend
// or a client-side library for an AI API.
export const getAIResponse = async (conversationHistory) => {
  console.log("AI Service received context:", conversationHistory);

  // Simulate network delay
  await new Promise((resolve) => setTimeout(resolve, 1000));

  const lastUserMessage = conversationHistory.filter(msg => msg.sender === 'user').pop();
  const userText = lastUserMessage ? lastUserMessage.text.toLowerCase() : '';

  let aiResponseText = "I'm not sure how to respond to that.";

  // Simple logic to demonstrate context awareness
  if (userText.includes("hello") || userText.includes("hi")) {
    aiResponseText = "Hello there! How can I assist you today?";
  } else if (userText.includes("your name")) {
    aiResponseText = "I am a helpful AI assistant.";
  } else if (userText.includes("weather")) {
    aiResponseText = "I don't have real-time weather data. Perhaps you could tell me what you're looking for?";
  } else if (userText.includes("previous message")) {
    // Demonstrate basic short-term memory by finding the user's previous message
    // (we filter to user messages so we don't echo an AI reply back as "yours")
    const userMessages = conversationHistory.filter(msg => msg.sender === 'user');
    if (userMessages.length >= 2) {
      const previousUserMessage = userMessages[userMessages.length - 2];
      aiResponseText = `You previously said: "${previousUserMessage.text}"`;
    } else {
      aiResponseText = "There isn't enough history to recall a previous message.";
    }
  }

  return {
    id: Date.now().toString(),
    text: aiResponseText,
    sender: 'ai',
  };
};

Now, integrate this into ChatInterface.js:

// src/components/ChatInterface.js
import React, { useState } from 'react';
import { getAIResponse } from '../services/aiService'; // Import our AI service

function ChatInterface() {
  const [messages, setMessages] = useState([]);
  const [input, setInput] = useState('');
  const [isLoading, setIsLoading] = useState(false); // New state for loading

  const handleSendMessage = async () => { // Make this function async
    if (input.trim() === '') return;

    const newUserMessage = {
      id: Date.now().toString(),
      text: input,
      sender: 'user',
    };

    // Add the user's message immediately
    setMessages((prevMessages) => [...prevMessages, newUserMessage]);
    setInput(''); // Clear input

    setIsLoading(true); // Set loading state

    try {
      // CRITICAL: Pass the entire conversation history as context to the AI
      // In a real LLM API call, you'd format this into a structured prompt
      // like [{role: "user", content: "..." }, {role: "assistant", content: "..."}]
      const aiResponse = await getAIResponse([...messages, newUserMessage]); // Include the new user message

      setMessages((prevMessages) => [...prevMessages, aiResponse]); // Add AI's response
    } catch (error) {
      console.error("Error getting AI response:", error);
      setMessages((prevMessages) => [
        ...prevMessages,
        { id: Date.now().toString(), text: "Oops! Something went wrong with the AI.", sender: 'ai' },
      ]);
    } finally {
      setIsLoading(false); // Reset loading state
    }
  };

  return (
    <div className="chat-container">
      <div className="message-list">
        {messages.map((msg) => (
          <div key={msg.id} className={`message ${msg.sender}`}>
            <strong>{msg.sender === 'user' ? 'You' : 'AI'}:</strong> {msg.text}
          </div>
        ))}
        {isLoading && ( // Show loading indicator
          <div className="message ai">
            <strong>AI:</strong> Thinking...
          </div>
        )}
      </div>
      <div className="input-area">
        <input
          type="text"
          value={input}
          onChange={(e) => setInput(e.target.value)}
          onKeyDown={(e) => e.key === 'Enter' && handleSendMessage()}
          placeholder="Type your message..."
          disabled={isLoading} // Disable input while loading
        />
        <button onClick={handleSendMessage} disabled={isLoading}>Send</button>
      </div>
    </div>
  );
}

export default ChatInterface;

Now, try chatting! Ask “Hello”, then “What is your name?”, then “What was my previous message?”. You’ll see the AI service receives the full conversation history and can use it to generate more context-aware responses.
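As the comment in handleSendMessage notes, a real LLM API usually expects a structured `{ role, content }` format rather than our internal `{ sender, text }` shape. A small adapter keeps that translation in one place; `toChatMessages` is a hypothetical helper, and the role names follow the common OpenAI-style convention (check your provider’s docs for the exact format):

```javascript
// Sketch: mapping our internal { sender, text } messages to the
// { role, content } shape that OpenAI-style chat APIs expect.
function toChatMessages(messages) {
  return messages.map((msg) => ({
    role: msg.sender === 'user' ? 'user' : 'assistant',
    content: msg.text,
  }));
}
```

You would then send `toChatMessages([...messages, newUserMessage])` to your backend instead of the raw message objects.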

Step 3: Managing Conversation History with useReducer

While useState works for simple chat, useReducer can make state updates cleaner, especially when dealing with various message types, loading states, and error handling. Let’s refactor our ChatInterface to use useReducer.

// src/components/ChatInterface.js
import React, { useReducer, useEffect, useRef, useState } from 'react'; // useState is still needed for the input field
import { getAIResponse } from '../services/aiService';

// Define action types for our reducer
const ChatActionTypes = {
  ADD_MESSAGE: 'ADD_MESSAGE',
  SET_LOADING: 'SET_LOADING',
  SET_ERROR: 'SET_ERROR',
  CLEAR_HISTORY: 'CLEAR_HISTORY', // For the upcoming mini-challenge
};

// Our reducer function
function chatReducer(state, action) {
  switch (action.type) {
    case ChatActionTypes.ADD_MESSAGE:
      return {
        ...state,
        messages: [...state.messages, action.payload],
        isLoading: false, // Assume loading is done when a message is added
      };
    case ChatActionTypes.SET_LOADING:
      return { ...state, isLoading: action.payload };
    case ChatActionTypes.SET_ERROR:
      return {
        ...state,
        error: action.payload,
        isLoading: false,
      };
    case ChatActionTypes.CLEAR_HISTORY:
        return { ...state, messages: [], error: null };
    default:
      throw new Error(`Unhandled action type: ${action.type}`);
  }
}

// Initial state for our chat
const initialChatState = {
  messages: [],
  isLoading: false,
  error: null,
};

function ChatInterface() {
  const [state, dispatch] = useReducer(chatReducer, initialChatState);
  const { messages, isLoading, error } = state;
  const [input, setInput] = useState(''); // Input still managed by useState locally

  // useRef to keep the scroll position at the bottom of the chat
  const messageListRef = useRef(null);

  useEffect(() => {
    if (messageListRef.current) {
      messageListRef.current.scrollTop = messageListRef.current.scrollHeight;
    }
  }, [messages]); // Scroll to bottom whenever messages change

  const handleSendMessage = async () => {
    if (input.trim() === '') return;

    const newUserMessage = {
      id: Date.now().toString(),
      text: input,
      sender: 'user',
    };

    // Dispatch action to add user message and set loading
    dispatch({ type: ChatActionTypes.ADD_MESSAGE, payload: newUserMessage });
    dispatch({ type: ChatActionTypes.SET_LOADING, payload: true });
    setInput('');

    try {
      // Pass the current messages (including the new user message) as context
      const aiResponse = await getAIResponse([...messages, newUserMessage]);
      dispatch({ type: ChatActionTypes.ADD_MESSAGE, payload: aiResponse }); // Add AI response
    } catch (err) {
      console.error("Error getting AI response:", err);
      dispatch({ type: ChatActionTypes.SET_ERROR, payload: "Failed to get AI response." });
      // Optionally add an error message to chat
      dispatch({ type: ChatActionTypes.ADD_MESSAGE, payload: { id: Date.now().toString(), text: "AI encountered an error.", sender: 'ai' } });
    } finally {
      dispatch({ type: ChatActionTypes.SET_LOADING, payload: false });
    }
  };

  return (
    <div className="chat-container">
      <div className="message-list" ref={messageListRef}>
        {messages.map((msg) => (
          <div key={msg.id} className={`message ${msg.sender}`}>
            <strong>{msg.sender === 'user' ? 'You' : 'AI'}:</strong> {msg.text}
          </div>
        ))}
        {isLoading && (
          <div className="message ai">
            <strong>AI:</strong> Thinking...
          </div>
        )}
        {error && (
          <div className="message error">
            <strong>Error:</strong> {error}
          </div>
        )}
      </div>
      <div className="input-area">
        <input
          type="text"
          value={input}
          onChange={(e) => setInput(e.target.value)}
          onKeyDown={(e) => e.key === 'Enter' && handleSendMessage()}
          placeholder="Type your message..."
          disabled={isLoading}
        />
        <button onClick={handleSendMessage} disabled={isLoading}>Send</button>
      </div>
    </div>
  );
}

export default ChatInterface;

With useReducer, our state logic is more organized. The dispatch function sends actions, and the chatReducer handles how the state changes in response. This is particularly useful as our chat application grows in complexity.
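A nice side effect of this refactor: because chatReducer is a pure function, you can exercise state transitions directly, without rendering anything. Here is a self-contained sketch (the reducer from the step above is repeated so the snippet runs on its own):

```javascript
// The reducer from the step above, repeated so this snippet stands alone.
const ChatActionTypes = {
  ADD_MESSAGE: 'ADD_MESSAGE',
  SET_LOADING: 'SET_LOADING',
  SET_ERROR: 'SET_ERROR',
  CLEAR_HISTORY: 'CLEAR_HISTORY',
};

function chatReducer(state, action) {
  switch (action.type) {
    case ChatActionTypes.ADD_MESSAGE:
      return { ...state, messages: [...state.messages, action.payload], isLoading: false };
    case ChatActionTypes.SET_LOADING:
      return { ...state, isLoading: action.payload };
    case ChatActionTypes.SET_ERROR:
      return { ...state, error: action.payload, isLoading: false };
    case ChatActionTypes.CLEAR_HISTORY:
      return { ...state, messages: [], error: null };
    default:
      throw new Error(`Unhandled action type: ${action.type}`);
  }
}

// Exercise a few transitions directly, with no React involved:
const initial = { messages: [], isLoading: false, error: null };
const s1 = chatReducer(initial, {
  type: ChatActionTypes.ADD_MESSAGE,
  payload: { id: '1', text: 'Hi', sender: 'user' },
});
const s2 = chatReducer(s1, { type: ChatActionTypes.SET_LOADING, payload: true });
const s3 = chatReducer(s2, { type: ChatActionTypes.CLEAR_HISTORY });
```

Each call returns a new state object and never mutates its input, which is exactly what makes the reducer predictable and easy to test.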

Step 4: Implementing a “Memory Window” (Token Management)

To prevent exceeding AI token limits and manage costs, we often need to send only the most recent parts of the conversation. Let’s add a simple function to truncate our message history.

We’ll define a MAX_MESSAGES_FOR_CONTEXT constant. In a real-world scenario, you’d use a more sophisticated token counter, but for demonstration, limiting by message count is sufficient.

Modify ChatInterface.js:

// src/components/ChatInterface.js
import React, { useReducer, useEffect, useRef, useState } from 'react';
import { getAIResponse } from '../services/aiService';

// ... (ChatActionTypes and chatReducer remain the same) ...

const initialChatState = {
  messages: [],
  isLoading: false,
  error: null,
};

// Define a constant for the maximum number of messages to send as context
// In a real app, this would be based on token count, not just message count.
const MAX_MESSAGES_FOR_CONTEXT = 6; // e.g., last 3 user + 3 AI messages

function ChatInterface() {
  const [state, dispatch] = useReducer(chatReducer, initialChatState);
  const { messages, isLoading, error } = state;
  const [input, setInput] = useState('');

  const messageListRef = useRef(null);

  useEffect(() => {
    if (messageListRef.current) {
      messageListRef.current.scrollTop = messageListRef.current.scrollHeight;
    }
  }, [messages]);

  const handleSendMessage = async () => {
    if (input.trim() === '') return;

    const newUserMessage = {
      id: Date.now().toString(),
      text: input,
      sender: 'user',
    };

    dispatch({ type: ChatActionTypes.ADD_MESSAGE, payload: newUserMessage });
    dispatch({ type: ChatActionTypes.SET_LOADING, payload: true });
    setInput('');

    try {
      // Create the context for the AI by taking the last N messages
      // This is our "memory window"
      const contextMessages = [...messages, newUserMessage].slice(-MAX_MESSAGES_FOR_CONTEXT);

      console.log("Sending context to AI:", contextMessages); // Log what's being sent
      const aiResponse = await getAIResponse(contextMessages); // Pass truncated context
      dispatch({ type: ChatActionTypes.ADD_MESSAGE, payload: aiResponse });
    } catch (err) {
      console.error("Error getting AI response:", err);
      dispatch({ type: ChatActionTypes.SET_ERROR, payload: "Failed to get AI response." });
      dispatch({ type: ChatActionTypes.ADD_MESSAGE, payload: { id: Date.now().toString(), text: "AI encountered an error.", sender: 'ai' } });
    } finally {
      dispatch({ type: ChatActionTypes.SET_LOADING, payload: false });
    }
  };

  return (
    <div className="chat-container">
      {/* ... (rest of the component remains the same) ... */}
      <div className="message-list" ref={messageListRef}>
        {messages.map((msg) => (
          <div key={msg.id} className={`message ${msg.sender}`}>
            <strong>{msg.sender === 'user' ? 'You' : 'AI'}:</strong> {msg.text}
          </div>
        ))}
        {isLoading && (
          <div className="message ai">
            <strong>AI:</strong> Thinking...
          </div>
        )}
        {error && (
          <div className="message error">
            <strong>Error:</strong> {error}
          </div>
        )}
      </div>
      <div className="input-area">
        <input
          type="text"
          value={input}
          onChange={(e) => setInput(e.target.value)}
          onKeyPress={(e) => e.key === 'Enter' && handleSendMessage()}
          placeholder="Type your message..."
          disabled={isLoading}
        />
        <button onClick={handleSendMessage} disabled={isLoading}>Send</button>
      </div>
    </div>
  );
}

export default ChatInterface;

Now, if you have a very long conversation, the getAIResponse function will only receive the last MAX_MESSAGES_FOR_CONTEXT messages. You can test this by making the MAX_MESSAGES_FOR_CONTEXT a small number (e.g., 2) and seeing how the AI “forgets” earlier messages.
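A common refinement beyond a fixed window is to summarize the messages you are about to drop, so older facts survive in compressed form. Here is a sketch where the summarizer is caller-supplied; in practice it is often another (cheaper) AI call, and `compactHistory` is a hypothetical helper for illustration, not part of any library:

```javascript
// Sketch: instead of silently dropping old messages, collapse them into a
// single summary message. The summarize function is supplied by the caller;
// any function that turns a message array into a string works here.
function compactHistory(messages, windowSize, summarize) {
  if (messages.length <= windowSize) return messages;
  const older = messages.slice(0, messages.length - windowSize);
  const recent = messages.slice(-windowSize);
  const summaryMessage = {
    id: 'summary',
    sender: 'ai',
    text: `Summary of earlier conversation: ${summarize(older)}`,
  };
  return [summaryMessage, ...recent];
}
```

You could call this in place of the plain `.slice(-MAX_MESSAGES_FOR_CONTEXT)` once your summarizer is wired up.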

Step 5: Using React Context for Global AI Configuration

For settings that need to be shared across many components (e.g., different AI models, temperature settings, or a global system prompt), React Context is invaluable. Let’s create an AIConfigContext.

First, create src/context/AIConfigContext.js:

// src/context/AIConfigContext.js
import React, { createContext, useState, useContext } from 'react';

// Create the context
const AIConfigContext = createContext(null);

// Create a provider component
export const AIConfigProvider = ({ children }) => {
  // Global AI configuration state
  const [config, setConfig] = useState({
    model: 'gpt-4o', // Example model name; substitute whichever model your provider offers
    temperature: 0.7,
    systemPrompt: "You are a helpful and friendly AI assistant. Keep responses concise.",
  });

  // Function to update the configuration
  const updateConfig = (newConfig) => {
    setConfig((prevConfig) => ({ ...prevConfig, ...newConfig }));
  };

  return (
    <AIConfigContext.Provider value={{ config, updateConfig }}>
      {children}
    </AIConfigContext.Provider>
  );
};

// Custom hook to easily consume the context
export const useAIConfig = () => {
  const context = useContext(AIConfigContext);
  if (!context) {
    throw new Error('useAIConfig must be used within an AIConfigProvider');
  }
  return context;
};

Now, wrap your App component with AIConfigProvider in src/App.js:

// src/App.js
import React from 'react';
import ChatInterface from './components/ChatInterface';
import { AIConfigProvider } from './context/AIConfigContext'; // Import the provider
import './App.css';

function App() {
  return (
    <div className="App">
      <h1>AI Chat with Memory</h1>
      <AIConfigProvider> {/* Wrap ChatInterface with the provider */}
        <ChatInterface />
      </AIConfigProvider>
    </div>
  );
}

export default App;

Finally, let’s modify src/services/aiService.js to optionally accept system prompts, and ChatInterface.js to use the global systemPrompt.

Update src/services/aiService.js:

// src/services/aiService.js

export const getAIResponse = async (conversationHistory, systemPrompt = "") => { // Add systemPrompt parameter
  console.log("AI Service received system prompt:", systemPrompt);
  console.log("AI Service received context:", conversationHistory);

  await new Promise((resolve) => setTimeout(resolve, 1000));

  const lastUserMessage = conversationHistory.filter(msg => msg.sender === 'user').pop();
  const userText = lastUserMessage ? lastUserMessage.text.toLowerCase() : '';

  let aiResponseText = "I'm not sure how to respond to that.";

  // Incorporate the system prompt into the basic logic (simplified).
  // Note: this only changes the fallback reply; the branches below overwrite it.
  if (systemPrompt.includes("concise")) {
    aiResponseText = "OK.";
  }

  if (userText.includes("hello") || userText.includes("hi")) {
    aiResponseText = "Hello there! How can I assist you today?";
  } else if (userText.includes("your name")) {
    aiResponseText = "I am a helpful AI assistant.";
  } else if (userText.includes("previous message")) {
    // Find the user's previous message, skipping AI replies in between
    const userMessages = conversationHistory.filter(msg => msg.sender === 'user');
    if (userMessages.length >= 2) {
      const previousUserMessage = userMessages[userMessages.length - 2];
      aiResponseText = `You previously said: "${previousUserMessage.text}"`;
    } else {
      aiResponseText = "There isn't enough history to recall a previous message.";
    }
  }

  // Example of using system prompt to influence response length
  if (systemPrompt.includes("very short")) {
      aiResponseText = aiResponseText.split('.')[0] + ".";
  }


  return {
    id: Date.now().toString(),
    text: aiResponseText,
    sender: 'ai',
  };
};

And update src/components/ChatInterface.js to use useAIConfig:

// src/components/ChatInterface.js
import React, { useReducer, useEffect, useRef, useState } from 'react';
import { getAIResponse } from '../services/aiService';
import { useAIConfig } from '../context/AIConfigContext'; // Import the custom hook

// ... (ChatActionTypes, chatReducer, initialChatState, MAX_MESSAGES_FOR_CONTEXT remain the same) ...

function ChatInterface() {
  const [state, dispatch] = useReducer(chatReducer, initialChatState);
  const { messages, isLoading, error } = state;
  const [input, setInput] = useState('');

  const { config } = useAIConfig(); // Consume the AI configuration

  const messageListRef = useRef(null);

  useEffect(() => {
    if (messageListRef.current) {
      messageListRef.current.scrollTop = messageListRef.current.scrollHeight;
    }
  }, [messages]);

  const handleSendMessage = async () => {
    if (input.trim() === '') return;

    const newUserMessage = {
      id: Date.now().toString(),
      text: input,
      sender: 'user',
    };

    dispatch({ type: ChatActionTypes.ADD_MESSAGE, payload: newUserMessage });
    dispatch({ type: ChatActionTypes.SET_LOADING, payload: true });
    setInput('');

    try {
      const contextMessages = [...messages, newUserMessage].slice(-MAX_MESSAGES_FOR_CONTEXT);

      // Pass the system prompt from our global config to the AI service
      const aiResponse = await getAIResponse(contextMessages, config.systemPrompt);
      dispatch({ type: ChatActionTypes.ADD_MESSAGE, payload: aiResponse });
    } catch (err) {
      console.error("Error getting AI response:", err);
      dispatch({ type: ChatActionTypes.SET_ERROR, payload: "Failed to get AI response." });
      dispatch({ type: ChatActionTypes.ADD_MESSAGE, payload: { id: Date.now().toString(), text: "AI encountered an error.", sender: 'ai' } });
    } finally {
      dispatch({ type: ChatActionTypes.SET_LOADING, payload: false });
    }
  };

  return (
    <div className="chat-container">
      <div className="message-list" ref={messageListRef}>
        {messages.map((msg) => (
          <div key={msg.id} className={`message ${msg.sender}`}>
            <strong>{msg.sender === 'user' ? 'You' : 'AI'}:</strong> {msg.text}
          </div>
        ))}
        {isLoading && (
          <div className="message ai">
            <strong>AI:</strong> Thinking...
          </div>
        )}
        {error && (
          <div className="message error">
            <strong>Error:</strong> {error}
          </div>
        )}
      </div>
      <div className="input-area">
        <input
          type="text"
          value={input}
          onChange={(e) => setInput(e.target.value)}
          onKeyDown={(e) => e.key === 'Enter' && handleSendMessage()}
          placeholder="Type your message..."
          disabled={isLoading}
        />
        <button onClick={handleSendMessage} disabled={isLoading}>Send</button>
      </div>
    </div>
  );
}

export default ChatInterface;

Now, your ChatInterface is aware of the global AI configuration! While our mock AI service has very basic logic, in a real application, the systemPrompt would be crucial for guiding the AI’s behavior and personality.

Mini-Challenge: Clearing AI Memory

It’s common for users to want to start fresh with an AI.

Challenge: Add a “Clear History” button to the ChatInterface component. When clicked, this button should reset the conversation history, effectively giving the AI a clean slate.

Hint: You’ve already added a CLEAR_HISTORY action type to your chatReducer. Now, you just need a button to dispatch that action.

What to observe/learn: How to implement an explicit memory reset and confirm that the AI responds as if it has no prior context.

Click for Solution (after you've tried it!)
// src/components/ChatInterface.js
import React, { useReducer, useEffect, useRef, useState } from 'react';
import { getAIResponse } from '../services/aiService';
import { useAIConfig } from '../context/AIConfigContext';

// Define action types for our reducer
const ChatActionTypes = {
  ADD_MESSAGE: 'ADD_MESSAGE',
  SET_LOADING: 'SET_LOADING',
  SET_ERROR: 'SET_ERROR',
  CLEAR_HISTORY: 'CLEAR_HISTORY', // Already defined
};

// Our reducer function (remains the same)
function chatReducer(state, action) {
  switch (action.type) {
    case ChatActionTypes.ADD_MESSAGE:
      return {
        ...state,
        messages: [...state.messages, action.payload],
        isLoading: false,
      };
    case ChatActionTypes.SET_LOADING:
      return { ...state, isLoading: action.payload };
    case ChatActionTypes.SET_ERROR:
      return {
        ...state,
        error: action.payload,
        isLoading: false,
      };
    case ChatActionTypes.CLEAR_HISTORY:
      return { ...state, messages: [], error: null }; // Clears messages and any stale error
    default:
      throw new Error(`Unhandled action type: ${action.type}`);
  }
}

// Initial state for our chat (remains the same)
const initialChatState = {
  messages: [],
  isLoading: false,
  error: null,
};

const MAX_MESSAGES_FOR_CONTEXT = 6;

function ChatInterface() {
  const [state, dispatch] = useReducer(chatReducer, initialChatState);
  const { messages, isLoading, error } = state;
  const [input, setInput] = useState('');

  const { config } = useAIConfig();

  const messageListRef = useRef(null);

  useEffect(() => {
    if (messageListRef.current) {
      messageListRef.current.scrollTop = messageListRef.current.scrollHeight;
    }
  }, [messages]);

  const handleSendMessage = async () => {
    if (input.trim() === '') return;

    const newUserMessage = {
      id: Date.now().toString(),
      text: input,
      sender: 'user',
    };

    dispatch({ type: ChatActionTypes.ADD_MESSAGE, payload: newUserMessage });
    dispatch({ type: ChatActionTypes.SET_LOADING, payload: true });
    setInput('');

    try {
      const contextMessages = [...messages, newUserMessage].slice(-MAX_MESSAGES_FOR_CONTEXT);
      const aiResponse = await getAIResponse(contextMessages, config.systemPrompt);
      dispatch({ type: ChatActionTypes.ADD_MESSAGE, payload: aiResponse });
    } catch (err) {
      console.error("Error getting AI response:", err);
      dispatch({ type: ChatActionTypes.SET_ERROR, payload: "Failed to get AI response." });
      dispatch({ type: ChatActionTypes.ADD_MESSAGE, payload: { id: `${Date.now()}-ai`, text: "AI encountered an error.", sender: 'ai' } }); // Suffix avoids a key collision with the user message
    } finally {
      dispatch({ type: ChatActionTypes.SET_LOADING, payload: false });
    }
  };

  // New function to handle clearing history
  const handleClearHistory = () => {
    dispatch({ type: ChatActionTypes.CLEAR_HISTORY });
  };

  return (
    <div className="chat-container">
      <div className="message-list" ref={messageListRef}>
        {messages.map((msg) => (
          <div key={msg.id} className={`message ${msg.sender}`}>
            <strong>{msg.sender === 'user' ? 'You' : 'AI'}:</strong> {msg.text}
          </div>
        ))}
        {isLoading && (
          <div className="message ai">
            <strong>AI:</strong> Thinking...
          </div>
        )}
        {error && (
          <div className="message error">
            <strong>Error:</strong> {error}
          </div>
        )}
      </div>
      <div className="input-area">
        <input
          type="text"
          value={input}
          onChange={(e) => setInput(e.target.value)}
          onKeyDown={(e) => e.key === 'Enter' && handleSendMessage()}
          placeholder="Type your message..."
          disabled={isLoading}
        />
        <button onClick={handleSendMessage} disabled={isLoading}>Send</button>
        <button onClick={handleClearHistory} disabled={isLoading} style={{ marginLeft: '10px', backgroundColor: '#dc3545' }}>Clear History</button> {/* New button */}
      </div>
    </div>
  );
}

export default ChatInterface;

Common Pitfalls & Troubleshooting

  1. Forgetting to Send Context:

    • Symptom: AI responses are generic, repetitive, or completely ignore previous turns in the conversation.
    • Cause: You’re sending only the latest user message to the AI API, without any of the preceding dialogue.
    • Fix: Ensure your AI API call’s payload explicitly includes the conversation history (e.g., an array of message objects) formatted correctly for the specific AI model you’re using. Double-check the console.log statements in aiService.js to see what context is actually being received.
  2. Exceeding Token Limits:

    • Symptom: AI API returns errors (e.g., “context window exceeded”, “payload too large”) or AI responses are abruptly cut off.
    • Cause: The combined length of your system prompt, conversation history, and current user query is too long for the AI model’s context window.
    • Fix: Implement a “memory window” strategy as shown in Step 4, truncating older messages. For production, use a proper tokenization library (like tiktoken for OpenAI models) to accurately count tokens and manage the history more precisely. Consider summarizing older parts of the conversation if long-term memory is critical.
  3. Improper State Updates:

    • Symptom: UI doesn’t update as expected, messages appear out of order, or loading states get stuck.
    • Cause: Incorrect use of useState or useReducer, leading to stale closures or direct modification of state objects instead of creating new ones.
    • Fix: Always use the functional update form for setMessages((prevMessages) => [...prevMessages, newMessage]) when adding to arrays. With useReducer, ensure your reducer is a pure function that returns a new state object, never mutating the state directly.
  4. Exposing API Keys in Client-Side Code:

    • Symptom: Your AI service works, but security audits flag exposed sensitive credentials.
    • Cause: Directly embedding AI API keys (e.g., OpenAI API key) in your React frontend.
    • Fix: NEVER expose API keys on the client-side. Always route AI API calls through your own secure backend server. The backend can then add the API key before forwarding the request to the AI provider. This protects your credentials and allows for better rate limiting, cost management, and data logging. Our aiService.js is a mock for this reason; a real one would fetch from your server.
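Pitfall 2's "memory window" fix can be sketched as a small pre-send trimming helper. The 4-characters-per-token heuristic below is a crude assumption for illustration only; production code should count real tokens with a tokenizer such as tiktoken, as noted above.

```javascript
// Rough per-message token estimate. This chars/4 heuristic is only an
// approximation for English text; replace with a real tokenizer in production.
const approxTokens = (text) => Math.ceil(text.length / 4);

// Keep the most recent messages that fit within a token budget,
// walking from newest to oldest so recent turns survive truncation.
function trimHistory(messages, maxTokens) {
  const kept = [];
  let used = 0;
  for (let i = messages.length - 1; i >= 0; i--) {
    const cost = approxTokens(messages[i].text);
    if (used + cost > maxTokens) break;
    kept.unshift(messages[i]); // Preserve chronological order
    used += cost;
  }
  return kept;
}
```

You would call `trimHistory(contextMessages, budget)` just before the API request, where the budget is the model's context window minus room for the system prompt and the response.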

Summary

Congratulations! You’ve successfully navigated the complexities of managing AI context and memory in React. This is a fundamental skill for building truly intelligent and conversational AI-powered applications.

Here’s a quick recap of what we covered:

  • Understanding AI’s Statelessness: We learned that most AI models don’t inherently remember past interactions, making frontend memory management essential.
  • Types of AI Memory: We distinguished between ephemeral, short-term conversational, and long-term user/session memory, and understood their roles.
  • Prompt Engineering for Context: We explored how to construct comprehensive prompts by including system instructions, conversation history, and user context.
  • Token Limits: We acknowledged the critical constraint of token limits and the need for strategies like memory windows.
  • React State Management: We implemented AI context management using:
    • useState for simple, localized state.
    • useReducer for more structured and predictable management of complex state like chat history.
    • React Context API for sharing global AI configurations across components.
    • useRef for non-reactive data like scrolling references.
  • Practical Implementation: We built a functional chat interface that demonstrates these concepts step-by-step.
  • Mini-Challenge: You enhanced the chat with a “Clear History” feature, providing explicit memory control to the user.
  • Common Pitfalls: We identified and discussed solutions for common issues like forgetting context, token limit errors, and crucial security considerations.

You now have a solid foundation for making your AI applications feel smart and coherent.

What’s next? In Chapter 8, we’ll dive deeper into handling the dynamic nature of AI interactions, covering Async Flows, Loading States, Cancellations, Retries, and Fallbacks. Get ready to make your AI UIs robust and resilient!
