Introduction: Your First Intelligent Chat Project!

Welcome to Chapter 14! So far, we’ve explored the foundational concepts of integrating AI into frontend applications, from understanding prompt engineering to managing AI state and implementing essential guardrails. Now, it’s time to put that knowledge into action and build something truly interactive and exciting: an intelligent chat interface.

This chapter will guide you through the creation of a fully functional chat application using React Native. Our focus will be strictly on the UI-side integration, demonstrating how your frontend consumes AI model responses, manages conversation flow, and provides a smooth user experience. You’ll learn how to handle streaming AI responses, manage chat history as context, and ensure a responsive UI, all while reinforcing best practices for client-side AI consumption. Get ready to transform theoretical knowledge into practical, tangible results!

Core Concepts: The Anatomy of an AI Chat Interface

Building an intelligent chat interface involves several key frontend considerations. We’re not building the AI model itself, nor a complex backend, but rather the user-facing experience that interacts with an existing AI service.

1. Conversation State Management

At the heart of any chat application is its conversation history. For an AI chat, this history isn’t just for display; it’s crucial context for the AI model.

  • Messages Array: We’ll store messages in an array, with each message object containing at least the sender (user or AI) and the text.
  • Prompt Context: The AI often needs the entire conversation history (or a significant portion of it) to maintain coherence. We’ll learn how to assemble this history into a single prompt for the AI.
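To make the message shape concrete, here is a small sketch. The field names match the component we will build below; `toApiMessages` is a hypothetical helper, not a library function:

```javascript
// Each UI message stores at least an id, the sender, and the text.
const history = [
  { id: 1, sender: 'user', text: 'Hi!' },
  { id: 2, sender: 'ai', text: 'Hello! How can I help?' },
];

// Hypothetical helper: map UI messages to the role/content shape that
// OpenAI-style chat-completion APIs expect as conversation context.
function toApiMessages(messages) {
  return messages.map((msg) => ({
    role: msg.sender === 'user' ? 'user' : 'assistant',
    content: msg.text,
  }));
}

console.log(toApiMessages(history));
// → [{ role: 'user', content: 'Hi!' },
//    { role: 'assistant', content: 'Hello! How can I help?' }]
```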

2. Consuming AI Streaming Responses

Modern AI models generate responses incrementally, token by token, so text arrives roughly word by word. This “streaming” capability is vital for a good chat UX: the user sees the answer forming immediately instead of staring at a blank screen while the full response is generated.

  • fetch API with Response.body.getReader(): The standard Web fetch API, combined with Response.body.getReader(), lets us read data from the server as it arrives, rather than waiting for the entire response. Note that React Native’s built-in fetch does not expose response.body as a readable stream; in an Expo app you can use the streaming-capable fetch from expo/fetch (SDK 52+), or a polyfill, to get the same API.
  • TextDecoder: As the stream often sends byte chunks, TextDecoder helps convert these bytes back into human-readable text.
  • Incremental UI Updates: As new chunks of text arrive, we’ll update the AI’s message in the UI progressively.
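A minimal sketch of this read-decode-update loop, independent of any UI. The function and callback names are illustrative, and in React Native you would need a streaming-capable fetch (such as Expo's expo/fetch) to obtain a stream like this:

```javascript
// Reads a ReadableStream of bytes, decoding incrementally. onUpdate
// receives the accumulated text after each chunk -- this is what drives
// the "typing" effect in a chat UI.
async function readTextStream(stream, onUpdate) {
  const reader = stream.getReader();
  const decoder = new TextDecoder();
  let accumulated = '';
  while (true) {
    const { value, done } = await reader.read();
    if (done) break;
    // { stream: true } keeps multi-byte characters that are split across
    // chunk boundaries from being corrupted.
    accumulated += decoder.decode(value, { stream: true });
    onUpdate(accumulated);
  }
  return accumulated;
}

// The decoder's stream mode can be seen in isolation: two byte chunks
// decode back into one string.
const enc = new TextEncoder();
const dec = new TextDecoder();
const joined =
  dec.decode(enc.encode('Hel'), { stream: true }) +
  dec.decode(enc.encode('lo'), { stream: true });
console.log(joined); // → "Hello"
```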

3. Asynchronous Flow and UI Feedback

Interacting with an AI is inherently an asynchronous process. Users send a message, and there’s a delay before the AI responds. Managing this delay gracefully is crucial for a good user experience.

  • Loading States: Showing a “typing” indicator or a spinner lets the user know their message has been received and the AI is processing.
  • Error Handling: What happens if the AI service is down or an API call fails? We need to inform the user and potentially offer retry options.
  • Cancellation (Advanced): For longer AI generations, users might want to cancel an ongoing request. We’ll touch upon how AbortController can facilitate this.
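A hedged sketch of cancellation follows. The endpoint URL and function names are placeholders; the key mechanism is passing `signal` to `fetch` and calling `abort()` from a Stop button:

```javascript
// One controller per in-flight request; abort() rejects the fetch
// promise with an AbortError.
const controller = new AbortController();

async function sendCancellable(url, body) {
  try {
    const response = await fetch(url, {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify(body),
      signal: controller.signal,
    });
    return response;
  } catch (err) {
    if (err.name === 'AbortError') {
      // The user cancelled; treat it as a normal outcome, not a failure.
      return null;
    }
    throw err;
  }
}

// A "Stop" button's onPress would simply call:
// controller.abort();
```

In a real screen you would create a fresh AbortController per request (e.g. in a ref) so that cancelling one message does not affect the next.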

4. Frontend Security for AI Endpoints

This is paramount. While we’re focusing on client-side consumption, it’s critical to reiterate: NEVER embed your AI service API keys directly in client-side code.

  • Backend Proxy: Your React Native application should communicate with your own secure backend API endpoint (e.g., /api/chat). This backend endpoint then securely calls the external AI service (e.g., OpenAI, Google Gemini, Anthropic) using its API key, and streams the response back to your frontend. This protects your API keys from being exposed in your client bundle.
  • Rate Limiting & Abuse Prevention: Your backend proxy can also implement rate limiting and other security measures to prevent abuse of the AI service.

5. Basic Chat UI Structure

A simple chat UI typically includes:

  • Message List: A scrollable area displaying all messages.
  • Text Input: Where the user types their message.
  • Send Button: To submit the user’s message.
  • Loading Indicator: To show when the AI is processing.

Let’s visualize the client-side AI interaction flow:

flowchart TD
    User[User Types Message] --> A[User Clicks Send]
    A --> B{Add User Message to UI State}
    B --> C[Clear Input Field]
    C --> D{Show AI Typing Indicator}
    D --> E[Frontend Calls Backend API]
    E -->|User Input & Chat History| F[Backend Proxy]
    F -->|Streaming AI Response Chunks| G[Frontend Receives Stream]
    G --> H{Update AI Message in UI Incrementally}
    H --> I{Hide AI Typing Indicator}
    I --> J[Display Final AI Response]

Step-by-Step Implementation: Building Our Chat Interface

We’ll build a simple chat interface using React Native. If you’re following along with React for web, most of the useState, useEffect, and fetch logic will be identical; only the UI components (View, Text, TextInput, ScrollView, TouchableOpacity) will differ slightly from web (div, p, input, button).

Prerequisites: Ensure you have Node.js (v18.x or higher) and a React Native development environment set up. We’ll use Expo for simplicity.

  1. Initialize a New React Native Project:

    Let’s create a new Expo project. We’ll use the latest stable version of Expo CLI as of January 2026.

    npx create-expo-app@latest ai-chat-app
    cd ai-chat-app
    # If prompted, choose a blank template.
    

    Once initialized, run npm start (or npx expo start) to start your development server.

  2. Consider Dependencies (for streaming):

    fetch is built in, so no extra packages are required for this basic project. Later on, you may want a library such as react-native-markdown-display for rendering markdown in AI responses, or more advanced streaming utilities; we’ll keep things minimal for now.

  3. Create the ChatScreen Component:

    Open App.js and replace its content with the following initial structure. We’ll create a dedicated ChatScreen component shortly.

    // App.js
    import React from 'react';
    import ChatScreen from './ChatScreen'; // We'll create this next
    
    export default function App() {
      return (
        <ChatScreen />
      );
    }
    

    Now, create a new file named ChatScreen.js in your project root:

    // ChatScreen.js
    import React, { useState, useRef } from 'react';
    import {
      View,
      Text,
      TextInput,
      TouchableOpacity,
      FlatList,
      KeyboardAvoidingView,
      Platform,
      StyleSheet,
      ActivityIndicator,
    } from 'react-native';
    
    // A simple component to display individual messages
    const Message = ({ message, isUser }) => (
      <View style={[styles.messageBubble, isUser ? styles.userBubble : styles.aiBubble]}>
        <Text style={isUser ? styles.userText : styles.aiText}>
          {message.text}
        </Text>
      </View>
    );
    
    const ChatScreen = () => {
      // State to hold all messages in the conversation
      const [messages, setMessages] = useState([]);
      // State for the current text input value
      const [input, setInput] = useState('');
      // State to indicate if the AI is currently processing a response
      const [isLoading, setIsLoading] = useState(false);
    
      // Ref to scroll to the bottom of the chat automatically
      const flatListRef = useRef(null);
    
      // Function to handle sending a message
      const handleSendMessage = async () => {
        if (input.trim() === '') return; // Don't send empty messages
    
        const userMessage = { id: Date.now(), text: input.trim(), sender: 'user' };
        // Add user's message to the chat
        setMessages((prevMessages) => [...prevMessages, userMessage]);
        setInput(''); // Clear the input field
        setIsLoading(true); // Show loading indicator
    
        // Scroll to bottom after adding user message
        setTimeout(() => flatListRef.current?.scrollToEnd({ animated: true }), 100);
    
        try {
          // IMPORTANT: This is a placeholder for your backend endpoint.
          // In a real application, you would call your own secure backend API
          // which then calls the actual AI service (e.g., OpenAI, Gemini).
          // NEVER expose AI API keys directly in client-side code.
          const response = await fetch('YOUR_BACKEND_AI_ENDPOINT', {
            method: 'POST',
            headers: {
              'Content-Type': 'application/json',
            },
            body: JSON.stringify({
              // Send the full conversation history for context
              messages: [...messages, userMessage].map(msg => ({
                role: msg.sender === 'user' ? 'user' : 'assistant',
                content: msg.text,
              })),
            }),
          });
    
          if (!response.ok) {
            throw new Error(`HTTP error! status: ${response.status}`);
          }
    
          // Initialize a new AI message object for streaming
          const aiMessageId = Date.now() + 1; // Unique ID for AI message
          setMessages((prevMessages) => [
            ...prevMessages,
            { id: aiMessageId, text: '', sender: 'ai' }, // Placeholder for streaming
          ]);
    
          // Note: plain React Native fetch does not expose response.body;
          // use a streaming-capable fetch (e.g. expo/fetch) or a polyfill.
          const reader = response.body.getReader();
          const decoder = new TextDecoder();
          let accumulatedText = '';
    
          // Loop to read the stream
          while (true) {
            const { value, done } = await reader.read();
            if (done) break;
    
            const chunk = decoder.decode(value, { stream: true });
            accumulatedText += chunk;
    
            // Update the AI message in state incrementally
            setMessages((prevMessages) =>
              prevMessages.map((msg) =>
                msg.id === aiMessageId ? { ...msg, text: accumulatedText } : msg
              )
            );
    
            // Scroll to bottom as AI types
            flatListRef.current?.scrollToEnd({ animated: true });
          }
        } catch (error) {
          console.error('Error sending message to AI:', error);
          // Add an error message to the chat
          setMessages((prevMessages) => [
            ...prevMessages,
            { id: Date.now(), text: 'Oops! Something went wrong. Please try again.', sender: 'ai', isError: true },
          ]);
        } finally {
          setIsLoading(false); // Hide loading indicator
          // Ensure scroll to end in case of error or completion
          setTimeout(() => flatListRef.current?.scrollToEnd({ animated: true }), 100);
        }
      };
    
      return (
        <KeyboardAvoidingView
          style={styles.container}
          behavior={Platform.OS === 'ios' ? 'padding' : 'height'}
          keyboardVerticalOffset={Platform.OS === 'ios' ? 60 : 0}
        >
          <Text style={styles.header}>Intelligent Chatbot</Text>
    
          <FlatList
            ref={flatListRef}
            data={messages}
            renderItem={({ item }) => <Message message={item} isUser={item.sender === 'user'} />}
            keyExtractor={(item) => item.id.toString()}
            contentContainerStyle={styles.messageList}
            onContentSizeChange={() => flatListRef.current?.scrollToEnd({ animated: true })}
            onLayout={() => flatListRef.current?.scrollToEnd({ animated: true })}
          />
    
          {isLoading && (
            <View style={styles.loadingContainer}>
              <ActivityIndicator size="small" color="#007bff" />
              <Text style={styles.loadingText}>AI is typing...</Text>
            </View>
          )}
    
          <View style={styles.inputContainer}>
            <TextInput
              style={styles.textInput}
              value={input}
              onChangeText={setInput}
              placeholder="Type your message..."
              editable={!isLoading} // Disable input while AI is loading
            />
            <TouchableOpacity
              style={[styles.sendButton, isLoading && styles.sendButtonDisabled]}
              onPress={handleSendMessage}
              disabled={isLoading || input.trim() === ''}
            >
              <Text style={styles.sendButtonText}>Send</Text>
            </TouchableOpacity>
          </View>
        </KeyboardAvoidingView>
      );
    };
    
    const styles = StyleSheet.create({
      container: {
        flex: 1,
        backgroundColor: '#f0f2f5',
        paddingTop: Platform.OS === 'android' ? 30 : 0, // Adjust for Android status bar
      },
      header: {
        fontSize: 24,
        fontWeight: 'bold',
        textAlign: 'center',
        paddingVertical: 15,
        backgroundColor: '#fff',
        borderBottomWidth: 1,
        borderBottomColor: '#e0e0e0',
      },
      messageList: {
        paddingVertical: 10,
        paddingHorizontal: 10,
      },
      messageBubble: {
        maxWidth: '80%',
        padding: 12,
        borderRadius: 20,
        marginBottom: 10,
        elevation: 1, // Android shadow
        shadowColor: '#000', // iOS shadow
        shadowOffset: { width: 0, height: 1 },
        shadowOpacity: 0.1,
        shadowRadius: 1.5,
      },
      userBubble: {
        alignSelf: 'flex-end',
        backgroundColor: '#007bff',
      },
      aiBubble: {
        alignSelf: 'flex-start',
        backgroundColor: '#fff',
      },
      userText: {
        color: '#fff',
        fontSize: 16,
      },
      aiText: {
        color: '#333',
        fontSize: 16,
      },
      inputContainer: {
        flexDirection: 'row',
        padding: 10,
        borderTopWidth: 1,
        borderTopColor: '#e0e0e0',
        backgroundColor: '#fff',
        alignItems: 'center',
      },
      textInput: {
        flex: 1,
        borderWidth: 1,
        borderColor: '#ccc',
        borderRadius: 25,
        paddingHorizontal: 15,
        paddingVertical: 10,
        marginRight: 10,
        fontSize: 16,
      },
      sendButton: {
        backgroundColor: '#007bff',
        borderRadius: 25,
        paddingVertical: 10,
        paddingHorizontal: 15,
        justifyContent: 'center',
        alignItems: 'center',
      },
      sendButtonDisabled: {
        backgroundColor: '#a0c7ff', // Lighter blue when disabled
      },
      sendButtonText: {
        color: '#fff',
        fontSize: 16,
        fontWeight: 'bold',
      },
      loadingContainer: {
        flexDirection: 'row',
        alignItems: 'center',
        justifyContent: 'center',
        paddingVertical: 8,
        backgroundColor: 'rgba(255, 255, 255, 0.9)',
        borderTopWidth: 1,
        borderTopColor: '#e0e0e0',
      },
      loadingText: {
        marginLeft: 8,
        color: '#555',
        fontSize: 14,
        fontStyle: 'italic',
      },
    });
    
    export default ChatScreen;
    

    Explanation of Additions:

    • Message Component: A simple functional component to render each chat bubble, styling it differently for user and AI messages.
    • useState Hooks:
      • messages: An array to store { id, text, sender } objects, representing our conversation history.
      • input: Holds the current text in the TextInput.
      • isLoading: A boolean to manage the AI typing indicator and disable the input/send button.
    • useRef(null) for flatListRef: This ref allows us to programmatically scroll the FlatList to the bottom, ensuring the latest messages are always visible.
    • handleSendMessage Function: This is the core logic:
      1. User Message Handling: Adds the user’s message to the messages state and clears the input.
      2. Loading State: Sets isLoading to true.
      3. fetch Call:
        • Crucially, YOUR_BACKEND_AI_ENDPOINT needs to be replaced. This is where your frontend talks to your own backend.
        • The body of the request sends the entire messages array (mapped to role and content for common AI API formats like OpenAI’s chat completion). This is how the AI gets conversation context.
      4. Streaming Logic:
        • Initializes a placeholder AI message in the state with an empty text.
        • response.body.getReader(): Gets a ReadableStreamDefaultReader from the response.
        • TextDecoder(): Decodes the byte chunks received from the stream.
        • The while (true) loop continuously reads chunks from the stream.
        • setMessages is called on each chunk to update the text of the in-progress AI message (matched by aiMessageId) incrementally. This creates the “typing” effect.
        • flatListRef.current.scrollToEnd(): Keeps the chat scrolled to the bottom.
      5. Error Handling: Catches any network or API errors and displays a user-friendly error message.
      6. finally Block: Ensures isLoading is set back to false and the chat scrolls to the end, regardless of success or failure.
    • KeyboardAvoidingView: Essential for React Native to prevent the keyboard from obscuring the input field, especially on iOS.
    • FlatList: An efficient way to render long lists of data in React Native, perfect for chat messages. onContentSizeChange and onLayout help ensure it scrolls to the end automatically.
    • Styling (StyleSheet.create): Basic styles for a clean chat interface.
  4. Important: Replace the Placeholder Endpoint!

    For this project to work, you need a backend endpoint that acts as a proxy to an actual AI service. Since this guide focuses on frontend integration, we won’t build that backend here. However, for testing, you could stand up a small mock endpoint (e.g., a few lines of Node, or a cloud function) that returns a hardcoded streaming response, or connect it to a real AI service if you have one set up. (Tools like json-server return complete responses and do not stream, so they’re less useful here.)

    Example of a hypothetical YOUR_BACKEND_AI_ENDPOINT response structure (Server-Sent Events - SSE):

    A backend might send data in chunks like this over an HTTP stream:

    data: {"text": "Hello"}

    data: {"text": " there!"}

    data: {"text": " How can I help?"}

    data: [DONE]
    

    The TextDecoder and accumulatedText logic above is designed for simple raw-text streaming. For structured JSON streaming (like OpenAI’s SSE format), you’d need to parse each data: line before appending its payload to the message. For simplicity, we assume the backend sends raw text chunks.

    For a production setup, your backend might look something like this (conceptual, not actual code):

    // Conceptual Node.js (Express) backend endpoint for /api/chat.
    // This code runs only on your server; never ship it in the client bundle.
    // Assumes: const express = require('express'); const app = express();
    // app.use(express.json()); plus an initialized OpenAI client named `openai`.
    app.post('/api/chat', async (req, res) => {
      res.setHeader('Content-Type', 'text/event-stream');
      res.setHeader('Cache-Control', 'no-cache');
      res.setHeader('Connection', 'keep-alive');
    
      const { messages } = req.body;
      const systemMessage = { role: 'system', content: 'You are a helpful assistant.' };
      const conversation = [systemMessage, ...messages];
    
      try {
        const openaiResponse = await openai.chat.completions.create({
          model: 'gpt-4o', // Or latest model as of 2026
          messages: conversation,
          stream: true,
        });
    
        for await (const chunk of openaiResponse) {
          const content = chunk.choices[0]?.delta?.content || '';
          if (content) {
            res.write(content); // Send raw text content
          }
        }
        res.end();
      } catch (error) {
        console.error('OpenAI API error:', error);
        // If chunks were already written, headers have been sent and this
        // 500 status will not reach the client; a production handler would
        // write an error event into the stream instead.
        res.status(500).end('Error processing request.');
      }
    });
    

    The frontend code provided above expects the backend to stream raw text. If your backend streams JSON, you’ll need to adjust the handleSendMessage function to parse the JSON chunks.
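A sketch of that adjustment, assuming the hypothetical data: {"text": …} format shown earlier (a production parser would also buffer lines split across chunk boundaries):

```javascript
// Extracts the text payloads from one chunk of "data:" lines.
function parseSseChunk(chunk) {
  const parts = [];
  for (const line of chunk.split('\n')) {
    const trimmed = line.trim();
    if (!trimmed.startsWith('data:')) continue;
    const payload = trimmed.slice('data:'.length).trim();
    if (payload === '[DONE]') break; // end-of-stream sentinel
    try {
      const event = JSON.parse(payload);
      if (event.text) parts.push(event.text);
    } catch {
      // Incomplete JSON (split across chunks) is skipped here; real
      // code would buffer it and retry with the next chunk.
    }
  }
  return parts.join('');
}

const chunk =
  'data: {"text": "Hello"}\n' +
  'data: {"text": " there!"}\n' +
  'data: [DONE]\n';
console.log(parseSseChunk(chunk)); // → "Hello there!"
```

In handleSendMessage, you would run each decoded chunk through a parser like this before appending to accumulatedText.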

Mini-Challenge: Enhance User Control

Our chat is functional, but let’s give the user a bit more power!

Challenge: Add a “Clear Chat” button to the header of the ChatScreen. When pressed, it should reset the messages array, effectively starting a new conversation.

Hint:

  • You’ll need another TouchableOpacity in your header View.
  • Create a new function, handleClearChat, that sets setMessages([]).
  • Make sure the button is styled appropriately and positioned well.

What to observe/learn: This challenge reinforces state management (useState) and how user actions can directly manipulate the application’s data flow. It also highlights the importance of providing convenient controls for users in AI applications.

Common Pitfalls & Troubleshooting

  1. API Key Exposure: This cannot be stressed enough. If you hardcode an AI API key into your ChatScreen.js and deploy it, that key is publicly visible in your app’s bundle. Solution: Always use a secure backend proxy to handle API key authentication.
  2. Non-Streaming UX: If your AI endpoint doesn’t support streaming or you don’t implement the getReader() logic, the user will experience a long delay with no feedback until the entire AI response is generated. This feels broken. Solution: Ensure your backend supports streaming (e.g., Server-Sent Events, WebSockets), and implement the frontend streaming logic to update the UI incrementally.
  3. Context Window Limits: If you send all messages in messages array to the AI without truncation, you might hit the AI model’s context window limit (the maximum number of tokens it can process). Solution: For long conversations, implement logic in your handleSendMessage or backend to truncate older messages or summarize them before sending them to the AI.
  4. Scroll Issues: FlatList can sometimes be tricky with automatic scrolling. If messages aren’t scrolling to the bottom, double-check your onContentSizeChange and onLayout props, and ensure your setTimeout calls are giving enough time for the UI to render before attempting to scroll.
  5. Network Errors: AI services can be temporarily unavailable or return errors. Make sure your try...catch block is robust and provides meaningful feedback to the user, rather than just crashing the app.
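The truncation suggested in pitfall 3 can be sketched as a small helper. The character budget here is a rough stand-in for real token counting, which your backend or a tokenizer library would do more accurately:

```javascript
// Keeps the most recent messages whose combined length fits the budget;
// the newest message is always kept, even if it alone exceeds the budget.
function truncateHistory(messages, maxChars = 4000) {
  const kept = [];
  let total = 0;
  // Walk backwards from the newest message, keeping as many as fit.
  for (let i = messages.length - 1; i >= 0; i--) {
    total += messages[i].text.length;
    if (total > maxChars && kept.length > 0) break;
    kept.unshift(messages[i]);
  }
  return kept;
}

const long = [
  { sender: 'user', text: 'a'.repeat(3000) },
  { sender: 'ai', text: 'b'.repeat(3000) },
  { sender: 'user', text: 'c'.repeat(1000) },
];
console.log(truncateHistory(long, 4000).length); // → 2
```

You would call this on the messages array just before building the request body in handleSendMessage (or apply the same idea server-side in the proxy).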

Summary

Congratulations! You’ve just built your first intelligent chat interface in React Native, consuming streaming AI responses from the frontend.

Here’s what we covered in this chapter:

  • Conversation State: How to manage and update chat messages using React’s useState.
  • Prompt Construction: Assembling the entire conversation history to provide context to the AI.
  • Streaming AI Responses: Leveraging fetch with Response.body.getReader() and TextDecoder for real-time, incremental UI updates.
  • Asynchronous UX: Implementing loading indicators and managing UI feedback during AI processing.
  • Frontend Security: Reinforcing the critical need for backend proxies to protect AI API keys.
  • React Native Components: Using FlatList, TextInput, TouchableOpacity, and KeyboardAvoidingView to build a responsive chat UI.

This project lays a strong foundation. In upcoming chapters, we’ll expand on this, integrating more advanced features like agentic tool calling, in-browser AI, and sophisticated guardrails to make your AI applications truly production-ready. Keep experimenting, and keep building!

