Introduction: Your First Intelligent Chat Project!
Welcome to Chapter 14! So far, we’ve explored the foundational concepts of integrating AI into frontend applications, from understanding prompt engineering to managing AI state and implementing essential guardrails. Now, it’s time to put that knowledge into action and build something truly interactive and exciting: an intelligent chat interface.
This chapter will guide you through the creation of a fully functional chat application using React Native. Our focus will be strictly on the UI-side integration, demonstrating how your frontend consumes AI model responses, manages conversation flow, and provides a smooth user experience. You’ll learn how to handle streaming AI responses, manage chat history as context, and ensure a responsive UI, all while reinforcing best practices for client-side AI consumption. Get ready to transform theoretical knowledge into practical, tangible results!
Core Concepts: The Anatomy of an AI Chat Interface
Building an intelligent chat interface involves several key frontend considerations. We’re not building the AI model itself, nor a complex backend, but rather the user-facing experience that interacts with an existing AI service.
1. Conversation State Management
At the heart of any chat application is its conversation history. For an AI chat, this history isn’t just for display; it’s crucial context for the AI model.
- Messages Array: We’ll store messages in an array, with each message object containing at least the `sender` (user or AI) and the `text`.
- Prompt Context: The AI often needs the entire conversation history (or a significant portion of it) to maintain coherence. We’ll learn how to assemble this history into a single prompt for the AI.
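To make this concrete, here is a minimal sketch of flattening a messages array into the role/content shape used by chat-completion-style APIs. The helper name and message fields are illustrative, not from a specific library:

```javascript
// Convert UI-level message objects into the role/content shape
// expected by chat-completion-style APIs (e.g. OpenAI's format).
const toPromptMessages = (messages) =>
  messages.map((msg) => ({
    role: msg.sender === 'user' ? 'user' : 'assistant',
    content: msg.text,
  }));

const history = [
  { id: 1, sender: 'user', text: 'Hi!' },
  { id: 2, sender: 'ai', text: 'Hello! How can I help?' },
];

console.log(toPromptMessages(history));
// → [ { role: 'user', content: 'Hi!' },
//     { role: 'assistant', content: 'Hello! How can I help?' } ]
```

We’ll use exactly this mapping later when sending the conversation to the backend.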
2. Consuming AI Streaming Responses
Modern AI models can generate responses incrementally, character by character or word by word. This “streaming” capability is vital for a good chat UX, as it makes the AI feel more responsive.
- `fetch` API with `Response.body.getReader()`: The standard Web `fetch` API, combined with `Response.body.getReader()`, allows us to read data from the server as it arrives, rather than waiting for the entire response.
- `TextDecoder`: As the stream often sends byte chunks, `TextDecoder` helps convert these bytes back into human-readable text.
- Incremental UI Updates: As new chunks of text arrive, we’ll update the AI’s message in the UI progressively.
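As a preview of the pattern we’ll use later in the chapter, here is a self-contained sketch of reading a stream with `getReader()` and `TextDecoder`. The stream is simulated so the snippet runs without a server; in the real app, `body` would be `response.body` from `fetch`:

```javascript
// Simulate a streamed response body: a ReadableStream of Uint8Array chunks.
const simulatedBody = new ReadableStream({
  start(controller) {
    const encoder = new TextEncoder();
    ['Hello', ' there', '!'].forEach((part) =>
      controller.enqueue(encoder.encode(part))
    );
    controller.close();
  },
});

// Read the stream chunk by chunk, invoking onChunk with the text so far.
async function readStream(body, onChunk) {
  const reader = body.getReader();
  const decoder = new TextDecoder();
  let accumulated = '';
  while (true) {
    const { value, done } = await reader.read();
    if (done) break;
    accumulated += decoder.decode(value, { stream: true });
    onChunk(accumulated); // in the app: update the AI message in state
  }
  return accumulated;
}

readStream(simulatedBody, (text) => console.log(text));
// logs: "Hello", then "Hello there", then "Hello there!"
```

The `{ stream: true }` option tells `TextDecoder` that more bytes may follow, so multi-byte characters split across chunk boundaries are decoded correctly.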
3. Asynchronous Flow and UI Feedback
Interacting with an AI is inherently an asynchronous process. Users send a message, and there’s a delay before the AI responds. Managing this delay gracefully is crucial for a good user experience.
- Loading States: Showing a “typing” indicator or a spinner lets the user know their message has been received and the AI is processing.
- Error Handling: What happens if the AI service is down or an API call fails? We need to inform the user and potentially offer retry options.
- Cancellation (Advanced): For longer AI generations, users might want to cancel an ongoing request. We’ll touch upon how `AbortController` can facilitate this.
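As a quick illustration of the mechanism (the real wiring happens by passing the signal to `fetch` via its `signal` option), a minimal `AbortController` sketch:

```javascript
// Sketch: an AbortController lets a "Stop generating" button cancel
// an in-flight request. In the app this would be:
//   fetch('YOUR_BACKEND_AI_ENDPOINT', { signal: controller.signal, ... })
// Here we observe the signal directly, without a network call.
const controller = new AbortController();

controller.signal.addEventListener('abort', () => {
  console.log('Request cancelled:', controller.signal.reason);
});

controller.abort('user pressed Stop'); // fires the listener above
console.log(controller.signal.aborted); // → true
```

When the signal aborts, a pending `fetch` rejects with an `AbortError`, which your `catch` block should treat as a deliberate cancellation rather than a failure.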
4. Frontend Security for AI Endpoints
This is paramount. While we’re focusing on client-side consumption, it’s critical to reiterate: NEVER embed your AI service API keys directly in client-side code.
- Backend Proxy: Your React Native application should communicate with your own secure backend API endpoint (e.g., `/api/chat`). This backend endpoint then securely calls the external AI service (e.g., OpenAI, Google Gemini, Anthropic) using its API key, and streams the response back to your frontend. This protects your API keys from being exposed in your client bundle.
- Rate Limiting & Abuse Prevention: Your backend proxy can also implement rate limiting and other security measures to prevent abuse of the AI service.
5. Basic Chat UI Structure
A simple chat UI typically includes:
- Message List: A scrollable area displaying all messages.
- Text Input: Where the user types their message.
- Send Button: To submit the user’s message.
- Loading Indicator: To show when the AI is processing.
To visualize the client-side AI interaction flow: the user sends a message; the frontend adds it to the conversation state and POSTs the full history to your backend proxy; the backend calls the AI service and streams the response back; and the frontend reads the stream chunk by chunk, updating the AI’s message bubble as text arrives.
Step-by-Step Implementation: Building Our Chat Interface
We’ll build a simple chat interface using React Native. If you’re following along with React for web, most of the useState, useEffect, and fetch logic will be identical; only the UI components (View, Text, TextInput, ScrollView, TouchableOpacity) will differ slightly from web (div, p, input, button).
Prerequisites: Ensure you have Node.js (v18.x or higher) and a React Native development environment set up. We’ll use Expo for simplicity.
Initialize a New React Native Project:
Let’s create a new Expo project. We’ll use the latest stable version of Expo CLI as of January 2026.
```bash
npx create-expo-app@latest ai-chat-app
cd ai-chat-app
# If prompted, choose a blank template.
```

Once initialized, run `npm start` (or `expo start`) to get your development server going.

Install Necessary Dependencies (for streaming):
While `fetch` is built-in, for a robust chat experience we might want to consider a library for handling markdown rendering of AI responses, or for more advanced streaming utilities. For this basic project, `fetch` will suffice, but be aware of libraries like `react-native-markdown-display` for later. We’ll keep it minimal for now.

Create the `ChatScreen` Component:

Open `App.js` and replace its content with the following initial structure. We’ll create a dedicated `ChatScreen` component shortly.

```js
// App.js
import React from 'react';
import ChatScreen from './ChatScreen'; // We'll create this next

export default function App() {
  return <ChatScreen />;
}
```

Now, create a new file named
`ChatScreen.js` in your project root:

```js
// ChatScreen.js
import React, { useState, useRef } from 'react';
import {
  View,
  Text,
  TextInput,
  TouchableOpacity,
  FlatList,
  KeyboardAvoidingView,
  Platform,
  StyleSheet,
  ActivityIndicator,
} from 'react-native';

// A simple component to display individual messages
const Message = ({ message, isUser }) => (
  <View style={[styles.messageBubble, isUser ? styles.userBubble : styles.aiBubble]}>
    <Text style={isUser ? styles.userText : styles.aiText}>
      {message.text}
    </Text>
  </View>
);

const ChatScreen = () => {
  // State to hold all messages in the conversation
  const [messages, setMessages] = useState([]);
  // State for the current text input value
  const [input, setInput] = useState('');
  // State to indicate if the AI is currently processing a response
  const [isLoading, setIsLoading] = useState(false);
  // Ref to scroll to the bottom of the chat automatically
  const flatListRef = useRef(null);

  // Function to handle sending a message
  const handleSendMessage = async () => {
    if (input.trim() === '') return; // Don't send empty messages

    const userMessage = { id: Date.now(), text: input.trim(), sender: 'user' };

    // Add user's message to the chat
    setMessages((prevMessages) => [...prevMessages, userMessage]);
    setInput(''); // Clear the input field
    setIsLoading(true); // Show loading indicator

    // Scroll to bottom after adding user message
    setTimeout(() => flatListRef.current?.scrollToEnd({ animated: true }), 100);

    try {
      // IMPORTANT: This is a placeholder for your backend endpoint.
      // In a real application, you would call your own secure backend API
      // which then calls the actual AI service (e.g., OpenAI, Gemini).
      // NEVER expose AI API keys directly in client-side code.
      const response = await fetch('YOUR_BACKEND_AI_ENDPOINT', {
        method: 'POST',
        headers: {
          'Content-Type': 'application/json',
        },
        body: JSON.stringify({
          // Send the full conversation history for context
          messages: [...messages, userMessage].map((msg) => ({
            role: msg.sender === 'user' ? 'user' : 'assistant',
            content: msg.text,
          })),
        }),
      });

      if (!response.ok) {
        throw new Error(`HTTP error! status: ${response.status}`);
      }

      // Initialize a new AI message object for streaming
      let aiMessageId = Date.now() + 1; // Unique ID for AI message
      setMessages((prevMessages) => [
        ...prevMessages,
        { id: aiMessageId, text: '', sender: 'ai' }, // Placeholder for streaming
      ]);

      // Note: stock React Native fetch may not expose response.body as a
      // ReadableStream; if getReader() is unavailable, use a streaming-capable
      // fetch (e.g. `expo/fetch` in recent Expo SDKs) or an SSE library.
      const reader = response.body.getReader();
      const decoder = new TextDecoder();
      let accumulatedText = '';

      // Loop to read the stream
      while (true) {
        const { value, done } = await reader.read();
        if (done) break;

        const chunk = decoder.decode(value, { stream: true });
        accumulatedText += chunk;

        // Update the AI message in state incrementally
        setMessages((prevMessages) =>
          prevMessages.map((msg) =>
            msg.id === aiMessageId ? { ...msg, text: accumulatedText } : msg
          )
        );

        // Scroll to bottom as AI types
        flatListRef.current?.scrollToEnd({ animated: true });
      }
    } catch (error) {
      console.error('Error sending message to AI:', error);
      // Add an error message to the chat
      setMessages((prevMessages) => [
        ...prevMessages,
        {
          id: Date.now(),
          text: 'Oops! Something went wrong. Please try again.',
          sender: 'ai',
          isError: true,
        },
      ]);
    } finally {
      setIsLoading(false); // Hide loading indicator
      // Ensure scroll to end in case of error or completion
      setTimeout(() => flatListRef.current?.scrollToEnd({ animated: true }), 100);
    }
  };

  return (
    <KeyboardAvoidingView
      style={styles.container}
      behavior={Platform.OS === 'ios' ? 'padding' : 'height'}
      keyboardVerticalOffset={Platform.OS === 'ios' ? 60 : 0}
    >
      <Text style={styles.header}>Intelligent Chatbot</Text>
      <FlatList
        ref={flatListRef}
        data={messages}
        renderItem={({ item }) => (
          <Message message={item} isUser={item.sender === 'user'} />
        )}
        keyExtractor={(item) => item.id.toString()}
        contentContainerStyle={styles.messageList}
        onContentSizeChange={() => flatListRef.current?.scrollToEnd({ animated: true })}
        onLayout={() => flatListRef.current?.scrollToEnd({ animated: true })}
      />
      {isLoading && (
        <View style={styles.loadingContainer}>
          <ActivityIndicator size="small" color="#007bff" />
          <Text style={styles.loadingText}>AI is typing...</Text>
        </View>
      )}
      <View style={styles.inputContainer}>
        <TextInput
          style={styles.textInput}
          value={input}
          onChangeText={setInput}
          placeholder="Type your message..."
          editable={!isLoading} // Disable input while AI is loading
        />
        <TouchableOpacity
          style={[styles.sendButton, isLoading && styles.sendButtonDisabled]}
          onPress={handleSendMessage}
          disabled={isLoading || input.trim() === ''}
        >
          <Text style={styles.sendButtonText}>Send</Text>
        </TouchableOpacity>
      </View>
    </KeyboardAvoidingView>
  );
};

const styles = StyleSheet.create({
  container: {
    flex: 1,
    backgroundColor: '#f0f2f5',
    paddingTop: Platform.OS === 'android' ? 30 : 0, // Adjust for Android status bar
  },
  header: {
    fontSize: 24,
    fontWeight: 'bold',
    textAlign: 'center',
    paddingVertical: 15,
    backgroundColor: '#fff',
    borderBottomWidth: 1,
    borderBottomColor: '#e0e0e0',
  },
  messageList: {
    paddingVertical: 10,
    paddingHorizontal: 10,
  },
  messageBubble: {
    maxWidth: '80%',
    padding: 12,
    borderRadius: 20,
    marginBottom: 10,
    elevation: 1, // Android shadow
    shadowColor: '#000', // iOS shadow
    shadowOffset: { width: 0, height: 1 },
    shadowOpacity: 0.1,
    shadowRadius: 1.5,
  },
  userBubble: {
    alignSelf: 'flex-end',
    backgroundColor: '#007bff',
  },
  aiBubble: {
    alignSelf: 'flex-start',
    backgroundColor: '#fff',
  },
  userText: {
    color: '#fff',
    fontSize: 16,
  },
  aiText: {
    color: '#333',
    fontSize: 16,
  },
  inputContainer: {
    flexDirection: 'row',
    padding: 10,
    borderTopWidth: 1,
    borderTopColor: '#e0e0e0',
    backgroundColor: '#fff',
    alignItems: 'center',
  },
  textInput: {
    flex: 1,
    borderWidth: 1,
    borderColor: '#ccc',
    borderRadius: 25,
    paddingHorizontal: 15,
    paddingVertical: 10,
    marginRight: 10,
    fontSize: 16,
  },
  sendButton: {
    backgroundColor: '#007bff',
    borderRadius: 25,
    paddingVertical: 10,
    paddingHorizontal: 15,
    justifyContent: 'center',
    alignItems: 'center',
  },
  sendButtonDisabled: {
    backgroundColor: '#a0c7ff', // Lighter blue when disabled
  },
  sendButtonText: {
    color: '#fff',
    fontSize: 16,
    fontWeight: 'bold',
  },
  loadingContainer: {
    flexDirection: 'row',
    alignItems: 'center',
    justifyContent: 'center',
    paddingVertical: 8,
    backgroundColor: 'rgba(255, 255, 255, 0.9)',
    borderTopWidth: 1,
    borderTopColor: '#e0e0e0',
  },
  loadingText: {
    marginLeft: 8,
    color: '#555',
    fontSize: 14,
    fontStyle: 'italic',
  },
});

export default ChatScreen;
```

Explanation of Additions:
- `Message` Component: A simple functional component to render each chat bubble, styling it differently for user and AI messages.
- `useState` Hooks:
  - `messages`: An array to store `{ id, text, sender }` objects, representing our conversation history.
  - `input`: Holds the current text in the `TextInput`.
  - `isLoading`: A boolean to manage the AI typing indicator and disable the input/send button.
- `useRef(null)` for `flatListRef`: This ref allows us to programmatically scroll the `FlatList` to the bottom, ensuring the latest messages are always visible.
- `handleSendMessage` Function: This is the core logic:
  - User Message Handling: Adds the user’s message to the `messages` state and clears the input.
  - Loading State: Sets `isLoading` to `true`.
  - `fetch` Call:
    - Crucially, `YOUR_BACKEND_AI_ENDPOINT` needs to be replaced. This is where your frontend talks to your own backend.
    - The `body` of the request sends the entire `messages` array (mapped to `role` and `content` for common AI API formats like OpenAI’s chat completion). This is how the AI gets conversation context.
  - Streaming Logic:
    - Initializes a placeholder AI message in the state with an empty `text`.
    - `response.body.getReader()`: Gets a `ReadableStreamDefaultReader` from the response.
    - `TextDecoder()`: Decodes the byte chunks received from the stream.
    - The `while (true)` loop continuously reads chunks from the stream.
    - `setMessages` is called on each chunk to update the `text` of the last AI message incrementally. This creates the “typing” effect.
    - `flatListRef.current.scrollToEnd()`: Keeps the chat scrolled to the bottom.
  - Error Handling: Catches any network or API errors and displays a user-friendly error message.
  - `finally` Block: Ensures `isLoading` is set back to `false` and the chat scrolls to the end, regardless of success or failure.
- `KeyboardAvoidingView`: Essential for React Native to prevent the keyboard from obscuring the input field, especially on iOS.
- `FlatList`: An efficient way to render long lists of data in React Native, perfect for chat messages. `onContentSizeChange` and `onLayout` help ensure it scrolls to the end automatically.
- Styling (`StyleSheet.create`): Basic styles for a clean chat interface.
Important: Replace the Placeholder Endpoint!
For this project to work, you need a backend endpoint that acts as a proxy to an actual AI service. Since this guide focuses on frontend integration, we won’t build that backend here. However, for testing, you could use a simple mock API (e.g., using `json-server` or a cloud function) that returns a hardcoded streaming response, or connect it to a real AI service if you have one set up.

Example of a hypothetical `YOUR_BACKEND_AI_ENDPOINT` response structure (Server-Sent Events, SSE) — a backend might send data in chunks like this over an HTTP stream:

```
data: {"text": "Hello"}
data: {"text": " there!"}
data: {"text": " How can I help?"}
data: [DONE]
```

Your current `TextDecoder` and `accumulatedText` logic is designed for simple text streaming. For structured JSON streaming (like OpenAI’s SSE format), you’d need to parse each `data:` line. For simplicity, we assume the backend sends raw text chunks, so that `accumulatedText += chunk` handles concatenation correctly.

For a production setup, your backend might look something like this (conceptual, not actual code):
```js
// Conceptual Node.js backend endpoint for /api/chat
// DO NOT expose this code (or your API key) in client-side code!
app.post('/api/chat', async (req, res) => {
  res.setHeader('Content-Type', 'text/event-stream');
  res.setHeader('Cache-Control', 'no-cache');
  res.setHeader('Connection', 'keep-alive');

  const { messages } = req.body;
  const systemMessage = { role: 'system', content: 'You are a helpful assistant.' };
  const conversation = [systemMessage, ...messages];

  try {
    const openaiResponse = await openai.chat.completions.create({
      model: 'gpt-4o', // Or latest model as of 2026
      messages: conversation,
      stream: true,
    });

    for await (const chunk of openaiResponse) {
      const content = chunk.choices[0]?.delta?.content || '';
      if (content) {
        res.write(content); // Send raw text content
      }
    }
    res.end();
  } catch (error) {
    console.error('OpenAI API error:', error);
    res.status(500).end('Error processing request.');
  }
});
```

The frontend code provided above expects the backend to stream raw text. If your backend streams JSON, you’ll need to adjust the `handleSendMessage` function to parse the JSON chunks.
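If your backend does stream SSE-style `data:` lines instead of raw text, a minimal client-side parser might look like this sketch. It assumes each chunk contains whole lines; a production parser would buffer partial lines across chunk boundaries:

```javascript
// Sketch: extract text from SSE-style "data:" lines in a chunk.
function parseSseChunk(chunk) {
  let text = '';
  for (const line of chunk.split('\n')) {
    const trimmed = line.trim();
    if (!trimmed.startsWith('data:')) continue;
    const payload = trimmed.slice(5).trim();
    if (payload === '[DONE]') break; // end-of-stream sentinel
    try {
      text += JSON.parse(payload).text ?? '';
    } catch {
      // Ignore malformed lines rather than crashing the UI
    }
  }
  return text;
}

const chunk =
  'data: {"text": "Hello"}\ndata: {"text": " there!"}\ndata: [DONE]\n';
console.log(parseSseChunk(chunk)); // → "Hello there!"
```

You would call `parseSseChunk(decoder.decode(value, { stream: true }))` inside the read loop and append its result to `accumulatedText`.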
Mini-Challenge: Enhance User Control
Our chat is functional, but let’s give the user a bit more power!
Challenge: Add a “Clear Chat” button to the header of the ChatScreen. When pressed, it should reset the messages array, effectively starting a new conversation.
Hint:
- You’ll need another `TouchableOpacity` in your header `View`.
- Create a new function, `handleClearChat`, that calls `setMessages([])`.
- Make sure the button is styled appropriately and positioned well.
What to observe/learn: This challenge reinforces state management (useState) and how user actions can directly manipulate the application’s data flow. It also highlights the importance of providing convenient controls for users in AI applications.
Common Pitfalls & Troubleshooting
- API Key Exposure: This cannot be stressed enough. If you hardcode an AI API key into your `ChatScreen.js` and deploy it, that key is publicly visible in your app’s bundle. Solution: Always use a secure backend proxy to handle API key authentication.
- Non-Streaming UX: If your AI endpoint doesn’t support streaming or you don’t implement the `getReader()` logic, the user will experience a long delay with no feedback until the entire AI response is generated. This feels broken. Solution: Ensure your backend supports streaming (e.g., Server-Sent Events, WebSockets), and implement the frontend streaming logic to update the UI incrementally.
- Context Window Limits: If you send all messages in the `messages` array to the AI without truncation, you might hit the AI model’s context window limit (the maximum number of tokens it can process). Solution: For long conversations, implement logic in your `handleSendMessage` or backend to truncate older messages or summarize them before sending them to the AI.
- Scroll Issues: `FlatList` can sometimes be tricky with automatic scrolling. If messages aren’t scrolling to the bottom, double-check your `onContentSizeChange` and `onLayout` props, and ensure your `setTimeout` calls are giving enough time for the UI to render before attempting to scroll.
- Network Errors: AI services can be temporarily unavailable or return errors. Make sure your `try...catch` block is robust and provides meaningful feedback to the user, rather than just crashing the app.
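As a starting point for the context-window pitfall above, here is a rough sketch of client-side truncation. The characters-divided-by-four token estimate is a crude approximation; a real app would use the model’s tokenizer or truncate server-side:

```javascript
// Sketch: keep only the most recent messages that fit a token budget.
// Token counts are estimated as chars / 4 — a rough heuristic only.
function truncateHistory(messages, maxTokens = 2000) {
  const estimateTokens = (msg) => Math.ceil(msg.text.length / 4);
  const kept = [];
  let used = 0;
  // Walk backwards so the most recent messages survive
  for (let i = messages.length - 1; i >= 0; i--) {
    const cost = estimateTokens(messages[i]);
    if (used + cost > maxTokens) break;
    kept.unshift(messages[i]);
    used += cost;
  }
  return kept;
}

const history = [
  { sender: 'user', text: 'a'.repeat(4000) }, // ~1000 tokens
  { sender: 'ai', text: 'b'.repeat(4000) },   // ~1000 tokens
  { sender: 'user', text: 'recent question' },
];
console.log(truncateHistory(history, 1200).length); // → 2
```

You would apply this to `[...messages, userMessage]` just before building the request body in `handleSendMessage`.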
Summary
Congratulations! You’ve just built your first intelligent chat interface in React Native, consuming streaming AI responses from the frontend.
Here’s what we covered in this chapter:
- Conversation State: How to manage and update chat messages using React’s `useState`.
- Prompt Construction: Assembling the entire conversation history to provide context to the AI.
- Streaming AI Responses: Leveraging `fetch` with `Response.body.getReader()` and `TextDecoder` for real-time, incremental UI updates.
- Asynchronous UX: Implementing loading indicators and managing UI feedback during AI processing.
- Frontend Security: Reinforcing the critical need for backend proxies to protect AI API keys.
- React Native Components: Using `FlatList`, `TextInput`, `TouchableOpacity`, and `KeyboardAvoidingView` to build a responsive chat UI.
This project lays a strong foundation. In upcoming chapters, we’ll expand on this, integrating more advanced features like agentic tool calling, in-browser AI, and sophisticated guardrails to make your AI applications truly production-ready. Keep experimenting, and keep building!
References
- React Native Official Documentation
- Expo Documentation
- MDN Web Docs: Using the Fetch API
- MDN Web Docs: ReadableStreamDefaultReader
- MDN Web Docs: TextDecoder