Case Study: Chatbot with Streamlit

Streamlit

Streamlit is an open-source Python library that turns plain Python scripts into interactive web apps, and is widely used for machine learning and data science applications.

Install streamlit

pip install streamlit

Note: Save the code as a .py file — Streamlit runs scripts, not notebooks.

Import streamlit

import streamlit as st

Maintain Conversation State

The following import helps keep track of the messages exchanged during a session.

from langchain_community.chat_message_histories import StreamlitChatMessageHistory
msgs = StreamlitChatMessageHistory()
print(msgs)

Output:

{'_messages': []}

Step 1 : Add the First AI Message to chat history


# Give initial ai message from Chatbot
if len(msgs.messages) == 0:
    msgs.add_ai_message("How can I help you?")
    print(msgs)

Output

{'_messages': [AIMessage(content='How can I help you?')]}

We now have a single message in the history; let's read its type and content.
print(f"Type of message : {msgs.messages[0].type}, Content : {msgs.messages[0].content}")

Output:

Type of message : ai, Content : How can I help you?
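Conceptually, the history object is a thin wrapper around a list of typed messages. The following plain-Python sketch mimics that behavior (FakeChatHistory and FakeMessage are made-up names for illustration, not the real LangChain classes):

```python
from dataclasses import dataclass

# Hypothetical stand-in for StreamlitChatMessageHistory -- illustration only.
@dataclass
class FakeMessage:
    type: str      # "ai" or "human"
    content: str

class FakeChatHistory:
    def __init__(self):
        self._messages = []          # starts empty, like the first print above

    @property
    def messages(self):
        return self._messages

    def add_ai_message(self, text):
        self._messages.append(FakeMessage("ai", text))

    def add_user_message(self, text):
        self._messages.append(FakeMessage("human", text))

fake_msgs = FakeChatHistory()
print(fake_msgs.messages)            # []
if len(fake_msgs.messages) == 0:     # same guard as Step 1
    fake_msgs.add_ai_message("How can I help you?")
print(fake_msgs.messages[0].type, fake_msgs.messages[0].content)
```

The guard around add_ai_message matters because, as noted at the end of this post, Streamlit reruns the whole script on every interaction — without the length check the greeting would be appended again on each rerun.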

Create a Chat Interface with Streamlit

Insert a chat message container into the web app, specifying:
i) Message type: "human" or "ai"
ii) Message content

st.chat_message(msgs.messages[0].type).write(msgs.messages[0].content)

Step 2 : Create Prompt

from langchain_core.prompts import (
    ChatPromptTemplate,
    SystemMessagePromptTemplate,
    HumanMessagePromptTemplate,
    MessagesPlaceholder,
)

prompt = ChatPromptTemplate.from_messages(
    [
        SystemMessagePromptTemplate.from_template(
            "You are an AI chatbot having a conversation with a human on {topic}."
        ),  # System-level prompt for all conversations
        MessagesPlaceholder(
            variable_name="history"
        ),  # Inserts the chat history kept in memory
        HumanMessagePromptTemplate.from_template(
            "{human_input}"
        ),  # Inserts the latest user input
    ]
)
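When the chain runs, this template expands into a concrete list of messages: the system instruction, then the stored history, then the new user input. A plain-Python sketch of that assembly (assemble_messages is a hypothetical helper, not LangChain internals):

```python
# Illustration of how the prompt template is filled in at invocation time.
def assemble_messages(topic, history, human_input):
    system = ("system",
              f"You are an AI chatbot having a conversation with a human on {topic}.")
    # system prompt first, then accumulated history, then the latest input
    return [system] + list(history) + [("human", human_input)]

history = [("ai", "How can I help you?"),
           ("human", "Tell me about engines.")]
final = assemble_messages("car", history, "What about electric cars?")
for role, text in final:
    print(role, ":", text)
```

The {topic} and {human_input} variables must both be supplied when the chain is invoked (see Step 7), while {history} is filled in automatically from the message history.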

Step 3 : Create chain with prompt and model

from langchain_openai import ChatOpenAI

# return_openai_key() is a helper (defined elsewhere) that returns your OpenAI API key
chain = prompt | ChatOpenAI(openai_api_key=return_openai_key(), max_tokens=20)

Step 4 : Add message history to chain

To get a context-aware response from the AI, the chain needs both the accumulated chat history and the new user input. RunnableWithMessageHistory wraps the chain to handle this automatically.

Using RunnableWithMessageHistory
The prompt used to create the chain accepts a list of BaseMessages (via MessagesPlaceholder), so we can wrap it with RunnableWithMessageHistory.

from langchain_core.runnables.history import RunnableWithMessageHistory

chain_with_history = RunnableWithMessageHistory(
    chain,
    # Return the StreamlitChatMessageHistory for a given session id
    # (this simple example returns the same shared history for every session)
    lambda session_id: msgs,
    input_messages_key="human_input",  # key that carries the latest user input
    history_messages_key="history",    # key that carries the chat history
)
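In a real multi-user app, the factory function passed to RunnableWithMessageHistory would hand each session its own history rather than the single shared msgs. A plain-Python sketch of that idea (store and get_history are illustrative names, and plain lists stand in for history objects):

```python
# Map each session id to its own history; illustration only.
store = {}

def get_history(session_id):
    # setdefault creates a fresh (empty) history the first time a session appears
    return store.setdefault(session_id, [])

get_history("alice").append(("human", "hi"))
get_history("bob").append(("human", "hello"))
get_history("alice").append(("ai", "How can I help you?"))
print(len(get_history("alice")), len(get_history("bob")))  # 2 1
```

Because get_history always returns the same list for a given session id, each user's conversation accumulates independently.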

Step 5 : Invoke with config

When a chain keeps message history, each user's conversation must stay separate, so we configure the invocation with a session_id.

Example: If you're using a framework like Flask for a web application, you might read the session ID dynamically and pass it into the configuration:

from flask import session
config = {"configurable": {"session_id": session.sid}}

(Note: the sid attribute comes from the Flask-Session extension, not core Flask.)

config = {"configurable": {"session_id": "<SESSION_ID>"}}

This ensures the chat messages are retrieved from the right session. For a single-user demo, any fixed string will do:

config = {"configurable": {"session_id": "any"}}

Step 6 : Get input from user and write in chat window

Streamlit events are triggered by user input:

if prompt := st.chat_input():
    st.chat_message("human").write(prompt)

Step 7 : Get response from AI

response = chain_with_history.invoke(
    {"topic": "car", "human_input": prompt},
    config,
)

(In the full script, Steps 7 and 8 belong inside the if block from Step 6, so they run only after the user submits input.)

Step 8 : Write response in the chat window

st.chat_message("ai").write(response.content)

Running streamlit

streamlit run your_script.py

Important Note:

Streamlit reruns the entire script from top to bottom on every user interaction. The conversation still feels stateful because StreamlitChatMessageHistory stores its messages in st.session_state, which persists across reruns.
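A plain-Python sketch of why this works: a per-session dict (standing in here for st.session_state) outlives each "run" of the script, so messages accumulate across reruns. All names below are illustrative:

```python
# Stand-in for st.session_state, which Streamlit keeps alive between reruns.
session_state = {}

def run_script(user_input):
    # The whole "script" executes again on every interaction...
    history = session_state.setdefault("messages", [])
    if user_input:
        history.append(user_input)
    return list(history)

run_script("hello")                    # first interaction
result = run_script("how are you?")    # rerun: earlier message is still there
print(result)                          # ['hello', 'how are you?']
```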
