Unlike prompts to LLMs, which are plain strings, prompts to chat models are a list of messages. Throughout this post we will use OpenAI's Chat Completions API as the chat model.
Types of Messages
Messages can be associated with one of three roles:
1. AI (assistant)
- the model's response to each prompt from the user
2. System
- sets the overall context for the conversation, e.g. "You are a salesperson"
3. Human
- the prompt given by the user
The text associated with each role is referred to as its "content".
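To make the roles concrete, the same structure expressed as a raw message list for OpenAI's Chat Completions API looks roughly like this (OpenAI names the human role "user" and the AI role "assistant"; the example contents here are made up):
messages = [
    {"role": "system", "content": "You are a salesperson"},    # sets the context
    {"role": "user", "content": "Do you have this in blue?"},  # the user's prompt
    {"role": "assistant", "content": "Yes, we do!"},           # the model's reply
]
LangChain builds this list for us from the message objects described below.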
Using Chat Models without templating - BaseMessage
It is possible to use chat models without templating, as follows:
!pip install langchain-openai langchain
from langchain_openai import ChatOpenAI
from langchain_core.messages import SystemMessage, HumanMessage

# openai_api_key is assumed to hold your OpenAI API key
chat = ChatOpenAI(
    temperature=0,
    openai_api_key=openai_api_key
)

messages = [
    SystemMessage(
        content="You are a helpful assistant that translates English to French."
    ),
    HumanMessage(
        content="Translate this sentence from English to French. Hello"
    ),
]
response = chat.invoke(messages)
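invoke() returns an AIMessage; its content attribute holds the generated text:
print(response.content)  # likely something like "Bonjour"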
ChatPromptTemplate
This is the class used to template prompts for chat models. The two basic steps are:
1. Create a ChatPromptTemplate object using from_messages(), passing the prompt associated with each role.
2. Format the prompts into the list of messages to be passed to the chat model using format_messages().
Using from_messages()
1. List of 2-tuples
In the first form we pass a list [] of tuples (), each tuple holding the pair (role, content), where both role and content are strings.
from langchain_core.prompts import ChatPromptTemplate

chat_template = ChatPromptTemplate.from_messages(
    [
        ("system", "You are a bot created to act as customer support"),
        ("human", "Hello, I need some help"),
        ("ai", "Hi, how can I help you?"),
    ]
)
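Since this template has no input variables, format_messages() can be called without arguments; each tuple is converted into the corresponding message type:
messages = chat_template.format_messages()
print(messages)  # [SystemMessage(...), HumanMessage(...), AIMessage(...)]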
2. Using from_template()
from langchain_core.prompts import ChatPromptTemplate, HumanMessagePromptTemplate, SystemMessagePromptTemplate

chat_template = ChatPromptTemplate.from_messages(
    [
        SystemMessagePromptTemplate.from_template("You write technical blogs"),
        HumanMessagePromptTemplate.from_template("Give some suggestions for latest in {topic}"),
    ]
)
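Here {topic} is a template variable that is filled in when the messages are formatted, for example:
prompt = chat_template.format_messages(topic="GAI")
print(prompt)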
3. Using BaseMessage
Note that plain BaseMessage objects (SystemMessage, HumanMessage, etc.) carry static content: they are passed through as-is by format_messages(), so their text cannot contain template variables.
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.messages import SystemMessage, HumanMessage

chat_template = ChatPromptTemplate.from_messages(
    [
        SystemMessage(content="You write technical blogs"),  # static string content
        HumanMessage(content="Give some suggestions for latest in GAI"),
    ]
)
prompt = chat_template.format_messages()  # no variables to fill
print(prompt)
4. Combining BaseMessage and MessagePromptTemplate
To mix static messages with templated ones, combine BaseMessage objects with MessagePromptTemplate objects:
from langchain_core.prompts import ChatPromptTemplate, HumanMessagePromptTemplate
from langchain_core.messages import SystemMessage

chat_template = ChatPromptTemplate.from_messages(
    [
        SystemMessage(content="You write technical blogs"),  # static string content
        HumanMessagePromptTemplate.from_template("Give some suggestions for latest in {topic}"),
    ]
)
prompt = chat_template.format_messages(topic="GAI")
print(prompt)
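The formatted output should look roughly like this (the exact representation can vary between LangChain versions):
# [SystemMessage(content='You write technical blogs'),
#  HumanMessage(content='Give some suggestions for latest in GAI')]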
Using format_messages()
This function is called on a ChatPromptTemplate object, with the input variables passed as keyword arguments. For the template defined above:
chat_prompt = chat_template.format_messages(topic="GAI")
print(chat_prompt)
Using invoke()
PromptTemplate and ChatPromptTemplate implement the Runnable interface, which means we can call invoke() and the other LCEL methods on them.
For ChatPromptTemplate, the input to invoke() is a dictionary of the input variables and the output is a ChatPromptValue.
The following code shows how to get the final response both with and without a ChatPromptValue.
from langchain_core.prompts import ChatPromptTemplate

# (role, content) tuples are used so that both {content_type} and {topic} are
# treated as template variables (plain BaseMessage content would be static)
chat_template = ChatPromptTemplate.from_messages(
    [
        ("system", "You write {content_type} blogs"),
        ("human", "Give some suggestions for latest in {topic}"),
    ]
)

# format_messages() returns a list of messages
chat_prompt = chat_template.format_messages(topic="GAI", content_type="technical")
print(chat_prompt)

# invoke() returns a ChatPromptValue
chat_prompt_val = chat_template.invoke({"content_type": "technical", "topic": "GAI"})
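# A ChatPromptValue can also be converted explicitly if needed:
print(chat_prompt_val.to_messages())  # the underlying list of messages
print(chat_prompt_val.to_string())    # the messages rendered as one string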
from langchain_openai import ChatOpenAI

chat = ChatOpenAI(
    temperature=0,
    openai_api_key=openai_api_key,
    max_tokens=20  # limit the length of the reply
)
# Invoke with prompt messages directly
chat_response = chat.invoke(chat_prompt)
print(chat_response)
# Invoke with ChatPromptValue
chat_response = chat.invoke(chat_prompt_val)
print(chat_response)
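Because both the template and the model are Runnables, they can also be chained with the LCEL pipe operator. A minimal sketch, reusing the chat_template and chat objects defined above:
chain = chat_template | chat
chat_response = chain.invoke({"content_type": "technical", "topic": "GAI"})
print(chat_response.content)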