Posts

Showing posts from January, 2024

Retrieval 1 - Steps to Perform RAG

As seen in the link below, LLMs are trained on large amounts of data: https://gaillms.blogspot.com/2024/01/training-llm-dataset.html But this data does not include proprietary data or data that is not freely available on the internet. When we give prompts to an LLM, it replies based on the data it was trained on. The problem arises when we want answers about data the LLM was not trained on. Two options are generally considered for this purpose:
1. Model Fine-Tuning
2. Retrieval Augmented Generation (RAG)
Based on our requirements we can decide between the two. Our topic of discussion is RAG.
Retrieval - retrieving external data such as PDFs, HTML, videos, etc.
Generation - generating output using the external data retrieved.
The steps used in RAG are as follows: Why do we need RAG? Consider a PDF document with 1000 pages about my company. This is the context within which we pose questions to the LLM. That is, the prompt expects answers based on the document. For...
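The retrieve-then-generate flow described above can be sketched in plain Python. This is only an illustrative toy: real RAG pipelines use embeddings and a vector store for retrieval, whereas here simple word overlap stands in, and the chunking, scoring, and prompt wording are all assumptions made for the demo.

```python
# Toy sketch of the RAG flow: split a document, retrieve relevant
# chunks, and augment the prompt before it goes to the LLM.
# Real pipelines replace word overlap with embedding similarity.

def split_into_chunks(text, chunk_size=50):
    """Split a document into fixed-size word chunks."""
    words = text.split()
    return [" ".join(words[i:i + chunk_size])
            for i in range(0, len(words), chunk_size)]

def retrieve(chunks, question, top_k=1):
    """Rank chunks by word overlap with the question (toy retriever)."""
    q_words = set(question.lower().split())
    scored = sorted(chunks,
                    key=lambda c: len(q_words & set(c.lower().split())),
                    reverse=True)
    return scored[:top_k]

def build_prompt(question, context_chunks):
    """Augment the prompt with the retrieved context."""
    context = "\n".join(context_chunks)
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

document = ("Our company was founded in 1990. "
            "The head office is in Chennai. We make industrial sensors.")
chunks = split_into_chunks(document, chunk_size=8)
prompt = build_prompt("Where is the head office?",
                      retrieve(chunks, "Where is the head office?"))
print(prompt)
```

The augmented prompt, not the raw question, is what gets sent to the LLM, which is why the model can answer about data it was never trained on.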

Prompt - ChatPromptTemplate

Unlike prompts to LLMs, which are strings, prompts to chat models must be a list of messages. We will assume the model to be OpenAI's chat completion API and proceed.
Types of Messages: messages can be associated with three roles:
1. AI assistant - the response to each prompt from the user
2. System - sets the overall environment for the conversation, e.g. "You are a salesperson"
3. Human - the prompt given by the user
The prompt associated with each role is referred to as its "content".
Using Chat Models without templating - BaseMessage
It is possible to use chat models without templating as follows:
!pip install langchain-openai langchain
from langchain_openai import ChatOpenAI
chat = ChatOpenAI(
    temperature=0,
    openai_api_key=openai_api_key
)
from langchain_core.messages import SystemMessage, HumanMessage
messages = [
    SystemMessage(
        content=...
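The core idea behind a chat prompt template is simple enough to sketch without LangChain: hold a list of (role, content) pairs whose content contains placeholders, and fill the placeholders in per request. The class below is purely illustrative; its name and its `format` method are assumptions for the demo, not LangChain's actual `ChatPromptTemplate` API.

```python
# Toy sketch of chat-prompt templating: a list of (role, content)
# pairs with {placeholders} that get substituted per request.
# Illustrative only; LangChain's ChatPromptTemplate provides this for real.

class SimpleChatTemplate:
    def __init__(self, messages):
        # messages: list of (role, content_template) pairs
        self.messages = messages

    def format(self, **kwargs):
        """Return concrete role/content messages with placeholders filled."""
        return [{"role": role, "content": content.format(**kwargs)}
                for role, content in self.messages]

template = SimpleChatTemplate([
    ("system", "You are a helpful assistant that translates English to {language}."),
    ("human", "Translate this sentence: {sentence}"),
])
messages = template.format(language="French", sentence="Hello")
print(messages)
```

Templating becomes useful once the same system/human structure is reused with different values, instead of rebuilding the message list by hand each time.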

My first OpenAI with Langchain v 0.1.0

Github Link: To understand how LangChain works and how it interacts with LLMs, let us start with a simple sample program that sends a prompt to an LLM and gets a response.
Step 1: Install the required packages.
!pip install langchain-openai langchain
Step 2: Interact with OpenAI.
from langchain_openai import ChatOpenAI
chat = ChatOpenAI(
    temperature=0,
    openai_api_key=openai_api_key
)
Step 3: Structure the prompt as a list of messages.
from langchain_core.messages import SystemMessage, HumanMessage
messages = [
    SystemMessage(
        content="You are a helpful assistant that translates English to French."
    ),
    HumanMessage(
        content="Translate this sentence from English to French. Hello"
    ),
]
Step 4: Pass the messages to the default GPT model (gpt-3.5-turbo) and get the response.
response = chat.invoke(messages)
Step 5: Print the response ...
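The five steps above can be exercised end to end without an OpenAI key by standing in a tiny stub for the chat model. The stub class, its canned "Bonjour" reply, and the `translate` helper are all assumptions for the demo; with a real key, `ChatOpenAI` slots in where the stub is and the surrounding flow stays the same.

```python
# Consolidated sketch of the five steps, runnable offline.
# FakeChatModel is a stand-in for ChatOpenAI so no API key is needed.

class FakeChatModel:
    """Stub chat model: returns a canned reply regardless of input."""
    def invoke(self, messages):
        # A real model would generate this from the message list.
        return {"role": "ai", "content": "Bonjour"}

def translate(chat, sentence):
    """Build the system/human message list and invoke the model."""
    messages = [
        {"role": "system",
         "content": "You are a helpful assistant that translates English to French."},
        {"role": "human",
         "content": f"Translate this sentence from English to French. {sentence}"},
    ]
    return chat.invoke(messages)["content"]

print(translate(FakeChatModel(), "Hello"))
```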