No doubt the term "Gen AI" has grabbed the attention of the whole world, and everyone now wants to try out its potential. ChatGPT and Google Gemini have become everyone's personal assistants for drafting content, gathering ideas, finding potential solutions to technical errors, and more.
It has become a topic of discussion in every company, and a goal for individuals before the next review meeting 😉
But it is interesting to understand how these things actually work.
So we tried out our own SAP AI Core to get some hands-on experience by creating a chat assistant.
Purpose: to use the 'generative-ai-hub-sdk' to access the 'gpt-35-turbo' LLM from SAP's Generative AI Hub in SAP AI Core.
A few prerequisite steps are required:
1. Provision the SAP AI Core entitlement in the SAP BTP cockpit, create the corresponding instance with the sap-internal service plan, and generate a service key.
2. Use the service key details for authentication: the clientid and clientsecret are sent to the auth_url to generate an Auth Token. We saved these values in a .env file.
3. Create the deployment configuration with a curl command, providing the executableId, modelName and modelVersion. The response contains a configurationid, which is then passed to start the deployment; that call in turn returns a deploymentId.
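As a rough sketch of step 2, the .env file holds the values copied from the service key (all values below are placeholders; the variable names follow the generative-ai-hub-sdk convention, so check the SDK documentation for your version):

```
AICORE_AUTH_URL=<url from service key>/oauth/token
AICORE_CLIENT_ID=<clientid from service key>
AICORE_CLIENT_SECRET=<clientsecret from service key>
AICORE_BASE_URL=<AI_API_URL from service key>/v2
AICORE_RESOURCE_GROUP=default
```

And step 3 might look roughly like the following, assuming $TOKEN holds the Bearer token obtained from the auth_url and the scenario/executable ids used by the generative AI hub (verify these against your tenant before use):

```sh
# Create the deployment configuration for gpt-35-turbo
curl -s -X POST "$AI_API_URL/v2/lm/configurations" \
  -H "Authorization: Bearer $TOKEN" \
  -H "AI-Resource-Group: default" \
  -H "Content-Type: application/json" \
  -d '{
        "name": "gpt-35-turbo-config",
        "executableId": "azure-openai",
        "scenarioId": "foundation-models",
        "parameterBindings": [
          {"key": "modelName", "value": "gpt-35-turbo"},
          {"key": "modelVersion", "value": "latest"}
        ]
      }'

# The "id" in the response is the configurationid; pass it to start the deployment,
# which returns the deploymentId
curl -s -X POST "$AI_API_URL/v2/lm/deployments" \
  -H "Authorization: Bearer $TOKEN" \
  -H "AI-Resource-Group: default" \
  -H "Content-Type: application/json" \
  -d '{"configurationId": "<configurationid from previous call>"}'
```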
Once the desired LLM is deployed, it can understand text-based context: it is built using deep learning techniques that capture complex patterns in language, and its behaviour can be tuned through various parameters.
Next, we went ahead with chat completion using LangChain PromptTemplates, and also tried adding conversation memory with ConversationBufferWindowMemory from langchain.memory.
ChatPromptTemplate helps to structure and format prompts for designing multi-turn interactions.
SystemMessagePromptTemplate sets the initial context or behaviour for the AI.
HumanMessagePromptTemplate represents the user's input.
AIMessagePromptTemplate represents the AI's response.
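Putting the pieces together, the chat assistant might look roughly like this. It is a minimal sketch, assuming the generative-ai-hub-sdk's LangChain integration exposes an init_llm factory (which reads the AICORE_* credentials from the environment) and the classic LangChain LLMChain/memory APIs; exact import paths can differ between SDK and LangChain versions:

```python
from langchain.chains import LLMChain
from langchain.memory import ConversationBufferWindowMemory
from langchain.prompts import (
    ChatPromptTemplate,
    HumanMessagePromptTemplate,
    MessagesPlaceholder,
    SystemMessagePromptTemplate,
)
# Assumption: the SDK's LangChain-compatible model factory; it resolves
# the deployed gpt-35-turbo model via the AICORE_* environment variables.
from gen_ai_hub.proxy.langchain.init_models import init_llm

llm = init_llm("gpt-35-turbo", temperature=0.0, max_tokens=256)

prompt = ChatPromptTemplate.from_messages([
    # Sets the initial context/behaviour for the AI
    SystemMessagePromptTemplate.from_template(
        "You are a helpful assistant for SAP BTP questions."),
    # Previous turns are injected here by the memory
    MessagesPlaceholder(variable_name="history"),
    # The user's current input
    HumanMessagePromptTemplate.from_template("{input}"),
])

# Keep only the last 3 exchanges in the prompt window
memory = ConversationBufferWindowMemory(k=3, return_messages=True)

chain = LLMChain(llm=llm, prompt=prompt, memory=memory)
print(chain.predict(input="What is SAP AI Core?"))
```

Because the memory window is k=3, older turns silently drop out of the prompt, which keeps token usage bounded in long conversations.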
'temperature', 'max_tokens', and 'top_p' are model parameters that determine the behaviour of the model and help fine-tune its output.
A higher temperature (e.g., 0.7 to 1.0) makes the output more random and creative, while a lower temperature (e.g., 0.0 to 0.3) makes the output more focused and deterministic.
max_tokens determines the maximum number of tokens (words or word pieces) that the model can generate in the response.
top_p controls the cumulative probability cutoff for nucleus sampling: the model samples only from the smallest set of most likely tokens whose probabilities add up to top_p.
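To make the two sampling parameters concrete, here is a toy, self-contained illustration of how temperature and top_p shape which token gets picked from a small made-up distribution (this is not the SDK's code, just the standard decoding math):

```python
import math
import random

def sample_token(logits, temperature=1.0, top_p=1.0, rng=None):
    """Sample one token from a {token: logit} dict using temperature
    scaling followed by nucleus (top-p) filtering."""
    rng = rng or random.Random(0)  # fixed seed for a reproducible demo
    # Temperature scaling: lower values sharpen the distribution,
    # higher values flatten it (more random/creative output).
    scaled = {t: l / temperature for t, l in logits.items()}
    m = max(scaled.values())
    probs = {t: math.exp(l - m) for t, l in scaled.items()}
    z = sum(probs.values())
    probs = {t: p / z for t, p in probs.items()}
    # Nucleus sampling: keep the smallest set of most likely tokens
    # whose cumulative probability reaches top_p, then renormalise.
    kept, cum = {}, 0.0
    for t, p in sorted(probs.items(), key=lambda kv: -kv[1]):
        kept[t] = p
        cum += p
        if cum >= top_p:
            break
    z = sum(kept.values())
    r, acc = rng.random(), 0.0
    for t, p in kept.items():
        acc += p / z
        if r <= acc:
            return t
    return t

logits = {"the": 2.0, "a": 1.0, "rare": -1.0}
# Near-zero temperature is effectively greedy: the top token always wins.
print(sample_token(logits, temperature=0.05))  # -> the
```

With a high temperature the scaled logits flatten out and "rare" gets a real chance; with a low top_p the tail tokens are cut off before sampling even happens.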
Output:
Maybe a small start for people who are yet to venture out…
Learning References:
Learning how to use the SAP AI Core service on SAP Business Technology Platform
Prompt LLMs in the generative AI hub in SAP AI Core & Launchpad | SAP Tutorials
#SAP
#SAPTechnologyblog