Take a deep dive into the Einstein Trust Layer and learn how your data flows through it to create rich, meaningful, and trusted responses …
Here are highlights from the article Inside the Einstein Trust Layer:
1. Generating content with large language models (LLMs)
– Can be complex or simple, depending on whether you train your own model or use existing LLMs through APIs
– The Einstein 1 Platform provides a secure entry point to LLM offerings from AI partners
2. Three ways to generate content within Salesforce
– CRM solutions like Sales Cloud and Service Cloud use generative AI
– Einstein Reply Recommendations helps users write chat responses
– Einstein Copilot Studio includes Prompt Builder for generating text responses and emails
– Developers can make calls to Einstein using the Einstein LLM Generations API
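To make the API route above concrete, here is a minimal sketch of how a developer-side call might be assembled. The base URL, endpoint path, model name, and payload fields are illustrative assumptions, not the documented Einstein LLM Generations API contract; only the request is built here, nothing is sent.

```python
import json

# Hypothetical base URL and model name, for illustration only.
API_BASE = "https://api.example-salesforce-host.com/einstein/v1"

def build_generation_request(prompt: str, access_token: str) -> dict:
    """Assemble a description of an HTTP text-generation request.

    The field names ("prompt") and endpoint shape are assumptions;
    consult the official Einstein LLM Generations API docs for the
    real contract.
    """
    return {
        "method": "POST",
        "url": f"{API_BASE}/models/example-model/generations",
        "headers": {
            "Authorization": f"Bearer {access_token}",
            "Content-Type": "application/json",
        },
        "body": json.dumps({"prompt": prompt}),
    }

req = build_generation_request(
    "Summarize this account's recent activity.", "00D...token"
)
print(req["method"], req["url"])
```

In a real integration the same request would be routed through the Trust Layer rather than sent directly to a model provider.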
3. The Trust Layer
– Secure intermediary for user interactions with LLMs
– Masks personally identifiable information (PII)
– Checks output for toxicity
– Ensures data privacy; prevents data persistence or use in further training
– Standardizes differences among model providers
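The PII-masking step can be pictured as swapping sensitive values for placeholder tokens before the prompt leaves the platform, then restoring them in the response. This is a conceptual sketch only; the real Trust Layer implementation and detection patterns are Salesforce-internal, and the regexes below are deliberately simplified.

```python
import re

# Simplified, illustrative PII patterns (not Salesforce's actual detectors).
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def mask_pii(text: str):
    """Replace PII with placeholder tokens.

    Returns the masked text plus a token-to-original map so the values
    can be restored ("demasked") in the model's response.
    """
    replacements = {}
    for label, pattern in PII_PATTERNS.items():
        for i, match in enumerate(pattern.findall(text)):
            token = f"[{label}_{i}]"
            replacements[token] = match
            text = text.replace(match, token)
    return text, replacements

def unmask(text: str, replacements: dict) -> str:
    """Restore the original values in text returned by the model."""
    for token, original in replacements.items():
        text = text.replace(token, original)
    return text

masked, mapping = mask_pii("Reach Ada at ada@example.com or 555-010-0199.")
print(masked)  # → "Reach Ada at [EMAIL_0] or [PHONE_0]."
```

The key design point is that the LLM provider only ever sees the tokens, while the mapping needed to restore the real values never leaves the trust boundary.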
4. Securing a prompt before generation
– The first step is providing a prompt to the Trust Layer
– The prompt can come from CRM apps, from Prompt Builder, or be passed in from Apex
– The prompt is the input given to the model to generate content
5. Example of prompt created in Prompt Builder
– The prompt takes a contact record and creates an account overview
– Parameters include customer name, customer description, lifetime spend
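The Prompt Builder example above can be sketched as a simple merge-field template. The field names mirror the parameters listed (customer name, customer description, lifetime spend), but the template text and rendering function are illustrative, not Prompt Builder's actual syntax.

```python
# Illustrative template; Prompt Builder uses its own merge-field syntax.
TEMPLATE = (
    "Write a brief account overview for {customer_name}. "
    "Background: {customer_description}. "
    "Lifetime spend: ${lifetime_spend:,.2f}."
)

def render_prompt(record: dict) -> str:
    """Merge record fields into the prompt template."""
    return TEMPLATE.format(**record)

prompt = render_prompt({
    "customer_name": "Acme Corp",                     # hypothetical record
    "customer_description": "maker of industrial anvils",
    "lifetime_spend": 125000,
})
print(prompt)
```

The rendered prompt is what then flows into the Trust Layer's masking and safety checks before reaching the model.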
You can read it here: https://sfdc.blog/JGyDU
Source: developer.salesforce.com