The Salesforce Einstein Trust Layer is a security framework designed to protect customer data while enabling trusted use of generative AI tools. It combines secure data retrieval, dynamic grounding, data masking, and zero data retention to preserve privacy and security throughout the AI's interaction with company data. In practice, it exposes only the information a given request needs through secure, controlled processes, and masks personally identifiable information (PII) before prompts reach external generative AI models.
(Image courtesy of Salesforce.)
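To make the masking step concrete, here is a minimal Python sketch of the idea: detected PII is swapped for placeholder tokens before a prompt leaves the trust boundary, and the real values are restored only in the response shown to the user. The regex patterns and function names are illustrative assumptions; the actual Trust Layer uses far more sophisticated, model-based entity detection.

```python
import re

# Illustrative patterns only (an assumption for this sketch); the real
# Trust Layer uses robust entity detection, not simple regexes.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.\w+"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
    "SSN":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def mask_pii(prompt: str) -> tuple[str, dict[str, str]]:
    """Replace detected PII with placeholder tokens before the prompt
    leaves the trust boundary, keeping a mapping so the real values
    can be restored later."""
    mapping: dict[str, str] = {}
    for label, pattern in PII_PATTERNS.items():
        for i, value in enumerate(dict.fromkeys(pattern.findall(prompt))):
            token = f"<{label}_{i}>"
            mapping[token] = value
            prompt = prompt.replace(value, token)
    return prompt, mapping

def unmask(response: str, mapping: dict[str, str]) -> str:
    """Restore the original values in the model's response."""
    for token, value in mapping.items():
        response = response.replace(token, value)
    return response

masked, mapping = mask_pii("Email jane@acme.com, SSN 123-45-6789.")
print(masked)  # Email <EMAIL_0>, SSN <SSN_0>.
```

The key design point is that the external model only ever sees the tokens; the mapping that rehydrates them never leaves Salesforce's trust boundary.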
Dynamic grounding is the key mechanism here: it enriches prompts with data from Salesforce tools such as Flow, or from external sources via APIs and Apex functions, tailoring responses to the relevant business context while limiting exposure of sensitive data. A prompt defense system safeguards against unauthorized prompt modifications (such as prompt injection) and helps ensure output reliability. The Einstein Trust Layer also provides an "LLM Gateway," which routes requests securely to large language models (LLMs) from various providers, starting with OpenAI, and enforces a zero-retention policy: prompts and outputs are neither stored nor reused for training by the external models.
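The sketch below illustrates this grounding-plus-gateway flow under stated assumptions: the CrmRecord class, the ground_prompt helper, and the X-No-Store/X-No-Train headers are hypothetical stand-ins for the idea, not Salesforce's actual API.

```python
from dataclasses import dataclass

@dataclass
class CrmRecord:
    """A minimal stand-in for a Salesforce record the user can access."""
    name: str
    stage: str
    amount: float

def ground_prompt(template: str, record: CrmRecord) -> str:
    """Dynamic grounding: merge only the fields the template references,
    so the prompt never carries more CRM data than it needs."""
    return template.format(name=record.name, stage=record.stage,
                           amount=record.amount)

def send_via_gateway(prompt: str) -> dict:
    """Sketch of an LLM Gateway request. The header names are invented
    here to model a zero-retention contract with the provider."""
    return {
        "provider": "openai",      # first supported provider
        "prompt": prompt,          # already grounded and masked
        "headers": {
            "X-No-Store": "true",  # data must not be retained
            "X-No-Train": "true",  # data must not be used for training
        },
    }   # a real gateway would dispatch this over HTTPS

opp = CrmRecord(name="Acme Renewal", stage="Negotiation", amount=50_000.0)
prompt = ground_prompt(
    "Draft a follow-up email for opportunity {name} "
    "(stage: {stage}, amount: ${amount:,.0f}).", opp)
print(send_via_gateway(prompt)["prompt"])
```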
The key aspects include:
Data Protection and Privacy: Customer data is protected by grounding AI responses in CRM data, masking sensitive information, and partnering with LLM providers such as OpenAI under strict zero-retention agreements.
Bias and Safety Mitigation: To prevent harmful or biased outputs, the Trust Layer includes toxicity detection, feedback loops, and regular audits, promoting ethical AI use across Salesforce applications from Sales GPT to Tableau GPT (see the sketch after this list).
Transparency and Control: The Trust Layer emphasizes transparency by marking AI-generated content and empowering users with tools to edit or control AI outputs before they’re shared externally.
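As a rough illustration of the last two points, the sketch below gates a model draft behind a toxicity score, labels it as AI-generated, and holds it for human review before external sharing. The scorer, blocklist, and threshold are placeholder assumptions; production systems use trained classifiers.

```python
def toxicity_score(text: str) -> float:
    """Placeholder scorer (an assumption for this sketch); the real
    Trust Layer uses trained toxicity classifiers, not a blocklist."""
    blocklist = {"idiot", "stupid"}
    words = text.split()
    hits = sum(w.strip(".,!?").lower() in blocklist for w in words)
    return min(1.0, hits / max(len(words), 1) * 10)

def review_output(text: str, threshold: float = 0.5) -> dict:
    """Gate the model's draft: score it, mark it as AI-generated, and
    require a human edit before it is shared externally."""
    score = toxicity_score(text)
    return {
        "content": text,
        "ai_generated": True,           # transparency: always labeled
        "toxicity": score,
        "blocked": score >= threshold,
        "requires_human_review": True,  # user edits before sharing
    }

draft = review_output("Thanks for your patience; here is the update.")
print(draft["blocked"], draft["ai_generated"])  # False True
```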