Ravenna AI processes your data to answer questions, analyze tickets, and automate workflows. All AI processing uses enterprise-grade security, with no training on your data.

Data protection

All data is encrypted in transit and at rest. AI model providers (Anthropic, OpenAI, Google) do not train on your data or retain prompts and responses. SOC 2 Type II certified.

Control & permissions

Workspace admins control which channels have AI agents, restrict access, and review AI usage logs. Users can request AI not respond to their messages.

Transparency

AI responses cite knowledge base sources. Conversation logs help you monitor quality and accuracy. Limitations are clearly documented to set appropriate expectations.

Understanding AI processing

How Ravenna uses AI

AI features automate support tasks and provide intelligent assistance:
  • Answer questions using your knowledge base articles and documentation
  • Create tickets with fields pre-filled based on conversation context
  • Trigger workflows to automate approval processes and provisioning
  • Classify requests to route tickets to appropriate teams
  • Generate insights from ticket patterns and trends
  • Execute tool actions through integrated services like Fleet or HubSpot

What data is processed

AI features process the data necessary to provide intelligent responses:
Ticket data:
  • Ticket titles, descriptions, and comments
  • Ticket metadata (status, priority, assignee)
  • Historical ticket patterns and resolution data
Knowledge base content:
  • Articles and documentation you’ve connected to agents
  • Document metadata and structure
Conversation data:
  • User messages in connected Slack channels
  • Message context and thread history
  • User mentions and reactions
Integration data:
  • Tool execution results from connected integrations
  • External system data accessed through tool actions
Only data explicitly connected to agents (through knowledge bases, channels, or tool permissions) is processed by AI. Agents cannot access data outside their configured scope.

Data flow

AI processing follows a secure data flow:
  1. User sends message in Slack or creates ticket
  2. Agent retrieves context from configured knowledge bases and conversation history
  3. Data sent to AI model (Anthropic Claude, OpenAI, or Google Vertex AI) with relevant context
  4. Model generates response using provided context without accessing external data
  5. Response delivered to user with citations to knowledge base sources
  6. Conversation logged in Ravenna for monitoring and debugging
AI models process data in real-time and do not retain your data after generating responses. All API calls use encrypted connections.
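The sketch below is a simplified, self-contained illustration of this retrieval-augmented flow. The in-memory knowledge base, the stand-in model call, and all function names are assumptions for illustration only; they are not Ravenna's implementation or API.

```python
import re

# Toy stand-ins for the pieces described in the data flow above; illustrative only.
KNOWLEDGE_BASE = {
    "vpn-setup": "Install the VPN client from the IT portal, then sign in with SSO.",
    "password-reset": "Use the self-service portal to reset your password.",
}
CONVERSATION_LOG = []  # stands in for Ravenna's conversation logging (step 6)

def retrieve_context(question: str) -> dict:
    """Step 2: pull only articles within the agent's configured scope."""
    words = set(re.findall(r"[a-z]+", question.lower()))
    return {
        doc_id: text
        for doc_id, text in KNOWLEDGE_BASE.items()
        if words & set(doc_id.split("-"))
    }

def call_model(question: str, context: dict) -> str:
    """Steps 3-4: stand-in for the encrypted API call to Claude, GPT, or Vertex AI.
    The model answers from the provided context and retains nothing afterwards."""
    if not context:
        return "Not found: no matching knowledge base article."
    return " ".join(context.values())

def handle_message(question: str) -> str:
    context = retrieve_context(question)                              # step 2
    answer = call_model(question, context)                            # steps 3-4
    reply = f"{answer}\nSources: {', '.join(context) or 'none'}"      # step 5: citations
    CONVERSATION_LOG.append({"question": question, "reply": reply})   # step 6: logging
    return reply

print(handle_message("How do I set up the VPN?"))
```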

Data protection

Security measures

Encryption

All data encrypted in transit using TLS 1.3 and at rest using AES-256 encryption

Access controls

Role-based permissions control AI feature access at workspace and channel levels

Audit logs

Complete audit trail of AI interactions, tool executions, and configuration changes

Compliance

SOC 2 Type II certified with GDPR and CCPA compliance measures

Data retention

Conversation logs:
  • Stored in Ravenna for debugging and quality monitoring
  • Retained according to your organization’s data retention policy
  • Can be exported or deleted upon request
AI model provider data:
  • Prompts and responses not retained by model providers
  • No training on your data by Anthropic, OpenAI, or Google
  • API calls processed in real-time without storage
Knowledge base content:
  • Stored in Ravenna and synchronized with connected sources
  • Not sent to AI models unless explicitly referenced by agents
  • Removed when knowledge base connection is deleted

Data processing agreements

Ravenna maintains Data Processing Agreements (DPAs) with AI model providers:
  • Anthropic (Claude AI models)
  • OpenAI (GPT models)
  • Google (Vertex AI)
These agreements ensure data is processed securely and in compliance with privacy regulations including GDPR, CCPA, and other regional requirements. DPAs specify that providers do not train models on your data or retain data after processing.
Review the full Ravenna Privacy Policy for complete details on data handling and compliance.

Control & permissions

Configure AI features at multiple levels to control data access and agent behavior.

Workspace-level controls

Workspace admins control AI features for the entire workspace:
  • Enable or disable AI agents globally
  • Configure which channels have AI agents deployed
  • Restrict knowledge base access to specific agents
  • Review AI usage logs and conversation history
  • Configure auto-respond settings for email and integration tickets
  • Manage tool permissions for agent integrations

Channel-level controls

Each channel can have independent AI configuration:
  • Deploy specific agent or no agent
  • Connect distinct knowledge base folders
  • Configure custom agent instructions
  • Set channel-specific response behavior
  • Control which tool permissions apply to the agent

User-level controls

Individual users have privacy controls:
  • See which AI features are active in their channels
  • Request AI not respond to their messages
  • Provide feedback on AI responses (thumbs up/down)
  • Request data deletion for their conversations
  • View AI response sources and citations
Changes to agent configuration, knowledge base connections, and permissions are logged in audit trails accessible to workspace admins.
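To make the layering concrete, here is a hypothetical sketch of how workspace, channel, and user settings could be represented. The field names, channel names, and tool identifiers are illustrative assumptions, not Ravenna's actual configuration schema.

```python
# Hypothetical representation of the layered controls described above.
# Field names and values are illustrative, not Ravenna's configuration schema.

workspace_config = {
    "ai_agents_enabled": True,                       # global on/off switch
    "audit_logging": True,                           # config changes land in audit trails
    "auto_respond": {"email": False, "integrations": True},
}

channel_configs = {
    "#it-help": {
        "agent": "it-support-agent",                 # which agent is deployed, if any
        "knowledge_base_folders": ["IT/How-to", "IT/Policies"],
        "tool_permissions": ["fleet.lookup_device"], # tools this agent may execute
    },
    "#design": {
        "agent": None,                               # no agent deployed in this channel
    },
}

user_preferences = {
    "U123": {"ai_responses": "opt_out"},             # user asked the AI not to reply to them
}

def agent_for(channel: str) -> str | None:
    """An agent responds only if AI is enabled globally and deployed in the channel."""
    if not workspace_config["ai_agents_enabled"]:
        return None
    return channel_configs.get(channel, {}).get("agent")

print(agent_for("#it-help"))   # -> it-support-agent
print(agent_for("#design"))    # -> None
```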

Responsible AI usage

Limitations

AI agents have limitations you should understand before deployment:
Known limitations:
  • May generate incorrect information or hallucinate facts not present in knowledge bases
  • Cannot access real-time external data without configured tool integrations
  • Limited by quality and completeness of connected knowledge base content
  • Cannot perform actions requiring human judgment, compliance review, or legal interpretation
  • May misinterpret ambiguous requests or questions lacking sufficient context
  • Cannot guarantee 100% accuracy even when citing knowledge base sources
Configure agents to route complex, sensitive, or ambiguous requests to human agents. Monitor conversation logs to identify patterns that require improved knowledge base content or rule refinement.

Best practices

Review knowledge bases

Audit connected knowledge bases to ensure they don’t contain sensitive information you don’t want AI to access. Remove outdated or incorrect documentation that could lead to wrong answers. Keep knowledge base content current and accurate.

Configure escalation paths

Configure agents to escalate sensitive requests (HR issues, security incidents, legal questions) to humans. Define escalation criteria in agent instructions. Test escalation behavior before full deployment.
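As a concrete illustration, the toy check below shows one way to spot-check escalation behavior before rollout. The topic list and keyword matching are assumptions for illustration; in Ravenna, escalation criteria live in the agent's instructions and rules rather than in code you write.

```python
# Hypothetical spot-check of escalation behavior before full deployment.
# The topic list and keyword check are illustrative, not a Ravenna feature.

SENSITIVE_TOPICS = ("harassment", "phishing", "security incident", "breach",
                    "legal", "termination", "compensation")

def should_escalate(message: str) -> bool:
    """Return True when a request should go to a human instead of the agent."""
    text = message.lower()
    return any(topic in text for topic in SENSITIVE_TOPICS)

# Spot-check representative messages before enabling the agent for everyone.
test_messages = [
    "I want to report a phishing email",     # expect: escalate
    "How do I reset my password?",           # expect: agent answers
    "Question about my termination terms",   # expect: escalate
]
for msg in test_messages:
    print(f"{msg!r} -> {'human' if should_escalate(msg) else 'agent'}")
```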

Monitor response quality

Review conversation logs regularly to assess response quality. Track negative feedback patterns to identify knowledge gaps. Use these metrics to measure performance and identify improvement opportunities.
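A minimal sketch of that review loop, assuming you have exported conversation logs with a topic label and thumbs up/down feedback per response (the field names here are assumptions, not Ravenna's export format):

```python
from collections import Counter

# Illustrative exported log entries; the "topic" and "feedback" fields are assumed.
exported_logs = [
    {"topic": "vpn", "feedback": "down"},
    {"topic": "vpn", "feedback": "down"},
    {"topic": "benefits", "feedback": "up"},
    {"topic": "vpn", "feedback": "up"},
    {"topic": "onboarding", "feedback": "down"},
]

# Count thumbs-down responses per topic to surface knowledge gaps worth fixing first.
negative_by_topic = Counter(
    entry["topic"] for entry in exported_logs if entry["feedback"] == "down"
)
for topic, count in negative_by_topic.most_common():
    print(f"{topic}: {count} negative responses -> review knowledge base coverage")
```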

Start with a pilot

Deploy agents to small channels or pilot groups initially. Expand gradually as you validate response quality and refine configuration. Test thoroughly before enabling auto-respond features.

Educate your team

Train your team on AI capabilities and limitations. Set appropriate expectations about what agents can and cannot do. Provide clear guidance on when to request human assistance.

Maintain documentation

Keep documentation current and comprehensive. Add articles addressing questions that generate “Not found response” results. Update content when product features or policies change.

Support & compliance

Contact information

For questions about AI trust, privacy, or data processing:
  • Privacy inquiries: [email protected]
  • Security questions: Contact your account manager
  • Data processing agreements: Request through your account manager
  • Privacy policy: ravenna.ai/privacy

Compliance resources

  • SOC 2 Type II report: Available upon request through your account manager
  • Data Processing Addendum (DPA): Contact your account manager
  • Subprocessor list: Available at ravenna.ai/subprocessors
  • GDPR compliance documentation: Contact [email protected]

Data subject requests

Users can exercise data privacy rights:
  • Access: Request copy of data processed by AI features
  • Deletion: Request removal of conversation logs and AI-processed data
  • Correction: Request updates to inaccurate information
  • Restriction: Request limits on AI data processing
Submit data subject requests to [email protected] with required identification and specifics about the request.