📄️ Auto-Inject Prompt Caching Checkpoints
Reduce costs by up to 90% by using LiteLLM to auto-inject prompt caching checkpoints.
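As a rough sketch of what auto-injection looks like in a proxy config (assuming the `cache_control_injection_points` parameter from this feature; the model alias and key reference are placeholders):

```yaml
model_list:
  - model_name: claude-sonnet            # placeholder alias
    litellm_params:
      model: anthropic/claude-3-5-sonnet-20240620
      api_key: os.environ/ANTHROPIC_API_KEY
      # Auto-inject a cache_control checkpoint on every system message,
      # so a long, repeated system prompt can hit the provider's prompt cache.
      cache_control_injection_points:
        - location: message
          role: system
```

With this in place, callers send ordinary requests and LiteLLM adds the caching checkpoint for them, rather than each client hand-annotating messages.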
📄️ Using Anthropic File API with LiteLLM Proxy
Use the Anthropic File API through the LiteLLM Proxy.
📄️ Call Gemini Realtime API with Audio Input/Output
Stream audio to and from the Gemini Realtime API. Requires LiteLLM Proxy v1.70.1+.
📄️ Aporia Guardrails with LiteLLM Gateway
In this tutorial, we use the LiteLLM AI Gateway with Aporia to detect PII in requests and profanity in responses.
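A minimal sketch of the guardrails section of a proxy config for this setup, assuming LiteLLM's `guardrail: aporia` integration and placeholder env-var names for the Aporia credentials:

```yaml
guardrails:
  - guardrail_name: "aporia-guard"
    litellm_params:
      guardrail: aporia
      # run alongside the LLM call so both the request (PII) and the
      # response (profanity) can be checked
      mode: "during_call"
      api_key: os.environ/APORIA_API_KEY      # placeholder env var
      api_base: os.environ/APORIA_API_BASE    # placeholder env var
```

The exact mode and credential fields for your Aporia project are covered in the tutorial itself.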
📄️ Presidio PII Masking with LiteLLM - Complete Tutorial
This tutorial will guide you through setting up PII (Personally Identifiable Information) masking with Microsoft Presidio and LiteLLM Gateway. By the end of this tutorial, you'll have a production-ready setup that automatically detects and masks sensitive information in your LLM requests.
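As a sketch of where this lands, a guardrails entry in the proxy config along these lines (assuming LiteLLM's `guardrail: presidio` integration, and that a Presidio analyzer/anonymizer service is reachable; endpoint env-var names are placeholders):

```yaml
guardrails:
  - guardrail_name: "presidio-pii-masking"
    litellm_params:
      guardrail: presidio
      # mask PII before the request ever reaches the LLM provider
      mode: "pre_call"

# Presidio service endpoints, e.g. via environment variables (placeholders):
#   PRESIDIO_ANALYZER_API_BASE=http://localhost:5002
#   PRESIDIO_ANONYMIZER_API_BASE=http://localhost:5001
```

Once configured, a request containing, say, an email address or phone number is rewritten with masked placeholders before forwarding, which is the behavior the tutorial walks through end to end.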