https://www.usevelvet.com

AI GATEWAY for engineers
Develop & deploy AI with confidence
Log requests to your DB, optimize, and run experiments. Just two lines of code to get started.
Get started for free | See demo app -> | Try the sandbox ->
[Illustration: using Velvet's AI gateway with 2 lines of code]

Trusted by innovative engineering teams

How it works
observe: Log requests
query: Analyze usage
optimize: Run experiments
[Illustration: data pipeline from OpenAI to Postgres]

easy onboarding
Add 2 lines of code to get started
Warehouse every OpenAI and Anthropic request to your PostgreSQL database. Use logs to analyze, evaluate, and generate datasets. (See the setup sketch below.)
Get started ->

analyze models
Analyze usage & run experiments
We store a customizable JSON object so you can granularly monitor usage, calculate cost, run evaluations, and fine-tune models.
Learn more ->

reduce costs
Optimize with caching & batching
Enable caching to reduce costs and latency. Get full transparency into OpenAI's Batch and Files APIs using our built-in proxy support.
Learn more ->
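To make the onboarding and caching steps above concrete, here is a minimal sketch of the setup with the OpenAI Python SDK. The gateway URL, header names, and environment variables are assumptions for illustration, not Velvet's documented values; the exact snippet is in the docs at docs.usevelvet.com.

# Sketch only -- gateway URL, header names, and env vars are assumptions,
# not Velvet's documented values; see docs.usevelvet.com for the real setup.
import os
from openai import OpenAI

# The "two lines": point the stock OpenAI client at the Velvet gateway and
# identify your workspace, so every request is warehoused to Postgres.
client = OpenAI(
    base_url="https://gateway.usevelvet.com/v1",  # assumed gateway URL
    default_headers={"x-velvet-api-key": os.environ["VELVET_API_KEY"]},  # assumed header name
)

# Requests are otherwise unchanged; the gateway proxies them to OpenAI and
# logs the request/response JSON. The cache header below is also an assumed name.
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Classify this support ticket."}],
    extra_headers={"velvet-cache-enabled": "true"},  # serve identical requests from cache
)
print(response.choices[0].message.content)

The design point is that nothing else in your application changes: the gateway sits between your app and OpenAI or Anthropic, so logging and caching are switched on at the transport layer rather than in your business logic.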
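Once requests are warehoused, analysis is plain SQL against your own database. The page only says a customizable JSON object is stored, so the table name and JSON paths in this sketch are assumptions; it illustrates the kind of usage and cost query the logs make possible.

# Sketch only -- the table name and JSON paths are assumptions about the
# warehoused shape; the actual schema lives in your own Postgres database.
import os
import psycopg2

conn = psycopg2.connect(os.environ["DATABASE_URL"])
with conn, conn.cursor() as cur:
    # Requests and token usage per model over the last 7 days,
    # e.g. as an input to a COGS calculation.
    cur.execute(
        """
        SELECT response -> 'body' ->> 'model'                               AS model,
               count(*)                                                     AS requests,
               sum((response -> 'body' -> 'usage' ->> 'total_tokens')::int) AS total_tokens
        FROM velvet_logs                              -- assumed table name
        WHERE created_at > now() - interval '7 days'
        GROUP BY 1
        ORDER BY total_tokens DESC
        """
    )
    for model, requests, total_tokens in cur.fetchall():
        print(f"{model}: {requests} requests, {total_tokens} tokens")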
Backend workflows designed for engineers

Customer testimonial: Blaze AI
"We experiment with LLM models, settings, and optimizations. Velvet made it easy to implement logging and caching. And we're storing training sets to eventually fine-tune our own models."
Chirag Mahapatra, CTO, Blaze AI

Customer testimonial: Revo AI
"Velvet gives us a source of truth for what's happening between the Revo copilot and the LLMs it orchestrates. We have the data we need to run evaluations, calculate costs, and quickly resolve issues."
Mehdi Djabri, CEO, Revo.pm

Customer testimonial: Find AI
"Our engineers use Velvet daily. It monitors AI features in production, even opaque APIs like batch. The caching feature reduces costs significantly. And we use the logs to observe, test, and fine-tune."
Philip Thomas, CTO, Find AI

use velvet
Flexible infrastructure for scale
Full data ownership: Log every request to your database. Secure and compliant.
Granular observability: Store data as JSON to gain deep insights into usage, costs, and more.
Powerful analysis: Understand API usage to optimize AI features and resolve problems.
Intelligent caching: Reduce costs and latency with our smart caching system.
Experiment framework: Run experiments on test datasets to optimize outputs at scale.
Dataset generation: Export datasets for fine-tuning models and other batch workflows.

AI gateway
Analyze and optimize your AI features
Free up to 10k requests per month. 2 lines of code to get started.
Try Velvet for free

Q & A

Who is Velvet made for?
Velvet is a tool for engineers building on the OpenAI and Anthropic APIs. We warehouse every request to your PostgreSQL database. Once warehoused, use the logs however you want. Most teams use logs to analyze, evaluate, and fine-tune their AI features. Schedule a call to learn more.

How do I get started?
(1) Create an account at usevelvet.com/register. (2) Read the docs at docs.usevelvet.com. (3) Set your baseURL to the Velvet gateway. (4) Use our database or connect your own. Schedule a call to learn more.

Which models and DBs do you support?
Velvet supports OpenAI and Anthropic endpoints, and warehouses requests to PostgreSQL. Looking for something else? Email us your requirements and we'll add features to our roadmap. Schedule a call, or read the docs. Email team@usevelvet.com.

What are common use cases?
Use logs to analyze, optimize, and fine-tune AI.
Analyze model usage: Query logs to understand usage, troubleshoot problems, calculate COGS, and evaluate models.
Optimize AI features: Enable caching and batching to reduce costs and latency. Test different prompts and models.
Run experiments: Select datasets to run one-off or continuous experiments. Test models, settings, and metrics.
Implement fine-tuning & batch: Generate training sets for fine-tuning. Use the batch API for evaluations, classification, or embeddings.
Email us at team@usevelvet.com.

How much does it cost?
It's free to get started. Create your account at usevelvet.com/register to try it out. See our pricing page. Email team@usevelvet.com, or schedule a call.

Articles from Velvet

Engineering: Why Find AI logs OpenAI requests with Velvet
AI-powered B2B search engine logged 1,500 requests per second. Learn More

Engineering: Create a fine-tuning dataset for gpt-4o-mini
Use Velvet to identify and export a fine-tuning dataset. Learn More

Engineering: Cache LLM requests to reduce latency and costs
Return results in milliseconds and don't waste calls on identical requests. Learn More

velvet | AI gateway | team@usevelvet.com
PRODUCT: Overview, Pricing
USE VELVET: Documentation, Login
COMPANY: Schedule a call, Email us
Terms | Privacy