Monitor your application powered by Anyscale language models to get visibility into what you send to Anyscale, the responses you receive, latency, usage, and errors. By monitoring usage, you can infer cost.
Monitor the input & output, latency, and errors of your LLM provider. Track performance changes across providers and versions of your LLM. Monitor usage to understand cost, rate limits, and general performance.
By tracking key metrics like latency, throughput, error rates, and input & output, you can gain insight into your LangChain app's performance and identify areas for improvement. A minimal instrumentation sketch follows below.
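As a rough illustration of how these metrics can be captured, the sketch below records each Anyscale LLM call as a New Relic custom event using the `newrelic` Python agent. It is a minimal sketch, not the quickstart's own instrumentation: the Anyscale Endpoints base URL, the model name, the `LlmCompletion` event name, and the `newrelic.ini` config path are assumptions you would adapt to your setup.

```python
# A minimal sketch: record latency, token usage, and errors for each
# Anyscale (OpenAI-compatible) completion as a New Relic custom event.
import time

import newrelic.agent
from openai import OpenAI

newrelic.agent.initialize("newrelic.ini")  # assumed path to your agent config
application = newrelic.agent.register_application(timeout=10)

client = OpenAI(
    base_url="https://api.endpoints.anyscale.com/v1",  # assumed Anyscale Endpoints URL
    api_key="YOUR_ANYSCALE_API_KEY",                   # placeholder
)


def monitored_completion(prompt: str, model: str = "meta-llama/Llama-2-70b-chat-hf"):
    """Call the model and report latency, usage, and errors to New Relic."""
    start = time.time()
    response, error = None, None
    try:
        response = client.chat.completions.create(
            model=model,
            messages=[{"role": "user", "content": prompt}],
        )
    except Exception as exc:  # capture provider errors for the error-rate metric
        error = str(exc)
        raise
    finally:
        usage = getattr(response, "usage", None)
        newrelic.agent.record_custom_event(
            "LlmCompletion",  # hypothetical event name; query it in NRQL
            {
                "model": model,
                "latency_ms": (time.time() - start) * 1000,
                "prompt_tokens": getattr(usage, "prompt_tokens", None),
                "completion_tokens": getattr(usage, "completion_tokens", None),
                "error": error,
            },
            application=application,
        )
    return response.choices[0].message.content
```

Events recorded this way can then be charted alongside the quickstart's pre-built dashboards, for example with a NRQL query over the custom event type.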
Detect and address issues early to prevent them from affecting model performance.
Our Anyscale quickstart provides metrics including error rate, input & output, latency, and queries, and lets you integrate with different language models.
The New Relic Anyscale monitoring quickstart provides a variety of pre-built dashboards that help you gain insight into the health and performance of your Anyscale usage. These reports include: