The Flows dashboard shows metrics for your Flows through the following visualizations:

Line Charts

Flow Execution Time (90th percentile)

Shows the 90th percentile of Flow execution times, measured in milliseconds. This means that 90% of all executed Flows finish within this duration.
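To illustrate what this percentile means, here is a minimal sketch of the nearest-rank method for computing a 90th percentile; the sample durations are hypothetical and not taken from any real dashboard:

```python
import math

# Hypothetical Flow execution times, in milliseconds.
durations_ms = sorted([120, 95, 210, 180, 150, 300, 90, 110, 250, 400])

# Nearest-rank method: the value below or at which 90% of samples fall.
rank = math.ceil(0.9 * len(durations_ms))  # 9th of 10 sorted values
p90 = durations_ms[rank - 1]
print(p90)  # 90% of these sample Flows finished within this many ms
```

In this sample, 9 of the 10 Flows finish within the reported value, which is exactly what the dashboard's 90th-percentile line expresses at scale.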

Custom Flow Nodes (Extensions) Execution

Shows the number of Extensions that are executed per minute within your Flows. This metric helps evaluate Extension usage and identify potential performance bottlenecks caused by custom code. The metric also provides insights into how often specific integrations or logic are triggered during Flow execution.

Knowledge AI Queries

Shows the number of successful Knowledge AI queries executed against the vector database. This metric is useful for tracking query volume and system reliability.

Outbound HTTP Requests

Shows the number of API calls made via the HTTP Request Node. This metric helps monitor the volume of external requests and track integration activity.

LLM Fallbacks Triggered

Shows the number of times an LLM fallback was triggered because the main model failed. This metric helps identify reliability issues with the main model.

LLM Retries Triggered

Shows the number of API calls to LLM providers that were retried due to issues with a model. This metric helps identify potential reliability or performance problems affecting model responses.

Heatmaps

Extension Processing Time

Shows the execution time, in milliseconds, for running Extension code. This metric helps assess the performance impact of Extensions on Flow execution and can reveal delays or inefficiencies in custom code.

NLU Scoring Time

Shows the duration, in milliseconds, that the NLU model takes to score a user input. This metric helps evaluate the performance and responsiveness of the NLU model under load.

Knowledge AI Query Latency

Shows the time, in milliseconds, to query Knowledge AI and receive a response. This metric is useful for monitoring Knowledge AI performance: high query latency makes the AI Agent feel unresponsive, leading users to lose interest or abandon the conversation.

Outbound HTTP Request Latency

Shows the time, in milliseconds, taken to complete outbound HTTP requests performed via the HTTP Request Node. This metric helps identify potential latency issues in outbound HTTP requests.