The SQL metrics let you monitor SQL performance.
To view these graphs, select a cluster from the Clusters page, and click Metrics in the Monitoring section of the left side navigation. On the Metrics page, click the SQL tab.
Time interval selection
The time interval selector at the top of each tab allows you to filter the view for a predefined or custom time interval. Use the navigation buttons to move to the previous, next, or current time interval. When you select a time interval, the same interval is selected for all charts on the Metrics page.
SQL Connections
| Short Name | CockroachDB Metric Name | Description | Usage |
|------------|-------------------------|-------------|-------|
| Connections Per Second | `sql.new_conns` | Number of SQL connections created | The rate of this metric shows how frequently new connections are being established. This can be useful in determining if a high rate of incoming new connections is causing additional load on the server due to a misconfigured application. |
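Because `sql.new_conns` is a cumulative counter, the connections-per-second rate comes from the change between two samples. The sketch below illustrates that calculation; the sample values and the 10-second interval are hypothetical, and in practice the readings would come from whichever metrics export or scrape setup you use.

```python
# Minimal sketch: turn two samples of the cumulative sql.new_conns counter
# into a connections-per-second rate. The sample values below are hypothetical.
def per_second_rate(previous: float, current: float, interval_seconds: float) -> float:
    """Rate of a monotonically increasing counter between two scrapes."""
    return max(current - previous, 0.0) / interval_seconds  # guard against counter resets

# Two hypothetical readings of sql.new_conns taken 10 seconds apart:
prev_sample, curr_sample = 12_480.0, 12_530.0
print(per_second_rate(prev_sample, curr_sample, 10.0))  # 5.0 new connections per second
```

A sustained rate well above what the application's connection pool should produce usually points at a misconfigured pool or at clients that fail to reuse connections.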
SQL Statements
| Short Name | CockroachDB Metric Name | Description | Usage |
|------------|-------------------------|-------------|-------|
| Select | `sql.select.count` | Number of SQL SELECT statements successfully executed | This high-level metric reflects workload volume. Monitor this metric to identify abnormal application behavior or patterns over time. If abnormal patterns emerge, apply the metric's time range to the SQL Activity pages to investigate interesting outliers or patterns. For example, on the Transactions page and the Statements page, sort by the Execution Count column. To find problematic sessions, on the Sessions page, sort by the Transaction Count column. Find the sessions with high transaction counts and trace back to a user or application. |
| Update | `sql.update.count` | Number of SQL UPDATE statements successfully executed | This high-level metric reflects workload volume. Monitor this metric to identify abnormal application behavior or patterns over time. If abnormal patterns emerge, apply the metric's time range to the SQL Activity pages to investigate interesting outliers or patterns. For example, on the Transactions page and the Statements page, sort by the Execution Count column. To find problematic sessions, on the Sessions page, sort by the Transaction Count column. Find the sessions with high transaction counts and trace back to a user or application. |
| Insert | `sql.insert.count` | Number of SQL INSERT statements successfully executed | This high-level metric reflects workload volume. Monitor this metric to identify abnormal application behavior or patterns over time. If abnormal patterns emerge, apply the metric's time range to the SQL Activity pages to investigate interesting outliers or patterns. For example, on the Transactions page and the Statements page, sort by the Execution Count column. To find problematic sessions, on the Sessions page, sort by the Transaction Count column. Find the sessions with high transaction counts and trace back to a user or application. |
| Delete | `sql.delete.count` | Number of SQL DELETE statements successfully executed | This high-level metric reflects workload volume. Monitor this metric to identify abnormal application behavior or patterns over time. If abnormal patterns emerge, apply the metric's time range to the SQL Activity pages to investigate interesting outliers or patterns. For example, on the Transactions page and the Statements page, sort by the Execution Count column. To find problematic sessions, on the Sessions page, sort by the Transaction Count column. Find the sessions with high transaction counts and trace back to a user or application. |
Service Latency: SQL Statements
| Short Name | CockroachDB Metric Name | Description | Usage |
|------------|-------------------------|-------------|-------|
| P90, P99, P99.9, P99.99 | `sql.service.latency` | Latency of SQL request execution | These high-level metrics reflect workload performance. Monitor these metrics to understand latency over time. If abnormal patterns emerge, apply the metric's time range to the SQL Activity pages to investigate interesting outliers or patterns. The Statements page has P90 Latency and P99 Latency columns to enable correlation with this metric. |
Connection Latency
| Short Name | CockroachDB Metric Name | Description | Usage |
|------------|-------------------------|-------------|-------|
| P90, P99 | `sql.conn.latency` | Latency to establish and authenticate a SQL connection | Connection latency is calculated as the time in nanoseconds between when the cluster receives a connection request and when it establishes the connection to the client, including authentication. This graph shows the p90 and p99 latencies for SQL connections to the cluster. These metrics characterize database connection latency, which can affect application performance, for example, by causing slow application startup times. |
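As a rough cross-check, connection establishment can also be timed from the application side. The sketch below is illustrative only: it assumes the `psycopg` (v3) driver and a placeholder connection string, and client-side timings include network round trips, so they will normally be higher than the server-reported `sql.conn.latency` values.

```python
# Illustrative client-side timing of connection establishment (driver: psycopg v3).
# The DSN below is a placeholder; substitute your cluster's connection string.
import time
import psycopg

DSN = "postgresql://user:password@localhost:26257/defaultdb?sslmode=require"  # placeholder

samples = []
for _ in range(20):
    start = time.perf_counter()
    with psycopg.connect(DSN):          # includes TLS handshake and authentication
        pass
    samples.append(time.perf_counter() - start)

samples.sort()
p90 = samples[int(0.9 * (len(samples) - 1))]
print(f"client-observed p90 connection latency: {p90 * 1000:.1f} ms")
```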
Open SQL Sessions
| Short Name | CockroachDB Metric Name | Description | Usage |
|------------|-------------------------|-------------|-------|
| Connections | `sql.conns` | Number of open SQL connections | This metric shows the total number of SQL client connections across the cluster. Refer to the Sessions page for more details on the sessions. This metric also shows the distribution, or balancing, of connections across the cluster. Review Connection Pooling. |
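The usage note above points to connection pooling as the way to keep this chart bounded and balanced. The sketch below shows one possible pool configuration; the `psycopg_pool` package, the DSN, and the pool sizes are assumptions chosen for illustration.

```python
# Illustrative fixed-size connection pool; keeps the number of open SQL
# connections (sql.conns) bounded regardless of application concurrency.
from psycopg_pool import ConnectionPool

DSN = "postgresql://user:password@localhost:26257/defaultdb?sslmode=require"  # placeholder

pool = ConnectionPool(DSN, min_size=5, max_size=20)  # caps this app's contribution to sql.conns

with pool.connection() as conn:                      # borrows and returns a pooled connection
    print(conn.execute("SELECT 1").fetchall())

pool.close()
```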
Open SQL Transactions
| Short Name | CockroachDB Metric Name | Description | Usage |
|------------|-------------------------|-------------|-------|
| Open Transactions | `sql.txns.open` | Number of currently open user SQL transactions | This metric should roughly correspond to the number of cores * 4. If this metric is consistently larger, scale out the cluster. |
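For example, a cluster with 24 cores (vCPUs) in total would be expected to show roughly 96 or fewer open transactions under normal conditions; if the chart stays well above that level, consider scaling out.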
Transactions
| Short Name | CockroachDB Metric Name | Description | Usage |
|------------|-------------------------|-------------|-------|
| Begin | `sql.txn.begin.count` | Number of SQL transaction BEGIN statements successfully executed | This metric reflects workload volume by counting explicit transactions. Use this metric to determine whether explicit transactions can be refactored as implicit transactions (individual statements). |
| Commits | `sql.txn.commit.count` | Number of SQL transaction COMMIT statements successfully executed | This metric shows the number of transactions that completed successfully. This metric can be used as a proxy to measure the number of successful explicit transactions. |
| Rollbacks | `sql.txn.rollback.count` | Number of SQL transaction ROLLBACK statements successfully executed | This metric shows the number of orderly transaction rollbacks. A persistently high number of rollbacks may negatively impact the workload performance and needs to be investigated. |
| Aborts | `sql.txn.abort.count` | Number of SQL transaction abort errors | This high-level metric reflects workload performance. A persistently high number of SQL transaction abort errors may negatively impact the workload performance and needs to be investigated. |
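To make the distinction between explicit and implicit transactions concrete, the sketch below contrasts an explicit transaction, which increments `sql.txn.begin.count` and `sql.txn.commit.count`, with an implicit single-statement transaction. It assumes the `psycopg` driver and a placeholder DSN; the `accounts` table is illustrative.

```python
# Illustrative explicit vs. implicit transactions (driver: psycopg v3; placeholder DSN).
import psycopg

DSN = "postgresql://user:password@localhost:26257/defaultdb?sslmode=require"  # placeholder

with psycopg.connect(DSN) as conn:
    # Explicit transaction: the driver issues BEGIN ... COMMIT around the block,
    # so both statements commit or roll back together.
    with conn.transaction():
        conn.execute("UPDATE accounts SET balance = balance - 10 WHERE id = 1")
        conn.execute("UPDATE accounts SET balance = balance + 10 WHERE id = 2")

with psycopg.connect(DSN, autocommit=True) as conn:
    # Implicit transaction: a standalone statement commits on its own, without
    # BEGIN/COMMIT round trips. Suitable when statements do not need to be grouped.
    conn.execute("UPDATE accounts SET balance = balance + 5 WHERE id = 3")
```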
Transaction Restarts
| Short Name | CockroachDB Metric Name | Description | Usage |
|------------|-------------------------|-------------|-------|
| Write Too Old | `txn.restarts.writetooold` | Number of restarts due to a concurrent writer committing first | This metric is one measure of the impact of contention conflicts on workload performance. For guidance on contention conflicts, review transaction contention best practices and performance tuning recipes. Tens of restarts per minute may be a high value and a signal of an elevated degree of contention in the workload, which should be investigated. For the specific error, refer to the transaction retry error reference for more details. |
| Write Too Old (multiple) | `txn.restarts.writetoooldmulti` | Number of restarts due to multiple concurrent writers committing first | This metric is one measure of the impact of contention conflicts on workload performance. For guidance on contention conflicts, review transaction contention best practices and performance tuning recipes. Tens of restarts per minute may be a high value and a signal of an elevated degree of contention in the workload, which should be investigated. For the specific error, refer to the transaction retry error reference for more details. |
| Forwarded Timestamp | `txn.restarts.serializable` | Number of restarts due to a forwarded commit timestamp and isolation=SERIALIZABLE | This metric is one measure of the impact of contention conflicts on workload performance. For guidance on contention conflicts, review transaction contention best practices and performance tuning recipes. Tens of restarts per minute may be a high value and a signal of an elevated degree of contention in the workload, which should be investigated. For the specific error, refer to the transaction retry error reference for more details. |
| Async Consensus Failure | `txn.restarts.asyncwritefailure` | Number of restarts due to async consensus writes that failed to leave intents | This metric is one measure of the impact of contention conflicts on workload performance. For guidance on contention conflicts, review transaction contention best practices and performance tuning recipes. Tens of restarts per minute may be a high value and a signal of an elevated degree of contention in the workload, which should be investigated. For the specific error, refer to the transaction retry error reference for more details. |
| Read Within Uncertainty Interval | `txn.restarts.readwithinuncertainty` | Number of restarts due to reading a new value within the uncertainty interval | This metric is one measure of the impact of contention conflicts on workload performance. For guidance on contention conflicts, review transaction contention best practices and performance tuning recipes. Tens of restarts per minute may be a high value and a signal of an elevated degree of contention in the workload, which should be investigated. For the specific error, refer to the transaction retry error reference for more details. |
| Aborted | `txn.restarts.txnaborted` | Number of restarts due to an abort by a concurrent transaction (usually due to deadlock) | This metric is one measure of the impact of contention conflicts on workload performance. For guidance on contention conflicts, review transaction contention best practices and performance tuning recipes. Tens of restarts per minute may be a high value and a signal of an elevated degree of contention in the workload, which should be investigated. For the specific error, refer to the transaction retry error reference for more details. |
| Push Failure | `txn.restarts.txnpush` | Number of restarts due to a transaction push failure | This metric is one measure of the impact of contention conflicts on workload performance. For guidance on contention conflicts, review transaction contention best practices and performance tuning recipes. Tens of restarts per minute may be a high value and a signal of an elevated degree of contention in the workload, which should be investigated. For the specific error, refer to the transaction retry error reference for more details. |
| Unknown | `txn.restarts.unknown` | Number of restarts due to unknown reasons | The errors tracked by this metric are generally due to deadlocks. Deadlocks can often be prevented with a considered transaction design. Identify the conflicting transactions involved in the deadlocks. Then, if possible, redesign the business logic implementation prone to deadlocks. |
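Transaction restarts that cannot be retried automatically surface to the client as retryable errors (SQLSTATE 40001), so applications should implement a retry loop. The sketch below is one possible shape for that loop, assuming the `psycopg` driver, a placeholder DSN, and an illustrative `accounts` table.

```python
# Illustrative client-side retry loop for retryable transaction errors (SQLSTATE 40001).
import time
import psycopg
from psycopg.errors import SerializationFailure

DSN = "postgresql://user:password@localhost:26257/defaultdb?sslmode=require"  # placeholder

def transfer(conn: psycopg.Connection, max_retries: int = 5) -> None:
    for attempt in range(max_retries):
        try:
            with conn.transaction():
                conn.execute("UPDATE accounts SET balance = balance - 10 WHERE id = 1")
                conn.execute("UPDATE accounts SET balance = balance + 10 WHERE id = 2")
            return
        except SerializationFailure:
            # Each retry here corresponds to a restart counted on the charts above.
            time.sleep(0.1 * 2 ** attempt)   # exponential backoff before retrying
    raise RuntimeError("transaction did not succeed after retries")

with psycopg.connect(DSN) as conn:
    transfer(conn)
```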
Transaction Latency
| Short Name | CockroachDB Metric Name | Description | Usage |
|------------|-------------------------|-------------|-------|
| P90, P99 | `sql.txn.latency` | Latency of SQL transactions | Over the last minute, this cluster executed 90% or 99% of transactions within this time. This time does not include network latency between the cluster and client. These metrics provide an overview of the current SQL workload. |
Active SQL Statements
| Short Name | CockroachDB Metric Name | Description | Usage |
|------------|-------------------------|-------------|-------|
| Active Statements | `sql.statements.active` | Number of currently active user SQL statements | This high-level metric reflects workload volume. |
SQL Statement Errors
| Short Name | CockroachDB Metric Name | Description | Usage |
|------------|-------------------------|-------------|-------|
| Errors | `sql.failure.count` | Number of statements resulting in a planning or runtime error | This metric is a high-level indicator of workload and application degradation with query failures. Use the Insights page to find failed executions with their error code to troubleshoot, or use application-level logs, if instrumented, to determine the cause of the error. |
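If the application is not already instrumented, a small amount of error logging makes it much easier to correlate spikes in this chart with specific failures. The sketch below assumes the `psycopg` driver and a placeholder DSN; the query is deliberately invalid to show the logged error code.

```python
# Illustrative application-level logging of failed statements and their SQLSTATE codes.
import logging
import psycopg

DSN = "postgresql://user:password@localhost:26257/defaultdb?sslmode=require"  # placeholder
logging.basicConfig(level=logging.INFO)
log = logging.getLogger("sql")

with psycopg.connect(DSN, autocommit=True) as conn:
    try:
        conn.execute("SELECT * FROM table_that_does_not_exist")
    except psycopg.Error as exc:
        # exc.sqlstate is the five-character SQL error code, which can be matched
        # against failed executions on the Insights page.
        log.error("statement failed: sqlstate=%s message=%s", exc.sqlstate, exc)
```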
SQL Statement Contention
| Short Name | CockroachDB Metric Name | Description | Usage |
|------------|-------------------------|-------------|-------|
| Contention | `sql.distsql.contended_queries.count` | Number of SQL queries that experienced contention | This metric is incremented whenever a statement experiences a non-trivial amount of contention, whether from read-write or write-write conflicts. Monitor this metric to correlate possible workload performance issues to contention conflicts. |
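When this chart indicates contention, recent conflicts can be inspected directly. The sketch below assumes the `psycopg` driver, a placeholder DSN, and that your cluster version exposes the `crdb_internal.transaction_contention_events` table; it simply prints the most recent rows rather than relying on specific column names.

```python
# Illustrative inspection of recent contention events (availability of the
# crdb_internal.transaction_contention_events table depends on cluster version).
import psycopg

DSN = "postgresql://user:password@localhost:26257/defaultdb?sslmode=require"  # placeholder

with psycopg.connect(DSN, autocommit=True) as conn:
    cur = conn.execute("SELECT * FROM crdb_internal.transaction_contention_events LIMIT 10")
    columns = [d.name for d in cur.description]
    for row in cur.fetchall():
        print(dict(zip(columns, row)))
```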
Full Scans
Schema Changes
| Short Name | CockroachDB Metric Name | Description | Usage |
|------------|-------------------------|-------------|-------|
| DDL Statements | `sql.ddl.count` | Number of SQL DDL statements successfully executed | This high-level metric reflects workload volume. Monitor this metric to identify abnormal application behavior or patterns over time. If abnormal patterns emerge, apply the metric's time range to the SQL Activity pages to investigate interesting outliers or patterns. For example, on the Transactions page and the Statements page, sort by the Execution Count column. To find problematic sessions, on the Sessions page, sort by the Transaction Count column. Find the sessions with high transaction counts and trace back to a user or application. |
Statistics Jobs
| Short Name | CockroachDB Metric Name | Description | Usage |
|------------|-------------------------|-------------|-------|
| Auto Create Statistics Running | `jobs.auto_create_stats.currently_running` | Number of auto_create_stats jobs currently running in Resume or OnFailOrCancel state | This metric tracks the number of active automatically generated statistics jobs that could also be consuming resources. Ensure that foreground SQL traffic is not impacted by correlating this metric with SQL latency and query volume metrics. |
| Create Statistics Running | `jobs.create_stats.currently_running` | Number of create_stats jobs currently running in Resume or OnFailOrCancel state | This metric tracks the number of active create statistics jobs that may be consuming resources. Ensure that foreground SQL traffic is not impacted by correlating this metric with SQL latency and query volume metrics. |
| Auto Create Statistics Failed | `jobs.auto_create_stats.resume_failed` | Number of auto_create_stats jobs which failed with a non-retriable error | This metric is a high-level indicator that automatic table statistics creation is failing. Failed statistics creation can lead to the query optimizer running with stale statistics. Stale statistics can cause suboptimal query plans to be selected, leading to poor query performance. |
| Auto Create Statistics Paused | `jobs.auto_create_stats.currently_paused` | Number of auto_create_stats jobs currently considered Paused | This metric is a high-level indicator that automatically generated statistics jobs are paused, which can lead to the query optimizer running with stale statistics. Stale statistics can cause suboptimal query plans to be selected, leading to poor query performance. |
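To investigate failed or paused statistics jobs flagged by these charts, the job list can be inspected directly. The sketch below assumes the `psycopg` driver, a placeholder DSN, and a cluster version that supports `SHOW AUTOMATIC JOBS`; it filters on the client side to avoid depending on exact status spellings.

```python
# Illustrative check for statistics jobs that are not currently running or succeeded.
import psycopg

DSN = "postgresql://user:password@localhost:26257/defaultdb?sslmode=require"  # placeholder

with psycopg.connect(DSN, autocommit=True) as conn:
    cur = conn.execute("SHOW AUTOMATIC JOBS")
    columns = [d.name for d in cur.description]
    for row in cur.fetchall():
        job = dict(zip(columns, row))
        if "CREATE STATS" in str(job.get("job_type", "")) and \
                str(job.get("status", "")).lower() not in ("running", "succeeded"):
            print(job.get("job_id"), job.get("status"), job.get("error"))
```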