In a newly revised paper, a research team from Simon Fraser University and Tencent conducts a comprehensive study on a fundamental open question in query optimization: Are We Ready For Learned Cardinality Estimation?
Research in machine learning for database management systems (DBMSs) is increasingly concerned with methods that could replace existing database components with learned models. One promising direction is to replace cardinality estimators — components that predict the number of rows a query (or sub-query) will return — with learned models. The study seeks to determine whether such learned cardinality models are ready for production.
The research team summarizes their main contributions:
- Are Learned Methods Ready For Static Environments? We propose a unified workload generator and collect four real-world benchmark datasets. We compare five new learned methods with nine traditional methods using the same datasets and workload in static environments.
- Are Learned Methods Ready For Dynamic Environments? We explore how each learned method performs by varying update rates on four real-world datasets.
- When Do Learned Methods Go Wrong? We vary correlation, skewness, and domain size, respectively, on a synthetic dataset, and try to understand when learned methods may go wrong.
- Research Opportunities. We identify two future research directions: i) control the cost of learned methods and ii) make learned methods trustworthy, and suggest a number of promising research opportunities.
The new learned methods for the cardinality estimation (CE) problem fall into two main types: regression methods and joint distribution methods. Regression methods such as MSCN, LW-XGB, LW-NN, and DQM-Q model CE as a regression problem, learning a mapping from featurized queries to their cardinalities. Joint distribution methods such as Naru, DeepDB, and DQM-D instead model CE as a joint probability distribution estimation problem, learning the distribution of the table's columns and deriving cardinality estimates from it.
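To make the joint-distribution formulation concrete, here is a minimal sketch (not the paper's code) of the idea that cardinality equals the row count times the probability that a query's predicates hold. A learned model such as Naru approximates that probability; in this toy version it is computed exactly from a small in-memory table.

```python
# Toy table: (color, value) pairs standing in for two database columns.
rows = [("red", 1), ("red", 2), ("blue", 1), ("blue", 1), ("green", 3)]
N = len(rows)

def estimate_cardinality(predicate):
    # Joint-distribution view of CE: cardinality = N * P(predicate holds).
    # A learned estimator would return a learned approximation of P;
    # exact counting stands in for the model here.
    p = sum(1 for row in rows if predicate(row)) / N
    return N * p

est = estimate_cardinality(lambda r: r[0] == "blue" and r[1] == 1)  # -> 2.0
```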
In the new study, the researchers use q-error as their accuracy metric and measure the quality of the estimation results for five recently published learned methods: Naru, MSCN, LW-XGB/NN, DeepDB, and DQM. They conduct their experiments on four real-world datasets: Census, Forest, Power, and DMV.
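The q-error metric measures multiplicative estimation error: the larger of estimate/truth and truth/estimate, so a perfect estimate scores 1 and a 2x over- or underestimate scores 2. A minimal sketch (clamping zero cardinalities to 1 is a common convention, assumed here):

```python
def q_error(estimate, truth):
    # q-error = max(est/true, true/est) >= 1; 1 means a perfect estimate.
    # Clamp to 1 to avoid division by zero on empty results (an assumption).
    est = max(estimate, 1.0)
    tru = max(truth, 1.0)
    return max(est / tru, tru / est)

q_error(50, 100)   # underestimate by 2x -> 2.0
q_error(200, 100)  # overestimate by 2x -> 2.0
```

Because the metric is symmetric in over- and underestimation, both failure modes are penalized equally, which is why the paper reports it at several percentiles (median, 95th, max).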
The team first compares the accuracy of learned methods with traditional methods such as Postgres, MySQL, DBMS-A, etc., then measures their training and inference times to evaluate their production readiness.
This exploration yields five main findings:
- In general, the new learned estimators deliver more accurate predictions than traditional methods; among the learned methods, Naru shows the most robust performance.
- In terms of training time, the new learned methods can be orders of magnitude slower than DBMS products, with the exception of LW-XGB.
- New learned estimators based on regression models (MSCN and LW-XGB/NN) can be competitive with database systems in inference time, while methods that model the joint distribution directly (Naru and DeepDB) require much more time.
- GPUs can speed up the training and inference of some of the new learned estimators, but they cannot make them as fast as DBMS products and sometimes introduce overhead.
- Hyper-parameter tuning is an extra cost that cannot be ignored when adopting neural network-based estimators.
The researchers then explore how learned methods perform against DBMSs in dynamic environments, the trade-off between the number of update epochs and accuracy, and the extent to which GPUs can help.
Here, their main findings are:
- Learned methods cannot keep up with fast data updates. MSCN, LW-NN, Naru, and DeepDB return large errors in dynamic environments, for different reasons.
- Among the learned methods, there is no clear winner: Naru performs best when data updates are infrequent, while LW-XGB performs best in more dynamic environments.
- In terms of updating time, DeepDB is the fastest data-driven method and LW-XGB is the fastest query-driven method.
- There is a trade-off between updating time and accuracy for learned methods. It is not easy to balance this trade-off in practice, and more research on this topic is required.
- GPUs can, but do not necessarily, improve performance. It is important to design a good strategy to handle model updates in order to benefit from GPUs.
Finally, the team runs a micro-benchmark to observe how the learned methods' large errors change as the underlying dataset is altered, and identifies simple logical rules that these learned models frequently violate.
They summarize their findings on the micro-benchmark as:
- All new learned estimators tend to produce larger errors on more correlated data, and the max q-error jumps dramatically when two columns are functionally dependent.
- Different methods react differently to more skewed data or data with larger domain sizes. This may be due to differences in their choice of models, input features, and loss functions.
- We propose five rules for cardinality estimators and find that all new learned models except for DeepDB violate these rules.
- The non-transparency of the models used in new learned estimators can be troublesome in terms of debuggability, explainability, predictability, and reproducibility when deployed in production.
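The paper's five rules are not reproduced here, but monotonicity is one illustrative example of such a logical rule: adding a conjunctive (AND) predicate can only narrow a query, so the estimated cardinality should never increase. A hedged sketch of how such a rule could be checked against an estimator's outputs:

```python
def violates_monotonicity(est_broad, est_narrow):
    # Monotonicity rule (one example of a logical consistency check;
    # not the paper's full rule set): narrowing a query with an extra
    # AND predicate must not raise the estimated cardinality.
    return est_narrow > est_broad

# Hypothetical estimates for "age > 30" vs "age > 30 AND city = 'X'":
violates_monotonicity(120.0, 300.0)  # -> True: the rule is violated
violates_monotonicity(120.0, 45.0)   # -> False: the estimates are consistent
```

Checks of this kind are cheap to run on pairs of related queries and give a black-box way to catch logically impossible outputs from an otherwise opaque model.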
The team proposes that the cost of learned estimators can be controlled by balancing accuracy against the cost of training and hyper-parameter tuning, and that it is also crucial to ensure learned estimators are trustworthy.
The study concludes that although the new learned methods are more accurate than traditional methods, they are still not ready to be deployed in real DBMS products. This research direction however remains promising and worthy of further exploration.
The paper Are We Ready For Learned Cardinality Estimation? is on arXiv.
Author: Hecate He | Editor: Michael Sarazen, Chain Zhang