Compared with traditional stand-alone databases, TiDB has the following advantages:
In terms of kernel design, TiDB splits the overall architecture into multiple modules that communicate with each other to form a complete TiDB system. The corresponding architecture diagram is as follows:
Architecture diagram
TiDB Server: The SQL layer, which exposes the MySQL protocol connection endpoint, accepts connections from clients, performs SQL parsing and optimization, and finally generates a distributed execution plan. The TiDB layer itself is stateless. In practice, multiple TiDB instances can be started, and a unified access address is provided externally through load balancing components (such as LVS, HAProxy, or F5), so that client connections are evenly distributed across the TiDB instances. TiDB Server itself does not store data; it only parses SQL and forwards the actual data read requests to the underlying storage nodes, TiKV (or TiFlash).
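Because TiDB Server speaks the MySQL wire protocol and is stateless, any standard MySQL client or driver can connect through the load-balanced address. Below is a minimal sketch in Go, assuming a hypothetical load balancer endpoint tidb-lb.example.com:4000 (4000 is TiDB's default port) and using go-sql-driver/mysql as an ordinary MySQL driver:

```go
// Minimal sketch: connect to TiDB through its MySQL protocol endpoint.
// "tidb-lb.example.com:4000" is a hypothetical load-balancer address
// (e.g. HAProxy or LVS) fronting several stateless TiDB instances.
package main

import (
	"database/sql"
	"fmt"
	"log"

	_ "github.com/go-sql-driver/mysql" // TiDB speaks the MySQL wire protocol
)

func main() {
	dsn := "root:@tcp(tidb-lb.example.com:4000)/test"
	db, err := sql.Open("mysql", dsn)
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// Each connection may land on a different TiDB instance behind the
	// load balancer; because TiDB Server is stateless, this is transparent.
	var version string
	if err := db.QueryRow("SELECT VERSION()").Scan(&version); err != nil {
		log.Fatal(err)
	}
	fmt.Println("connected, server version:", version)
}
```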
PD (Placement Driver) Server: The metadata management module of the entire TiDB cluster. It stores the real-time data distribution of each TiKV node and the overall topology of the cluster, provides the TiDB Dashboard management interface, and allocates transaction IDs for distributed transactions. PD not only stores metadata, but also issues data scheduling commands to specific TiKV nodes according to the real-time data distribution status reported by TiKV nodes, so it can be regarded as the “brain” of the entire cluster. In addition, PD itself consists of at least 3 nodes to provide high availability. It is recommended to deploy an odd number of PD nodes.
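To illustrate the transaction ID allocation mentioned above, here is a conceptual sketch only, not PD's actual implementation or client API: a single allocator hands out globally unique, monotonically increasing IDs, which is the role the PD leader plays for the whole cluster (with replication across at least three PD nodes providing high availability).

```go
// Conceptual sketch (not PD's real implementation): a single "brain"
// component handing out globally unique, monotonically increasing
// transaction IDs, as PD does for distributed transactions.
package main

import (
	"fmt"
	"sync/atomic"
)

// tsoAllocator issues strictly increasing IDs. In a real cluster this
// responsibility belongs to the elected PD leader, not to each client.
type tsoAllocator struct {
	last atomic.Uint64
}

// Next returns the next transaction ID; safe for concurrent callers.
func (a *tsoAllocator) Next() uint64 {
	return a.last.Add(1)
}

func main() {
	var alloc tsoAllocator
	for i := 0; i < 3; i++ {
		fmt.Println("txn id:", alloc.Next())
	}
}
```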
Storage Node