Even before design of NDBCLUSTER began in 1996, it was evident that one of the major problems to be encountered in building parallel databases would be communication between the nodes in the network. For this reason, NDBCLUSTER was designed from the very beginning to permit the use of a number of different data transport mechanisms. In this Manual, we use the term transporter for these.
The NDB Cluster codebase provides for four different transporters:
TCP/IP using 100 Mbps or gigabit Ethernet, as discussed in Section 22.3.3.10, “NDB Cluster TCP/IP Connections”. (A configuration sketch covering this and the shared-memory transporter follows this list.)
Direct (machine-to-machine) TCP/IP; although this transporter uses the same TCP/IP protocol as the previous item, it requires a different hardware setup and is configured differently as well. For this reason, it is considered a separate transport mechanism for NDB Cluster. See Section 22.3.3.11, “NDB Cluster TCP/IP Connections Using Direct Connections”, for details.
Shared memory (SHM). For more information about SHM, see Section 22.3.3.12, “NDB Cluster Shared-Memory Connections”.
Scalable Coherent Interface (SCI). For more information about SCI, see Section 22.3.3.13, “SCI Transport Connections in NDB Cluster”.
Note: Using SCI transporters in NDB Cluster requires specialized hardware, software, and MySQL binaries not available with NDB 8.0.
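As a concrete illustration of how these transporters are declared, the following is a minimal config.ini sketch; the node IDs, host addresses, and buffer sizes shown are assumptions made for this example, not values taken from this Manual. Consult the sections referenced above for the authoritative parameter lists.

    # Hypothetical excerpt from a cluster config.ini; node IDs,
    # addresses, and sizes are illustrative assumptions.

    # TCP transporters are set up between nodes automatically; an
    # explicit [tcp] section is needed only to override defaults,
    # here enlarging the send buffer between data nodes 2 and 3:
    [tcp]
    NodeId1=2
    NodeId2=3
    SendBufferMemory=4M

    # Direct TCP connection: HostName1 and HostName2 name the two
    # endpoints of the dedicated link joining data nodes 4 and 5:
    [tcp]
    NodeId1=4
    NodeId2=5
    HostName1=192.168.1.10
    HostName2=192.168.1.20

    # Shared-memory transporter between data node 2 and an API node
    # (node 6) running on the same host:
    [shm]
    NodeId1=2
    NodeId2=6
    ShmSize=8M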
Most users today employ TCP/IP over Ethernet because it is ubiquitous. TCP/IP is also by far the best-tested transporter for use with NDB Cluster.
We are working to ensure that communication with the ndbd process takes place in “chunks” that are as large as possible, since this benefits all types of data transmission.
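The practical knob governing how much data can be gathered into such a chunk is the transporter send buffer. As a hedged sketch, assuming the values shown suit the workload (they are examples only, not recommendations), send buffering might be tuned in config.ini as follows:

    # Hypothetical excerpt; values are assumptions, not recommendations.

    [ndbd default]
    # Optional single pool of send-buffer memory shared by all of a
    # data node's transporters, from which each connection draws:
    TotalSendBufferMemory=16M

    [tcp default]
    # Per-connection send buffer: signals accumulate here so they can
    # be sent in large chunks rather than one at a time:
    SendBufferMemory=4M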