GPU Direct SQL Execution
Overview
Fast execution of SQL workloads requires not only efficient processors but also a data stream delivered from storage or memory fast enough to keep them busy. If the data stream stalls, the processors simply sit idle.
GPU Direct SQL Execution directly connects NVMe-SSDs, which deliver high-speed I/O by attaching to the PCIe bus, with GPU devices attached to the same PCIe bus, and runs SQL workloads at very high speed by supplying a data stream close to the wire speed of the hardware.
Normally, PostgreSQL data blocks on the storage are first loaded into CPU/RAM over the PCIe bus, and PostgreSQL then evaluates WHERE-clause filters or runs JOIN/GROUP BY according to the query execution plan. Due to the characteristics of analytic workloads, the result set is usually much smaller than the source data set. For example, it is not rare to read billions of rows but output only hundreds of rows after aggregation with GROUP BY.
In other words, PCIe bus bandwidth is consumed moving data that will be discarded, yet we cannot determine whether a row is needed until it has been evaluated by the SQL workload on the CPU, so this is an unavoidable restriction in the usual implementation.
GPU Direct SQL Execution changes this flow. It loads data blocks directly into the GPU using peer-to-peer DMA over the PCIe bus, then runs the SQL workload on the GPU device to reduce the number of rows the CPU has to process. In other words, it uses the GPU as an SQL pre-processor sitting between the storage and CPU/RAM, reducing the CPU's load and, as a result, accelerating I/O processing.
This feature internally uses the NVIDIA GPUDirect Storage module (nvidia-fs) to coordinate P2P data transfers from NVME storage to GPU device memory. Therefore, this Linux kernel module is required in addition to PG-Strom as an extension of PostgreSQL.
Also note that this feature supports only NVME-SSDs or NVME-oF remote devices. It does not support legacy storage such as SAS or SATA-SSD. We have tested several NVME-SSD models; see 002: HW Validation List for reference.
System Setup
Driver Installation
Previous versions of PG-Strom required a proprietary Linux kernel module developed by HeteroDB for GPU-Direct SQL support; however, version 3.0 revised the software design to use GPUDirect Storage, provided by NVIDIA as part of the CUDA Toolkit. The Linux kernel module for GPUDirect Storage (nvidia-fs) is installed as part of the CUDA Toolkit installation process and requires no additional configuration if you have set up your system as described in the Installation chapter of this manual.
You can check whether the required Linux kernel drivers are installed using the modinfo command or the lsmod command.
$ modinfo nvidia-fs
filename: /lib/modules/5.14.0-427.18.1.el9_4.x86_64/extra/nvidia-fs.ko.xz
description: NVIDIA GPUDirect Storage
license: GPL v2
version: 2.20.5
rhelversion: 9.4
srcversion: 096A726CAEC0A059E24049E
depends:
retpoline: Y
name: nvidia_fs
vermagic: 5.14.0-427.18.1.el9_4.x86_64 SMP preempt mod_unload modversions
sig_id: PKCS#7
signer: DKMS module signing key
sig_key: 18:B4:AE:27:B8:7D:74:4F:C2:27:68:2A:EB:E0:6A:F0:84:B2:94:EE
sig_hashalgo: sha512
: :
$ lsmod | grep nvidia
nvidia_fs 323584 32
nvidia_uvm 6877184 4
nvidia 8822784 43 nvidia_uvm,nvidia_fs
drm 741376 2 drm_kms_helper,nvidia
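If GPUDirect Storage was installed through the CUDA Toolkit, you can additionally run the gdscheck tool bundled with it to review the GDS configuration. The path below assumes a default CUDA Toolkit installation and may differ on your system.
$ /usr/local/cuda/gds/tools/gdscheck -p   # path depends on your CUDA installation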
Designing Tablespace
GPU Direct SQL Execution is invoked when the following conditions are met.
- The target table to be scanned is located on a partition consisting of NVMe-SSDs: a /dev/nvmeXXXX block device, or an md-raid0 volume made up of NVMe-SSDs only (a quick way to check the device type is shown after the note below).
- The target table size is larger than pg_strom.gpudirect_threshold. You can adjust this configuration; its default is the physical RAM size of the system plus 1/3 of the shared_buffers configuration.
Note
Striped reads from multiple NVMe-SSDs using md-raid0 require the enterprise subscription provided by HeteroDB, Inc.
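To confirm that the volume you plan to use is actually backed by NVME devices, you can check the block device transport type with a standard tool such as lsblk; this is only a quick sanity check, and the device names will depend on your system.
$ lsblk -o NAME,TRAN,TYPE,MOUNTPOINT   # TRAN should report "nvme" for the devices in question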
To deploy tables on a partition consisting of NVMe-SSDs, you can either build the entire database cluster on the NVMe-SSD volume, or use the tablespace feature of PostgreSQL to place particular tables or databases on the NVMe-SSD volume.
For example, if the NVMe-SSD is mounted at /opt/nvme, you can create a new tablespace as follows.
CREATE TABLESPACE my_nvme LOCATION '/opt/nvme';
To create a new table on this tablespace, specify the TABLESPACE option in the CREATE TABLE command as below.
CREATE TABLE my_table (...) TABLESPACE my_nvme;
Alternatively, use the ALTER DATABASE command as follows to change the default tablespace of the database.
Note that the tablespace of existing tables is not changed in this case.
ALTER DATABASE my_database SET TABLESPACE my_nvme;
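To confirm where a table has actually been placed, the pg_tables system view reports its tablespace. The query below checks the my_table example created above; a NULL tablespace means the table resides in the default tablespace.
SELECT schemaname, tablename, tablespace FROM pg_tables WHERE tablename = 'my_table';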
Operations
Distance between GPU and NVME-SSD
When selecting server hardware and installing GPUs and NVME-SSDs, pay attention to the distance between the devices on the PCIe bus in order to pull out the maximum performance of the hardware.
NVIDIA GPUDirect RDMA, which is the basis of the GPU Direct SQL mechanism, requires both endpoint devices of the P2P DMA to be connected to the same PCIe root complex. In other words, P2P DMA cannot be configured to traverse the QPI link between CPUs, as when the NVME-SSD is attached to CPU1 and the GPU is attached to CPU2 in a dual-socket system.
From the standpoint of performance, it is recommended to connect both devices through a dedicated PCIe switch rather than the PCIe controller built into the CPU.
The photo below shows the motherboard of an HPC server. It has 8 PCIe x16 slots, and each pair of slots is linked through a PCIe switch. The slots on the left side of the photo are connected to CPU1, and the ones on the right side to CPU2.
When a table on SSD-2 is scanned using GPU Direct SQL, the optimal choice is GPU-2, and GPU-1 may also be usable. However, GPU-3 and GPU-4 must be avoided due to the restriction of GPUDirect RDMA.
PG-Strom calculates the logical distance between every pair of GPU and NVME-SSD from the PCIe bus topology information of the system at startup time.
The result is displayed in the startup log, and the preferable GPU for each NVME-SSD is determined by this distance; in the log below, for example, GPU0 is used when scanning /dev/nvme2.
$ pg_ctl restart
:
LOG: PG-Strom: GPU0 NVIDIA A100-PCIE-40GB (108 SMs; 1410MHz, L2 40960kB), RAM 39.50GB (5120bits, 1.16GHz), PCI-E Bar1 64GB, CC 8.0
LOG: [0000:41:00:0] GPU0 (NVIDIA A100-PCIE-40GB; GPU-13943bfd-5b30-38f5-0473-78>
LOG: [0000:81:00:0] nvme0 (NGD-IN2500-080T4-C) --> GPU0 [dist=9]
LOG: [0000:82:00:0] nvme2 (INTEL SSDPF2KX038TZ) --> GPU0 [dist=9]
LOG: [0000:c2:00:0] nvme3 (INTEL SSDPF2KX038TZ) --> GPU0 [dist=9]
LOG: [0000:c6:00:0] nvme5 (Corsair MP600 CORE) --> GPU0 [dist=9]
LOG: [0000:c3:00:0] nvme4 (INTEL SSDPF2KX038TZ) --> GPU0 [dist=9]
LOG: [0000:c1:00:0] nvme1 (INTEL SSDPF2KX038TZ) --> GPU0 [dist=9]
LOG: [0000:c4:00:0] nvme6 (NGD-IN2500-080T4-C) --> GPU0 [dist=9]
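If you want to inspect the PCIe topology yourself, the standard lspci utility can print the bus as a tree; the output naturally depends on your hardware, and this is only for reference alongside the automatic distance calculation above.
$ lspci -tv   # tree view of the PCIe topology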
Usually the automatic configuration works well. However, when NVME-over-Fabric (RDMA) is used, the location of the nvme device on the PCIe bus cannot be identified, so you need to configure the logical distance between the NVME-SSD and the GPU manually.
The example below assigns gpu2 to nvme1, and gpu1 to nvme2 and nvme3.
Add this setting to postgresql.conf. Please note that the manual configuration takes priority over the automatic configuration.
pg_strom.nvme_distance_map = 'nvme1=gpu2,nvme2=gpu1,nvme3=gpu1'
If the concept of distance on the PCI-E bus does not apply, such as when running GPU-Direct SQL against a storage server connected via 100Gb Ethernet rather than a local NVME-SSD device, you can instead specify the directory where the storage is mounted and the preferable GPU devices to associate with it. Below is an example setting.
pg_strom.nvme_distance_map = '/mnt/0=gpu0,/mnt/1=gpu1'
Controls using GUC parameters
There are two GUC parameters related to GPU Direct SQL Execution.
The first is pg_strom.gpudirect_enabled, which simply turns the GPU Direct SQL Execution feature on or off.
If set to off, GPU Direct SQL Execution is never used, regardless of the table size or physical location. The default is on.
The other is pg_strom.gpudirect_threshold, which specifies the minimum table size required to invoke GPU Direct SQL Execution.
PG-Strom chooses GPU Direct SQL Execution when the target table is located on an NVME-SSD volume (or an md-raid0 volume consisting of NVME-SSDs only) and the table size is larger than this parameter.
Its default is 2GB. In other words, for obviously small tables, priority is given to reading from PostgreSQL's buffers rather than GPU-Direct SQL.
Even if GPU Direct SQL Execution has an advantage for a single table scan, using the disk cache may work better on the second and later runs for tables that are small enough to fit into main memory.
Of course, this assumption is not always right, depending on the workload characteristics.
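Both parameters can also be changed per session with SET, which is convenient for experiments, assuming your role is allowed to change them (otherwise set them in postgresql.conf). A minimal sketch; the 10GB threshold is just an illustrative value.
SET pg_strom.gpudirect_enabled = on;
SET pg_strom.gpudirect_threshold = '10GB';   -- illustrative value, not a recommendation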
Ensure usage of GPU Direct SQL Execution
The EXPLAIN command allows you to check whether GPU Direct SQL Execution will be used for a given query.
In the example below, the scan on the lineorder table by Custom Scan (GpuJoin) shows NVMe-Strom: enabled. In this case, GPU Direct SQL Execution will be used to read the lineorder table.
# explain (costs off)
select sum(lo_revenue), d_year, p_brand1
from lineorder, date1, part, supplier
where lo_orderdate = d_datekey
and lo_partkey = p_partkey
and lo_suppkey = s_suppkey
and p_category = 'MFGR#12'
and s_region = 'AMERICA'
group by d_year, p_brand1
order by d_year, p_brand1;
QUERY PLAN
----------------------------------------------------------------------------------------------
GroupAggregate
Group Key: date1.d_year, part.p_brand1
-> Sort
Sort Key: date1.d_year, part.p_brand1
-> Custom Scan (GpuPreAgg)
Reduction: Local
GPU Projection: pgstrom.psum((lo_revenue)::double precision), d_year, p_brand1
Combined GpuJoin: enabled
-> Custom Scan (GpuJoin) on lineorder
GPU Projection: date1.d_year, part.p_brand1, lineorder.lo_revenue
Outer Scan: lineorder
Depth 1: GpuHashJoin (nrows 2406009600...97764190)
HashKeys: lineorder.lo_partkey
JoinQuals: (lineorder.lo_partkey = part.p_partkey)
KDS-Hash (size: 10.67MB)
Depth 2: GpuHashJoin (nrows 97764190...18544060)
HashKeys: lineorder.lo_suppkey
JoinQuals: (lineorder.lo_suppkey = supplier.s_suppkey)
KDS-Hash (size: 131.59MB)
Depth 3: GpuHashJoin (nrows 18544060...18544060)
HashKeys: lineorder.lo_orderdate
JoinQuals: (lineorder.lo_orderdate = date1.d_datekey)
KDS-Hash (size: 461.89KB)
NVMe-Strom: enabled
-> Custom Scan (GpuScan) on part
GPU Projection: p_brand1, p_partkey
GPU Filter: (p_category = 'MFGR#12'::bpchar)
-> Custom Scan (GpuScan) on supplier
GPU Projection: s_suppkey
GPU Filter: (s_region = 'AMERICA'::bpchar)
-> Seq Scan on date1
(31 rows)
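As a quick sanity check, you can disable the feature for the current session and run the same EXPLAIN again; because pg_strom.gpudirect_enabled = off suppresses GPU Direct SQL Execution entirely, the plan should no longer report NVMe-Strom: enabled (the exact plan shape depends on your PG-Strom version and data).
SET pg_strom.gpudirect_enabled = off;   -- then re-run the EXPLAIN above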
Attention to the visibility map
Right now, the GPU routines of PG-Strom cannot run MVCC visibility checks per row, because the special data structures required for visibility checks exist only in the host code. This leads to a problem.
At the time PG-Strom issues a P2P DMA request to the NVMe-SSD, we cannot know which rows are visible and which are not, because the contents of the storage blocks have not yet been loaded into CPU/RAM and the MVCC-related attributes are stored within the individual records. PostgreSQL faced a similar problem when it introduced IndexOnlyScan.
To address this problem, PostgreSQL provides the visibility map, a set of flags indicating whether all records in a particular data block are visible to all transactions. If the associated bit is set, the block is known to contain no invisible records without reading the block itself.
GPU Direct SQL Execution utilizes this infrastructure. It checks the visibility map first, and only "all-visible" blocks are read with P2P DMA.
VACUUM constructs the visibility map, so you can force PostgreSQL to build it by explicitly running the VACUUM command, as below.
VACUUM ANALYZE lineorder;
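To get a rough idea of how much of a table is currently flagged all-visible, you can compare relallvisible with relpages in pg_class. These counters are refreshed by VACUUM and ANALYZE, so treat them as estimates; lineorder is the example table used above.
SELECT relname, relpages, relallvisible FROM pg_class WHERE relname = 'lineorder';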