in the data input pipeline of a machine learning model running on a GPU can be particularly frustrating. In most workloads, the host (CPU) and the device (GPU) work in tandem: the CPU is responsible for preparing and feeding data, while the GPU handles the heavy lifting: executing the model, performing backpropagation during training, and updating weights.
In an ideal scenario, we want the GPU, the most expensive component of our AI/ML infrastructure, to be highly utilized. This leads to faster development cycles, lower training costs, and reduced latency in deployment. To achieve this, the GPU must be continuously fed with input data. In particular, we want to prevent the onset of "GPU starvation", a situation in which our most expensive resource lies idle while it waits for input data. Unfortunately, GPU starvation caused by bottlenecks in the data input pipeline is quite common and can dramatically reduce system efficiency. As such, it is critical for AI/ML developers to have reliable tools and techniques for diagnosing and addressing such issues.
This post, the eighth in our series on PyTorch Model Performance Analysis and Optimization, introduces a simple caching strategy for identifying bottlenecks in the data input pipeline. As in previous posts, we aim to reinforce two key ideas:
- AI/ML developers must take responsibility for the runtime performance of their models.
- You do not need to be a CUDA or systems expert to implement meaningful performance optimizations.
We will start by outlining some of the common causes of GPU starvation. Then we will introduce our caching-based strategy for identifying and analyzing input pipeline performance issues. We will close by reviewing a set of practical tips, tricks, and techniques (TTTs) for overcoming performance bottlenecks in the data input pipeline.
To facilitate our discussion, we will define a toy PyTorch model and an associated data input pipeline. The code we share is intended for demonstration purposes only; please do not rely on its correctness or optimality. Furthermore, please do not interpret our mention of any tool or technique as an endorsement of its use.
A Toy PyTorch Model
We define a simple PyTorch-based image classification model.
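Since the exact architecture is not important for this discussion, the code below is a minimal stand-in sketch: a small convolutional classifier named Net whose layer sizes are our own assumptions. Any network that maps a batch of img_size×img_size RGB images to num_classes logits would serve equally well.
import torch
import torch.nn as nn

class Net(nn.Module):
    # a deliberately small CNN; the architecture is illustrative only
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1)
        )
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, x):
        x = self.features(x)
        return self.classifier(torch.flatten(x, 1))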
We define a synthetic dataset with a number of transformations, deliberately designed to include a severe input pipeline bottleneck. For more details on the dataset definition, please see this post.
import numpy as np
from PIL import Image
import torch
from torchvision.datasets.vision import VisionDataset
import torchvision.transforms as T

# assumed values; the original definitions of these constants are not shown
input_img_size = (1024, 1024)
img_size = 256
num_classes = 10

class FakeDataset(VisionDataset):
    def __init__(self, transform):
        super().__init__(root=None, transform=transform)
        self.size = 10000

    def __getitem__(self, index):
        # create a random 1024x1024 image
        img = Image.fromarray(np.random.randint(
            low=0,
            high=256,
            size=(input_img_size[0], input_img_size[1], 3),
            dtype=np.uint8
        ))
        # create a random label
        target = np.random.randint(low=0, high=num_classes,
                                   dtype=np.uint8).item()
        # apply transformations
        img = self.transform(img)
        return img, target

    def __len__(self):
        return self.size

class RandomMask(torch.nn.Module):
    def __init__(self, ratio=0.25):
        super().__init__()
        self.ratio = ratio

    def dilate_mask(self, mask):
        # perform 4-neighbor dilation on the mask
        from scipy.signal import convolve2d
        dilated = convolve2d(mask, [[0, 1, 0],
                                    [1, 1, 1],
                                    [0, 1, 0]], mode='same').astype(bool)
        return dilated

    def forward(self, img):
        mask = np.random.uniform(size=(img_size, img_size)) < self.ratio
        dilated_mask = torch.unsqueeze(torch.tensor(self.dilate_mask(mask)), 0)
        dilated_mask = dilated_mask.expand(3, -1, -1)
        img[dilated_mask] = 0.
        return img

class ConvertColor(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.A = torch.tensor(
            [[0.299, 0.587, 0.114],
             [-0.16874, -0.33126, 0.5],
             [0.5, -0.41869, -0.08131]]
        )
        self.b = torch.tensor([0., 128., 128.])

    def forward(self, img):
        img = img.to(dtype=torch.get_default_dtype())
        img = torch.matmul(self.A, img.view([3, -1])).view(img.shape)
        img = img + self.b[:, None, None]
        return img

class Scale(object):
    def __call__(self, img):
        return img.to(dtype=torch.get_default_dtype()).div(255)

transform = T.Compose(
    [T.PILToTensor(),
     T.RandomCrop(img_size),
     RandomMask(),
     ConvertColor(),
     Scale()])

train_set = FakeDataset(transform=transform)
train_loader = torch.utils.data.DataLoader(train_set, batch_size=256,
                                           num_workers=4, pin_memory=True)
Next, we define the model, loss function, optimizer, training step, and training loop, which we wrap with a PyTorch Profiler context manager to capture performance data.
from statistics import mean, variance
from time import time

device = torch.device("cuda:0")
model = Net().cuda(device)
criterion = nn.CrossEntropyLoss().cuda(device)
optimizer = torch.optim.SGD(model.parameters(), lr=0.001, momentum=0.9)

def train_step(model, criterion, optimizer, inputs, labels):
    outputs = model(inputs)
    loss = criterion(outputs, labels)
    optimizer.zero_grad(set_to_none=True)
    loss.backward()
    optimizer.step()

model.train()

t0 = time()
times = []
with torch.profiler.profile(
    schedule=torch.profiler.schedule(wait=10, warmup=2, active=10, repeat=1),
    on_trace_ready=torch.profiler.tensorboard_trace_handler('/tmp/prof'),
    record_shapes=True,
    profile_memory=True,
    with_stack=True
) as prof:
    for step, data in enumerate(train_loader):
        # copy data to device
        inputs = data[0].to(device=device, non_blocking=True)
        labels = data[1].to(device=device, non_blocking=True)
        # run train step
        train_step(model, criterion, optimizer, inputs, labels)
        prof.step()
        times.append(time() - t0)
        t0 = time()
        if step >= 100:
            break

print(f'average time: {mean(times[1:])}, variance: {variance(times[1:])}')
For our experiments, we use an Amazon EC2 g5.xlarge instance (containing an NVIDIA A10G GPU and 4 vCPUs) running a PyTorch (2.6) Deep Learning AMI (DLAMI). Running our toy script in this environment results in an average throughput of 0.89 steps per second and an underwhelming GPU utilization of 22%.
As discussed in detail in a previous post, the captured profiling trace shows a clear pattern of GPU starvation, in which the GPU spends most of its time waiting for data from the PyTorch DataLoader. This indicates that there is a performance bottleneck in the data input pipeline, which prevents input batches from being prepared quickly enough to keep the GPU fully occupied. Importantly, input pipeline performance issues can stem from a variety of sources, and in the case of our toy example, the cause of the bottleneck is not apparent from the trace alone.
A brief note for readers and developers who (despite all of our lecturing) remain averse to using PyTorch Profiler: the caching-based technique we discuss below offers an alternative way of identifying GPU starvation, so don't despair.
GPU Starvation: Finding the Root Cause
In this section, we briefly review common causes of performance bottlenecks in the data input pipeline.
Recall that in a typical model execution flow:
- Raw data is loaded or streamed from storage (e.g., local RAM or disk, a remote network file system, or a cloud-based object store such as Amazon S3 or Google Cloud Storage).
- It is then preprocessed on the CPU.
- Finally, the processed data is copied to the GPU for inference or training.
Correspondingly, bottlenecks can emerge at each of these stages:
- Slow data retrieval: Several factors can limit how quickly raw data can be retrieved by the CPU, including the choice of storage backend (e.g., cloud storage vs. local SSD), the available network bandwidth, the data format, and more.
- CPU resource exhaustion or misuse: Preprocessing tasks, such as data augmentation, image transformations, or decompression, can be CPU-intensive. When the quantity or complexity of these operations exceeds the available CPU capacity, or when CPU resources are managed inefficiently (e.g., a suboptimal choice for the number of workers), a bottleneck can occur. It is worth noting that CPUs are also responsible for other model-related tasks such as loading GPU kernels, memory management, metric reporting, and more.
- Host-to-device transfer bottlenecks: Once data is processed, it must be transferred to the GPU. This can become a bottleneck if data batches are large relative to the CPU-GPU memory bandwidth, or if the memory copy is performed inefficiently (e.g., individual samples are copied rather than full batches).
The Limitations of Performance Profilers
A common way to identify data pipeline bottlenecks is by using a performance profiler. In part 4 of this series, Solving Bottlenecks on the Data Input Pipeline with PyTorch Profiler and TensorBoard, we demonstrated how to do this using PyTorch's built-in profiler. However, given that the data input pipeline runs on the CPU, any Python profiler could be used.
The problem with this approach is that we typically use multiple worker processes for data loading, which makes performance profiling considerably more complicated. In our previous post, we overcame this by running the data loading and the model execution in a single process (i.e., we set the num_workers argument of the DataLoader constructor to zero). However, this is a highly intrusive configuration change that can have a significant impact on the overall performance of our model.
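For reference, a minimal sketch of that single-process configuration, reusing the dataset and batch size defined above, looks as follows:
# intrusive profiling configuration: data loading runs in the training process
profiling_loader = torch.utils.data.DataLoader(
    train_set,
    batch_size=256,
    num_workers=0,     # no worker processes
    pin_memory=True
)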
The caching-based strategy we present in this post aims to pinpoint the source of the performance bottleneck in a far less intrusive manner. Specifically, it allows us to measure the model's performance without altering the multi-worker data-loading behavior.
Bottleneck Detection via Caching
In this section, we propose a multi-step approach for analyzing the performance of the data input pipeline. We will demonstrate how this strategy can be applied to our toy training workload to identify the causes of the GPU starvation.
Step 1: Cache a Batch on the Device
We begin by creating a single input batch, copying it to the GPU, and then measuring the runtime performance of the model when iterating over just that batch. This provides a theoretical upper bound on the model's throughput, i.e., the maximum throughput achievable when the GPU is not data-starved.
In the following code block, we modify the training loop of our toy script so that it runs on a single batch that is cached on the GPU:
data = next(iter(train_loader))
inputs = data[0].to(device=device, non_blocking=True)
labels = data[1].to(device=device, non_blocking=True)

t0 = time()
times = []
for step in range(100):
    train_step(model, criterion, optimizer, inputs, labels)
    times.append(time() - t0)
    t0 = time()
The resulting average throughput is 3.45 steps per second, nearly four times higher than our baseline result. Not only does this confirm a significant data pipeline bottleneck, it also quantifies its impact.
Bonus Tip: Profile and Optimize with Device-Cached Data
Running a profiler on a single batch cached on the GPU isolates the model execution from the input pipeline. This helps you identify inefficiencies in the model's raw compute path. Ideally, GPU utilization here should approach 100%. In our case, utilization is around 95%, which is acceptable.
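A minimal sketch of such a profiling run is shown below. It reuses the profiler settings from our training script; the '/tmp/prof_cached' output path is an arbitrary choice of ours.
with torch.profiler.profile(
    schedule=torch.profiler.schedule(wait=10, warmup=2, active=10, repeat=1),
    on_trace_ready=torch.profiler.tensorboard_trace_handler('/tmp/prof_cached'),
    with_stack=True
) as prof:
    for step in range(100):
        # inputs and labels are the device-cached batch from the previous block
        train_step(model, criterion, optimizer, inputs, labels)
        prof.step()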
Step 2: Cache a Batch on the Host (CPU)
Next, we cache a single input batch on the host (CPU) instead of the device. Now, each step includes both a memory copy from the CPU to the GPU and the model execution.
Since PyTorch's memory pinning allows for asynchronous data transfers, we expect the host-to-device memory copy for batch N+1 to overlap with the model execution on batch N. Consequently, we expect the throughput to be in the same ballpark as in the device-cached case. If it is not, that would be a clear indication of a bottleneck in the host-to-device memory copy.
The following code block contains our application of this step to the toy model:
data = next(iter(train_loader))

t0 = time()
times = []
for step in range(100):
    inputs = data[0].to(device=device, non_blocking=True)
    labels = data[1].to(device=device, non_blocking=True)
    train_step(model, criterion, optimizer, inputs, labels)
    times.append(time() - t0)
    t0 = time()
The resulting throughput is 3.33 steps per second, a minor drop from the previous result, indicating that the host-to-device transfer is not a bottleneck. We need to keep looking for the source of our performance problem.
Steps 3 and On: Cache at Intermediate Stages in the Data Pipeline
We continue our search by "climbing" up the data input pipeline, caching at various intermediate points to pinpoint the bottleneck. The precise application of this process will vary based on the details of the pipeline. Suppose the pipeline can be broken into K stages. If caching the output of stage N yields significantly worse throughput than caching the output of stage N+1, we can deduce that the processing performed in stage N+1 is what is slowing us down.
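One generic way to implement this, sketched below under our own assumptions (the CacheAfterStage wrapper is not part of the original pipeline), is to cache the output of the first few transforms so that only the remaining transforms are exercised on each call:
import copy

class CacheAfterStage:
    # cache the output of the first `cache_up_to` transforms; only the
    # remaining transforms run on every call
    def __init__(self, stages, cache_up_to):
        self.head = T.Compose(stages[:cache_up_to])
        self.tail = T.Compose(stages[cache_up_to:])
        self.cache = None

    def __call__(self, x):
        if self.cache is None:
            self.cache = self.head(x)
        # copy so that in-place transforms (e.g., RandomMask) do not corrupt the cache
        return self.tail(copy.deepcopy(self.cache))

# e.g., cache everything up to and including the random crop:
transform = CacheAfterStage(
    [T.PILToTensor(), T.RandomCrop(img_size), RandomMask(), ConvertColor(), Scale()],
    cache_up_to=2)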
Step 3a: Cache a Single Processed Sample
In the code block below, we modify our dataset to cache one fully processed sample. This simulates a pipeline consisting of only the data collation and the CPU-to-GPU data copy.
class FakeDataset(VisionDataset):
    def __init__(self, transform):
        super().__init__(root=None, transform=transform)
        self.size = 10000
        self.cache = None

    def __getitem__(self, index):
        if self.cache is None:
            # create a random 1024x1024 image
            img = Image.fromarray(np.random.randint(
                low=0,
                high=256,
                size=(input_img_size[0], input_img_size[1], 3),
                dtype=np.uint8
            ))
            # create a random label
            target = np.random.randint(low=0, high=num_classes,
                                       dtype=np.uint8).item()
            # apply transformations
            img = self.transform(img)
            self.cache = img, target
        return self.cache
The resulting throughput is 3.23 steps per second, still far higher than our baseline of 0.89. We still have not found the culprit.
Step 3b: Cache Raw Data (Before Transformation)
Next, we modify the dataset so as to cache the raw data (i.e., the unprocessed image). The input data pipeline now includes the data transformations, the data collation, and the CPU-to-GPU data copy.
class FakeDataset(VisionDataset):
    def __init__(self, transform):
        super().__init__(root=None, transform=transform)
        self.size = 10000
        self.cache = None

    def __getitem__(self, index):
        if self.cache is None:
            # create a random 1024x1024 image
            img = Image.fromarray(np.random.randint(
                low=0,
                high=256,
                size=(input_img_size[0], input_img_size[1], 3),
                dtype=np.uint8
            ))
            # create a random label
            target = np.random.randint(low=0, high=num_classes,
                                       dtype=np.uint8).item()
            self.cache = img, target
        # apply transformations
        img = self.transform(self.cache[0])
        return img, self.cache[1]
This time, the throughput drops sharply, all the way down to 1.72 steps per second. We have found our first culprit: the data transformation function.
Interim Results
Here is a summary of the experiments so far:
- Baseline (no caching): 0.89 steps per second
- Batch cached on the GPU: 3.45 steps per second
- Batch cached on the CPU (host): 3.33 steps per second
- Fully processed sample cached: 3.23 steps per second
- Raw data cached (transformations included): 1.72 steps per second
The results point to a significant slowdown introduced by the data transformation step. The gap between the raw-data-caching result and the baseline also suggests that raw data loading may be another culprit. Let's begin with the data processing bottleneck.
Optimizing the Data Transformation
We now follow up on our discovery of a performance bottleneck in the data processing function. The next logical step would be to break the transform function into its individual components and apply our caching technique to each one (for example, using a wrapper along the lines of the caching sketch shown earlier) in order to gain more insight into the precise sources of our GPU starvation. For the sake of brevity, we will skip ahead and apply the data processing optimizations discussed in our previous post, Solving Bottlenecks on the Data Input Pipeline with PyTorch Profiler and TensorBoard. Please see there for details.
Following the data transformation optimizations, the throughput of the cached-raw-data experiment shoots up to 3.23 steps per second. We have eliminated the bottleneck in the data processing function.
However, our new baseline throughput (without caching) becomes 1.28 steps per second, indicating that a bottleneck remains in the raw data loading. This is similar to the end result we reached in our previous post.

Optimizing Raw Data Loading
To resolve the remaining bottleneck, we simulate the optimization demonstrated in part 5 of this series, How to Optimize Your DL Data-Input Pipeline with a Custom PyTorch Operator. We do this by reducing the size of our initial random image from 1024×1024 to 256×256. Following this change, the end-to-end (un-cached) training throughput increases to 3.23 steps per second. Problem solved!
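In our toy setup, the change amounts to shrinking the randomly generated source image (the constant name below matches the assumed definition used earlier in this post):
input_img_size = (256, 256)   # previously (1024, 1024)
train_set = FakeDataset(transform=transform)
train_loader = torch.utils.data.DataLoader(train_set, batch_size=256,
                                           num_workers=4, pin_memory=True)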
Important Caveats
We conclude this section with a number of important notes and caveats.
- A drop in throughput resulting from the inclusion of a certain data-processing step in the pipeline does not necessarily mean that that specific step is the one that requires optimization. It is entirely possible that another step pushes CPU utilization near its limit, and the new step simply tips it over.
- If your input data varies in size, the throughput measured on a single cached data sample or batch of samples may not reflect real-world performance.
- The same caveat applies if the AI model includes dynamic, data-dependent features, e.g., if portions of the model graph depend on the input data.
Tips, Tricks, and Techniques for Addressing Bottlenecks in the Data Input Pipeline
We conclude this post with a list of tips, tricks, and techniques for optimizing the data input pipeline of PyTorch-based AI models. This list is by no means exhaustive; numerous additional optimizations exist depending on your specific use case and infrastructure. We divide the optimizations into three categories:
- Optimizing Raw Data Access/Retrieval
- Optimizing Data Processing
- Optimizing Host-to-Device Data Transfer
Optimizing Raw Data Access/Retrieval
Efficient data loading begins with fast and reliable access to raw data. The following tips can help:
- Choose an instance type with sufficient network ingress bandwidth.
- Use a fast and cost-effective data storage solution. Local SSDs are fast but expensive. Cloud-based solutions such as S3 offer scalability but may introduce latency.
- Maximize storage network egress. Consider partitioning datasets in S3 or tuning parallel downloads to reduce throttling.
- Consider raw data compression. Compressing data can reduce transfer time, but watch out for the added CPU cost of decompression.
- Group small samples into larger files. This can reduce the overhead associated with opening and closing many files.
- Use optimized data transfer tools. For example, s5cmd can significantly outperform the AWS CLI for bulk S3 downloads.
- Tune data retrieval parameters. Adjusting chunk size or concurrency settings can greatly impact read performance; see the sketch below for one example.
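As an illustration (the bucket, key, and local path below are placeholders of our own), retrieval parameters can be tuned with boto3's TransferConfig, for example to increase the multipart chunk size and download concurrency:
import boto3
from boto3.s3.transfer import TransferConfig

config = TransferConfig(
    multipart_chunksize=16 * 1024 * 1024,   # larger chunks per request
    max_concurrency=16                      # more parallel download threads
)
s3 = boto3.client('s3')
s3.download_file('my-bucket', 'data/shard-0001.tar', '/tmp/shard-0001.tar',
                 Config=config)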
Addressing Data Processing Bottlenecks
- Tune the number of data-loading workers and the prefetch factor (see the sketch following this list).
- Whenever possible, offload data processing to the data preparation phase.
- Choose an instance type with an optimal CPU/GPU compute ratio.
- Optimize the order of transformations. For example, applying a crop before a blur will be faster than blurring the full-sized image and only then cropping.
- Leverage Python acceleration libraries. For example, Numba and JAX can speed up pure Python operations via JIT compilation.
- Create custom PyTorch CPU operators where appropriate (e.g., see here).
- Consider adding auxiliary CPUs (data servers) (e.g., see here).
- Move GPU-friendly transforms to the GPU graph. Some transforms (e.g., normalization) can be performed post-loading on the GPU for better overlap.
- Tune OS-level thread and memory configurations.
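For example, a minimal sketch of tuning the DataLoader settings for our toy workload; the values are illustrative, and the best choice depends on the number of available vCPUs and the cost of each __getitem__ call:
train_loader = torch.utils.data.DataLoader(
    train_set,
    batch_size=256,
    num_workers=3,        # e.g., leave one of the four vCPUs for the training process
    prefetch_factor=4,    # number of batches prefetched per worker
    pin_memory=True
)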
Optimizing the Host-to-Device Data Copy
- Use memory pinning and non-blocking data copies to prefetch data directly onto the GPU. Also see the dedicated CudaDataPrefetcher offered by TorchTNT.
- Postpone int8-to-float32 datatype conversions to the GPU to reduce the memory-copy payload by a factor of four (see the sketch following this list).
- If your model uses lower-precision floats (e.g., fp16/bfloat16), cast the floats on the CPU to reduce the payload by half.
- Postpone the unpacking of one-hot vectors to the GPU, i.e., keep them as label ids until the last possible moment.
- If you have many binary values, consider using bitmasks to compress the payload. For example, if you have 8 binary maps, consider compressing them into a single uint8.
- If your input data is sparse, consider using sparse data representations.
- Avoid unnecessary padding. While zero-padding is a popular technique for dealing with variable-sized input samples, it can significantly increase the size of the memory copy. Consider alternative options (e.g., see here).
- Make sure you are not copying data that you do not actually need on the GPU!
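As a sketch of the datatype-postponement tip, assume a pipeline that keeps samples as uint8 until batching (unlike our toy transform, which converts to float on the CPU): the pinned uint8 batch is copied asynchronously, and the four-times-larger float conversion happens only on the device.
for inputs_uint8, labels in train_loader:   # inputs_uint8 has dtype torch.uint8
    inputs_uint8 = inputs_uint8.to(device=device, non_blocking=True)
    labels = labels.to(device=device, non_blocking=True)
    # convert and scale on the GPU rather than on the CPU
    inputs = inputs_uint8.to(torch.float32).div_(255)
    train_step(model, criterion, optimizer, inputs, labels)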
Summary
While GPUs are considered essential for modern AI/ML development, they come at a steep cost. Once you have decided to make the necessary investment in acquiring them, you will want to make sure they are being used as much as possible. The last thing you want is for your GPU to sit idle, waiting for input data because of a preventable bottleneck elsewhere in the pipeline.
Unfortunately, such inefficiencies are all too common. In this post, we introduced a simple technique for diagnosing them by iteratively caching data at different stages of the input pipeline. By isolating the runtime impact of each pipeline component, this method helps identify specific bottlenecks, whether in raw data loading, preprocessing, or host-to-device transfer.
Of course, the exact implementation will vary across projects and pipelines, but we hope this strategy provides a useful framework for diagnosing and resolving performance issues in your own AI/ML workflows.