Comments (3)
@AmitMY The Trainer applies the PyTorch autocast context manager around the forward pass and converts the inputs. Take a look at the error traceback, find the line
output = mod(output, src_mask=mask, is_causal=is_causal, src_key_padding_mask=src_key_padding_mask_for_layers)
and work out from there which tensors (the input output, the weights of the TransformerEncoder) have mismatched dtypes. It's possible that the input tensor here is the output of a previous layer (e.g. PositionalEncoding), in which case the dtype mismatch needs to be fixed there.
If there is reason to believe something is not done right in Lightning, please provide a reproducible example. Thanks!
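A quick way to locate the mismatched layer is to register forward hooks that print the dtype flowing in and out of each submodule. This is only an illustrative sketch (report_dtypes and the model variable are placeholders, not names from your traceback):

import torch
from torch import nn

def report_dtypes(module, inputs, output):
    # Print the dtype of the first tensor input and of the tensor output, if present.
    in_dtype = inputs[0].dtype if inputs and isinstance(inputs[0], torch.Tensor) else None
    out_dtype = output.dtype if isinstance(output, torch.Tensor) else None
    print(f"{module.__class__.__name__}: in={in_dtype}, out={out_dtype}")

# model is whichever nn.Module the traceback points into, e.g. the TransformerEncoder.
for submodule in model.modules():
    submodule.register_forward_hook(report_dtypes)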
The reason I believe it is a problem with pytorch-lightning is that using a plain torch.autocast context works fine:
def test_training_step_bfloat16_expected_loss_finite(self):
    batch = MaskedTensor(torch.full((4, 3, *self.pose_dim), fill_value=2, dtype=torch.float))
    model = self.model_setup()
    with torch.autocast(device_type="cpu", dtype=torch.bfloat16):
        loss = model.training_step(batch)
    self.assertNotEqual(0, float(loss))
    self.assertTrue(torch.isfinite(loss))
As for the input to the transformer: both under plain torch.autocast and under Lightning, I see:
dtype in PositionalEncoding torch.bfloat16
dtype out PositionalEncoding torch.float32
If I remove that layer, it still crashes with the same error.
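That float32 output is expected under autocast rather than a bug in the layer itself: elementwise add is not one of the ops autocast runs in bfloat16, so adding a bfloat16 activation to a float32 positional-encoding buffer follows ordinary type promotion and comes out float32. A minimal sketch of the effect, assuming a standard tutorial-style PositionalEncoding with a float32 buffer (not necessarily your exact implementation):

import math
import torch
from torch import nn, Tensor

class PositionalEncoding(nn.Module):
    def __init__(self, d_model: int, max_len: int = 5000):
        super().__init__()
        position = torch.arange(max_len).unsqueeze(1)
        div_term = torch.exp(torch.arange(0, d_model, 2) * (-math.log(10000.0) / d_model))
        pe = torch.zeros(max_len, 1, d_model)
        pe[:, 0, 0::2] = torch.sin(position * div_term)
        pe[:, 0, 1::2] = torch.cos(position * div_term)
        self.register_buffer("pe", pe)  # buffer stays float32

    def forward(self, x: Tensor) -> Tensor:
        # bfloat16 activation + float32 buffer promotes to float32
        return x + self.pe[: x.size(0)]

linear = nn.Linear(512, 512)
pos = PositionalEncoding(d_model=512)
x = torch.randn(10, 2, 512)  # (seq, batch, d_model)
with torch.autocast("cpu", dtype=torch.bfloat16):
    h = linear(x)  # autocast runs the matmul in bfloat16
    out = pos(h)
print(h.dtype, out.dtype)  # torch.bfloat16 torch.float32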
Minimal repro:
import math

import pytorch_lightning as pl
import torch
from torch import nn, Tensor
from torch.utils.data import DataLoader, IterableDataset


class PoseFSQAutoEncoder(nn.Module):
    # pylint: disable=too-many-arguments
    def __init__(self,
                 pose_dims: tuple = (178, 3),
                 hidden_dim=512,
                 nhead=16,
                 dim_feedforward=2048,
                 num_layers=6):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Flatten(start_dim=2),
            nn.Linear(math.prod(pose_dims), hidden_dim, bias=False),
            nn.TransformerEncoder(
                nn.TransformerEncoderLayer(d_model=hidden_dim, nhead=nhead,
                                           dim_feedforward=dim_feedforward,
                                           batch_first=True),
                num_layers=num_layers
            )
        )

    def forward(self, batch: Tensor):
        return self.encoder(batch)


class AutoEncoderLightningWrapper(pl.LightningModule):
    def __init__(self, model: PoseFSQAutoEncoder,
                 learning_rate: float = 3e-4,
                 warmup_steps: int = 10000):
        super().__init__()
        self.model = model
        self.learning_rate = learning_rate
        self.warmup_steps = warmup_steps

    def forward(self, batch):
        return self.model(batch)

    def configure_optimizers(self):
        # Optimizer taken from https://arxiv.org/pdf/2307.09288.pdf
        return torch.optim.AdamW(self.parameters(),
                                 lr=self.learning_rate,
                                 betas=(0.9, 0.95),
                                 eps=1e-5,
                                 weight_decay=0.1)

    def step(self, x: Tensor):
        x_hat = self(x)  # the dtype error is raised inside this forward call during validation
        # fake loss and prediction, for repro
        return 0, x_hat

    def training_step(self, batch, *args, **kwargs):
        loss, _ = self.step(batch)
        return loss

    def validation_step(self, batch, batch_idx, *args, **kwargs):
        loss, prediction = self.step(batch)
        return loss


class FakeDataset(IterableDataset):
    def __iter__(self):
        while True:
            yield torch.randn(size=(10, 178, 3))


auto_encoder = PoseFSQAutoEncoder()
model = AutoEncoderLightningWrapper(auto_encoder)

train_dataset = DataLoader(FakeDataset(), batch_size=2, num_workers=0)
validation_dataset = DataLoader(FakeDataset(), batch_size=2, shuffle=False, num_workers=0)

precision = "bf16-mixed"
trainer = pl.Trainer(max_steps=100000,
                     val_check_interval=100_000 // 2,
                     precision=precision)
trainer.fit(model, train_dataloaders=train_dataset, val_dataloaders=validation_dataset)
@AmitMY The error occurs during validation, so running the training step under training conditions won't reveal the issue. Take a look at this PyTorch-only snippet derived from your code, which shows that the transformer model behaves differently in eval mode:
# No lightning code involved below
import math
import torch
from torch import nn, Tensor


class PoseFSQAutoEncoder(nn.Module):
    def __init__(self, pose_dims: tuple = (178, 3), hidden_dim=512, nhead=16, dim_feedforward=2048, num_layers=6):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Flatten(start_dim=2),
            nn.Linear(math.prod(pose_dims), hidden_dim, bias=False),
            nn.TransformerEncoder(
                nn.TransformerEncoderLayer(
                    d_model=hidden_dim, nhead=nhead, dim_feedforward=dim_feedforward, batch_first=True
                ),
                num_layers=num_layers,
            ),
        )

    def forward(self, batch: Tensor):
        return self.encoder(batch)


model = PoseFSQAutoEncoder()
batch = torch.randn(size=(2, 10, 178, 3))

model.eval()  # <--- HERE: different behavior .train() vs .eval()
with torch.no_grad():
    with torch.autocast("cpu", dtype=torch.bfloat16):
        model(batch)
As you can see, it produces the same error.
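The train/eval difference is consistent with nn.TransformerEncoder's fused inference fastpath, which is only taken in eval() mode with gradients disabled and which does not accept the mixed float32/bfloat16 tensors that autocast produces. One way to test this hypothesis is to switch the fastpath off and rerun the eval-mode forward; the sketch below assumes a PyTorch recent enough to expose torch.backends.mha.set_fastpath_enabled (roughly 2.1+):

# Continuing from the snippet above (model and batch already defined).
import torch

torch.backends.mha.set_fastpath_enabled(False)  # force the regular Python code path

model.eval()
with torch.no_grad():
    with torch.autocast("cpu", dtype=torch.bfloat16):
        out = model(batch)  # should now take the same code path as .train()
print(out.dtype)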