piqa.lpips#
Learned Perceptual Image Patch Similarity (LPIPS)
This module implements the LPIPS in PyTorch.
Original
https://github.com/richzhang/PerceptualSimilarity
References
The Unreasonable Effectiveness of Deep Features as a Perceptual Metric (Zhang et al., 2018)
Functions#
Returns the official LPIPS weights for network.
Classes#
Measures the LPIPS between an input and a target.

Perceptual network that intercepts and returns the output of target layers within its forward pass.
Descriptions#
- piqa.lpips.get_weights(network='alex', version='v0.1')#
Returns the official LPIPS weights for network.
- class piqa.lpips.Perceptual(layers, targets)#
Perceptual network that intercepts and returns the output of target layers within its forward pass.
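The "intercept target layers" idea can be sketched in plain Python. This is a hypothetical stand-in, not piqa's implementation: the real Perceptual wraps torch modules, whereas here a network is just a list of layer callables.

```python
def perceptual_forward(layers, targets, x):
    """Run x through the layers sequentially, collecting the outputs
    of the layers whose indices appear in targets."""
    intercepted = []
    for i, layer in enumerate(layers):
        x = layer(x)          # forward pass through layer i
        if i in targets:      # intercept this layer's output
            intercepted.append(x)
    return intercepted

# Toy "layers": each one transforms its scalar input.
layers = [lambda v: v * 2, lambda v: v + 1, lambda v: v * 3]
outs = perceptual_forward(layers, targets={0, 2}, x=1)  # [2, 9]
```

The returned list holds only the outputs of the target layers, in forward order, which is exactly what a perceptual metric needs to compare feature maps layer by layer.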
- class piqa.lpips.LPIPS(network='alex', epsilon=1e-10, reduction='mean')#
Measures the LPIPS between an input and a target.
\[\text{LPIPS}(x, y) = \sum_{l \, \in \, \mathcal{F}} w_l \cdot \text{MSE}(\phi_l(x), \phi_l(y))\]where \(\phi_l\) represents the normalized output of an intermediate layer \(l\) in a perceptual network \(\mathcal{F}\) and \(w_l\) are the official weights of Zhang et al. (2018).
- Parameters:
  - network (str) – The name of the perceptual network to use: 'alex', 'squeeze' or 'vgg'.
  - epsilon (float) – A numerical stability term.
  - reduction (str) – Specifies the reduction to apply to the output: 'none', 'mean' or 'sum'.
Example
>>> criterion = LPIPS()
>>> x = torch.rand(5, 3, 256, 256, requires_grad=True)
>>> y = torch.rand(5, 3, 256, 256)
>>> l = criterion(x, y)
>>> l.shape
torch.Size([])
>>> l.backward()
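The weighted sum in the formula above can be sketched in plain Python. This assumes the normalized feature maps \(\phi_l(x)\), \(\phi_l(y)\) are already available as flat lists of floats and the weights \(w_l\) as scalars (in piqa these are tensors produced by the perceptual network and get_weights); the values below are made up for illustration.

```python
def mse(a, b):
    # Mean squared error between two equal-length feature vectors.
    return sum((u - v) ** 2 for u, v in zip(a, b)) / len(a)

def lpips_sum(feats_x, feats_y, weights):
    # Weighted sum of per-layer MSEs, mirroring
    # LPIPS(x, y) = sum_l w_l * MSE(phi_l(x), phi_l(y)).
    return sum(w * mse(fx, fy) for w, fx, fy in zip(weights, feats_x, feats_y))

# Two hypothetical layers with 2-dimensional feature maps.
feats_x = [[0.0, 1.0], [1.0, 1.0]]
feats_y = [[0.0, 0.0], [1.0, 0.0]]
score = lpips_sum(feats_x, feats_y, weights=[0.5, 0.5])  # 0.5
```

A lower score means the two inputs are closer in feature space; the official weights are what calibrate this distance to human perceptual judgments.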