dgs.models.similarity.torchreid.TorchreidVisualSimilarity.forward¶
- TorchreidVisualSimilarity.forward(data: State, target: State) → torch.Tensor [source]¶
Forward call of the torchreid model used to compute the similarities between visual embeddings.
Either load or compute the visual embeddings for the data and target using the model. The embeddings are tensors of respective shapes [a x E] and [b x E]. Then use this module’s metric to compute the similarity between the two embeddings.

Notes

Torchreid expects images to have float values.
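The float requirement above can be illustrated with a minimal sketch; the crop shape and the `/ 255` scaling are assumptions here, since the exact normalization depends on the model’s training pipeline:

```python
import torch

# Hypothetical image crops as a dataset loader might produce them:
# a uint8 tensor of shape [B x C x H x W] with values in [0, 255].
crops = torch.randint(0, 256, (4, 3, 256, 128), dtype=torch.uint8)

# Torchreid models expect float inputs, so convert before the forward pass.
# Scaling to [0, 1] mirrors the common to-tensor convention.
crops_float = crops.float() / 255.0
```

Passing the raw uint8 tensor directly to the model would either raise a dtype error or silently produce wrong embeddings, which is why the conversion happens before the forward call.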
- Parameters:
  - data – A State containing the predicted embedding or the image crop. If a predicted embedding exists, it should be stored as ‘embedding’ in the State. self.get_data() will then extract the embedding as a tensor of shape [a x E].
  - target – A State containing either the target embedding or the image crop. If a target embedding exists, it should be stored as ‘embedding’ in the State. self.get_target() is then used to extract the embedding as a tensor of shape [b x E].
- Returns:
  A similarity matrix containing values describing the similarity between every current- and target-embedding. The similarity is a (Float)Tensor of shape [a x b] with values in [0..1]. If the provided metric does not return a probability distribution, you might want to change the metric or set the ‘softmax’ parameter of this module, or within the DGSModule if this is a submodule. Computing the softmax ensures better / correct behavior when combining this similarity with others. If requested, the softmax is computed along the -1 dimension, resulting in a probability distribution for each value of the input data.
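The shape and softmax behavior described above can be sketched as follows. This is not the module’s actual code: the metric is configurable, and cosine similarity is only an assumed example; the sizes a, b, and E are arbitrary.

```python
import torch
import torch.nn.functional as F

a, b, E = 3, 5, 512
data_emb = torch.randn(a, E)    # embeddings of the current detections, [a x E]
target_emb = torch.randn(b, E)  # embeddings of the tracked targets,    [b x E]

# Pairwise cosine similarity between every data/target pair -> [a x b].
sim = F.normalize(data_emb, dim=-1) @ F.normalize(target_emb, dim=-1).T

# Optional softmax along the last (-1) dimension turns every row into a
# probability distribution over the targets, with values in [0..1].
probs = sim.softmax(dim=-1)
```

Each row of `probs` now sums to one, which is what makes this similarity safe to combine with other similarity modules by weighted addition.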