same_padding
- hybrid_learning.datasets.transforms.encoder.same_padding(kernel_size, hang_front=False)[source]
Calculate the left and right padding for mode 'same' for each dimension and concatenate them. Mode 'same' here means Conv(kernel_size)(Pad(padding)(x)).size() == x.size(). Padding is distributed equally on both sides of a dimension. If unequal padding is needed in one dimension, by default (hang_front=False) the front gets padded by one pixel less than the back. To instead pad the front more, set hang_front=True.
Warning
Currently (Apr 2019), torch.nn.ZeroPad2d requires the padding in a special format: first the width paddings, then the height paddings. The entries derived from kernel_size therefore need to be swapped to obtain the correct padding as input for torch.nn.ZeroPad2d.
- Parameters
  - kernel_size – the size of the convolution kernel in each dimension
  - hang_front – if True, the front instead of the back receives the larger share when unequal padding is needed in a dimension
- Returns
padding tuple as
(left dim0, right dim0, left dim1, right dim1, ...)
- Return type
  tuple
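The computation described above can be sketched as follows. This is a hypothetical re-implementation for illustration, not the library's actual source; it assumes stride 1 and no dilation, so a kernel of size k shrinks a dimension by k - 1 pixels, which is exactly the padding needed to keep the size unchanged:

```python
def same_padding_sketch(kernel_size, hang_front=False):
    """Illustrative sketch of 'same' padding (assumes stride 1, no dilation)."""
    padding = []
    for k in kernel_size:
        total = k - 1                       # pixels lost by Conv(k) in this dim
        small, large = total // 2, total - total // 2
        # by default the front hangs by one pixel less than the back
        front, back = (large, small) if hang_front else (small, large)
        padding.extend((front, back))
    return tuple(padding)

# kernel_size given as (height, width); odd kernels pad symmetrically,
# even kernels need unequal padding in that dimension:
print(same_padding_sketch((3, 4)))                   # (1, 1, 1, 2)
print(same_padding_sketch((3, 4), hang_front=True))  # (1, 1, 2, 1)

# torch.nn.ZeroPad2d wants (left, right, top, bottom), i.e. width paddings
# first, so the per-dimension pairs must be swapped before use:
pad = same_padding_sketch((3, 4))
zero_pad2d_arg = pad[2:] + pad[:2]                   # (1, 2, 1, 1)
```

With the swapped tuple, torch.nn.ZeroPad2d(zero_pad2d_arg) followed by a convolution with the same kernel_size leaves the spatial size of the input unchanged.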