In this post, we will explore the deep keypoint detection task and its architecture, and then analyse how to make the model more lightweight.
Model goal and output
Within the First-Order Motion model, this keypoint detector network is part of the motion estimation module, where keypoints and local affine transformations predicted by this network will be used to model motion from one frame to another. The model does not require any keypoint annotations during training and is trained as part of the whole system. The keypoints learned are not specific to any structure and are often associated with highly moving parts.
In this report, we will explore the details of the keypoint detector network: its overall structure and each of its modules and layers.
The model input is an RGB image of size (256,256). The image is then passed through four large modules:
1) an anti-alias interpolation module that reduces the spatial dimensions down to (64,64),
2) an hourglass module that extracts feature maps and reconstructs them,
3) a post-processing module where the reconstructed feature maps are converted into normalised heatmaps and, in turn, keypoints, and
4) a Jacobian module where the keypoints are used to learn the local affine transformations, i.e. to focus prediction near the keypoints.
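As a rough sketch of step 1, the model first downsamples the input with anti-aliasing. The paper's own `AntiAliasInterpolation2d` applies a Gaussian blur before subsampling; `F.interpolate` with `antialias=True` is a close stand-in used here purely for illustration:

```python
import torch
import torch.nn.functional as F

x = torch.randn(1, 3, 256, 256)  # an RGB input frame
# Anti-aliased downsampling from (256, 256) to (64, 64).
# This approximates the paper's Gaussian-blur-then-subsample module.
x_small = F.interpolate(x, size=(64, 64), mode='bilinear',
                        antialias=True, align_corners=False)
print(x_small.shape)  # torch.Size([1, 3, 64, 64])
```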
Below is a brief summary of the running time of each module.
A standard U-Net is used to extract features from the original image and reconstruct them before conversion to heatmaps. Five downblocks make up the encoder and five upblocks make up the decoder, with feature maps from the encoder concatenated to the corresponding feature maps in the decoder via skip connections.
Made up of 5 downblocks. Each downblock is a sequence of Conv2d -> BatchNorm -> ReLU -> AvgPool2d. The number of channels doubles and the spatial dimensions halve after every block.
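A minimal sketch of one such downblock (kernel size and channel counts are illustrative, not taken from the paper's code):

```python
import torch
import torch.nn as nn

class DownBlock(nn.Module):
    """One encoder block: Conv2d -> BatchNorm -> ReLU -> AvgPool2d."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
            nn.AvgPool2d(kernel_size=2),  # halves the spatial dimensions
        )

    def forward(self, x):
        return self.block(x)

# channels double, spatial size halves after each block:
x = torch.randn(1, 3, 64, 64)
for blk in [DownBlock(3, 32), DownBlock(32, 64)]:
    x = blk(x)
print(x.shape)  # torch.Size([1, 64, 16, 16])
```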
"the first convolution layers have 32 filters and each subsequent convolution doubles the number of filters".
Most of the encoder's forward-pass time is spent in the convolutional layers, especially those in the final downblock, making the convolutional layers candidates for replacement for better efficiency.
Made up of 5 upblocks. Each upblock is a sequence of Interpolate -> Conv2d -> BatchNorm -> ReLU. The first upblock's input is the output of the encoder. Every other upblock's input is a concatenation of the previous upblock's output and the corresponding downblock output.
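The upblock and its skip connection can be sketched as follows (sizes and channel counts here are hypothetical, chosen only to show the concatenation pattern):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class UpBlock(nn.Module):
    """One decoder block: 2x upsample (Interpolate) -> Conv2d -> BatchNorm -> ReLU."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1)
        self.norm = nn.BatchNorm2d(out_ch)

    def forward(self, x):
        x = F.interpolate(x, scale_factor=2)  # doubles the spatial dimensions
        return F.relu(self.norm(self.conv(x)))

prev = torch.randn(1, 512, 8, 8)    # previous upblock's output (hypothetical)
skip = torch.randn(1, 256, 16, 16)  # corresponding downblock's output
up = UpBlock(in_ch=512, out_ch=256)
out = up(prev)                        # -> (1, 256, 16, 16)
out = torch.cat([out, skip], dim=1)   # -> (1, 512, 16, 16), fed to the next upblock
print(out.shape)
```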
Note: all measurements reported are in ms.
Post-processing (Heatmaps & Keypoints)
After the Hourglass module, there are two more steps to output keypoints: 1. generating K normalised heatmaps for K keypoints, and 2. converting those heatmaps to corresponding (x,y) coordinates.
We wish to get the coordinates of the keypoints from the heatmaps. In many landmark localisation works, the coordinates are extracted by applying argmax, i.e. taking the brightest point on the heatmap to be the keypoint. However, this process is non-differentiable and may introduce quantization error. Therefore, this paper employed a differentiable variant of argmax, soft-argmax.
The goal of this sub-module is to create normalised heatmaps from the feature maps generated by the Hourglass module, i.e. every element ranges from 0 to 1 and all elements sum to 1. First, the (35,64,64) output of the hourglass module goes through one more convolutional layer to generate 10 heatmaps of size (58,58), one per keypoint. As the decoder output is a concatenation of the input image and earlier decoder outputs, this convolutional layer not only reduces the channel count to the number of keypoints, but also acts as the last learnable stage, with a larger kernel size for a larger receptive field.
```python
# a convolution layer to bring 35 channels down to 10
self.kp = nn.Conv2d(in_channels=self.predictor.out_filters,
                    out_channels=num_kp, kernel_size=(7, 7), padding=pad)

prediction = self.kp(feature_map)
# normalise the heatmaps by applying softmax
final_shape = prediction.shape
# reshape to (10, 3364) to perform softmax over the spatial dimensions
heatmap = prediction.view(final_shape[0], final_shape[1], -1)
heatmap = F.softmax(heatmap / self.temperature, dim=2)
# bring back to (10, 58, 58)
heatmap = heatmap.view(*final_shape)
```
Then, the heatmaps are normalised using softmax. This part has no trainable parameters, but there is one hyperparameter, temperature, which determines the smoothness of the distribution after applying softmax. Setting temperature > 1 makes the softmax distribution smoother, while temperature < 1 sharpens it. In this model, temperature was set to 0.1. Quoting the author:
"Thanks to the use of a low temperature for softmax, we obtain sharper heatmaps and avoid uniform heatmaps that would lead to keypoints constantly located in the image center."
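The effect of temperature is easy to see on toy logits (these numbers are illustrative, not from the model):

```python
import torch
import torch.nn.functional as F

logits = torch.tensor([1.0, 2.0, 3.0])
sharp = F.softmax(logits / 0.1, dim=0)   # low temperature -> near one-hot
plain = F.softmax(logits / 1.0, dim=0)   # plain softmax
smooth = F.softmax(logits / 10.0, dim=0) # high temperature -> near uniform
print(sharp, plain, smooth)
```

The lower the temperature, the more the probability mass concentrates on the largest logit, which is why 0.1 yields sharp heatmaps.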
This part implements soft-argmax, a differentiable variant of argmax. Its purpose is to convert a heatmap into an (x,y) coordinate. The process is purely computational and there is no learning here.
One clear advantage of soft-argmax is that it is differentiable, thus allowing end-to-end training. It also alleviates the quantization error caused by the size mismatch between the input image and the heatmaps. Efficiency-wise, it adds no extra parameters and its computational cost is negligible.
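A minimal sketch of soft-argmax: the keypoint is the expectation of a normalised coordinate grid under the heatmap, rather than the index of its maximum. The grid range and function name are my own choices, not the paper's:

```python
import torch

def soft_argmax(heatmap):
    """Differentiable keypoint extraction.
    heatmap: (K, H, W), each map already softmax-normalised.
    Returns (K, 2) coordinates in [-1, 1]."""
    K, H, W = heatmap.shape
    ys = torch.linspace(-1, 1, H).view(1, H, 1)
    xs = torch.linspace(-1, 1, W).view(1, 1, W)
    # expectation of the coordinates weighted by the heatmap
    x = (heatmap * xs).sum(dim=(1, 2))
    y = (heatmap * ys).sum(dim=(1, 2))
    return torch.stack([x, y], dim=1)

# a heatmap peaked at the centre yields a coordinate near (0, 0)
hm = torch.zeros(1, 58, 58)
hm[0, 29, 29] = 1.0
kp = soft_argmax(hm)
print(kp)
```

Unlike plain argmax, the result varies smoothly with the heatmap values, so gradients flow through it during training.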
To focus on the movements surrounding the keypoints, Jacobian matrices are computed to represent the scale and rotation around each keypoint.
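A hedged sketch of how such a head can work: a convolution predicts four values per keypoint at every location, and the keypoint's own heatmap acts as a spatial weighting, so each keypoint ends up with a 2x2 affine matrix. Channel counts and the kernel size here are illustrative, loosely mirroring the shapes mentioned earlier:

```python
import torch
import torch.nn as nn

num_kp, feat_ch = 10, 35  # illustrative counts matching the sizes above
# one conv predicts 4 Jacobian entries per keypoint at each spatial location
jac_conv = nn.Conv2d(feat_ch, 4 * num_kp, kernel_size=7, padding=3)

feature_map = torch.randn(1, feat_ch, 58, 58)
# a softmax-normalised heatmap per keypoint, as produced by the previous step
heatmap = torch.softmax(torch.randn(1, num_kp, 58 * 58), dim=2).view(1, num_kp, 58, 58)

jac_map = jac_conv(feature_map).view(1, num_kp, 4, 58, 58)
# heatmap-weighted sum over space -> one 2x2 matrix per keypoint
jacobian = (heatmap.unsqueeze(2) * jac_map).sum(dim=(3, 4))
jacobian = jacobian.view(1, num_kp, 2, 2)
print(jacobian.shape)  # torch.Size([1, 10, 2, 2])
```

The weighting by the heatmap is what localises the prediction near each keypoint: regions far from the keypoint contribute almost nothing to its matrix.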
The keypoints appear semantically consistent across the images: for example, the bright green keypoint sits between the eyes, the blue one at the neck, and the dark blue one at the left eyebrow.