In a previous blog post, I presented a survey of Geometry-Aware Neural Style Transfer (NST) methods, compared them with other standard NST methods, and analyzed some weaknesses of current Geometry-Aware NST approaches. In this post, we continue by examining the effectiveness of style-aware image translation under different hyper-parameter configurations. Note that all experiments are conducted on the implementations of DST, NST, AdaIN, and FastDST.
In this post, all the images follow the structure: style image - content image with facial landmarks - style image with facial landmarks.
1. Results on various style images
In this experiment, I perform style transfer using a single content image and various style images. The style-weight is set to 0.5.
2. Results on various content images
In this experiment, I perform style transfer using a single style image and various content images. The style-weight is set to 0.5.
3. Results on various style-weights
In this experiment, I perform style transfer using a single style image and a single content image. The style-weight ranges over 5.0, 4.5, 4.0, 3.5, and so on down to 0.5, then 0.1. The lower the style-weight, the more of the content is preserved.
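To make the role of the style-weight concrete, here is a minimal NumPy sketch of the standard NST objective it scales: a content MSE plus the style-weight times a Gram-matrix style MSE. This is an illustrative sketch of the generic loss, not the actual DST/AAMS implementation; all names and shapes are assumptions.

```python
import numpy as np

def gram(feat):
    """Gram matrix of a (C, H, W) feature map, the usual NST style statistic."""
    c, h, w = feat.shape
    f = feat.reshape(c, h * w)
    return f @ f.T / (c * h * w)

def total_loss(out_feat, content_feat, style_feat, style_weight):
    """Weighted NST objective: content MSE + style_weight * style (Gram) MSE.
    Illustrative only; real methods sum this over several network layers."""
    content_loss = np.mean((out_feat - content_feat) ** 2)
    style_loss = np.mean((gram(out_feat) - gram(style_feat)) ** 2)
    return content_loss + style_weight * style_loss

# Toy feature maps standing in for VGG activations.
rng = np.random.default_rng(1)
out = rng.normal(size=(4, 8, 8))
content = rng.normal(size=(4, 8, 8))
style = rng.normal(size=(4, 8, 8))

low = total_loss(out, content, style, 0.1)
high = total_loss(out, content, style, 5.0)
```

A larger style-weight makes the style term dominate the gradient, so the optimized image drifts further from the content, which is exactly the trend the sweep from 5.0 down to 0.1 visualizes.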
The implementation is based on the AAMS Colab demo: https://colab.research.google.com/drive/1mGxE3ng8SCYunBpMmiHLA7tdnWhc7iVl?usp=sharing
- Faster than DST and the original NST
- Preserves the semantic regions of the content image well (e.g., eyes, nose); outperforms AdaIN and NST here
- Performs arbitrary style transfer (which ASMAGAN cannot)
- Slower than AdaIN and ASMAGAN
- Sensitive to the choice of style image
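For reference, the AdaIN baseline compared above reduces to a simple feature-statistics swap: normalize the content features per channel, then rescale them with the style features' channel-wise mean and standard deviation. Below is a minimal NumPy sketch under that definition; the function name, shapes, and toy inputs are illustrative, not the paper's code.

```python
import numpy as np

def adain(content_feat, style_feat, eps=1e-5):
    """Adaptive Instance Normalization on (C, H, W) feature maps:
    align each channel's mean/std of the content features to the style's."""
    c_mean = content_feat.mean(axis=(1, 2), keepdims=True)
    c_std = content_feat.std(axis=(1, 2), keepdims=True) + eps
    s_mean = style_feat.mean(axis=(1, 2), keepdims=True)
    s_std = style_feat.std(axis=(1, 2), keepdims=True)
    # Whiten the content statistics, then re-color with the style statistics.
    return s_std * (content_feat - c_mean) / c_std + s_mean

# Toy feature maps standing in for encoder activations.
rng = np.random.default_rng(0)
content = rng.normal(size=(4, 8, 8))
style = rng.normal(loc=2.0, scale=3.0, size=(4, 8, 8))
out = adain(content, style)
```

Because AdaIN is a single closed-form operation (plus one decoder pass), it is fast, which matches the speed comparison in the list above.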
Considered variant: DST without the deformation loss can improve performance and mitigate AAMS's sensitivity to the style image. However, we must accept a longer inference time.
"Attention-Aware Multi-Stroke Style Transfer - CVF Open Access." https://openaccess.thecvf.com/content_CVPR_2019/papers/Yao_Attention-Aware_Multi-Stroke_Style_Transfer_CVPR_2019_paper.pdf. Accessed 7 Oct. 2021.