Paper review/Skimming paper review (2)

Transformer Interpretability Beyond Attention Visualization
https://arxiv.org/abs/2012.09838
TL;DR: Unlike CNN-based models, Transformer..

Deformable Convolutional Networks
https://arxiv.org/abs/1703.06211
TL;DR: The process of extracting features with a fixed-size kernel during convolution..