An implementation of the Transformer in Transformer (TNT) paper by Han et al. for image classification, in TensorFlow. Transformer in Transformer performs attention inside local patches, pairing pixel-level attention with patch-level attention.
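For intuition, here is a minimal sketch of the two-level attention idea in plain TensorFlow. It is illustrative only and does not mirror this package's internal implementation; all shapes and layer choices below are simplified stand-ins.

import tensorflow as tf

def attention_block(x, num_heads, key_dim):
    # Standard pre-norm self-attention followed by a feedforward block,
    # for illustration only.
    h = tf.keras.layers.LayerNormalization()(x)
    h = tf.keras.layers.MultiHeadAttention(num_heads, key_dim)(h, h)
    x = x + h
    h = tf.keras.layers.LayerNormalization()(x)
    h = tf.keras.layers.Dense(4 * x.shape[-1], activation="gelu")(h)
    h = tf.keras.layers.Dense(x.shape[-1])(h)
    return x + h

# Toy shapes: 1 image split into 4 patches, 16 pixel tokens per patch.
pixel_tokens = tf.random.normal([4, 16, 24])  # (patches, pixels, pixel_dim)
patch_tokens = tf.random.normal([1, 4, 512])  # (batch, patches, patch_dim)

# Inner transformer: attention among pixel tokens inside each patch.
pixel_tokens = attention_block(pixel_tokens, num_heads=2, key_dim=12)

# Fold each patch's refined pixel tokens back into its patch token.
patch_update = tf.keras.layers.Dense(512)(tf.reshape(pixel_tokens, [1, 4, 16 * 24]))

# Outer transformer: attention among patch tokens across the image.
patch_tokens = attention_block(patch_tokens + patch_update, num_heads=8, key_dim=64)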
Run the following to install:
pip install tnt-tensorflow
To install the package using Docker, run the following:
docker pull ghcr.io/rishit-dagli/tnt:0.2.0
To install tnt-tensorflow, along with the tools you need to develop and run tests, run the following in your virtualenv:
git clone https://github.com/Rishit-dagli/Transformer-in-Transformer.git
# or clone your own fork
cd Transformer-in-Transformer
pip install -e .[dev]
To run rank and shape tests, run the following:
pytest -v --disable-warnings --cov
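The repository's test suite is authoritative, but a rank/shape test for this model is conceptually along these lines (the test below is an illustrative sketch, not one of the shipped tests):

import tensorflow as tf
from tnt import TNT

def test_output_shape():
    # Illustrative shape test: a batch of images should map to one
    # logit vector per image.
    model = TNT(
        image_size=256, patch_dim=512, pixel_dim=24, patch_size=16,
        pixel_size=4, depth=5, num_classes=1000,
        attn_dropout=0.1, ff_dropout=0.1,
    )
    img = tf.random.uniform(shape=[2, 3, 256, 256])
    logits = model(img)
    assert logits.shape == (2, 1000)

Once installed, the model can be used like so: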
import tensorflow as tf
from tnt import TNT
tnt = TNT(
    image_size=256,    # size of the (square) input image
    patch_dim=512,     # dimension of the patch tokens
    pixel_dim=24,      # dimension of the pixel tokens
    patch_size=16,     # spatial size of each patch
    pixel_size=4,      # spatial size of each pixel token within a patch
    depth=5,           # number of TNT blocks (layers)
    num_classes=1000,  # number of output classes
    attn_dropout=0.1,  # attention dropout rate
    ff_dropout=0.1,    # feedforward dropout rate
)
img = tf.random.uniform(shape=[5, 3, 256, 256])  # batch of 5 channels-first images
logits = tnt(img)  # shape: (5, 1000)
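The forward pass returns raw logits, so probabilities or hard predictions can be obtained with the usual post-processing:

probs = tf.nn.softmax(logits, axis=-1)  # per-image class probabilities
preds = tf.argmax(logits, axis=-1)      # predicted class index per image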
An end-to-end training example for image classification on a dataset can be found in the training.ipynb notebook.
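For orientation, here is a minimal sketch of what such a training setup might look like. It assumes TNT can be trained as a standard tf.keras.Model and uses a random stand-in dataset; the notebook remains the authoritative example.

import tensorflow as tf
from tnt import TNT

model = TNT(
    image_size=256, patch_dim=512, pixel_dim=24, patch_size=16,
    pixel_size=4, depth=5, num_classes=1000,
    attn_dropout=0.1, ff_dropout=0.1,
)

# Stand-in data: replace with a real (image, label) pipeline. Images use
# the channels-first [3, 256, 256] layout from the usage example above.
images = tf.random.uniform([16, 3, 256, 256])
labels = tf.random.uniform([16], maxval=1000, dtype=tf.int32)
train_ds = tf.data.Dataset.from_tensor_slices((images, labels)).batch(4)

# Assumes TNT behaves as a standard tf.keras.Model.
model.compile(
    optimizer=tf.keras.optimizers.Adam(1e-4),
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    metrics=["accuracy"],
)
model.fit(train_ds, epochs=1)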
You can also run the example script with Docker.
git clone https://github.com/Rishit-dagli/Transformer-in-Transformer.git
cd Transformer-in-Transformer
docker run -it --rm \
    --mount type=bind,source="$(pwd)"/example,target=/usr/src/tnt/docker_example \
    ghcr.io/rishit-dagli/tnt:0.2.0 \
    python docker_example/docker_example.py
The pre-trained model for the TNT-S variant (reproducing the paper's results: 81.4% top-1 and 95.7% top-5 accuracy on ImageNet-1K) is also available, paired with an example of running inference with it.
Model | TensorFlow Hub | Inference Tutorial
--- | --- | ---
bucket | tfhub.dev | 
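As a rough sketch of running inference with the published SavedModel, assuming it is directly callable on an image batch (the hub handle below is a placeholder; use the actual link from the table above):

import tensorflow as tf
import tensorflow_hub as hub

# Placeholder handle: substitute the real TensorFlow Hub link (or bucket
# path) from the table above.
model = hub.load("https://tfhub.dev/<tnt-s-handle-from-table>")

img = tf.random.uniform(shape=[1, 3, 256, 256])  # channels-first, as above
logits = model(img)  # expected shape: (1, 1000) ImageNet-1K logits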
Awesome! If you want to contribute to this project, you're always welcome! See the Contributing Guidelines. You can also take a look at the open issues for more information about current or upcoming tasks.
Have any questions, doubts, or opinions you'd like to share? You're always welcome; you can start a discussion.
@misc{han2021transformer,
    title={Transformer in Transformer},
    author={Kai Han and An Xiao and Enhua Wu and Jianyuan Guo and Chunjing Xu and Yunhe Wang},
    year={2021},
    eprint={2103.00112},
    archivePrefix={arXiv},
    primaryClass={cs.CV}
}
Copyright 2020 Rishit Dagli
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.