# Convolutional Vision Transformer (CvT)

## Overview

The CvT model was proposed in [CvT: Introducing Convolutions to Vision Transformers](https://huggingface.co/papers/2103.15808) by Haiping Wu, Bin Xiao, Noel Codella, Mengchen Liu, Xiyang Dai, Lu Yuan and Lei Zhang. The Convolutional vision Transformer (CvT) improves the [Vision Transformer (ViT)](vit) in performance and efficiency by introducing convolutions into ViT to yield the best of both designs.

The abstract from the paper is the following:

*We present in this paper a new architecture, named Convolutional vision Transformer (CvT), that improves Vision Transformer (ViT) in performance and efficiency by introducing convolutions into ViT to yield the best of both designs. This is accomplished through two primary modifications: a hierarchy of Transformers containing a new convolutional token embedding, and a convolutional Transformer block leveraging a convolutional projection. These changes introduce desirable properties of convolutional neural networks (CNNs) to the ViT architecture, while maintaining the merits of Transformers (i.e. dynamic attention, global context, and better generalization). We validate CvT by conducting extensive experiments, showing that this approach achieves state-of-the-art performance over other Vision Transformers and ResNets on ImageNet-1k, with fewer parameters and lower FLOPs. In addition, performance gains are maintained when pretrained on larger datasets (e.g. ImageNet-22k) and fine-tuned to downstream tasks. Pre-trained on ImageNet-22k, our CvT-W24 obtains a top-1 accuracy of 87.7% on the ImageNet-1k val set. Finally, our results show that the positional encoding, a crucial component in existing Vision Transformers, can be safely removed in our model, simplifying the design for higher resolution vision tasks.*

This model was contributed by [anugunj](https://huggingface.co/anugunj). The original code can be found [here](https://github.com/microsoft/CvT).

## Usage tips

- CvT models are regular Vision Transformers, but trained with convolutions. They outperform the [original model (ViT)](vit) when fine-tuned on ImageNet-1K and CIFAR-100.
- You can check out demo notebooks regarding inference as well as fine-tuning on custom data [here](https://github.com/NielsRogge/Transformers-Tutorials/tree/master/VisionTransformer) (you can just replace `ViTFeatureExtractor` by [AutoImageProcessor](/docs/transformers/v5.8.0/ja/model_doc/auto#transformers.AutoImageProcessor) and `ViTForImageClassification` by [CvtForImageClassification](/docs/transformers/v5.8.0/ja/model_doc/cvt#transformers.CvtForImageClassification)); a minimal sketch of this swap follows the list below.
- The available checkpoints are either (1) pre-trained on [ImageNet-22k](http://www.image-net.org/) (a collection of 14 million images and 22k classes) only, (2) also fine-tuned on ImageNet-22k, or (3) also fine-tuned on [ImageNet-1k](http://www.image-net.org/challenges/LSVRC/2012/) (also referred to as ILSVRC 2012, a collection of 1.3 million
  images and 1,000 classes).
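
Because CvT reuses the standard image-classification API, the quickest way to try it is the `pipeline` helper, which wires up the image processor and classification head automatically. A minimal sketch, assuming the `microsoft/cvt-13` checkpoint; the COCO image URL is only a placeholder input:

```python
from transformers import pipeline

# The image-classification pipeline pairs AutoImageProcessor with
# CvtForImageClassification, so CvT is a drop-in replacement for ViT here.
classifier = pipeline("image-classification", model="microsoft/cvt-13")

# Placeholder input; any local path or URL to an RGB image works.
predictions = classifier("http://images.cocodataset.org/val2017/000000039769.jpg")
print(predictions[0]["label"], predictions[0]["score"])
```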

## Resources

A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with CvT.

- [CvtForImageClassification](/docs/transformers/v5.8.0/ja/model_doc/cvt#transformers.CvtForImageClassification) is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/image-classification) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/image_classification.ipynb).
- See also: [Image classification task guide](../tasks/image_classification)

If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.

## CvtConfig[[transformers.CvtConfig]]

#### transformers.CvtConfig[[transformers.CvtConfig]]

[Source](https://github.com/huggingface/transformers/blob/v5.8.0/src/transformers/models/cvt/configuration_cvt.py#L24)

This is the configuration class to store the configuration of a [CvtModel](/docs/transformers/v5.8.0/ja/model_doc/cvt#transformers.CvtModel). It is used to instantiate a CvT
model according to the specified arguments, defining the model architecture. Instantiating a configuration with the
defaults will yield a similar configuration to that of the CvT [microsoft/cvt-13](https://huggingface.co/microsoft/cvt-13) architecture.

Configuration objects inherit from [PreTrainedConfig](/docs/transformers/v5.8.0/ja/main_classes/configuration#transformers.PreTrainedConfig) and can be used to control the model outputs. Read the
documentation from [PreTrainedConfig](/docs/transformers/v5.8.0/ja/main_classes/configuration#transformers.PreTrainedConfig) for more information.

Example:

```python
>>> from transformers import CvtConfig, CvtModel

>>> # Initializing a CvT microsoft/cvt-13 style configuration
>>> configuration = CvtConfig()

>>> # Initializing a model (with random weights) from the microsoft/cvt-13 style configuration
>>> model = CvtModel(configuration)

>>> # Accessing the model configuration
>>> configuration = model.config
```

**Parameters:**

num_channels (`int`, *optional*, defaults to `3`) : The number of input channels.

patch_sizes (`Union[list[int], tuple[int, ...]]`, *optional*, defaults to `(7, 3, 3)`) : Patch size at each stage of the model.

patch_stride (`list[int]`, *optional*, defaults to `[4, 2, 2]`) : The stride size of each encoder's patch embedding.

patch_padding (`list[int]`, *optional*, defaults to `[2, 1, 1]`) : The padding size of each encoder's patch embedding.

embed_dim (`Union[list[int], tuple[int, ...]]`, *optional*, defaults to `(64, 192, 384)`) : Dimensionality of the embeddings and hidden states.

num_heads (`Union[list[int], tuple[int, ...]]`, *optional*, defaults to `(1, 3, 6)`) : Number of attention heads for each attention layer in the Transformer encoder.

depth (`list[int]`, *optional*, defaults to `[1, 2, 10]`) : The number of layers in each encoder block.

mlp_ratio (`Union[list[float], tuple[float, ...]]`, *optional*, defaults to `(4.0, 4.0, 4.0)`) : Ratio of the MLP hidden dim to the embedding dim.

attention_drop_rate (`list[float]`, *optional*, defaults to `[0.0, 0.0, 0.0]`) : The dropout ratio for the attention probabilities.

drop_rate (`list[float]`, *optional*, defaults to `[0.0, 0.0, 0.0]`) : The dropout ratio for the patch embeddings probabilities.

drop_path_rate (`Union[list[float], tuple[float, ...]]`, *optional*, defaults to `(0.0, 0.0, 0.1)`) : Drop path rate for the patch fusion.

qkv_bias (`Union[list[bool], tuple[bool, ...]]`, *optional*, defaults to `(True, True, True)`) : Whether to add a bias to the queries, keys and values.

cls_token (`list[bool]`, *optional*, defaults to `[False, False, True]`) : Whether or not to add a classification token to the output of each of the last 3 stages.

qkv_projection_method (`list[string]`, *optional*, defaults to `["dw_bn", "dw_bn", "dw_bn"]`) : The projection method for query, key and value. Default is depth-wise convolutions with batch norm (`"dw_bn"`). For linear projection, use `"avg"`.

kernel_qkv (`list[int]`, *optional*, defaults to `[3, 3, 3]`) : The kernel size for query, key and value in the attention layer.

padding_kv (`list[int]`, *optional*, defaults to `[1, 1, 1]`) : The padding size for key and value in the attention layer.

stride_kv (`list[int]`, *optional*, defaults to `[2, 2, 2]`) : The stride size for key and value in the attention layer.

padding_q (`list[int]`, *optional*, defaults to `[1, 1, 1]`) : The padding size for query in the attention layer.

stride_q (`list[int]`, *optional*, defaults to `[1, 1, 1]`) : The stride size for query in the attention layer.

initializer_range (`float`, *optional*, defaults to `0.02`) : The standard deviation of the truncated_normal_initializer for initializing all weight matrices.

layer_norm_eps (`float`, *optional*, defaults to `1e-12`) : The epsilon used by the layer normalization layers.
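
Because every stage-level argument above is a list or tuple with one entry per stage, defining a custom variant only requires keeping those lists the same length. A minimal sketch; the smaller widths and depths below are hypothetical, not a released checkpoint:

```python
from transformers import CvtConfig, CvtModel

# Hypothetical slimmer three-stage CvT: each per-stage argument keeps
# one entry per stage (three stages here, matching the defaults).
configuration = CvtConfig(
    embed_dim=[32, 96, 192],   # half the default widths (64, 192, 384)
    num_heads=[1, 3, 6],
    depth=[1, 2, 4],           # shallower last stage than the default [1, 2, 10]
    patch_sizes=[7, 3, 3],
    patch_stride=[4, 2, 2],
    patch_padding=[2, 1, 1],
)
model = CvtModel(configuration)  # randomly initialized weights
print(sum(p.numel() for p in model.parameters()))  # rough size check
```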

## CvtModel[[transformers.CvtModel]]

#### transformers.CvtModel[[transformers.CvtModel]]

[Source](https://github.com/huggingface/transformers/blob/v5.8.0/src/transformers/models/cvt/modeling_cvt.py#L511)

The bare Cvt Model outputting raw hidden-states without any specific head on top.

This model inherits from [PreTrainedModel](/docs/transformers/v5.8.0/ja/main_classes/model#transformers.PreTrainedModel). Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).

This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage
and behavior.

##### forward[[transformers.CvtModel.forward]]

[Source](https://github.com/huggingface/transformers/blob/v5.8.0/src/transformers/models/cvt/modeling_cvt.py#L522)

`forward(pixel_values: torch.Tensor | None = None, output_hidden_states: bool | None = None, return_dict: bool | None = None, **kwargs)`

The [CvtModel](/docs/transformers/v5.8.0/ja/model_doc/cvt#transformers.CvtModel) forward method overrides the `__call__` special method. Although the recipe for forward pass needs to be defined within this function, one should call the `Module` instance afterwards instead of this, since the former takes care of running the pre and post processing steps while the latter silently ignores them.

- **pixel_values** (`torch.Tensor` of shape `(batch_size, num_channels, image_size, image_size)`, *optional*) --
  The tensors corresponding to the input images. Pixel values can be obtained using
  [ConvNextImageProcessor](/docs/transformers/v5.8.0/ja/model_doc/convnext#transformers.ConvNextImageProcessor). See `ConvNextImageProcessor.__call__()` for details.
- **output_hidden_states** (`bool`, *optional*) --
  Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for
  more detail.
- **return_dict** (`bool`, *optional*) --
  Whether or not to return a [ModelOutput](/docs/transformers/v5.8.0/ja/main_classes/output#transformers.utils.ModelOutput) instead of a plain tuple.

The returned `BaseModelOutputWithCLSToken` (described under **Returns:** below) has the following fields:

- **last_hidden_state** (`torch.FloatTensor` of shape `(batch_size, sequence_length, hidden_size)`, *optional*, defaults to `None`) -- Sequence of hidden-states at the output of the last layer of the model.
- **cls_token_value** (`torch.FloatTensor` of shape `(batch_size, 1, hidden_size)`) -- Classification token at the output of the last layer of the model.
- **hidden_states** (`tuple[torch.FloatTensor, ...]`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`) -- Tuple of `torch.FloatTensor` (one for the output of the embeddings, if the model has an embedding layer, +
  one for the output of each layer) of shape `(batch_size, sequence_length, hidden_size)`.

  Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.

**Parameters:**

config ([CvtConfig](/docs/transformers/v5.8.0/ja/model_doc/cvt#transformers.CvtConfig)) : Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [from_pretrained()](/docs/transformers/v5.8.0/ja/main_classes/model#transformers.PreTrainedModel.from_pretrained) method to load the model weights.

add_pooling_layer (`bool`, *optional*, defaults to `True`) : Whether to add a pooling layer.

**Returns:**

`BaseModelOutputWithCLSToken` or `tuple(torch.FloatTensor)`

A `BaseModelOutputWithCLSToken` or a tuple of
`torch.FloatTensor` (if `return_dict=False` is passed or when `config.return_dict=False`) comprising various
elements depending on the configuration ([CvtConfig](/docs/transformers/v5.8.0/ja/model_doc/cvt#transformers.CvtConfig)) and inputs.
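
As a short sketch of inspecting these outputs: the random input tensor below is purely illustrative (a real pipeline would obtain `pixel_values` from [AutoImageProcessor](/docs/transformers/v5.8.0/ja/model_doc/auto#transformers.AutoImageProcessor)), and the shape comments assume a 224×224 input to `microsoft/cvt-13`:

```python
import torch
from transformers import CvtModel

model = CvtModel.from_pretrained("microsoft/cvt-13")

# Illustrative random input; real pixel_values come from an image processor.
pixel_values = torch.randn(1, 3, 224, 224)
with torch.no_grad():
    outputs = model(pixel_values=pixel_values, output_hidden_states=True)

# CvT keeps the spatial layout in its final stage feature map.
print(outputs.last_hidden_state.shape)  # e.g. torch.Size([1, 384, 14, 14])
print(outputs.cls_token_value.shape)    # e.g. torch.Size([1, 1, 384])
print(len(outputs.hidden_states))       # one entry per stage
```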

## CvtForImageClassification[[transformers.CvtForImageClassification]]

#### transformers.CvtForImageClassification[[transformers.CvtForImageClassification]]

[Source](https://github.com/huggingface/transformers/blob/v5.8.0/src/transformers/models/cvt/modeling_cvt.py#L561)

Cvt Model transformer with an image classification head on top (a linear layer on top of the final hidden state of
the [CLS] token) e.g. for ImageNet.

This model inherits from [PreTrainedModel](/docs/transformers/v5.8.0/ja/main_classes/model#transformers.PreTrainedModel). Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).

This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage
and behavior.

##### forward[[transformers.CvtForImageClassification.forward]]

[Source](https://github.com/huggingface/transformers/blob/v5.8.0/src/transformers/models/cvt/modeling_cvt.py#L576)

`forward(pixel_values: torch.Tensor | None = None, labels: torch.Tensor | None = None, output_hidden_states: bool | None = None, return_dict: bool | None = None, **kwargs)`

The [CvtForImageClassification](/docs/transformers/v5.8.0/ja/model_doc/cvt#transformers.CvtForImageClassification) forward method overrides the `__call__` special method. Although the recipe for forward pass needs to be defined within this function, one should call the `Module` instance afterwards instead of this, since the former takes care of running the pre and post processing steps while the latter silently ignores them.

- **pixel_values** (`torch.Tensor` of shape `(batch_size, num_channels, image_size, image_size)`, *optional*) --
  The tensors corresponding to the input images. Pixel values can be obtained using
  [ConvNextImageProcessor](/docs/transformers/v5.8.0/ja/model_doc/convnext#transformers.ConvNextImageProcessor). See `ConvNextImageProcessor.__call__()` for details.
- **labels** (`torch.LongTensor` of shape `(batch_size,)`, *optional*) --
  Labels for computing the image classification/regression loss. Indices should be in `[0, ...,
  config.num_labels - 1]`. If `config.num_labels == 1` a regression loss is computed (Mean-Square loss), if
  `config.num_labels > 1` a classification loss is computed (Cross-Entropy).
- **output_hidden_states** (`bool`, *optional*) --
  Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for
  more detail.
- **return_dict** (`bool`, *optional*) --
  Whether or not to return a [ModelOutput](/docs/transformers/v5.8.0/ja/main_classes/output#transformers.utils.ModelOutput) instead of a plain tuple.

The returned [ImageClassifierOutputWithNoAttention](/docs/transformers/v5.8.0/ja/main_classes/output#transformers.modeling_outputs.ImageClassifierOutputWithNoAttention) (described under **Returns:** below) has the following fields:

- **loss** (`torch.FloatTensor` of shape `(1,)`, *optional*, returned when `labels` is provided) -- Classification (or regression if config.num_labels==1) loss.
- **logits** (`torch.FloatTensor` of shape `(batch_size, config.num_labels)`) -- Classification (or regression if config.num_labels==1) scores (before SoftMax).
- **hidden_states** (`tuple(torch.FloatTensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`) -- Tuple of `torch.FloatTensor` (one for the output of the embeddings, if the model has an embedding layer, +
  one for the output of each stage) of shape `(batch_size, num_channels, height, width)`. Hidden-states (also
  called feature maps) of the model at the output of each stage.

Example:

```python
>>> from transformers import AutoImageProcessor, CvtForImageClassification
>>> import torch
>>> from datasets import load_dataset

>>> dataset = load_dataset("huggingface/cats-image")
>>> image = dataset["test"]["image"][0]

>>> image_processor = AutoImageProcessor.from_pretrained("microsoft/cvt-13")
>>> model = CvtForImageClassification.from_pretrained("microsoft/cvt-13")

>>> inputs = image_processor(image, return_tensors="pt")

>>> with torch.no_grad():
...     logits = model(**inputs).logits

>>> # model predicts one of the 1000 ImageNet classes
>>> predicted_label = logits.argmax(-1).item()
>>> print(model.config.id2label[predicted_label])
...
```

**Parameters:**

config ([CvtConfig](/docs/transformers/v5.8.0/ja/model_doc/cvt#transformers.CvtConfig)) : Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [from_pretrained()](/docs/transformers/v5.8.0/ja/main_classes/model#transformers.PreTrainedModel.from_pretrained) method to load the model weights.

**Returns:**

[ImageClassifierOutputWithNoAttention](/docs/transformers/v5.8.0/ja/main_classes/output#transformers.modeling_outputs.ImageClassifierOutputWithNoAttention) or `tuple(torch.FloatTensor)`

A [ImageClassifierOutputWithNoAttention](/docs/transformers/v5.8.0/ja/main_classes/output#transformers.modeling_outputs.ImageClassifierOutputWithNoAttention) or a tuple of
`torch.FloatTensor` (if `return_dict=False` is passed or when `config.return_dict=False`) comprising various
elements depending on the configuration ([CvtConfig](/docs/transformers/v5.8.0/ja/model_doc/cvt#transformers.CvtConfig)) and inputs.
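
To fine-tune the pretrained checkpoint on a custom dataset, a common pattern is to swap the 1,000-class ImageNet head for a freshly initialized one. A minimal sketch; the 10-class setup and placeholder label names below are purely illustrative:

```python
from transformers import CvtForImageClassification

# Placeholder label mapping for a hypothetical 10-class dataset.
id2label = {i: f"class_{i}" for i in range(10)}

model = CvtForImageClassification.from_pretrained(
    "microsoft/cvt-13",
    num_labels=10,
    id2label=id2label,
    label2id={label: i for i, label in id2label.items()},
    ignore_mismatched_sizes=True,  # re-initialize the mismatched classifier head
)
```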

