Commit 743a3bb: documentation

DavidLandup0 committed Aug 23, 2023
1 parent b32e0cf, commit 743a3bb
Showing 4 changed files with 17 additions and 15 deletions.
14 changes: 7 additions & 7 deletions keras_cv/layers/hierarchical_transformer_encoder.py
@@ -42,14 +42,14 @@ class HierarchicalTransformerEncoder(keras.layers.Layer):
             Due to the residual addition the input dimensionality has to be
             equal to the output dimensionality.
         num_heads: integer, the number of heads for the
             `SegFormerMultiheadAttention` layer
-        drop_prob: float, default 0.0, the probability of dropping a random
-            sample using the `DropPath` layer.
-        layer_norm_epsilon: float, default 1e-06, the epsilon for
-            `LayerNormalization` layers
-        sr_ratio: integer, default 1, the ratio to use within
-            `SegFormerMultiheadAttention` layer.
+        drop_prob: float, the probability of dropping a random
+            sample using the `DropPath` layer. Defaults to `0.0`.
+        layer_norm_epsilon: float, the epsilon for
+            `LayerNormalization` layers. Defaults to `1e-06`.
+        sr_ratio: integer, the ratio to use within
+            `SegFormerMultiheadAttention`. If set to > 1, a `Conv2D`
-            layer is used to reduce the length of the sequence.
+            layer is used to reduce the length of the sequence. Defaults to `1`.
Basic usage:
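The "Basic usage" snippet itself is collapsed in this view. As a rough sketch of how the documented arguments fit together, assuming the `keras_cv.layers` export paths and assuming `OverlappingPatchingAndEmbedding` yields the flattened `(batch, num_patches, project_dim)` sequence the encoder expects (shapes here are illustrative, not taken from the hidden docstring):

    import tensorflow as tf
    import keras_cv

    images = tf.random.normal((1, 64, 64, 3))

    # Embed overlapping patches; with stride=4 on a 64x64 image this is
    # assumed to give a (1, 16 * 16, 64) sequence of patch embeddings.
    patches = keras_cv.layers.OverlappingPatchingAndEmbedding(
        project_dim=64, patch_size=7, stride=4
    )(images)

    # project_dim must match the embedding above: the residual additions
    # require equal input and output dimensionality.
    encoded = keras_cv.layers.HierarchicalTransformerEncoder(
        project_dim=64,
        num_heads=2,
        sr_ratio=4,  # > 1: a Conv2D shortens the sequence attended over
        drop_prob=0.0,
        layer_norm_epsilon=1e-6,
    )(patches)

    print(encoded.shape)  # same shape as its input, e.g. (1, 256, 64)
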
10 changes: 6 additions & 4 deletions keras_cv/layers/overlapping_patching_embedding.py
@@ -32,10 +32,12 @@ def __init__(self, project_dim=32, patch_size=7, stride=4, **kwargs):
     - [Ported from the TensorFlow implementation from DeepVision](https://github.com/DavidLandup0/deepvision/blob/main/deepvision/layers/hierarchical_transformer_encoder.py)  # noqa: E501
     Args:
-        project_dim: integer, default 32, the dimensionality of the projection
-        patch_size: integer, default 7, the size of the patches to encode
-        stride: integer, default 4, the stride to use for the patching before
-            projection
+        project_dim: integer, the dimensionality of the projection.
+            Defaults to `32`.
+        patch_size: integer, the size of the patches to encode.
+            Defaults to `7`.
+        stride: integer, the stride to use for the patching before
+            projection. Defaults to `4`.
Basic usage:
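With the "Basic usage" block likewise collapsed, here is a minimal sketch of the layer on its own, under the same assumptions as the sketch above (the flattened output shape is an expectation, not stated in this diff):

    import tensorflow as tf
    import keras_cv

    images = tf.random.normal((1, 64, 64, 3))

    # patch_size=7 with stride=4 makes neighbouring patches overlap,
    # unlike classic non-overlapping ViT patching.
    patches = keras_cv.layers.OverlappingPatchingAndEmbedding(
        project_dim=32, patch_size=7, stride=4
    )(images)

    # Expected: (batch, num_patches, project_dim), here (1, 16 * 16, 32).
    print(patches.shape)
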
@@ -69,9 +69,9 @@ def __init__(
             values.
         num_classes: int, the number of classes for the segmentation model,
             including the background class.
-        projection_filters: int, default 256, number of filters in the
+        projection_filters: int, number of filters in the
             convolution layer projecting the concatenated features into
-            a segmentation map.
+            a segmentation map. Defaults to `256`.
Examples:
4 changes: 2 additions & 2 deletions keras_cv/models/segmentation/segformer/segformer.py
@@ -47,9 +47,9 @@ class SegFormer(Task):
             values.
         num_classes: int, the number of classes for the segmentation model,
             including the background class.
-        projection_filters: int, default 256, number of filters in the
+        projection_filters: int, number of filters in the
             convolution layer projecting the concatenated features into
-            a segmentation map.
+            a segmentation map. Defaults to `256`.
Examples:
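The "Examples" block is collapsed as well. A hedged sketch of constructing the task follows; the `MiTBackbone` class and the `"mit_b0_imagenet"` preset are assumptions about KerasCV at the time of this commit, not taken from the diff:

    import tensorflow as tf
    import keras_cv

    images = tf.ones((1, 512, 512, 3))

    # A MiT (Mix Transformer) backbone is the usual pairing for SegFormer.
    backbone = keras_cv.models.MiTBackbone.from_preset("mit_b0_imagenet")

    model = keras_cv.models.SegFormer(
        backbone=backbone,
        num_classes=19,          # includes the background class
        projection_filters=256,  # Conv2D filters projecting fused features
    )

    # Per-pixel class scores, one channel per class.
    preds = model(images)
    print(preds.shape)  # expected (1, 512, 512, 19)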
