Fix anchors to use references at end of file
It seems from [Jekyll
docs](https://jekyllrb.com/docs/liquid/tags/#linking-to-posts) that there is
no way to add anchors to link tags _inside_ a Markdown link.  Instead, use
references at the end of the file, where they seem to work.
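The pattern can be sketched like this, using the RBAC link from this diff as the example (per the commit message, the first form does not work):

```markdown
<!-- Anchor appended inside the inline link tag — reportedly does not work: -->
See [pre-configured groups]({% link reference/rbac.md %}#preconfigured-groups).

<!-- Instead, use a reference-style link whose definition sits at the end of the file: -->
See [pre-configured groups][rbac-preconfigured].

[rbac-preconfigured]: {% link reference/rbac.md %}#preconfigured-groups
```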
arielshaqed committed Aug 2, 2023
1 parent d8e4226 commit 180e585
Showing 6 changed files with 19 additions and 8 deletions.
4 changes: 2 additions & 2 deletions docs/enterprise/index.md
@@ -9,7 +9,7 @@ has_toc: false
# lakeFS Enterprise

lakeFS Enterprise is an enterprise-ready lakeFS solution that provides a support SLA and additional features on top of the open-source version of lakeFS. The additional features are:
* [RBAC]({{ site.baseurl }}/reference/rbac.html)
* [SSO]({{ site.baseurl }}/enterprise/sso.html)
* [RBAC]({% link reference/rbac.md %})
* [SSO]({% link enterprise/sso.md %})
* Support SLA

4 changes: 3 additions & 1 deletion docs/enterprise/sso.md
@@ -101,7 +101,7 @@ auth:

In order for Fluffy to work, the following values must be configured. Update (or override) the following attributes in the chart's `values.yaml` file.
1. Replace `lakefsConfig.friendly_name_claim_name` with the right claim name.
1. Replace `lakefsConfig.default_initial_groups` with desired claim name (See [pre-configured]({{ site.baseurl }}/reference/rbac.md#preconfigured-groups) groups for enterprise)
1. Replace `lakefsConfig.default_initial_groups` with desired claim name (See [pre-configured][rbac-preconfigured] groups for enterprise)
2. Replace `fluffyConfig.auth.logout_redirect_url` with your full OIDC logout URL (e.g. `https://oidc-provider-url.com/logout/path`)
3. Replace `fluffyConfig.auth.oidc.url` with your OIDC provider URL (e.g. `https://oidc-provider-url.com`)
4. Replace `fluffyConfig.auth.oidc.logout_endpoint_query_parameters` with parameters you'd like to pass to the OIDC provider for logout.
@@ -218,3 +218,5 @@ Notes:
* Change `ingress.hosts[0]` from `lakefs.company.com` to a real host (usually the same as lakeFS), and update any additional references in the file (note: if a URL path follows the host, it should stay unchanged).
* Update the `ingress` configuration with other optional fields, if used.
* Fluffy Docker image: replace `fluffy.image.privateRegistry.secretToken` with a real Docker Hub token for the Fluffy Docker image.

[rbac-preconfigured]: {% link reference/rbac.md %}#preconfigured-groups
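A hedged sketch of the overrides above as they might appear in `values.yaml` — the exact nesting under `lakefsConfig`/`fluffyConfig` and all values here are assumptions; the chart itself is authoritative:

```yaml
lakefsConfig:
  friendly_name_claim_name: name            # replace with the right claim name
  default_initial_groups: ["Developers"]    # replace with the desired initial group(s)
fluffyConfig:
  auth:
    logout_redirect_url: https://oidc-provider-url.com/logout/path  # your full OIDC logout URL
    oidc:
      url: https://oidc-provider-url.com    # your OIDC provider URL
      logout_endpoint_query_parameters:     # parameters passed to the provider on logout
        - client_id=example                 # placeholder
```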
4 changes: 3 additions & 1 deletion docs/howto/deploy/aws.md
@@ -93,7 +93,7 @@ Connect to your EC2 instance using SSH:
blockstore:
type: s3
```
1. [Download the binary]({% link index.md %}#downloads}) to the EC2 instance.
1. [Download the binary][downloads] to the EC2 instance.
1. Run the `lakefs` binary on the EC2 instance:

```sh
@@ -278,3 +278,5 @@ lakeFS can authenticate with your AWS account using an AWS user, using an access
```

{% include_relative includes/setup.md %}

[downloads]: {% link index.md %}#downloads
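The download-and-run steps above might look like this on the instance — paths are placeholders, and `lakefs run --config` is an assumption worth checking against the downloaded binary's `--help`:

```sh
# Assuming the binary was downloaded to the current directory
# and the blockstore snippet above was saved to ./config.yaml:
./lakefs run --config ./config.yaml
```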
4 changes: 3 additions & 1 deletion docs/howto/deploy/azure.md
@@ -144,7 +144,7 @@ lakeFS stores metadata in a database for its versioning engine. This is done via
1. Create a new container in the database and select
`partitionKey` as the Partition key (case sensitive).
1. Pass the endpoint, database name and container name to lakeFS as
described in the [configuration guide]({% link reference/configuration.md %}#example--azure-blob-storage).
described in the [configuration guide][config-reference-azure-block].
You can either pass the CosmosDB's account read-write key to lakeFS, or
use a managed identity to authenticate to CosmosDB, as described
[earlier](#identity-based-credentials).
@@ -293,3 +293,5 @@ Checkout Nginx [documentation](https://kubernetes.github.io/ingress-nginx/user-g


{% include_relative includes/setup.md %}

[config-reference-azure-block]: {% link reference/configuration.md %}#example--azure-blob-storage
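A hedged sketch of passing the endpoint, database name, and container name to lakeFS — the key names under `database` are assumptions; the linked configuration guide is authoritative:

```yaml
database:
  type: cosmosdb              # assumed type name
  cosmosdb:
    endpoint: https://my-account.documents.azure.com:443/   # placeholder endpoint
    database: lakefs-db                                     # placeholder database name
    container: lakefs-container                             # placeholder container name
    # key: <read-write key>   # omit when authenticating with a managed identity
```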
7 changes: 5 additions & 2 deletions docs/howto/import.md
@@ -86,7 +86,7 @@ the following policy needs to be attached to the lakeFS S3 service-account to al

</div>
<div markdown="1" id="azure-storage">
See [Azure deployment]({% link howto/deploy/azure.md %}#storage-account-credentials) on limitations when using account credentials.
See [Azure deployment][deploy-azure-storage-account-creds] on limitations when using account credentials.

#### Azure Data Lake Gen2

@@ -190,6 +190,9 @@ Another way of getting existing data into a lakeFS repository is by copying it.

To copy data into lakeFS you can use the following tools:

1. The `lakectl` command line tool - see the [reference]({% link reference/cli.md %}#lakectl-fs-upload) to learn more about using it to copy local data into lakeFS. Using `lakectl fs upload --recursive` you can upload multiple objects together from a given directory.
1. The `lakectl` command line tool - see the [reference][lakectl-fs-upload] to learn more about using it to copy local data into lakeFS. Using `lakectl fs upload --recursive` you can upload multiple objects together from a given directory.
1. Using [rclone](./copying.md#using-rclone)
1. Using Hadoop's [DistCp](./copying.md#using-distcp)

[deploy-azure-storage-account-creds]: {% link howto/deploy/azure.md %}#storage-account-credentials
[lakectl-fs-upload]: {% link reference/cli.md %}#lakectl-fs-upload
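The `lakectl fs upload --recursive` usage above might look like this — repository, branch, and paths are placeholders, and the exact flag spelling should be confirmed with `lakectl fs upload --help`:

```sh
# Recursively upload a local directory into a branch
lakectl fs upload --recursive --source ./local-data/ lakefs://example-repo/main/datasets/
```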
4 changes: 3 additions & 1 deletion docs/understand/faq.md
@@ -19,7 +19,7 @@ lakeFS uses a copy-on-write mechanism to avoid data duplication. For example, cr
We are extremely responsive on our Slack channel, and we make sure to prioritize the most pressing issues for the community. For SLA-based support, please contact us at [[email protected]](mailto:[email protected]).

### 4. Do you collect data from your active installations?
We collect anonymous usage statistics to understand the patterns of use and to detect product gaps we may have so we can fix them. This is optional and may be turned off by setting `stats.enabled` to `false`. See the [configuration reference]({% link reference/configuration %}#reference) for more details.
We collect anonymous usage statistics to understand the patterns of use and to detect product gaps we may have so we can fix them. This is optional and may be turned off by setting `stats.enabled` to `false`. See the [configuration reference][config-ref] for more details.


The data we gather is limited to the following:
@@ -40,3 +40,5 @@ The [Axolotl](https://en.wikipedia.org/wiki/Axolotl){:target="_blank"} – a spe
<small>
[copyright](https://en.wikipedia.org/wiki/Axolotl#/media/File:AxolotlBE.jpg)
</small>

[config-ref]: {% link reference/configuration.md %}#reference
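The `stats.enabled` setting mentioned above maps to this fragment of the lakeFS configuration file:

```yaml
# Opt out of anonymous usage statistics
stats:
  enabled: false
```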
