Fixing typos that sneaked into Hyx's docs #98

Merged · 1 commit · Nov 9, 2023

6 changes: 3 additions & 3 deletions docs/components/bulkhead.md
@@ -4,9 +4,9 @@

The cloud engineering world loves ocean/ship analogies a lot.
Kubernetes, docker, containers, helm, pods, harbor, spinnaker, werf, shipwright, etc.
One word from that vocabulary was actually reserved by resiliency engineering as well. It's bulkhead.
One word from that vocabulary was actually reserved by resiliency engineering as well. It's a bulkhead.

Bulkhead (a.k.a. bulwark) can be viewed a virtual room of certain capacity. That capacity is your resources that you allow to be used at the same time to process that action.
Bulkhead (a.k.a. bulwark) can be viewed as a virtual room of certain capacity. That capacity is your resources that you allow to be used at the same time to process that action.
You can define multiple bulkheads per different functionality in your microservice.
That will ensure that **one part of functionality won't be working at the expense of another**.
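
To make the "virtual room" idea concrete, here is a minimal sketch of a bulkhead built on a plain `asyncio.Semaphore` (an illustration only, not Hyx's actual API):

```python
import asyncio


class Bulkhead:
    """A "virtual room" that lets at most `capacity` calls run at the same time."""

    def __init__(self, capacity: int) -> None:
        self._room = asyncio.Semaphore(capacity)

    async def __call__(self, func, *args, **kwargs):
        async with self._room:  # wait until there is a free slot in the room
            return await func(*args, **kwargs)


# one bulkhead per piece of functionality, so they don't starve each other
orders_bulkhead = Bulkhead(capacity=10)
# result = await orders_bulkhead(process_order, order_id)
```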

@@ -42,7 +42,7 @@ Hence, the bulkhead is essentially a **concurrency limiting mechanism**. In turn

## Adaptive Limiting

Concurrency is possible to limit adaptively based on statistics around latency of completed requestes and some latency objective.
Concurrency is possible to limit adaptively based on statistics around latency of completed requests and some latency objective.

!!! note
Hyx doesn't provide ARC implementation at this moment. [Let us know](../../faq/#missing-a-feature) if this is useful for you.
2 changes: 1 addition & 1 deletion docs/components/circuit_breakers.md
@@ -37,7 +37,7 @@ Circuit breakers are implemented as state machines. The following states are sup

## Usage

The breakers come into two flavours:
The breakers come into two flavors:

=== "decorator"

2 changes: 1 addition & 1 deletion docs/components/index.md
@@ -99,4 +99,4 @@ There are two more ways to think about the solution:
This is a way to decouple resiliency pieces from the application logic.
People usually use [Envoy-based sidecars](https://www.envoyproxy.io/) to achieve this.
- Asynchronous Event-driven Communication. Modern event/message queues deployed in highly available setup can help to improve resiliency as well.
They can provide resiliency patterns built-in into their transport protocol and some unique ways to organize communication in the system. [Apache Kafka](https://kafka.apache.org/) could be an example of such queue.
They can provide resiliency patterns built-in into their transport protocol and some unique ways to organize communication in the system. [Apache Kafka](https://kafka.apache.org/) could be an example of such a queue.
20 changes: 10 additions & 10 deletions docs/components/rate_limiter.md
@@ -22,7 +22,7 @@ By state rate limiter can be grouped as:

## Use Cases

* Protect the whole system from DoS by limiting rate of incoming requests to public API in <abbr title="a component, microservice or proxy that sits in front of all microservice API">Gateway</abbr>.
* Protect the whole system from DoS by limiting the rate of incoming requests to public API in <abbr title="a component, microservice or proxy that sits in front of all microservice API">Gateway</abbr>.
* Limit rate of requesting of external API or legacy systems on the client side
* Apply rate limiting in private API to avoid friendly-fire DoS by misbehaving peer microservices
* Ensure fair distribution of resources between API users
@@ -32,7 +32,7 @@ By state rate limiter can be grouped as:
### Static Rate Limiters

In static rate limiters, you define the rate explicitly during the rate limiter configuration (e.g. 100 req/sec).
Then limiters employ disparate algorithms to count and enforce that limits.
Then limiters employ disparate algorithms to count and enforce those limits.

The rate value is usually determined by load testing of the microservice.

@@ -41,7 +41,7 @@ The rate value is usually determined by load testing of the microservice.
This rate limiter is based on the token bucket algorithm.
In this approach, we have a notion of a bucket with tokens.
If the bucket has some tokens, a new request takes one out to come through.
Otherwise, the request fails due reaching the limit.
Otherwise, the request fails due to reaching the limit.

The bucket gets replenished with new tokens with a constant rate that is equal to `1/request rate`.
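
A rough sketch of this algorithm (for illustration; Hyx's own implementation may differ):

```python
import time


class TokenBucket:
    """Allows roughly `rate` requests per second, with bursts up to `capacity`."""

    def __init__(self, rate: float, capacity: int) -> None:
        self.rate = rate
        self.capacity = capacity
        self.tokens = float(capacity)
        self.updated_at = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # replenish tokens in proportion to the time elapsed since the last call
        self.tokens = min(self.capacity, self.tokens + (now - self.updated_at) * self.rate)
        self.updated_at = now

        if self.tokens >= 1:
            self.tokens -= 1  # the request takes one token to come through
            return True

        return False  # the bucket is empty, the limit is reached
```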

@@ -75,12 +75,12 @@ In this situation, you can apply Adaptive Request Concurrency (a dynamic form of
### Local/In-memory Rate Limiters

The simplest form of the rate limiting state is a state stored in-memory.
In this case, all instances of a microservice will have an own local state of requests served in a time window.
In this case, all instances of a microservice will have its own local state of requests served in a time window.

This is a simple, straightforward, database- and dependency less way to quickly introduce rate limiting.
This is a simple, straightforward, database- and dependency-less way to quickly introduce rate limiting.
At the same, that simplicity comes with the following specifics.

As you scale microservice instance number up, the allowed rate limiting effectively grow as well.
As you scale microservice instance numbers up, the allowed rate limiting effectively grow as well.
For example, if you have specified to handle 10 req/sec for one microservice instance, then:

* 1 instance handles 10 req/sec
@@ -90,7 +90,7 @@ For example, if you have specified to handle 10 req/sec for one microservice ins
This may look odd, but it's still useful and efficient as you don't need to introduce an external database
to store your state, and you ensure that each particular instance is not going to be overloaded.

Another issue with this approach is that you miss the state on instance redeploying.
Another issue with this approach is that you miss the state of instance redeploying.

If this behavior is not intended, or you have a well-specified SLA on your request rate,
you should look at more [complex distributed state](#distributed-rate-limiters).
@@ -122,18 +122,18 @@ Rate limiting rarely makes sense to apply on the global level.
In that case, all requests would fall under the same shard.

In practice, it makes sense to shard rate limits on different levels.
For example, rate limits are often sharded by `user_id`, so each user has own rate quote.
For example, rate limits are often sharded by `user_id`, so each user has their own rate quote.

Another popular way to shard limits is based on request routes.
In this case, rate sharding can help to prioritize and separate traffic that microservice handles.

A similar way to shard is based on read/write operations or based on more/less resource-consuming API

This can be seen as a some form of [bulkhead](./bulkhead.md).
This can be seen as a form of [bulkhead](./bulkhead.md).
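
As a hypothetical sketch (not part of Hyx), a sharded limiter can be pictured as a map of independent counters keyed by the shard value:

```python
import time
from collections import defaultdict


class ShardedRateLimiter:
    """Keeps a separate fixed-window counter per shard key (e.g. a user_id)."""

    def __init__(self, limit: int, window_secs: float = 1.0) -> None:
        self.limit = limit
        self.window_secs = window_secs
        self.windows = defaultdict(lambda: (0.0, 0))  # shard key -> (window start, count)

    def allow(self, shard_key: str) -> bool:
        now = time.monotonic()
        window_start, count = self.windows[shard_key]

        if now - window_start >= self.window_secs:
            window_start, count = now, 0  # a new window begins for this shard

        if count >= self.limit:
            return False  # this shard (e.g. this user) has exhausted its quota

        self.windows[shard_key] = (window_start, count + 1)
        return True


# each user gets their own quota, independently of other users
limiter = ShardedRateLimiter(limit=100)
# allowed = limiter.allow(shard_key=user_id)
```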

### Rate Limit Public API

Public API is part of the system that are exposed to traffic sources outside your cluster like UI application, SDKs, etc.
Public API is part of the system that is exposed to traffic sources outside your cluster like UI application, SDKs, etc.
This type of API is usually the most loaded and under the hood they trigger requests to other components.

If Public API has no rate limiting, this is the number one way to put your system down,
6 changes: 3 additions & 3 deletions docs/components/retry.md
@@ -66,7 +66,7 @@ The `list[float]` and `tuple[float, ...]` backoffs are just aliases for the `int
### Exponential Backoff

Exponential backoff is one of the most popular backoff strategies.
It produces delays that growth rapidly. That gives the faulty functionality more and more time to recover on each retry.
It delays that growth rapidly. That gives the faulty functionality more and more time to recover on each retry.
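
In the capped variant the delay is typically computed as `min(base_delay * 2 ** attempt, max_delay)`. A sketch of such a backoff as a plain generator (illustrative only, not the Hyx implementation):

```python
from typing import Iterator


def capped_exponential_backoff(base_delay_secs: float = 1.0, max_delay_secs: float = 60.0) -> Iterator[float]:
    """Yields 1, 2, 4, 8, ... seconds (for a 1s base), never exceeding max_delay_secs."""
    attempt = 0
    while True:
        yield min(base_delay_secs * (2 ** attempt), max_delay_secs)
        attempt += 1
```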

Hyx implements the Capped Exponential Backoff that allows to specify the `max_delay_secs` bound:

Expand Down Expand Up @@ -127,7 +127,7 @@ It was authored by [the Polly community](https://github.com/App-vNext/Polly/issu

### Custom Backoffs

In the Hyx design, backoffs are just iterators that returns float numbers and can go on infinitely.
In the Hyx design, backoffs are just iterators that return float numbers and can go on infinitely.

Here is how the factorial backoff could be implemented:
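
A minimal sketch of one possible generator-based implementation:

```python
import math
from typing import Iterator


def factorial_backoff(base_delay_secs: float = 1.0) -> Iterator[float]:
    """Yields base * 1!, base * 2!, base * 3!, ... seconds, growing without bound."""
    attempt = 1
    while True:
        yield base_delay_secs * math.factorial(attempt)
        attempt += 1
```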

@@ -278,4 +278,4 @@ If we had a deeper request chain with more retries on the way,
all of them would multiply and create even worse load on the system.

The general rule of thumb here is to retry in the component that is directly above the failed one.
In this case, it would be okay to retry on the `orders` level only.
In this case, it would be okay to retry on the `orders` level only.
10 changes: 5 additions & 5 deletions docs/faq.md
@@ -3,16 +3,16 @@ FAQ

## No Sync Support?

That's right. Hyx was only intended to support asyncio-based applications.
The sync support would require to rethink implemented primitives in terms of multithreading.
So that will require a bit of effort. As of now, there is no plans to work on that.
That's right. Hyx was only intended to support asyncio-based applications.
The sync support would require rethinking implemented primitives in terms of multithreading.
So that will require a bit of effort. As of now, there are no plans to work on that.
However, that might change based on the community feedback and demand.

## Missing a Feature?

Let us know about your needs by creation a request or commenting on existing one in [Github](https://github.com/roma-glushko/hyx/discussions/categories/polls).
Let us know about your needs by creating a request or commenting on an existing one in [Github](https://github.com/roma-glushko/hyx/discussions/categories/polls).
This will greatly help us to prioritize work in the project.

## Have not found your answer?

Feel free to create a discussion in [Github](https://github.com/roma-glushko/hyx/discussions/categories/q-a).
Feel free to create a discussion in [Github](https://github.com/roma-glushko/hyx/discussions/categories/q-a).
2 changes: 1 addition & 1 deletion docs/roadmap.md
@@ -21,7 +21,7 @@ This may give some ideas on what to expect from the project and what we might mi
### Goals

* Provide the baseline implementation for all general reliability components
* Init a documentation. Document the components
* Init documentation. Document the components
* Implement project's infrastructure

## M1: Observability