Kubernetes World


Load Shedding with NGINX using adaptive concurrency control — Part 1 | by Vikas Kumar | Feb, 2021 | OLX Group Engineering (tech.olx.com)


Author’s Note: This is a two-part series about how we implemented and operationalized load shedding in our services using adaptive concurrency control with NGINX. Part 1 sets the context and background, and part 2 focuses on the implementation. I wrote about this technique in much more detail last year (Part 1, 2, 3, and 4), and some of the content here is adapted from those posts. Although I have tried to make this post self-sufficient, if you want a deeper dive into the topic, I’d recommend going through the other four posts and their attached reference material. Here is the link to part 2.

When we talk about resilience in a microservice architecture, it’s highly likely that circuit breaking will come up. Popularised by Michael Nygard in his excellent book Release It! and by Netflix’s Hystrix library, it has perhaps become the most discussed technique for making services more resilient when calling other services. Though there are many different implementations of circuit breakers, the most popular one is akin to client-side throttling: when a caller (client) of a service detects degradation in the service’s responses, i.e. errors or increased latencies, it throttles its calls to the service until it recovers. This is mostly about being a good citizen: if the service you are calling is degraded, sending more requests will only make things worse, so it’s better to give it some time to recover.
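To make that pattern concrete, here is a minimal sketch of such a client-side circuit breaker. It is not the mechanism this series builds towards (that is NGINX-based adaptive concurrency control, covered in part 2), and the class name, thresholds, and timeout values are illustrative assumptions rather than anything from the original posts:

```python
import time


class CircuitBreaker:
    """Minimal client-side circuit breaker sketch (illustrative only):
    trip after a run of failures, then back off before probing again."""

    def __init__(self, failure_threshold=5, recovery_timeout=30.0):
        self.failure_threshold = failure_threshold  # consecutive failures before opening
        self.recovery_timeout = recovery_timeout    # seconds to back off while open
        self.failure_count = 0
        self.opened_at = None                       # None means the circuit is closed

    def call(self, fn, *args, **kwargs):
        # While open, reject calls locally instead of hammering a degraded service.
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.recovery_timeout:
                raise RuntimeError("circuit open: giving the service time to recover")
            # Back-off elapsed: let one trial ("half-open") call through.
        try:
            result = fn(*args, **kwargs)
        except Exception:
            self.failure_count += 1
            if self.failure_count >= self.failure_threshold:
                self.opened_at = time.monotonic()   # trip (or re-trip) the breaker
            raise
        # A successful call closes the circuit and resets the failure counter.
        self.failure_count = 0
        self.opened_at = None
        return result
```

A caller would wrap its outbound requests, e.g. `breaker.call(requests.get, "http://downstream/items", timeout=2)` (a hypothetical endpoint), so that repeated errors stop traffic at the client rather than piling more load onto the struggling service. A production breaker would typically also treat elevated latencies as failures and use a sliding window instead of a simple consecutive-failure count, which is closer to the degradation signals described above.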


Tags: circuit-breaker, kubernetes, load-shedding, microservices, nginx
