Load Shedding with NGINX using adaptive concurrency control — Part 1 | by Vikas Kumar | Feb 2021 | OLX Group Engineering
Author’s Note: This is a two-part series about how we implemented and operationalized load shedding in our services using adaptive concurrency control with NGINX. Part 1 sets the context and background, and part 2 focuses on the implementation. I wrote about this technique in much more detail last year (Part 1, 2, 3, and 4), and some of the content here is adapted from those posts. Although I have tried to make this post self-contained, if you want a deeper dive into the topic, I’d recommend going through the other four posts and their attached reference material. Here is the link to part 2.

When we talk about resilience in a microservice architecture, it’s highly likely that circuit breaking will come up. Popularised by Michael Nygard in his excellent book Release It! and by Netflix’s Hystrix library, it has perhaps become the most discussed technique for making services more resilient when calling other services. Though there are many different implementations of circuit breakers, the most popular one is akin to client-side throttling: when a caller (client) of a service detects degradation in responses from the service, i.e. errors or increased latencies, it throttles calls to the service until it recovers. This is about being a good citizen — if the service you are calling is degraded, sending more requests will only make things worse, so it’s better to give it some time to recover.
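To make the pattern concrete, here is a minimal sketch of such a client-side circuit breaker. This is illustrative only, not the implementation from the series (which uses NGINX); the class name, thresholds, and state names are assumptions chosen for clarity.

```python
import time


class CircuitBreaker:
    """Illustrative client-side circuit breaker (not the article's NGINX setup).

    CLOSED: calls pass through normally.
    OPEN: calls are rejected immediately until a cooldown elapses.
    HALF_OPEN: one trial call is allowed; success closes the circuit,
    failure reopens it.
    """

    def __init__(self, failure_threshold=3, recovery_timeout=5.0):
        self.failure_threshold = failure_threshold  # consecutive failures before tripping
        self.recovery_timeout = recovery_timeout    # seconds to wait before a trial call
        self.failures = 0
        self.state = "CLOSED"
        self.opened_at = 0.0

    def call(self, fn, *args, **kwargs):
        if self.state == "OPEN":
            if time.monotonic() - self.opened_at >= self.recovery_timeout:
                self.state = "HALF_OPEN"  # cooldown over: let one trial call through
            else:
                # Shed the call without touching the degraded service.
                raise RuntimeError("circuit open: call rejected")
        try:
            result = fn(*args, **kwargs)
        except Exception:
            self._on_failure()
            raise
        self._on_success()
        return result

    def _on_failure(self):
        self.failures += 1
        if self.state == "HALF_OPEN" or self.failures >= self.failure_threshold:
            self.state = "OPEN"
            self.opened_at = time.monotonic()

    def _on_success(self):
        self.failures = 0
        self.state = "CLOSED"
```

The key design point is that the decision to throttle lives entirely in the caller: the degraded service sees fewer requests, which is exactly the "good citizen" behaviour described above.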