The humble microservice gets a lot of love, says Zach Arnold, a DevOps engineering manager. In theory, microservices are great with absolutely no drawbacks, offering loose coupling, independent management and “all the other stuff that we say we love,” he adds.
In his experience working with them at financing startup Ygrene Energy Fund, that love doesn’t exactly come for free. He tallies the costs: added network hops, complex debugging scenarios, authentication and authorization, version coordination, and the burden of managing third-party dependencies, especially when security patches land for the frameworks each service relies on.
“I’m not leading a revolution in microservices, I’m just hoping that maybe one less thing becomes a problem for people,” he says in a talk at KubeCon + CloudNativeCon.
Specifically, he hopes to see a network tool like Istio handle request caching. Right now the service-mesh project handles request routing, retries, fault tolerance, authentication and authorization, but it doesn’t handle request caching yet.
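To make the comparison concrete, Istio's existing request-handling features are configured through resources like a VirtualService. The sketch below (the `reviews` service name and subset are hypothetical) shows routing and retry policy, the kind of declarative configuration a caching feature would presumably slot alongside:

```yaml
# Hypothetical VirtualService: routes traffic for an in-mesh service
# and declares a retry policy, both features Istio supports today.
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
    - reviews            # hypothetical in-mesh service
  http:
    - route:
        - destination:
            host: reviews
            subset: v1
      retries:
        attempts: 3
        perTryTimeout: 2s
        retryOn: 5xx,connect-failure
```

Caching, by contrast, currently has no equivalent stanza; that is the gap the eCache work aims to fill.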
Currently, Istio acts as a harness for Envoy: it uses an extended version of the Envoy proxy, a high-performance proxy developed in C++, to mediate all inbound and outbound traffic for every service in the mesh. A team is at work building eCache, a multi-backend HTTP cache for Envoy; check out their efforts here.
Once this work is completed, it will be upstreamed into Istio, configurable using the same policy DSL, and will likely also support TTLs, L1 and L2 caching, and cache warming.
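The eCache internals aren't spelled out in the talk, but the TTL plus L1/L2 idea itself is easy to sketch. Below is a hypothetical two-tier cache in Python: a small, fast in-process L1 tier in front of a larger L2 tier, with every entry expiring after a TTL. In a real deployment the L2 would be a shared backend (Varnish, ATS, Redis, etc.); here both tiers are plain dicts so the sketch stays self-contained.

```python
import time

class TieredCache:
    """Hypothetical two-tier TTL cache: a small fast L1 in front of a larger L2."""

    def __init__(self, ttl=60.0, l1_max=128):
        self.ttl = ttl          # seconds each entry stays fresh
        self.l1_max = l1_max    # cap on the fast tier
        self.l1 = {}            # key -> (expires_at, value)
        self.l2 = {}

    def _fresh(self, entry):
        return entry is not None and entry[0] > time.monotonic()

    def get(self, key):
        entry = self.l1.get(key)
        if self._fresh(entry):
            return entry[1]                 # L1 hit
        entry = self.l2.get(key)
        if self._fresh(entry):
            self._put_l1(key, entry)        # promote L2 hit, warming L1
            return entry[1]
        return None                         # miss: caller fetches upstream

    def put(self, key, value):
        entry = (time.monotonic() + self.ttl, value)
        self._put_l1(key, entry)
        self.l2[key] = entry

    def _put_l1(self, key, entry):
        if len(self.l1) >= self.l1_max:
            # Evict the oldest-inserted entry (dicts keep insertion order).
            self.l1.pop(next(iter(self.l1)))
        self.l1[key] = entry

cache = TieredCache(ttl=1.0, l1_max=2)
cache.put("/api/users/1", b'{"id": 1}')
print(cache.get("/api/users/1"))  # served from L1
```

The promotion step in `get` is the interesting part: a popular response pushed out of the small L1 tier can still be served from L2 and re-warmed into L1, which is roughly what "L1, L2 caching and warming" implies.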
Existing open-source HTTP caching projects include:

- The Git repository for eBay’s Envoy caching interface to Apache Traffic Server’s cache back end.
- Varnish, the de facto standard for open-source HTTP caching.
- The caching system for mod_pagespeed [blog] [code], one implementation of an open-source multi-cache infrastructure.
- Casper, Yelp’s caching HTTP proxy for internal API calls.