Published on June 16, 2024

Forward Proxy and Reverse Proxy

Communication between servers works so seamlessly these days that it is easy to overlook the many intermediary systems that sit in between, such as forward proxies and reverse proxies.

A forward proxy is a layer between a group of client machines, often within the same network, and the rest of the Internet. Traffic going out to the Internet passes through the forward proxy first. In other words, the forward proxy acts as a middleman that intercepts outgoing requests and forwards them to the destination server on the client's behalf. A forward proxy offers the benefits of hiding the clients' IP addresses, bypassing browsing restrictions imposed by institutions and governments, and blocking access to certain resources through filtering rules.

A reverse proxy is a layer between a group of servers and the Internet. Similarly, the servers and the reverse proxy are often within the same network. All incoming traffic bound for the servers must pass through the reverse proxy first. The reverse proxy offers the benefits of hiding the servers' IP addresses, acting as a load balancer that distributes traffic among the servers, caching static content, and handling TLS/SSL handshakes.

It is not uncommon for forward proxies and reverse proxies to be used at the same time, and there can be multiple layers of reverse proxies between the Internet and the servers.
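To make the forward proxy idea concrete, here is a minimal sketch in Python using only the standard library. The blocked hostname is a hypothetical filtering rule, and the proxy handles plain HTTP GET requests only (no CONNECT/TLS tunneling), so treat it as an illustration rather than something deployable.

```python
# A minimal forward proxy sketch with a blocklist, illustrating how
# outgoing requests can be filtered before reaching the Internet.
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import urlparse

BLOCKED_HOSTS = {"blocked.example.com"}  # hypothetical filtering rule

class ForwardProxyHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Proxy clients send the absolute URL in the request line,
        # e.g. GET http://example.com/ HTTP/1.1
        host = urlparse(self.path).hostname or ""
        if host in BLOCKED_HOSTS:
            self.send_error(403, "Blocked by proxy policy")
            return
        # Fetch on the client's behalf; the destination server sees
        # the proxy's IP address, not the client's.
        with urllib.request.urlopen(self.path) as resp:
            body = resp.read()
            self.send_response(resp.status)
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 3128), ForwardProxyHandler).serve_forever()
```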

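A reverse proxy can be sketched in much the same way. The snippet below assumes two hypothetical backends on localhost ports 8001 and 8002 and rotates between them round robin; a real deployment would use something like NGINX or HAProxy, with TLS termination and caching on top.

```python
# A minimal reverse proxy sketch: clients talk only to this server,
# which forwards each request to one of the backends round robin.
import itertools
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical backend pool, cycled round robin.
BACKENDS = itertools.cycle(["http://localhost:8001", "http://localhost:8002"])

class ReverseProxyHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Pick the next backend and forward the request; the client
        # never learns the backend's address.
        backend = next(BACKENDS)
        with urllib.request.urlopen(backend + self.path) as resp:
            body = resp.read()
            self.send_response(resp.status)
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), ReverseProxyHandler).serve_forever()
```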
Tags: Forward Proxy, Load Balancer, Load Balancer Cache, Reverse Proxy, Tech
Published on November 13, 2023

API Gateway

An API Gateway is a vital component for scaling and securing modern distributed systems. It sits between the client and a suite of backend services and serves as a single point of entry to an application. Major API Gateway providers include AWS API Gateway, Azure API Management, and Google API Gateway, and they tend to come with features such as request routing, load balancing, authentication and authorization, rate limiting, caching, and logging right out of the box.

Upon receiving a request from the client, an API Gateway forwards the request to the appropriate backend service based on a predefined set of rules.

Load balancing comes standard with an API Gateway and helps distribute traffic across multiple machines. The distribution policy can be configured to use round robin, sticky round robin, weighted round robin, IP/URL hashing, least connections, or least latency. See Exploring Different Types of Load Balancers for more details.

An API Gateway can also serve as a gatekeeper through authentication and authorization. The implementation varies and depends on the authentication provider.

Rate limiting is an important API Gateway feature that helps prevent abuse of the backend services. The rate limiting policy can be configured to use token bucket, leaky bucket, fixed window counter, sliding window log, or sliding window counter.

Some API Gateways offer caching features to help reduce load on the backend services and improve performance. Logging is another common feature; it enables usage tracking and troubleshooting for better insight into the system.

These are just some of the features provided by an API Gateway, and implementations vary between providers.
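Returning to request routing: as a rough illustration, the sketch below matches the request path against a predefined rule table and resolves the owning backend. The service names and URLs are hypothetical placeholders, and real gateways support far richer matching (methods, headers, weights).

```python
# A minimal request routing sketch: first matching path prefix wins.
ROUTING_RULES = [
    ("/users", "http://user-service:8000"),      # hypothetical services
    ("/orders", "http://order-service:8000"),
    ("/payments", "http://payment-service:8000"),
]

def route(path: str) -> str:
    # Walk the rule table in order and forward to the first match.
    for prefix, backend in ROUTING_RULES:
        if path.startswith(prefix):
            return backend + path
    raise LookupError(f"No route for {path}")

print(route("/orders/42"))  # -> http://order-service:8000/orders/42
```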

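Among the rate limiting policies mentioned above, the token bucket is perhaps the easiest to illustrate. The following is a minimal in-memory sketch with one bucket per client; a production gateway would typically keep the buckets in a shared store such as Redis so that all gateway instances enforce the same limit.

```python
# A minimal token bucket rate limiter sketch. Tokens refill at a
# fixed rate; each request consumes one token or is rejected.
import time

class TokenBucket:
    def __init__(self, capacity: int, refill_rate: float):
        self.capacity = capacity        # maximum tokens in the bucket
        self.refill_rate = refill_rate  # tokens added per second
        self.tokens = float(capacity)
        self.last_refill = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at capacity.
        elapsed = now - self.last_refill
        self.tokens = min(self.capacity, self.tokens + elapsed * self.refill_rate)
        self.last_refill = now
        if self.tokens >= 1:
            self.tokens -= 1  # consume one token for this request
            return True
        return False

# One bucket per client, keyed by a hypothetical client id.
buckets = {}

def handle_request(client_id: str) -> str:
    bucket = buckets.setdefault(client_id, TokenBucket(capacity=10, refill_rate=5.0))
    return "200 OK" if bucket.allow() else "429 Too Many Requests"
```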
Tags: API Gateway, AWS API Gateway, Azure API Management, Google API Gateway, Load Balancer, Load Balancer Cache, Rate Limiter, Tech
Published on November 8, 2023

Exploring Different Types of Cache Systems

Caching is a common technique in modern distributed systems for enhancing performance and reducing response time. The general idea is to reuse previously computed values and avoid repeated server or database hits. A distributed system can have multiple caching layers: browser cache, CDN, load balancer cache, distributed cache, and database cache. The following techniques assume that the data within the cache are not stale.

Browser caching stores HTTP responses locally and facilitates faster data retrieval; a cache hit can shave a significant amount off the response time. Browser caching is enabled by adding an expiration policy to the HTTP response headers.

Web assets such as images, videos, and documents are perfect content for caching because they do not change often. Web assets are typically cached in content delivery networks (CDNs), which are geographically distributed to be as close to the request origin as possible and thereby reduce response time. Content can also be personalized through edge nodes.

Load balancer caching can help reduce stress on the servers and improve response time. Depending on the implementation, load balancers can be configured to respond with cached results to subsequent requests with the same parameters.

Distributed caches such as Redis are in-memory key-value stores with high read and write performance. One common application of a distributed cache is inverted indexing for full document search.

Lastly, depending on the implementation, databases may offer caching functionality such as buffer pools and materialized views. A buffer pool caches frequently accessed data pages in allocated memory to avoid repeated disk reads, and materialized views store precomputed query results to help reduce latency.
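Circling back to browser caching: as a small illustration, the sketch below serves files with the Python standard library and attaches a Cache-Control header to static assets. The one-day max-age is an arbitrary choice for demonstration.

```python
# A minimal sketch of enabling browser caching via response headers.
from http.server import SimpleHTTPRequestHandler, HTTPServer

class CachingHandler(SimpleHTTPRequestHandler):
    def end_headers(self):
        # Static assets rarely change, so let browsers keep them for
        # a day; everything else is revalidated on each request.
        if self.path.endswith((".png", ".jpg", ".css", ".js")):
            self.send_header("Cache-Control", "public, max-age=86400")
        else:
            self.send_header("Cache-Control", "no-cache")
        super().end_headers()

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8000), CachingHandler).serve_forever()
```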

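To illustrate a distributed cache in action, here is a cache-aside sketch using the redis-py client (assumed installed via pip install redis). fetch_user_from_db is a hypothetical stand-in for a real database query: the cache is consulted first, and the database is hit only on a miss.

```python
# A cache-aside sketch with Redis: read from the cache first,
# fall back to the database, then populate the cache with a TTL.
import json
import redis

r = redis.Redis(host="localhost", port=6379)

def fetch_user_from_db(user_id: int) -> dict:
    # Placeholder for an expensive database lookup.
    return {"id": user_id, "name": "example"}

def get_user(user_id: int) -> dict:
    key = f"user:{user_id}"
    cached = r.get(key)
    if cached is not None:
        return json.loads(cached)         # cache hit: skip the database
    user = fetch_user_from_db(user_id)    # cache miss: hit the database
    r.set(key, json.dumps(user), ex=300)  # cache the result for 5 minutes
    return user
```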
Tags: Browser Cache, Cache, Content Delivery Network, Database Cache, Distributed Cache, Distributed Systems, Load Balancer, Load Balancer Cache, Tech