Published on December 7, 2023

Redis Use Cases

Redis is a well-known in-memory key-value store, typically used as a cache. However, it has many other use cases. It is important to note that because Redis is an in-memory database, all data is lost if the Redis server restarts or crashes. For this reason, Redis offers optional persistence to disk, but persistence is not the most efficient way to recover from a crash; maintaining replicas that can be promoted to primary offers faster recovery.

As a cache, Redis enables efficient retrieval of frequently accessed data, reducing the load on the database and improving the application's response time.

Redis is also used as a session store. Normally, session data is persisted on the instance through which the user logs in, which ties the user to that instance. This is not stateless and makes horizontal scaling very difficult. Redis decouples session data from individual instances, removing the need for each machine to remember session state information (see the session-store sketch below).

A simple rate limiter can also be implemented using Redis. At a very high level, this is done by mapping each user IP to a counter with an expiration policy. If the current count exceeds the allowed threshold, the request is blocked until the count falls below the threshold again (see the rate-limiter sketch below).

Lastly, Redis can serve as a distributed lock to protect mutable resources. Suppose two clients, A and B, wish to modify a shared resource at the same time. Client B can lock the resource by setting a key in Redis, preventing client A from accessing the resource until client B releases the lock by deleting the key (see the lock sketch below).
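As a rough illustration of the session-store pattern, the sketch below keeps session data in Redis keyed by a session ID, so any instance behind the load balancer can look it up. It uses the redis-py client; the key names and TTL are illustrative assumptions, not a prescribed scheme.

```python
import json
import uuid

import redis  # redis-py client

r = redis.Redis(host="localhost", port=6379, decode_responses=True)

SESSION_TTL_SECONDS = 3600  # illustrative: expire idle sessions after an hour

def create_session(user_id: str) -> str:
    """Store session data in Redis so any app instance can read it."""
    session_id = str(uuid.uuid4())
    payload = json.dumps({"user_id": user_id})
    r.setex(f"session:{session_id}", SESSION_TTL_SECONDS, payload)
    return session_id

def get_session(session_id: str) -> dict | None:
    payload = r.get(f"session:{session_id}")
    return json.loads(payload) if payload else None
```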
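Following the same high-level idea, here is a minimal fixed-window rate-limiter sketch that maps each client IP to a counter that expires after the window. The window size and threshold are assumptions for illustration.

```python
import redis

r = redis.Redis(host="localhost", port=6379)

WINDOW_SECONDS = 60   # illustrative window
MAX_REQUESTS = 100    # illustrative threshold

def is_allowed(client_ip: str) -> bool:
    """Fixed-window counter: increment, start expiry on first hit, compare."""
    key = f"rate:{client_ip}"
    count = r.incr(key)
    if count == 1:
        # First request in this window: start the expiration clock.
        r.expire(key, WINDOW_SECONDS)
    return count <= MAX_REQUESTS
```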
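And for the locking scenario, a bare-bones sketch can be built with SET NX plus a TTL, so a crashed client cannot hold the lock forever. This is only a sketch under simplifying assumptions; real deployments should look at algorithms such as Redlock and fencing tokens.

```python
import uuid

import redis

r = redis.Redis(host="localhost", port=6379)

def acquire_lock(resource: str, ttl_seconds: int = 10) -> str | None:
    """Try to take the lock; the token proves ownership on release."""
    token = str(uuid.uuid4())
    # SET key value NX EX ttl: only succeeds if the key does not exist yet.
    if r.set(f"lock:{resource}", token, nx=True, ex=ttl_seconds):
        return token
    return None

def release_lock(resource: str, token: str) -> None:
    """Delete the key only if we still own it."""
    key = f"lock:{resource}"
    # Note: this get-then-delete pair is not atomic; production code would
    # use a Lua script to compare and delete in one step.
    if r.get(key) == token.encode():
        r.delete(key)
```

These are a few examples of what Redis can be used for. Redis's diverse capabilities and ease of use make it a valuable tool for a wide range of applications.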

Cache, Distributed Lock, Rate Limiter, Redis, Session Store, Tech
Published on November 8, 2023

Exploring Different Types of Cache Systems

Caching is a common technique in modern distributed systems for enhancing performance and reducing response time. The general idea is to reuse previously computed values and avoid subsequent server or database hits. A distributed system can have multiple caching points: browser cache, CDN, load balancer cache, distributed cache, and database cache. The following techniques assume that the data within the cache is not stale.

Browser caching stores HTTP responses locally and facilitates faster data retrieval; it should shave a significant amount off the response time. Enable it by adding an expiration policy to the response HTTP headers (see the header sketch below).

Web assets such as images, videos, and documents are perfect candidates for caching because they do not change often. Web assets are typically cached in content delivery networks (CDNs), which are geographically distributed to sit as close to the request origin as possible and reduce response time. Content can be personalized through edge nodes as well.

Load balancer caching can help reduce stress on the servers and improve response time. Depending on the implementation, a load balancer can be configured to respond with cached results for subsequent requests with the same parameters.

Distributed caches such as Redis are in-memory key-value stores with high read and write performance. One common application of a distributed cache is an inverted index for full-document search (see the sketch below).

Lastly, depending on the implementation, a database may have caching features such as a buffer pool and materialized views. The buffer pool caches frequently accessed data pages in allocated memory for future reads, while materialized views store precomputed query results to help reduce latency.
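As a minimal sketch of enabling browser caching, the hypothetical Flask handler below attaches a Cache-Control header to a static asset response. The route, file path, and max-age are illustrative assumptions.

```python
from flask import Flask, make_response, send_file

app = Flask(__name__)

@app.route("/assets/logo.png")
def logo():
    response = make_response(send_file("static/logo.png"))
    # Tell the browser (and shared caches) to reuse this asset for a day.
    response.headers["Cache-Control"] = "public, max-age=86400"
    return response
```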
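And as a rough sketch of the inverted-index idea on a distributed cache: each term maps to the set of document IDs containing it, and a multi-term search intersects those sets. The key names and the naive whitespace tokenizer are assumptions for illustration.

```python
import redis

r = redis.Redis(host="localhost", port=6379, decode_responses=True)

def index_document(doc_id: str, text: str) -> None:
    """Add doc_id to the set of documents for each term it contains."""
    for term in set(text.lower().split()):
        r.sadd(f"term:{term}", doc_id)

def search(query: str) -> set[str]:
    """Documents containing every term in the query (set intersection)."""
    keys = [f"term:{term}" for term in set(query.lower().split())]
    return r.sinter(keys) if keys else set()
```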

Browser Cache, Cache, Content Delivery Network, Database Cache, Distributed Cache, Distributed Systems, Load Balancer, Load Balancer Cache, Tech
Published on November 1, 2023

Improving API Performance

Suppose you notice that the latency of your API service is slowly creeping up in line with the increase in traffic. You have added additional compute to the load-balancing pool to help distribute the load, but it may be time to explore some optimization at the code level. This article explores five of the many techniques for increasing the performance of an API service: caching, minimizing N + 1 queries, paginating large results, data compression, and avoiding synchronous logging.

Caching

Caching usually lives between the middle tier and the database. The idea is to store the results of expensive computations so they can be reused by later requests. This can help reduce the number of database hits for frequently accessed endpoints called with the same parameters (see the cache-aside sketch below).

Minimize N + 1 Queries

Minimizing N + 1 queries against the database can significantly improve API performance. This problem often appears with hierarchical data, where you query for data at one level and then issue another query for each of the results. For example, one query fetches a list of posts, and then a separate query runs for each post to retrieve its comments (see the sketch below).

Pagination

Instead of returning the full dataset per query, consider paginating the results and returning a subset of the full dataset. This improves query time in the data layer, processing time in the middle tier, and network load (see the sketch below).

Data Compression

Data compression can help reduce the size of the response payload and the amount of data being transferred over the network; the client needs to decompress the payload before using it. Like pagination, this helps reduce network load (see the sketch below).

Avoid Synchronous Logging

Lastly, avoid synchronous logging in favor of fire-and-forget. Synchronous logging adds to the round-trip time of an API request. While the time it takes to write one log entry is insignificant, it can add up in a high-throughput system, especially if the request has multiple points of logging (see the sketch below).
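As a minimal cache-aside sketch sitting between the middle tier and the database, assuming a Redis client and a hypothetical fetch_profile_from_db helper:

```python
import json

import redis

r = redis.Redis(host="localhost", port=6379, decode_responses=True)

def get_profile(user_id: str) -> dict:
    """Cache-aside: try the cache first, fall back to the database."""
    key = f"profile:{user_id}"
    cached = r.get(key)
    if cached is not None:
        return json.loads(cached)
    profile = fetch_profile_from_db(user_id)  # hypothetical DB call
    r.setex(key, 300, json.dumps(profile))  # illustrative 5-minute TTL
    return profile
```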
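For the N + 1 example above, here is a sketch using Python's built-in sqlite3 module: instead of one query per post, fetch all comments for the page of posts in a single IN query. The table and column names are assumptions.

```python
import sqlite3

conn = sqlite3.connect("blog.db")  # hypothetical database file

def posts_with_comments(post_ids: list[int]) -> dict[int, list[str]]:
    """One batched query instead of one query per post (avoids N + 1)."""
    placeholders = ",".join("?" for _ in post_ids)
    rows = conn.execute(
        f"SELECT post_id, body FROM comments WHERE post_id IN ({placeholders})",
        post_ids,
    )
    comments: dict[int, list[str]] = {pid: [] for pid in post_ids}
    for post_id, body in rows:
        comments[post_id].append(body)
    return comments
```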
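A pagination sketch in the same vein, using keyset pagination: return one page at a time, using the last seen ID as a cursor rather than a growing OFFSET. Again, the schema is an assumption.

```python
import sqlite3

conn = sqlite3.connect("blog.db")  # hypothetical database file

def page_of_posts(after_id: int = 0, page_size: int = 20) -> list[tuple]:
    """Keyset pagination: fetch the next page after the given cursor."""
    return conn.execute(
        "SELECT id, title FROM posts WHERE id > ? ORDER BY id LIMIT ?",
        (after_id, page_size),
    ).fetchall()
```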
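A compression sketch using Python's standard gzip module; in practice this is often handled by the web server or a framework middleware, and the client is assumed to have sent Accept-Encoding: gzip.

```python
import gzip
import json

def compress_payload(data: dict) -> tuple[bytes, dict]:
    """Gzip the JSON body and set the header the client needs to decompress."""
    body = gzip.compress(json.dumps(data).encode("utf-8"))
    headers = {"Content-Encoding": "gzip", "Content-Type": "application/json"}
    return body, headers
```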
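Finally, a fire-and-forget logging sketch using the standard library's QueueHandler and QueueListener, which move the actual write off the request path onto a background thread:

```python
import logging
import logging.handlers
import queue

log_queue: queue.Queue = queue.Queue(-1)  # unbounded queue

# The request thread only enqueues records: effectively fire-and-forget.
logging.getLogger().addHandler(logging.handlers.QueueHandler(log_queue))

# A background thread drains the queue and performs the slow I/O.
listener = logging.handlers.QueueListener(
    log_queue, logging.FileHandler("api.log")
)
listener.start()

logging.getLogger().warning("handled request")  # returns without blocking on I/O
```

These are five examples of how to improve an API's performance. Keep in mind that premature optimization can lead to unnecessary complexity.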

Cache, Code Optimization, Data Compression, Latency Optimization, N+1 Queries, Pagination, Tech