Published on June 16, 2024

Forward Proxy and Reverse Proxy

Communication between servers works so seamlessly these days that it is easy to overlook the many intermediary systems that sit in between, namely forward proxies and reverse proxies.

A forward proxy is a layer between a group of client machines, often within the same network, and the rest of the Internet. Traffic going out to the Internet passes through the forward proxy first. In other words, the forward proxy acts as a middleman that intercepts outgoing requests and forwards them to the destination server on the client's behalf. A forward proxy offers the benefits of hiding the clients' IPs, bypassing browsing restrictions established by institutions and governments, and blocking access to certain resources with filtering rules.

A reverse proxy is a layer between a group of servers and the Internet. Similarly, the servers and the reverse proxy are often within the same network. All incoming traffic bound for the servers must pass through the reverse proxy first. The reverse proxy provides the benefits of hiding the servers' IPs, acting as a load balancer that distributes traffic amongst the servers, caching static content, and handling TLS/SSL handshakes. It is not uncommon for forward proxies and reverse proxies to be used at the same time, and there can be multiple layers of reverse proxies between the Internet and the servers.
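To make the idea concrete, here is a minimal reverse-proxy sketch in Python using only the standard library. The two backend addresses are hypothetical, and a real deployment would use dedicated software such as nginx or HAProxy; this is just to show the proxy sitting in front of a server pool.

```python
# Minimal reverse-proxy sketch: accept a request, pick a backend round-robin,
# forward the request, and relay the response. Backend URLs are assumed.
from http.server import BaseHTTPRequestHandler, HTTPServer
from itertools import cycle
from urllib.request import urlopen

BACKENDS = cycle(["http://127.0.0.1:9001", "http://127.0.0.2:9002"])  # hypothetical pool

class ReverseProxy(BaseHTTPRequestHandler):
    def do_GET(self):
        upstream = next(BACKENDS)                   # round-robin load balancing
        with urlopen(upstream + self.path) as resp:
            body = resp.read()
            status = resp.status
        self.send_response(status)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)                      # client only ever sees the proxy

if __name__ == "__main__":
    HTTPServer(("", 8080), ReverseProxy).serve_forever()
```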

Forward Proxy, Load Balancer, Load Balancer Cache, Reverse Proxy, Tech
Published on June 10, 2024

Single Sign-on (SSO)

Single sign-on (SSO) is an authentication scheme that enables users to access multiple applications and services using a single identity. SSO is built upon a concept called federated identity, which allows the sharing of identity information across trusted and independent systems. The two protocols widely used to enable SSO are SAML and OIDC.

Security Assertion Markup Language (SAML) is a protocol that allows users to authenticate against multiple applications with one set of login credentials, thereby facilitating SSO. User authentication is handled by a centralized identity provider, where the authentication status is maintained and shared. Read more at SAML 2.0 Authentication.

Similar to SAML, OpenID Connect (OIDC) is a protocol often used as an identity layer in conjunction with OAuth 2.0. OIDC enables SSO by allowing users to authenticate against multiple applications using credentials from trusted identity providers, such as Google or Facebook. The respective identity providers handle user authentication and share the authenticated state with interested applications. Read more at OAuth 2.0 + OIDC Authentication.

Authentication, OIDC, SAML, SSO, Tech
Published on February 29, 2024

SAML 2.0 Authentication

SAML stands for Security Assertion Markup Language. It is a widely used open standard for authentication and authorization between different parties. The advantages of adopting SAML/SSO include a single source of identity and consistent authentication across services. Centralizing authentication simplifies granting and revoking access across different services.

The parties involved in the SAML authentication flow are the principal, the service provider (SP), and the identity provider (IdP). The principal is the user trying to authenticate and tends to have metadata associated with them. Service providers are the services requesting authentication and identity information about the principal. Lastly, identity providers serve as the source of identity information and authentication decisions. The relationship between the three can be summarized as: the IdP authenticates principals and returns identity information to the SP.

Before authentication can happen, the SP and IdP need to coordinate and exchange setup information. The IdP provides the SP with its SSO endpoint and a public key for verifying the signed assertions it issues upon successful authentication. In exchange, the SP provides the IdP with an assertion consumer service (ACS) URL. The ACS URL is where the IdP will POST the SAML assertion, along with redirecting the principal to a defined target resource after authentication.

The SAML authentication flow starts with the user (the principal) clicking a "Continue with SSO" button on the SP's landing page. The SP creates and serializes an XML document known as the AuthnRequest, appends it to the IdP SSO endpoint as a query parameter, and redirects the principal to that endpoint. The principal then enters their credentials for authentication. Upon successful authentication, the IdP signs the principal's identity information with its private key. It then redirects the principal back to the ACS URL, passing along the signed identity information as part of a SAML payload. The SP receives the payload and verifies the signature using the corresponding public key before creating or configuring a user session based on the assertion.

Sequence diagram: saml-flow-sequence-diagram
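As a rough illustration of the redirect step, here is how an SP might build the AuthnRequest query parameter for the HTTP-Redirect binding. The endpoint URLs are hypothetical placeholders, and a real implementation would use a vetted SAML library rather than hand-rolled XML.

```python
# Sketch of SP-initiated SAML: serialize an AuthnRequest, raw-deflate + base64
# encode it, and append it to the IdP SSO endpoint as a query parameter.
import base64
import datetime
import urllib.parse
import uuid
import zlib

IDP_SSO_URL = "https://idp.example.com/sso"   # assumed, provided by the IdP
ACS_URL = "https://sp.example.com/acs"        # assumed, provided by the SP

def build_redirect_url() -> str:
    authn_request = (
        '<samlp:AuthnRequest '
        'xmlns:samlp="urn:oasis:names:tc:SAML:2.0:protocol" '
        f'ID="_{uuid.uuid4().hex}" Version="2.0" '
        f'IssueInstant="{datetime.datetime.utcnow().isoformat()}Z" '
        f'AssertionConsumerServiceURL="{ACS_URL}"/>'
    )
    # Redirect binding: raw DEFLATE (strip zlib header/checksum), then base64.
    deflated = zlib.compress(authn_request.encode())[2:-4]
    saml_request = base64.b64encode(deflated).decode()
    return IDP_SSO_URL + "?" + urllib.parse.urlencode({"SAMLRequest": saml_request})

print(build_redirect_url())
```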

Authentication, Identity Provider, Principal, SAML, Service Provider, SSO, Tech, User Authentication
Published on December 7, 2023

Redis Use Cases

Redis is a well-known in-memory key-value store typically used as a cache system. However, there are many other use cases for Redis. It is important to note that since Redis is an in-memory database, all data will be lost if the Redis server restarts or crashes. For this reason, Redis provides the option to persist data to disk. While Redis allows data persistence to disk, it is not the most efficient solution for recovering from crashes; maintaining separate replicas for promotion to the primary instance offers faster recovery.

As a cache, Redis enables efficient retrieval of frequently accessed data. This reduces the load on the database and improves the response time of the application.

Redis is also used as a session store. Normally, session data is persisted on the instance through which the user logs in, which means the user is logged in to that instance only. This is not stateless and makes horizontal scaling very difficult. Redis decouples session data from each instance and removes the need for each machine to remember session state.

A simple rate limiter can also be implemented using Redis. At a very high level, this is done by mapping a user's IP to a counter with an expiration policy. If the current count exceeds the allowed threshold, the request is blocked until the count falls below the threshold again.

Lastly, Redis can serve as a distributed lock to protect mutable resources. Suppose two clients, A and B, wish to modify a shared resource at the same time. Client B can lock the resource by setting a key in Redis, which prevents client A from accessing the resource until client B releases the lock by deleting the key.

These are a few examples of what Redis can be used for. Redis's diverse capabilities and ease of use make it a valuable tool for a wide range of applications.
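As a hedged sketch of the rate limiter and distributed lock described above, here is what they might look like with the redis-py client (pip install redis). Key names, limits, and TTLs are illustrative assumptions.

```python
import redis

r = redis.Redis()

def allow_request(ip: str, limit: int = 100, window: int = 60) -> bool:
    """Fixed-window rate limiter: count requests per IP, expire the counter."""
    key = f"rate:{ip}"
    count = r.incr(key)            # atomic increment
    if count == 1:
        r.expire(key, window)      # start the window on the first request
    return count <= limit

def try_lock(resource: str, token: str, ttl: int = 10) -> bool:
    """Simple distributed lock: SET key value NX EX ttl."""
    return bool(r.set(f"lock:{resource}", token, nx=True, ex=ttl))

def unlock(resource: str, token: str) -> None:
    # Only the holder should delete; a production lock would do the
    # check-and-delete atomically (e.g. via a Lua script).
    if r.get(f"lock:{resource}") == token.encode():
        r.delete(f"lock:{resource}")
```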

Cache, Distributed Lock, Rate Limiter, Redis, Session Store, Tech
Published on December 3, 2023

Slice and Dice a Binary Tree

Hierarchical data are ubiquitous. This makes understanding the techniques for processing them all the more important. The three standard algorithms are pre-order, in-order, and post-order traversals. This post will look at those techniques and a few others that group hierarchical data by rows and columns. We will also explore the top, bottom, right, and left views of a tree.

PreOrder, InOrder, and PostOrder Traversals

Starting with the standard depth-first traversals, we have pre-order, in-order, and post-order. In depth-first traversal, the algorithm goes as far into one branch as possible before backtracking.

GroupByDepth

Next, we have level-order, or group-by-depth. If we only care about printing the node values, the traditional level-order traversal will do the job. To group the nodes by level, any of the standard depth-first traversals can be used in conjunction with a map of level to list of nodes. A level tracker is used to record the depth as the algorithm proceeds through each subtree.

GroupByColumn

Vertical-order, or group-by-column, follows a similar implementation to level-order except for one key difference: instead of tracking depth during the recursive calls, we add one when we go to the right subtree and subtract one when we go to the left subtree.

Top, Bottom, Left, and Right Views

Lastly, the right side view of a tree can be obtained by using one of the depth-first traversals and leveraging the level data: put the value into a map the first time a level is visited, or override the value if the level has been visited before. As the depth-first traversal moves from left to right, the value at each level is overridden until only the right-most values remain. The left side view can be obtained by reversing the traversal order. The top and bottom views rely on the vertical level rather than the depth.
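Here is a short sketch of two of these techniques, group-by-column and the right side view, following the descriptions above. The TreeNode class is a minimal assumed node type.

```python
from collections import defaultdict

class TreeNode:
    def __init__(self, val, left=None, right=None):
        self.val, self.left, self.right = val, left, right

def group_by_column(root):
    cols = defaultdict(list)
    def dfs(node, col):
        if not node:
            return
        cols[col].append(node.val)
        dfs(node.left, col - 1)    # left child shifts one column left
        dfs(node.right, col + 1)   # right child shifts one column right
    dfs(root, 0)
    return [cols[c] for c in sorted(cols)]

def right_side_view(root):
    view = {}
    def dfs(node, depth):
        if not node:
            return
        view[depth] = node.val     # later (more rightward) visits override
        dfs(node.left, depth + 1)
        dfs(node.right, depth + 1)
    dfs(root, 0)
    return [view[d] for d in sorted(view)]
```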

Binary Tree, Binary Tree Traversal, DS/A, Tech, Tree Algorithms
Published on November 30, 2023

Capacity Planning

Envelope calculation is a technique used to check and validate a design idea. The two numbers we want to arrive at are queries per second (QPS) and storage. Absolute accuracy is not important; the goal of envelope calculation is to get within the right order of magnitude.

Queries per second (QPS) is the number of requests we can expect per second, which helps inform the number of servers needed. In general, QPS can be broken down into daily active users (DAU), DAU_subset, content_rate, peak_scale_factor, and seconds per day. DAU is usually easy to obtain from historical traffic data. DAU_subset estimates the subset of DAU that the feature is designed for; for example, only 20% of Twitter's DAU actually tweet. The content_rate estimates the average amount of content produced or consumed per DAU; using the given example, there may be 2 tweets per DAU. The peak_scale_factor is a multiplier that estimates what the QPS could be during peak traffic hours. Lastly, there are 86,400 seconds in a day, but 1e5 can be used to make the division easier. Putting it all together, the formula looks like:

QPS = DAU * DAU_subset(%) * content_rate * peak_scale_factor / 1e5

Storage is the other number to calculate, and it helps inform the storage capacity needed for the feature being designed. Storage can be broken down into DAU, DAU_subset, content_subset, content_size, retention, copies, and days per year. DAU and DAU_subset are the same as for QPS. The content_size estimates the average size of the content being stored. It is also important to consider what portion of all content contains the content type of interest (content_subset) and partition appropriately. Retention and copies estimate how long the data will be kept and how many copies should be kept. Lastly, there are 365 days in a year, but we can round this to 4e2 for easy calculation. Putting it all together, the formula looks like:

Storage = DAU * DAU_subset(%) * content_subset(%) * content_size * retention(yr) * copies * 4e2

Having QPS and storage will help guide decisions during system design, such as the number of servers, database shards, and load balancers.
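A worked example may help; the figures below (300M DAU, a 2x peak factor, 100-byte tweets, 5-year retention, 3 copies) are illustrative assumptions, not real Twitter numbers.

```python
# Envelope calculation with made-up Twitter-style inputs.
dau = 300e6            # assumed daily active users
dau_subset = 0.20      # 20% of DAU actually tweet
content_rate = 2       # 2 tweets per tweeting user per day
peak_factor = 2        # assumed peak multiplier

qps = dau * dau_subset * content_rate * peak_factor / 1e5   # 86,400 s/day ~ 1e5
print(f"peak QPS ~ {qps:,.0f}")                             # ~ 2,400

content_subset = 1.0   # portion of content that is text
content_size = 100     # assumed bytes per tweet
retention_years = 5
copies = 3

storage = dau * dau_subset * content_subset * content_size * retention_years * copies * 4e2
print(f"storage ~ {storage / 1e12:,.0f} TB")                # ~ 36 TB
```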

Daily Active User (DAU), Envelope Calculation, Queries Per Second (QPS), Storage Capacity, System Design, Tech
Published on November 22, 2023

OAuth 2.0 + OIDC Authentication

In the past, sharing data between services required username and password authentication. Once authenticated, the client service would have full access to all of the user's data on the server. This is no longer considered best practice because it exposes the username and password and offers no control over how much data is shared.

OAuth 2.0 is an authorization framework, and OpenID Connect (OIDC) is an identity layer built on top of it that handles authentication. Resource sharing using OAuth 2.0 usually includes three or more parties: the resource owner, the resource server, and the resource client. Although not required, the resource server can also take on the role of the authorization server.

The resource-sharing process starts with the resource owner requesting resources via the resource client. The client invokes the authentication flow with the authorization server (the resource server in this case). This redirects the user to the authorization server, where they are presented with a login page. Upon successful authentication, the owner can review what resource is being shared. Once the owner grants permission, the authorization server issues an authorization code, which the client exchanges for an access token. The client can now access resources on the server within the defined scope using the access token. Some other notable features of OAuth 2.0 include the ability for the owner to revoke the access token or set an expiration date on it.

Sequence diagram: oidc-oauth2-sequence-diagram
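As a sketch of the code-for-token exchange step described above, here is what the client's request to the authorization server might look like with the requests library. All URLs, IDs, and secrets are placeholders.

```python
# Exchange an authorization code for an access token (authorization code grant).
import requests

TOKEN_URL = "https://auth.example.com/oauth/token"   # assumed authorization server

def exchange_code_for_token(code: str) -> dict:
    resp = requests.post(TOKEN_URL, data={
        "grant_type": "authorization_code",
        "code": code,                                    # code from the redirect
        "redirect_uri": "https://client.example.com/callback",
        "client_id": "my-client-id",
        "client_secret": "my-client-secret",
    })
    resp.raise_for_status()
    # Contains access_token; with OIDC scopes it also contains an id_token.
    return resp.json()
```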

Access Token, API Authentication, Authentication, OAuth 2.0, SSO, Tech, Token-Based Authentication
Published on November 14, 2023

Evaluate an Expression

Evaluate an arithmetic expression written in Reverse Polish Notation and in Standard Notation. The following expressions yield the same result.

Standard Notation: 7 + 3 * 4 - 5
Reverse Polish Notation: 3 4 * 7 + 5 -

Reverse Polish Notation (Postfix)

Evaluate an arithmetic expression in Reverse Polish Notation. The idea is to iterate the array from left to right and push numbers as operands onto a stack. There should always be at least two operands in the stack whenever an operator is encountered. The operator indicates the operation that should be applied to the top two operands. To apply an operation, two operands are popped from the stack, and the result is pushed back onto the stack as a new operand for future calculations. After processing the given expression, the operand stack should be reduced to a single number representing the solution.

Standard Notation (Infix)

Evaluate an arithmetic expression in Standard Notation. One of the challenges in solving this problem is respecting the order of operations: multiplication and division before addition and subtraction. The intuition is to resolve all multiplication and division on the first pass. The second pass can then be processed from left to right. Similar to the approach for Reverse Polish Notation, a stack can be used to store the operands, along with the products and quotients from the first pass. This way, the stack will only contain operands that need to be summed up at the end.

The time complexity for both problems is O(n) because we process each element of the array or string. The space complexity for both problems is O(n) in the case where the operand stack contains as many operands as the expression.
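Here is a minimal RPN evaluator following the stack approach above, run on the example expression from this post.

```python
def eval_rpn(tokens):
    ops = {
        "+": lambda a, b: a + b,
        "-": lambda a, b: a - b,
        "*": lambda a, b: a * b,
        "/": lambda a, b: int(a / b),     # truncate toward zero
    }
    stack = []
    for tok in tokens:
        if tok in ops:
            b, a = stack.pop(), stack.pop()   # top two operands
            stack.append(ops[tok](a, b))
        else:
            stack.append(int(tok))
    return stack[0]                           # single remaining operand

print(eval_rpn("3 4 * 7 + 5 -".split()))      # 14, same as 7 + 3 * 4 - 5
```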

DS/A, Infix Notation, Postfix Notation, Reverse Polish Expression, Stack, Standard Notation, Tech
Published on November 13, 2023

API Gateway

An API Gateway is a vital component for scaling and securing modern distributed systems. It sits between the client and a suite of backend services and serves as a single point of entry to an application. Some major API Gateway providers include AWS API Gateway, Azure API Management, and Google API Gateway. They tend to come with features such as request routing, load balancing, authentication and authorization, rate limiting, caching, and logging right out of the box.

Upon receiving a request from the client, an API Gateway forwards the request to the appropriate backend service based on a predefined set of rules. A load balancer comes standard with an API Gateway and helps distribute traffic across multiple machines. The distribution policy can be configured to use round robin, sticky round robin, weighted round robin, IP/URL hashing, least connections, or least latency. See Exploring Different Types of Load Balancers for more details.

An API Gateway can also serve as a gatekeeper through authentication and authorization; the implementation varies and depends on the authentication provider. A rate limiter is an important API Gateway feature that helps prevent abuse of the backend services. The rate limiting policy can be configured to use token bucket, leaking bucket, fixed window counter, sliding window log, or sliding window counter. Some API Gateways offer caching features to help reduce load on the backend services and improve performance. Logging is another feature that comes with an API Gateway; it enables usage tracking and troubleshooting for better insight into the system.

These are just some of the features provided by an API Gateway. Implementations vary between providers.
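To illustrate one of the rate limiting policies listed above, here is a minimal token-bucket sketch; the capacity and refill rate are illustrative, and gateway products implement this for you.

```python
import time

class TokenBucket:
    def __init__(self, capacity: int, refill_rate: float):
        self.capacity = capacity           # max tokens the bucket holds
        self.tokens = float(capacity)
        self.refill_rate = refill_rate     # tokens added per second
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.refill_rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1               # spend one token per request
            return True
        return False                       # bucket empty: reject the request

bucket = TokenBucket(capacity=10, refill_rate=5)   # assumed limits
print(bucket.allow())                              # True while tokens remain
```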

API Gateway, AWS API Gateway, Azure API Management, Google API Gateway, Load Balancer, Load Balancer Cache, Rate Limiter, Tech
Published on November 8, 2023

Exploring Different Types of Cache Systems

Caching is a common technique in modern distributed systems to enhance performance and reduce response time. The general idea is to reuse previously computed values and avoid subsequent server or database hits. A distributed system can have multiple caching points, starting with the browser cache, then the CDN, load balancer cache, distributed cache, and database cache. The following techniques assume that data within the cache are not stale.

Browser caching can cache HTTP responses and facilitate faster data retrieval; the browser cache should shave a significant amount off the response time. Enable browser caching by adding an expiration policy to the response HTTP headers.

Web assets such as images, videos, and documents are perfect content for caching because they do not change often. Web assets are typically cached in content delivery networks (CDN), which are geographically distributed to be as close to the request origin as possible to reduce response time. Content can be personalized through edge nodes as well.

Load balancer caching can help reduce stress on the servers and improve response time. Depending on their implementation, load balancers can be configured to respond with cached results for subsequent requests with the same parameters.

Distributed caches such as Redis are in-memory key-value stores with high read and write performance. One common application of a distributed cache is inverted indexing for full document search.

Lastly, depending on the implementation, databases may have caching functionality such as buffer pools and materialized views. A buffer pool caches frequently accessed data in allocated memory for future retrieval. Similarly, materialized views cache precomputed query results to help reduce latency.
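As a small sketch of the browser caching point above, here is a standard-library handler that attaches an expiration policy to a response; the asset and max-age are illustrative.

```python
# Serve a static asset with a Cache-Control expiration policy so the
# browser may reuse the response instead of re-requesting it.
from http.server import BaseHTTPRequestHandler, HTTPServer

class CachedAssets(BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"<svg xmlns='http://www.w3.org/2000/svg'></svg>"  # stand-in asset
        self.send_response(200)
        self.send_header("Content-Type", "image/svg+xml")
        # Tell the browser it may reuse this response for one day.
        self.send_header("Cache-Control", "public, max-age=86400")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("", 8000), CachedAssets).serve_forever()
```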

Browser Cache, Cache, Content Delivery Network, Database Cache, Distributed Cache, Distributed Systems, Load Balancer, Load Balancer Cache, Tech

Published on November 2, 2023

Convert a Sorted Array into a Binary Search Tree

Given a sorted array, convert it into a binary search tree. The intuition behind this problem is to recognize that the midpoint of the current range is used to construct the current node. The left child node will be constructed with the midpoint of the left half of the current range. Similarly, the right child node will be constructed using the midpoint of the right half of the current range. We can use the divide-and-conquer technique to solve this problem. At each step, we instantiate a new node with the value of the midpoint of the current range. This is recursively applied to the left half and the right half to construct the left and right children, respectively. One important thing to keep in mind is that the range changes with each recursive call. The time complexity for this problem is O(n) because we process each element of the array. The space complexity is O(log(n)) because the problem yields a balanced tree, where the recursive call stack is as deep as the height of the tree.
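A sketch of the divide-and-conquer conversion described above; TreeNode is a minimal assumed node type.

```python
class TreeNode:
    def __init__(self, val, left=None, right=None):
        self.val, self.left, self.right = val, left, right

def sorted_array_to_bst(nums, lo=0, hi=None):
    if hi is None:
        hi = len(nums) - 1
    if lo > hi:
        return None
    mid = (lo + hi) // 2                                  # midpoint of current range
    node = TreeNode(nums[mid])
    node.left = sorted_array_to_bst(nums, lo, mid - 1)    # left half -> left subtree
    node.right = sorted_array_to_bst(nums, mid + 1, hi)   # right half -> right subtree
    return node
```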

Binary Search Tree, Binary Tree, Divide and Conquer, DS/A, Tech, Tree Algorithms
Published on November 1, 2023

Improving API Performance

Suppose you notice that the latency of your API service is slowly creeping up in line with an increase in traffic. You have added additional compute to the load balancing pool to help distribute the load, but it may be time to explore some optimization at the code level. This article explores five of the many techniques for increasing the performance of an API service: caching, minimizing N + 1 queries, paginating large results, data compression, and avoiding synchronous logging.

Caching

Caching usually lives between the middle tier and the database. The idea is to store the results of expensive computations for reuse in later requests. This can reduce the number of database hits for frequently accessed endpoints with the same parameters.

Minimize N+1 Queries

Minimizing N + 1 queries against the database can significantly improve API performance. This problem often appears with hierarchical data, where you query for data at one level and then make another query for each of the results. For example, this could mean one query to get a list of posts, and then another query per post to retrieve its comments (see the sketch at the end of this post).

Pagination

Instead of returning the full dataset per query, consider paginating the results and returning a subset of the full dataset. This improves query time on the data layer, processing time in the middle tier, and network load.

Data Compression

Data compression can reduce the size of the response payload and the amount of data transferred over the network. The client will need to decompress the response payload before using it. This, too, reduces network load.

Avoid Synchronous Logging

Lastly, avoid synchronous logging in favor of fire-and-forget. Synchronous logging adds to the round-trip time of an API request. While the time it takes to write one log entry is insignificant, it can add up in a high-throughput system, especially if the request has multiple points of logging.

These are five examples of how to improve an API's performance. Keep in mind that premature optimization can lead to unnecessary complexity.
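Here is the N+1 pattern and its fix sketched with sqlite3; the posts/comments schema is an assumption for illustration.

```python
import sqlite3

conn = sqlite3.connect("blog.db")   # assumed existing database

# N+1: one query for posts, then one query per post for its comments.
posts = conn.execute("SELECT id, title FROM posts").fetchall()
for post_id, title in posts:
    comments = conn.execute(
        "SELECT body FROM comments WHERE post_id = ?", (post_id,)).fetchall()

# Fix: fetch everything in a single joined query and group in memory.
rows = conn.execute("""
    SELECT p.id, p.title, c.body
    FROM posts p LEFT JOIN comments c ON c.post_id = p.id
""").fetchall()
```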

Cache, Code Optimization, Data Compression, Latency Optimization, N+1 Queries, Pagination, Tech
Published on October 27, 2023

The Largest Value of Each Row of a Binary Tree

Given a binary tree, find the max value of each level of the tree. The high-level approach to solving this problem involves traversing the tree using one of the three common traversal techniques: pre-order, in-order, or post-order. The depth of each node is tracked during traversal, with the root node being the 0th level. Lastly, a map that associates depth with the maximum value encountered at that level is maintained.

Starting at the root, the algorithm checks if the depth of the current node exists as a key in the map. If the current depth does not exist as a key, the current level has not been processed yet; in this case, map the current node's value to its depth, indicating that it is the maximum value seen at that level. If the map already contains the current depth as a key, compare the maximum value seen so far for that level with the value of the current node. If the current node's value is greater, update the map's value for that depth to reflect the new maximum. Once the current node has been processed, continue traversing the left and right subtrees and repeat the process.

The time complexity for this problem is O(n) because all nodes are processed. The space complexity depends on the shape of the tree. If the tree is relatively balanced, the space complexity is close to O(log(n)); if the tree is extremely skewed, it is closer to O(n).
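A sketch of the DFS with a depth-to-maximum map described above; TreeNode is a minimal assumed node type.

```python
class TreeNode:
    def __init__(self, val, left=None, right=None):
        self.val, self.left, self.right = val, left, right

def largest_per_row(root):
    best = {}
    def dfs(node, depth):
        if not node:
            return
        if depth not in best or node.val > best[depth]:
            best[depth] = node.val    # first visit or new maximum at this level
        dfs(node.left, depth + 1)
        dfs(node.right, depth + 1)
    dfs(root, 0)
    return [best[d] for d in sorted(best)]
```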

Binary Tree, Binary Tree Traversal, Data Structures, Depth-First Traversal, DS/A, Tech, Tree Algorithms
Published on October 24, 2023

Solvency Ratios

Solvency ratios are financial metrics that assess a company's ability to service its long-term debt and interest. Solvency ratios are also known as financial leverage ratios; they compare the company's debt to its assets, equity, or earnings. In other words, they show how many times the company can cover its debt given its assets, equity, or earnings. Unlike liquidity ratios, solvency ratios have a longer-term outlook. Solvency ratios vary from industry to industry, making them less effective when comparing companies across different industries. However, they can be used to highlight anomalies between peers within the same sector. The four common solvency ratios are the Equity Ratio, Debt-to-Equity Ratio, Debt-to-Assets Ratio, and Interest Coverage Ratio.

Equity Ratio

The Equity Ratio, also known as the Equity-to-Assets Ratio, shows how much of a company is funded by equity as opposed to debt. A higher Equity Ratio indicates that a larger portion of the company's assets is funded by shareholders' equity rather than debt, which is favorable for the company's financial health. A lower Equity Ratio suggests that the company relies on debt to finance its business. The formula is:

Equity Ratio = Shareholders' Equity / Total Assets

Debt-to-Equity Ratio

The Debt-to-Equity Ratio (D/E) indicates how much of a company's equity is financed by creditors. It also shows how much of the company's debt could be covered should it liquidate its equity. A lower Debt-to-Equity Ratio is favorable for the company's financial health. The formula is:

Debt-to-Equity Ratio = Total Debt / Shareholders' Equity

Debt-to-Assets Ratio

The Debt-to-Assets Ratio compares a company's total debt to its total assets. It shows how leveraged a company is and indicates how much of the company is funded by debt relative to its assets. A lower Debt-to-Assets Ratio is favorable for the company's financial health. The formula is:

Debt-to-Assets Ratio = Total Debt / Total Assets

Interest Coverage Ratio

This ratio evaluates how many times a company can cover its current interest payments with its available earnings. It represents the company's safety margin for paying the interest on its debt over a specific period. A higher Interest Coverage Ratio indicates a greater capacity to service interest obligations. The formula is:

Interest Coverage Ratio = EBIT / Interest Expense
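A quick worked example of the four formulas, using made-up balance-sheet figures (in $M):

```python
total_assets = 500.0
total_debt = 200.0
shareholders_equity = 300.0
ebit = 60.0
interest_expense = 12.0

print(f"Equity Ratio:         {shareholders_equity / total_assets:.2f}")  # 0.60
print(f"Debt-to-Equity Ratio: {total_debt / shareholders_equity:.2f}")    # 0.67
print(f"Debt-to-Assets Ratio: {total_debt / total_assets:.2f}")           # 0.40
print(f"Interest Coverage:    {ebit / interest_expense:.1f}x")            # 5.0x
```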

Debt-to-Assets Ratio, Debt-to-Equity Ratio, Equity Ratio, Finance, Interest Coverage Ratio, Solvency Ratios
Published on October 18, 2023

CAP Theorem

The CAP Theorem is an important consideration when designing a distributed system. CAP stands for Consistency, Availability, and Partition Tolerance. The Consistency property states that all users must see the same data at the same time, regardless of the node to which they are connected. The Availability property ensures that all requests receive a response, even when some nodes are offline. Lastly, the Partition Tolerance property stipulates that a system should continue functioning despite network partitions causing communication loss between nodes.

The CAP Theorem asserts that a distributed system cannot simultaneously support all three properties; it must sacrifice one to support the remaining two. This implies three possible types of distributed systems: consistent and partition-tolerant (CP), available and partition-tolerant (AP), and consistent and available (CA). In practice, a network partition is assumed to occur at some point, so distributed systems must be partition-tolerant. Since Partition Tolerance cannot be sacrificed, CA systems do not exist in practice.

A CP system is consistent and partition-tolerant and suits cases where data inconsistency is unacceptable. After data is updated, the changes are replicated across all nodes with minimal delay. Strictly speaking, the consistency in CAP refers to strong consistency, where every read sees the latest write; many practical systems relax this to eventual consistency, which means data will eventually be replicated to all nodes. This gap is generally small and considered acceptable.

An AP system offers high availability and is partition-tolerant. Data inconsistency is acceptable within an AP system. The primary goal of an AP system is to ensure the successful completion of all requests. This does not necessarily mean that the requests were fulfilled correctly, as some requests may have been processed with stale data. Businesses with AP systems may need to address these discrepancies after the fact.

AP Systems, Availability, CAP Theorem, CA Systems, Consistency, CP Systems, Partition Tolerance, Tech
Published on October 16, 2023

Get All Paths of a Binary Tree

Given a binary tree, return all root-to-leaf paths of the tree. This problem can be solved by applying depth-first traversal and processing each level in a pre-order manner. A path is represented as a stack of nodes. The current node is pushed onto the current path stack at every level. If the current node is a leaf node, a copy of the current path stack is added to the collection of paths. If the current node is not a leaf node, proceed to the next level of the tree.

One important note to keep in mind is that arrays (conceptually, stacks) are passed by reference. This means that extra care must be taken to ensure that the current path stack only contains the path up to the current level and nothing more. There are a few approaches to doing this; a sketch of the last one follows this post. One way is to track the height of the current level being processed and use that height as an index to modify the value of the array; once a leaf node is reached, add the subarray from 0 to the height to the collection of paths. This approach may not work if the stack implementation does not permit access by index. The second technique is to instantiate a new stack at every level to avoid the unintended pass-by-reference problem; this approach may be memory intensive if the binary tree is large. The last approach is to pop the most recent node once the left and right children have been processed.

The time complexity of this problem is O(n) because all nodes are processed.
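Here is the backtracking variant (the last approach above): push the node, recurse, then pop before returning. TreeNode is a minimal assumed node type.

```python
class TreeNode:
    def __init__(self, val, left=None, right=None):
        self.val, self.left, self.right = val, left, right

def all_paths(root):
    paths, current = [], []
    def dfs(node):
        if not node:
            return
        current.append(node.val)            # push current node onto the path
        if not node.left and not node.right:
            paths.append(list(current))     # leaf: snapshot a copy of the path
        else:
            dfs(node.left)
            dfs(node.right)
        current.pop()                       # backtrack before returning
    dfs(root)
    return paths
```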

Binary Tree, Binary Tree Traversal, Data Structures, Depth-First Traversal, DS/A, Tech, Tree Algorithms
Published on October 15, 2023

Liquidity Ratios

Liquidity ratios are a class of financial metrics used to measure a company's ability to pay off its short-term liabilities, usually due within a year. Another way to think about a liquidity ratio is how easily and efficiently a company can convert its assets into cash to service its short-term debts. A higher liquidity ratio suggests that the company is able to cover its short-term obligations; a low liquidity ratio suggests otherwise. Liquidity ratios should not be used in isolation and are best combined with other factors. For internal analysis, liquidity ratios can be used to compare prior periods to current operations, assuming the same accounting method is used in that timeframe. For external analysis, liquidity ratios can be used to compare different companies within an industry. Keep in mind that it may not be effective to compare liquidity ratios between companies across different industries, sizes, or geographical locations. The three common liquidity ratios are the Current Ratio, Quick Ratio, and Cash Ratio, in order from least to most conservative.

Current Ratio

The Current Ratio measures a company's ability to pay off its current liabilities. It is the least conservative liquidity ratio, as it includes all current assets. The formula is:

Current Ratio = CA / CL

Where CA = Current Assets, CL = Current Liabilities

Quick Ratio

The Quick Ratio measures a company's ability to meet its short-term obligations with its most liquid assets. This means excluding inventory from current assets, because inventory is less liquid than cash. The Quick Ratio is also known as the Acid-Test Ratio. The formula is:

Quick Ratio = (C + MS + AR) / CL

Where C = Cash or Cash Equivalents, MS = Marketable Securities, AR = Accounts Receivable, CL = Current Liabilities

Another way to express the Quick Ratio is:

Quick Ratio = (CA - I - E_prepaid) / CL

Where CA = Current Assets, I = Inventory, E_prepaid = Prepaid Expenses, CL = Current Liabilities

Cash Ratio

The Cash Ratio is the most conservative liquidity ratio and considers only cash and cash equivalents in relation to current liabilities. The formula is:

Cash Ratio = C / CL

Where C = Cash or Cash Equivalents, CL = Current Liabilities
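A quick worked example of the three ratios, using made-up figures (in $M):

```python
cash = 50.0
marketable_securities = 30.0
accounts_receivable = 40.0
inventory = 60.0
prepaid_expenses = 10.0
current_liabilities = 100.0

current_assets = (cash + marketable_securities + accounts_receivable
                  + inventory + prepaid_expenses)                       # 190

print(f"Current Ratio: {current_assets / current_liabilities:.2f}")     # 1.90
quick = (cash + marketable_securities + accounts_receivable) / current_liabilities
print(f"Quick Ratio:   {quick:.2f}")                                    # 1.20
print(f"Cash Ratio:    {cash / current_liabilities:.2f}")               # 0.50
```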

Acid-Test Ratio, Cash Ratio, Current Assets, Current Liabilities, Current Ratio, Finance, Liquidity Ratios, Quick Ratio
Published on October 12, 2023

Exploring Different Types of Load Balancers

Load balancers are crucial components for enabling large-scale web applications. Key features of a load balancer include traffic distribution, health checks, and session persistence. There are two classes of load balancer: static and dynamic. Static load balancers are rule-based and do not adapt to changing server conditions, whereas dynamic load balancers continuously monitor server metrics and adjust the load distribution based on them. Static load balancing methods include Round Robin, Sticky Round Robin, Weighted Round Robin, and IP/URL Hash. Dynamic methods include Least Connections and Least Time.

Round Robin is the simplest approach to traffic distribution. The idea is to distribute requests evenly amongst all the servers regardless of server metrics. The downside of this approach is that servers can become overloaded if they are not monitored closely.

Sticky Round Robin is a variation of Round Robin in which the load balancer attempts to send subsequent requests from the same user to the same server. Related data are grouped and processed together, improving processing performance. However, uneven load can easily occur because newly arrived requests are assigned randomly.

Weighted Round Robin allows users to configure the weight of each server within the server pool; the load balancer distributes traffic to each server in proportion to its assigned weight (see the sketch after this post). The downside is that server weights need to be configured manually, which makes this method less adaptable in a rapidly changing environment.

IP/URL Hash load balancing is similar to Sticky Round Robin in that it maps user IP addresses or requested URLs to specific servers. Again, related data are grouped and processed together, improving processing performance. Selecting a proper hash algorithm can be a barrier to entry.

Least Connections is a dynamic approach in which incoming requests are routed to the server with the fewest active connections, effectively assigning requests to the servers with the most remaining capacity. This approach requires each server to track its ongoing connections. Traffic can unintentionally pile onto a single server.

Least Time is another dynamic approach in which the load balancer routes traffic to the server with the lowest latency or fastest response time. This involves continuously measuring the latency of each server, which adds overhead and complexity.

Each of these methods has its trade-offs in terms of capabilities, constraints, and performance. It is important to consider the traffic pattern when choosing and fine-tuning a load balancing approach.
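Here is a minimal weighted round-robin selection sketch; the server addresses and weights are illustrative assumptions.

```python
import itertools

servers = {"10.0.0.1": 5, "10.0.0.2": 3, "10.0.0.3": 1}   # server -> weight

# Expand each server according to its weight, then cycle forever.
pool = itertools.cycle([s for s, w in servers.items() for _ in range(w)])

def next_server() -> str:
    return next(pool)

for _ in range(9):
    # One full cycle: 10.0.0.1 five times, 10.0.0.2 three times, 10.0.0.3 once.
    print(next_server())
```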

Dynamic Load Balancing, IP/URL Hash, Least Connections, Least Time, Load Balancer, Round Robin, Static Load Balancing, Sticky Round Robin, Tech, Weighted Round Robin
Published on October 9, 2023

Determine if Binary Tree Contains a Path with Given Sum

Given a binary tree and a target value, determine if there exists a root-to-leaf path that sums to the given target. This problem can be solved by applying depth-first traversal and processing each level recursively in a pre-order manner. At each level, we subtract the node's value from the target value for the current level. If the difference is 0 and the current node has no children, the current path is one of the possible root-to-leaf paths that sum to the target. Otherwise, continue to the next node and repeat the process. The worst-case time complexity is O(n) because every node may need to be visited. The space complexity depends on the shape of the tree: for a relatively balanced tree, the recursion depth is close to O(log(n)); for an extremely skewed tree, it is closer to O(n).
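A sketch of the pre-order DFS with target reduction described above; TreeNode is a minimal assumed node type.

```python
class TreeNode:
    def __init__(self, val, left=None, right=None):
        self.val, self.left, self.right = val, left, right

def has_path_sum(node, target):
    if not node:
        return False
    remaining = target - node.val          # subtract this node from the target
    if not node.left and not node.right:
        return remaining == 0              # leaf: does the path sum to target?
    return has_path_sum(node.left, remaining) or has_path_sum(node.right, remaining)
```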

Binary Tree, Binary Tree Traversal, Data Structures, Depth-First Traversal, DS/A, Tech, Tree Algorithms
Published on October 6, 2023

Username and Password Authentication Using JWT

JWT (JSON Web Token) is a self-contained token that securely encodes information between parties as a JSON object. One of JWT's primary use cases is authentication in web applications and APIs.

User authentication using JWT begins with the client submitting their username and password. Assuming the user exists and the provided credentials are valid, the next step is to issue an access token and a refresh token by signing the username along with metadata, such as an expiration date, using a server-side signing key. The client can then use the access token as part of its requests for protected resources.

The resource access flow starts with the client attaching the access token in the "Authorization" header. The server verifies the access token's signature with the corresponding server-side key and permits the request to proceed if the token is valid. Access tokens have a short lifespan; once they expire, the client must use the refresh token, which has a much longer lifespan, to obtain a new access token. The refresh token was originally set in the browser's cookie during the initial authentication and is accessible to the server with every request. When the client wishes to refresh the access token, the server verifies the refresh token. Upon successful verification, the server signs a new access token containing the username and a new expiration date.
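As a hedged sketch of issuing and verifying such tokens, here is what the flow might look like with the PyJWT library (pip install pyjwt). The secret, lifetimes, and claims are illustrative assumptions.

```python
import datetime
import jwt

SECRET = "server-secret-key"   # assumed HMAC signing key held by the server

def issue_tokens(username: str) -> dict:
    now = datetime.datetime.now(datetime.timezone.utc)
    access = jwt.encode({"sub": username, "exp": now + datetime.timedelta(minutes=15)},
                        SECRET, algorithm="HS256")
    refresh = jwt.encode({"sub": username, "exp": now + datetime.timedelta(days=30)},
                         SECRET, algorithm="HS256")
    return {"access_token": access, "refresh_token": refresh}

def verify(token: str) -> dict:
    # Raises jwt.ExpiredSignatureError / jwt.InvalidTokenError on failure.
    return jwt.decode(token, SECRET, algorithms=["HS256"])

tokens = issue_tokens("alice")
print(verify(tokens["access_token"])["sub"])   # "alice"
```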

Access Token, API Authentication, Authentication, JSON Web Token, JWT, Refresh Token, Tech, Token-Based Authentication
Published on October 2, 2023

Binary Tree Pre-order Traversal

Given a binary tree, print the nodes in pre-order format.

Recursive Approach

Pre-order traversal is a technique to systematically process all the nodes in a binary tree. The three steps of a pre-order traversal are: process the current node, move to the left child and recursively apply the pre-order process to the left subtree, and then move to the right child and recursively apply the pre-order process to the right subtree. The time complexity of pre-order traversal is O(n), where n is the number of nodes in the tree, because pre-order traversal processes each node within the binary tree.

Iterative Approach

The iterative approach accomplishes the same goal. The general idea is to pop the top item, process it, and push its children onto a stack, with the right child pushed first so the left subtree is processed first. This process is repeatedly applied to the stack as long as it is not empty. The time complexity of the iterative approach is also O(n). Since the iterative approach explicitly uses a stack, the space complexity is O(h), where h is the height of the tree.
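Both approaches sketched below; TreeNode is a minimal assumed node type.

```python
class TreeNode:
    def __init__(self, val, left=None, right=None):
        self.val, self.left, self.right = val, left, right

def preorder_recursive(node, out=None):
    if out is None:
        out = []
    if node:
        out.append(node.val)                 # process the current node first
        preorder_recursive(node.left, out)   # then the left subtree
        preorder_recursive(node.right, out)  # then the right subtree
    return out

def preorder_iterative(root):
    out, stack = [], [root] if root else []
    while stack:
        node = stack.pop()
        out.append(node.val)
        if node.right:
            stack.append(node.right)         # right pushed first so left pops first
        if node.left:
            stack.append(node.left)
    return out
```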

Binary Tree, Binary Tree Traversal, Data Structures, Depth-First Traversal, DS/A, Tech, Tree Algorithms
Published on September 29, 2023

TLS and SSL

Transport Layer Security (TLS) and Secure Sockets Layer (SSL) are protocols that provide secure communication over the Internet by encrypting messages in transit. TLS is the more modern of the two. There are four phases to establishing a secure connection: the TCP handshake, the TLS handshake, the key exchange, and the data exchange.

During the TCP handshake, both parties establish the basic TCP connection, starting with the client initiating a SYN message. The server acknowledges with a SYN-ACK message, and the client completes the TCP connection by responding with an ACK message.

After a successful TCP handshake, the next step is the TLS handshake. In this step, the client and server agree on which TLS version and cipher suite to use. Upon agreement, the server presents a certificate, which includes the server's public key that the client will use to encrypt data for transmission. The client then generates a session key and encrypts it using the server's public key. The server decrypts the message using its corresponding private key to obtain the client's session key. The session key is used by both client and server to encrypt and decrypt all subsequent data exchange. This is known as symmetric key encryption.
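To see these phases from the client side, here is a minimal sketch using Python's ssl module; the host is illustrative, and wrap_socket covers the TLS handshake and key exchange described above.

```python
import socket
import ssl

context = ssl.create_default_context()   # modern TLS settings + cert validation

with socket.create_connection(("example.com", 443)) as tcp:       # TCP handshake
    with context.wrap_socket(tcp, server_hostname="example.com") as tls:
        # wrap_socket performs the TLS handshake: version/cipher negotiation,
        # certificate verification, and the session-key exchange.
        print(tls.version())              # e.g. "TLSv1.3"
        tls.sendall(b"GET / HTTP/1.1\r\nHost: example.com\r\n\r\n")
        print(tls.recv(256))              # data encrypted with the session keys
```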

Certificate, DevOps, SSL, Tech, TLS