The search for high performance has long preoccupied anyone who has tried to build a system, whether in software or hardware. After all, a high-performance system lets us do more work, which generally translates into greater gains. Achieving this is no simple task and usually means pursuing the state of the art in what we do.
In 1965, Gordon Moore, co-founder of Fairchild Semiconductor and Intel, observed that integrated circuits were doubling in transistor count every year and suggested that this growth would continue for the next 10 years. In 1975 he revised the estimate, projecting that the count would double every 2 years. Although the projection rested on little empirical evidence, it has held up, at least so far, to the point that we now call it "Moore's Law." The difference, perhaps, is that we can no longer count only the transistors inside a single integrated circuit.
With more transistors, modern processors gained processing capacity, moving from 8 bits to 16, 32 and, today, 64 bits handled at once, making arithmetic operations more efficient. In parallel, clock frequencies were pushed ever higher so that more operations could be performed per unit of time. But both paths have physical limits, and at some point the increases stopped paying off: raise the clock too far and every wire becomes an antenna, dissipating energy as electromagnetic waves or as heat; add too many transistors and the distances between them grow, slowing their communication.
The solution was horizontal scalability, and two initiatives emerged along these lines. One was sharing the increasingly large and complex pipeline among two or more independent tasks processed simultaneously, a technique that came to be known as Hyper-Threading. The other was packaging more than one CPU on the same chip, allowing multiple processes to run independently and simultaneously, giving rise to multi-core processors.
Following this same line of reasoning, the concept of horizontal scalability expanded to what we call distributed processing, where we do not need to be restricted to a single circuit or equipment. Processing is divided into parts that can be processed independently and simultaneously to obtain a composite result in less time.
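The idea of splitting work into independent parts, processing them simultaneously, and composing the results can be sketched in a few lines. The example below is a minimal illustration using Python threads in place of separate machines; the function names and the chunking strategy are our own for illustration, not taken from any particular framework.

```python
from concurrent.futures import ThreadPoolExecutor

def partial_sum(chunk):
    # Each chunk is independent: no shared state, no locks needed.
    return sum(chunk)

def distributed_sum(numbers, workers=4):
    # Split the input into independent parts of roughly equal size.
    size = max(1, len(numbers) // workers)
    chunks = [numbers[i:i + size] for i in range(0, len(numbers), size)]
    # Process the parts simultaneously...
    with ThreadPoolExecutor(max_workers=workers) as pool:
        partials = pool.map(partial_sum, chunks)
    # ...and compose the partial results into the final answer.
    return sum(partials)

print(distributed_sum(list(range(1, 101))))  # 5050
```

In a real distributed system the chunks would travel to different machines rather than different threads, but the pattern of divide, compute independently, and combine is the same.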
We might initially get the impression that this model works well in any situation; however, one word determines the success of this architecture: "INDEPENDENCE." When multiple processes are independent, they can run sequentially or in parallel without affecting each other's results. However, not everything we do behaves this way. For example, imagine a customer whose card offers $100 of credit. They can buy a product for $20 or another for $90, but not both, since together the purchases exceed the limit. Although the purchases are independent of each other, the card balance relates them, requiring a lock mechanism to serialize the operations. Now imagine the complexity of implementing this lock in a distributed environment: a purchase made anywhere in the world must be authorized by a single point of balance control, the card-issuing bank, so the performance of this system is not exactly parallel.
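The card scenario above can be modeled directly. This is a toy sketch, not banking code: the `CardAccount` class and its lock are our own invention to show how a shared balance forces otherwise independent purchases to be serialized.

```python
import threading

class CardAccount:
    """A card balance guarded by a lock so purchases are serialized."""
    def __init__(self, credit):
        self.balance = credit
        self._lock = threading.Lock()

    def purchase(self, amount):
        # The lock forces concurrent purchases to run one at a time:
        # the transactions are independent, but they share the balance.
        with self._lock:
            if amount <= self.balance:
                self.balance -= amount
                return True   # authorized
            return False      # declined: insufficient credit

card = CardAccount(100)
results = []
threads = [threading.Thread(target=lambda a: results.append(card.purchase(a)),
                            args=(amt,))
           for amt in (20, 90)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Exactly one purchase is authorized, because 20 + 90 > 100.
print(sorted(results), card.balance)
```

Whichever thread acquires the lock first wins; the other finds too little credit left. Without the lock, both purchases could read the $100 balance simultaneously and both be approved.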
The adoption of measures to obtain High Availability is generally based on redundancy, which is nothing more than using more than one component to execute the “same” task, so that in the absence of one component, the result can be obtained from the other. Controlling the synchronism between these redundant components requires effort and, as a consequence, we may have a reduction in processing capacity.
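The redundancy pattern described above can be sketched as a simple failover: two components able to perform the "same" task, so the result survives the loss of one of them. The function and node names below are hypothetical, chosen only to illustrate the idea.

```python
def redundant_call(primary, backup, *args):
    """Try the primary component; on failure, fall back to the backup."""
    try:
        return primary(*args)
    except Exception:
        # In a real system this is where failure detection, timeouts,
        # and the synchronization effort mentioned above would live.
        return backup(*args)

def flaky_node(x):
    # Simulates a failed component.
    raise ConnectionError("node down")

def healthy_node(x):
    return x * 2

print(redundant_call(flaky_node, healthy_node, 21))  # 42
```

The sketch hides the hard part, which is exactly the point made above: keeping the redundant components synchronized costs effort, and that effort is subtracted from processing capacity.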
Using a pool of machines to multiply processing capacity works as long as what one machine does is independent of what the others do, avoiding the lock mechanisms that lead to system contention. With Oracle RAC (Real Application Clusters), for example, two or more servers (nodes) access the same data and can execute tasks independently. But when one node changes shared data, the copies of that data cached by the other nodes must be invalidated to avoid inconsistencies. As a result, the nodes must constantly communicate to "pass the baton" to whichever node holds the right to alter the data; if the baton is not with us, we must negotiate its transfer before proceeding with the change.
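The "baton" analogy can be made concrete with a toy model. This is not how RAC is implemented internally; it is a hedged sketch of the general idea that only the current holder of write permission may modify shared data, and that every change of holder costs a negotiation.

```python
class Baton:
    """Toy model of the write 'baton' shared by cluster nodes.

    Only the holder may modify the shared data; any other node must
    first negotiate a transfer, which costs extra communication.
    """
    def __init__(self, holder):
        self.holder = holder
        self.transfers = 0  # count of inter-node negotiations

    def write(self, node, store, key, value):
        if self.holder != node:
            # Simulated negotiation: remote caches are invalidated
            # and the baton changes hands before the write proceeds.
            self.transfers += 1
            self.holder = node
        store[key] = value

store = {}
baton = Baton(holder="node1")

# node1 writes repeatedly: the baton stays put, no negotiation needed.
for i in range(3):
    baton.write("node1", store, "balance", i)

# node2 now writes: one transfer must be negotiated first.
baton.write("node2", store, "balance", 99)
print(baton.transfers)  # 1
```

The transfer counter is the cost to watch: an application pinned to one node keeps it near zero, while two applications hammering the same data from different nodes drive it up with every alternating write.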
How, then, can we combine this High Availability environment with High Performance? As we saw above, the secret is independence.
We can have, for example, a RAC serving two different applications, so that one primarily uses node 1 and the other primarily uses node 2, ensuring that one application never depends on data altered by the other node. This reduces access to the “Global Cache” since, most of the time, the baton is with the node where the application runs, and the database does not need to “consult” the other nodes in the cluster.
Another way to reduce contention, although it does not reduce processing time, is to make systems stop adopting synchronous behavior, where we wait for the completion of one task to proceed with the next. By adopting asynchronous behavior, we can initiate multiple tasks without the need to wait for the completion of each one. Obviously, this entails greater system complexity and makes the average response time of a task longer, but it allows us to execute more tasks simultaneously, which, in the end, corresponds to higher overall performance. Still, this paradigm shift requires task independence, as a dependent task cannot be started without the completion of the task it depends on.
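The synchronous-versus-asynchronous contrast can be seen in a short sketch. Assuming three independent tasks that each take about 0.1 seconds, running them synchronously would take about 0.3 seconds; launching them asynchronously lets them overlap. The task names and delays below are illustrative.

```python
import asyncio
import time

async def task(name, delay):
    # Each task is independent: it never waits on another task's result.
    await asyncio.sleep(delay)
    return name

async def main():
    start = time.monotonic()
    # Launch all three tasks at once instead of awaiting each in turn.
    results = await asyncio.gather(
        task("a", 0.1), task("b", 0.1), task("c", 0.1)
    )
    elapsed = time.monotonic() - start
    return results, elapsed

results, elapsed = asyncio.run(main())
print(results, round(elapsed, 2))
```

The three tasks finish in roughly the time of one, illustrating the trade described above: each individual task is no faster, but overall throughput rises, and only because the tasks do not depend on one another.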
As we have seen, horizontal scalability can help increase the performance of a system. But this does not necessarily mean that the redundancy used to obtain high availability can be considered “horizontal scalability.” Creating systems with fewer dependencies eliminates points of failure and reduces contention. Furthermore, simpler models are easier to maintain, consume fewer computational resources, and generally yield better results.
We can imagine High Availability and High Performance as two independent goals, and we can easily choose how close we want to be to one or the other. However, with intelligence, we can create systems that shorten this distance, allowing us to approach both objectives simultaneously.