High Availability vs. High Performance – The Myth of Horizontal Scalability

October 13th, 2022
Infrastructure Services IT Services Managed Services
By Arthur E. F. Heinrich

The search for high performance has long preoccupied anyone who has attempted to build a system, whether in software or hardware. After all, a high-performance system lets us perform more work, which generally translates into greater gains. Achieving this result is not a simple task and usually means pursuing the state of the art in what we do.

In 1965, Gordon Moore, co-founder of Fairchild Semiconductor and later of Intel, observed that integrated circuits were doubling in transistor count every year, and suggested that this growth would continue for the next 10 years. In 1975, he revised the estimate, projecting that the count would double every 2 years. Although the claim was an extrapolation rather than a derived law, it has held — at least until now — to the point where we refer to it as "Moore's Law." The difference is that, perhaps, we can no longer count transistors within a single integrated circuit.

By putting more transistors into modern processors, we increased processing capacity, moving from 8 bits to 16, 32, and currently 64 bits processed simultaneously, making arithmetic operations more efficient. In parallel, we worked to raise the frequency at which processors operate, so they could perform more operations per unit of time. But this has physical limits, and at some point these increases stopped making sense. Raising the clock excessively turns any conductor into an antenna, and energy dissipates as electromagnetic radiation or heat; increasing the number of transistors can mean greater distances between them, making communication slower.

The solution to this was horizontal scalability, and two initiatives emerged in this direction. One was sharing a single core's ever-larger and more complex pipeline between two or more independent tasks, which came to be called Hyperthreading. The other was encapsulating more than one CPU on the same chip, allowing multiple processes to execute independently and simultaneously, giving rise to multi-core processors.

Following this same line of reasoning, the concept of horizontal scalability expanded to what we call distributed processing, where we do not need to be restricted to a single circuit or equipment. Processing is divided into parts that can be processed independently and simultaneously to obtain a composite result in less time.

We might initially have the impression that this model works well for any situation; however, one word is decisive for the success of this architecture: "INDEPENDENCE." When multiple processes are independent, they can be executed sequentially or in parallel without their results being affected. However, not everything we do behaves this way. To cite an example, imagine a customer with a card that offers $100 of credit. They can purchase a product for $20 or another for $90, but not both, since $20 + $90 exceeds the $100 limit. Although the purchases are independent of each other, the card balance makes them related, requiring a lock mechanism to serialize the operations. Now, imagine the complexity of implementing this lock in a distributed environment. A purchase made anywhere in the world needs to be authorized by a single point of balance control — the card-issuing bank — so the performance of this system is not exactly parallel.
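The card example can be sketched in a few lines. This is a minimal, hypothetical illustration (the names `CardAccount` and `purchase` are invented for the example): the lock makes the check-and-debit step atomic, so concurrent purchases against the same balance are serialized rather than truly parallel.

```python
import threading

class CardAccount:
    """Hypothetical card account: purchases share one balance, so they must be serialized."""
    def __init__(self, credit_limit):
        self.balance = credit_limit
        self._lock = threading.Lock()  # serializes all debits against the shared balance

    def purchase(self, amount):
        # Without the lock, two threads could both read $100 of available
        # credit and both approve, overspending the limit.
        with self._lock:
            if amount <= self.balance:
                self.balance -= amount
                return True
            return False

card = CardAccount(100)
results = []
threads = [threading.Thread(target=lambda a: results.append(card.purchase(a)), args=(a,))
           for a in (20, 90)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# Whichever purchase acquires the lock first succeeds; the other is denied,
# because $20 + $90 exceeds the $100 limit.
```

Note that the lock is exactly what prevents the two purchases from running in parallel: correctness here is bought at the cost of serialization, which is the article's point.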

The adoption of measures to obtain High Availability is generally based on redundancy, which is nothing more than using more than one component to execute the “same” task, so that in the absence of one component, the result can be obtained from the other. Controlling the synchronism between these redundant components requires effort and, as a consequence, we may have a reduction in processing capacity.

Using a pool of machines to multiply processing capacity can work as long as what is done on one machine is independent of what is done on the others, thus avoiding the lock mechanisms that lead to system contention. When we use Oracle RAC (Real Application Clusters), for example, we have two or more servers (nodes) with access to the same data, each capable of executing tasks independently. But when one of these nodes changes the shared data, the other nodes' cached copies of that information must be invalidated to avoid inconsistencies. This forces the nodes to communicate constantly, "passing the baton" to whichever node has the right to alter the data. If the "baton" is not with us, we must negotiate its transfer before proceeding with the alteration.
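The "baton" idea can be reduced to a toy coordinator. This is a deliberately simplified sketch — not Oracle RAC's actual Cache Fusion protocol, and the names `BatonCoordinator`, `acquire`, and `release` are invented for illustration: only the node currently holding the baton may modify the shared data, and every other node must wait (negotiate) before writing.

```python
import threading

class BatonCoordinator:
    """Toy sketch of 'passing the baton': one writer at a time across nodes."""
    def __init__(self):
        self._cond = threading.Condition()
        self._holder = None  # which node currently holds the right to write

    def acquire(self, node):
        with self._cond:
            # If another node holds the baton, negotiate: block until it is released.
            while self._holder is not None and self._holder != node:
                self._cond.wait()
            self._holder = node

    def release(self, node):
        with self._cond:
            if self._holder == node:
                self._holder = None
                self._cond.notify_all()

baton = BatonCoordinator()
shared = {"value": 0}

def node_write(node, delta):
    baton.acquire(node)       # negotiation cost paid before every write
    shared["value"] += delta  # other nodes' cached copies are now stale
    baton.release(node)

threads = [threading.Thread(target=node_write, args=(f"node{i}", 1)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

Even in this toy form, the cost is visible: the more often different nodes want to write the same data, the more time is spent negotiating the baton instead of doing work — which is exactly the contention the article describes.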

But then, how can we get High Performance out of this High Availability environment? As we saw above, the secret is independence.

We can have, for example, a RAC serving two different applications, so that one primarily uses node 1 and the other primarily uses node 2, ensuring that one application never depends on data altered by the other node. This reduces access to the “Global Cache” since, most of the time, the baton is with the node where the application runs, and the database does not need to “consult” the other nodes in the cluster.

Another way to reduce contention, although it does not reduce processing time, is to make systems stop adopting synchronous behavior, where we wait for the completion of one task to proceed with the next. By adopting asynchronous behavior, we can initiate multiple tasks without the need to wait for the completion of each one. Obviously, this entails greater system complexity and makes the average response time of a task longer, but it allows us to execute more tasks simultaneously, which, in the end, corresponds to higher overall performance. Still, this paradigm shift requires task independence, as a dependent task cannot be started without the completion of the task it depends on.
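The asynchronous trade-off described above can be demonstrated with a short sketch (using Python's standard `asyncio`; the `task` function is a stand-in for any I/O-bound unit of work): each individual task's latency is unchanged, but independent tasks overlap, so total wall-clock time approaches that of the longest task rather than the sum.

```python
import asyncio
import time

async def task(name, seconds):
    # Stand-in for an independent unit of work, e.g. a network call.
    await asyncio.sleep(seconds)
    return name

async def main():
    start = time.monotonic()
    # Because the three tasks are independent, they run concurrently:
    # total time is roughly max(0.1, 0.1, 0.1), not the 0.3s sum.
    results = await asyncio.gather(task("a", 0.1), task("b", 0.1), task("c", 0.1))
    elapsed = time.monotonic() - start
    return results, elapsed

results, elapsed = asyncio.run(main())
print(results, round(elapsed, 2))  # ['a', 'b', 'c'] in roughly 0.1s, not 0.3s
```

If one task depended on another's result, it would have to `await` that result before starting — which is the article's caveat: the paradigm shift only pays off when the tasks are independent.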

As we have seen, horizontal scalability can help increase the performance of a system. But this does not necessarily mean that the redundancy used to obtain high availability can be considered “horizontal scalability.” Creating systems with fewer dependencies eliminates points of failure and reduces contention. Furthermore, simpler models are easier to maintain, consume fewer computational resources, and generally yield better results.

We can imagine High Availability and High Performance as two independent goals, and we can easily choose how close we want to be to one or the other. However, with intelligence, we can create systems that shorten this distance, allowing us to approach both objectives simultaneously.

Tags: Edge Computing, Performance, Technology

