OpenSearch Cluster Topologies for Cost-Saving Autoscaling

The indexing rates of many clusters follow a fluctuating pattern - be it day/night, weekday/weekend, or any cycle in which the cluster shifts between active and less active periods. In these cases, how does one scale the cluster? Could you be wasting resources during the night while activity is low?

Many big data technologies such as OpenSearch have a wonderful feature - they scale! This is key to maintaining a production cluster, as we may increase the cluster's capacity by simply adding more nodes on the fly. However, when the capacity is not being utilized we may wish to reduce costs by reducing the number of nodes (scaling-in).

Sadly, scaling OpenSearch is not straightforward, let alone easy to automate. This results in all sorts of innovative cluster topologies to reduce wasted resources based on the specific (continuously changing!) use case of the cluster.
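One concrete reason scale-in takes care: before terminating a node, operators typically "drain" it through the cluster settings API so that shards relocate off it first. A minimal sketch of building that request body - the helper function and node IP are illustrative, though the setting `cluster.routing.allocation.exclude._ip` is a real OpenSearch cluster setting:

```python
# Sketch: build the transient cluster-settings body that tells the shard
# allocator to move all shards off a node before it is removed (scale-in).
# The helper name and IP address are illustrative, not from the talk.

import json


def build_drain_payload(node_ip: str) -> dict:
    """Return a cluster-settings body that excludes the given node from
    shard allocation, so its shards migrate elsewhere before shutdown."""
    return {
        "transient": {
            "cluster.routing.allocation.exclude._ip": node_ip
        }
    }


payload = build_drain_payload("10.0.0.7")
# This body would be PUT to the cluster's `_cluster/settings` endpoint;
# only once all shards have relocated is it safe to terminate the node.
print(json.dumps(payload))
```

Automating this drain-wait-terminate loop (and its reverse on scale-out) is exactly the kind of orchestration that makes OpenSearch autoscaling harder than simply resizing a node group.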

In this talk, you will learn about OpenSearch's architecture and the main reasons scaling clusters is hard, evaluate state-of-the-art topologies for achieving basic autoscaling, and dive into how the opensearch-project may be able to support this in the future.

If you manage a production cluster and are concerned about its cost - particularly if you think autoscaling could help but are unsure whether you want to go down that road - or if you simply want to be that "well actually..." person when people mention autoscaling OpenSearch, then this talk is for you!


Speaker

Amitai Stern

Engineering Manager @Logz.io, Managing Observability Data Storage at Petabyte Scale, OpenSearch Leadership Committee Member and Contributor

Amitai Stern is a software engineer, the team lead of the Telemetry Storage squad at Logz.io, and a member of the OpenSearch Technical Steering Committee. Amitai works on big data, observability, and SaaS projects. He is a contributor to the OpenSearch open-source project and led the successful initiative at Logz.io to upgrade to OpenSearch and reduce the cost of managing petabytes of customer data.


Date

Tuesday Nov 19 / 11:45AM PST (50 minutes)

Location

Pacific DEKJ

Topics

Architecture, Observability, Platform Engineering, OpenSearch


From the same track

Session: Platform Engineering

Beyond Durability: Enhancing Database Resilience and Reducing the Entropy Using Write-Ahead Logging at Netflix

Tuesday Nov 19 / 10:35AM PST

In modern database systems, durability guarantees are crucial but often insufficient in scenarios involving extended system outages or data corruption.


Prudhviraj Karumanchi

Staff Software Engineer at Data Platform @Netflix, Building Large-Scale Distributed Storage Systems and Cloud Services, Previously @Oracle, @NetApp, and @EMC/Dell


Vidhya Arvind

Staff Software Engineer @Netflix Data Platform, Founding Member of Data Abstractions at Netflix, Previously @Box and @Verizon

Session

Stream and Batch Processing Convergence in Apache Flink

Tuesday Nov 19 / 02:45PM PST

The idea of executing streaming and batch jobs with one engine has been around for a while. People often say batch is a special case of streaming. Conceptually, it is.


Jiangjie (Becket) Qin

Principal Staff Software Engineer @LinkedIn, Data Infra Engineer, PMC Member of Apache Kafka & Apache Flink, Previously @Alibaba and @IBM

Session: Data Pipelines

Efficient Incremental Processing with Netflix Maestro and Apache Iceberg

Tuesday Nov 19 / 03:55PM PST

Incremental processing, an approach that processes only new or updated data in workflows, substantially reduces compute resource costs and execution time, leading to fewer potential failures and less need for manual intervention.


Jun He

Staff Software Engineer @Netflix, Managing and Automating Large-Scale Data/ML Workflows, Previously @Airbnb and @Hulu

Session

Stream All the Things — Patterns of Effective Data Stream Processing

Tuesday Nov 19 / 01:35PM PST

Data streaming is a really difficult problem. Despite 10+ years of attempting to simplify it, teams building real-time data pipelines can spend up to 80% of their time optimizing it or fixing downstream output by handling bad data at the lake.


Adi Polak

Director, Advocacy and Developer Experience Engineering @Confluent

Session

Unconference: Shift-Left Data Architecture

Tuesday Nov 19 / 05:05PM PST