The indexing rates of many clusters follow a fluctuating pattern - day/night, weekday/weekend, or some other rhythm in which the cluster swings between busy and quiet. In these cases, how does one scale the cluster? Could you be wasting resources overnight while activity is low?
Many big data technologies, OpenSearch among them, have a wonderful feature - they scale! This is key to running a production cluster: we can increase capacity simply by adding more nodes on the fly. And when that capacity goes unused, we may wish to cut costs by removing nodes (scaling in).
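To make the scale-in step concrete, here is a minimal sketch (not taken from the talk; the endpoint, credentials, and node IP are placeholders) of draining a node's shards via the cluster settings API before shutting it down:

```python
import requests

# Minimal sketch: drain a node before scaling in, assuming a local
# OpenSearch endpoint and a placeholder node IP (10.0.0.5).
OPENSEARCH = "https://localhost:9200"
NODE_TO_REMOVE = "10.0.0.5"

# Exclude the node from shard allocation so its shards relocate elsewhere.
requests.put(
    f"{OPENSEARCH}/_cluster/settings",
    json={"transient": {"cluster.routing.allocation.exclude._ip": NODE_TO_REMOVE}},
    auth=("admin", "admin"),  # placeholder credentials
    verify=False,             # skip TLS verification for this sketch only
)

# Check that no shards are still relocating before the node is removed.
health = requests.get(
    f"{OPENSEARCH}/_cluster/health",
    auth=("admin", "admin"),
    verify=False,
).json()
print("relocating shards:", health["relocating_shards"])
```

Scaling out is the easy direction; deciding when to scale and draining nodes safely on the way back in is where the difficulty lies.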
Sadly, scaling OpenSearch is not straightforward, let alone easy to automate. This has led to all sorts of innovative cluster topologies designed to reduce wasted resources for the specific (and continuously changing!) use case of each cluster.
In this talk, you will learn about OpenSearch's architecture and the main reasons scaling clusters is hard, evaluate state-of-the-art topologies that achieve a basic form of autoscaling, and dive into how the opensearch-project may be able to support this natively in the future.
If you manage a production cluster and are concerned about cost, particularly if you suspect autoscaling could help but are unsure whether to go down that road, or if you simply want to be the "well, actually..." person whenever someone mentions autoscaling OpenSearch, then this talk is for you!
Speaker
Amitai Stern
Engineering Manager @Logz.io, Managing Observability Data Storage at Petabyte Scale, OpenSearch Leadership Committee Member and Contributor
Amitai Stern is a software engineer, the team lead of the Telemetry Storage squad at Logz.io, and a member of the OpenSearch Technical Steering Committee. Amitai works on big data, observability, and SaaS projects. He is a contributor to the OpenSearch open-source project and led the successful initiative at Logz.io to upgrade to OpenSearch and reduce the cost of managing petabytes of customer data.