November 19, 2021

Measuring Customer Happiness Across 1,000s of Independent Databases

Brett Carrington, Vice President, Middleware Engineering

At Goldman Sachs, Middleware Engineering builds and operates a broad database platform as a service. We deliver a large, heterogeneous service offering comprising thousands of databases across a dozen different database types to unlock the potential of all our engineering teams. While our footprint has grown tremendously over the years, we always do our best to deliver the solutions that our customers need. This size, diversity and customer focus are some of our key strengths.

On the other hand, these strengths are also a challenge for our engineers. Because of the breadth and diversity of our clients, there was no single view of service quality that spanned all of our different offerings. Our intuition about the customer experience could only scale so far without concrete data, and so we set about curating trustworthy and timely service quality data that we could use to drive significant decisions about our whole platform.

Naturally, we already had excellent observability of technical details like CPU and memory usage, storage performance and replication throughput. These were interesting, but not satisfying in isolation: none of these technical metrics really captured the essence of "are our customers happy?", and they don't always correlate naturally with customer happiness. For instance, a database under heavy load might have high CPU and memory usage while the customer is perfectly happy with its throughput. Conversely, a customer might be quite upset when the database load is low - perhaps a network problem has made it inaccessible!

We found inspiration in the Site Reliability Engineering (SRE) discipline, which has been adopted elsewhere at Goldman Sachs and across the wider industry, and set out to identify Service Level Indicators (SLIs) for our database platform. An SLI is a thoughtfully defined measurement of some aspect of a service offering which, ideally, correlates strongly with an aspect of our customers' happiness. We decided to start with an Availability SLI: our customers are happy when their applications can reliably use their databases as intended.

Implementing our Availability SLI was no simple task. We had to measure something meaningful about using a database. The implementation had to scale to thousands of databases and several different database technologies. We also needed to account for our globally distributed network: our customers operate their businesses from nearly anywhere. In addition, the measurement itself had to be highly reliable - more reliable than the databases it was monitoring. Failures in our measurement implementation had to have as small a blast radius as possible if we were to trust the collected data.

Our first task was to define what Availability actually meant for our SLI. There are many different use cases for databases. Some database users expect to perform low-latency, high-speed transaction processing. Others run complex analytical queries, either on a regular schedule or driven by urgent business needs. There's no perfect answer to "available" that fits every use case, but we can get pretty close. We know that the ability to accept user connections and perform straightforward read and write operations is fundamental to database health. A database is certainly unhealthy if it cannot perform these basic tasks.

This led us to the idea of an Availability Prober. A prober is simply a tool that acts like a synthetic user and reports whether it could achieve a task. Our database prober could attempt to perform read and write operations on a database and confirm success or failure. We could then count the number of successful probes and compare this to the number of attempted probes to calculate any database's Availability SLI.
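
The arithmetic behind the SLI is deliberately simple. As a rough illustration, the bookkeeping might look something like the sketch below - the class and method names are purely illustrative, not our production implementation:

// Minimal sketch of the SLI bookkeeping: availability = successful probes / attempted probes.
// Illustrative only; the names and structure are not the production implementation.
import java.util.concurrent.atomic.LongAdder;

public class AvailabilityCounter {
    private final LongAdder attempted = new LongAdder();
    private final LongAdder successful = new LongAdder();

    public void recordAttempt() { attempted.increment(); }
    public void recordSuccess() { successful.increment(); }

    public double availabilitySli() {
        long total = attempted.sum();
        // With no data we report 1.0 rather than divide by zero; a real system would flag "no data".
        return total == 0 ? 1.0 : (double) successful.sum() / total;
    }
}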

This presented a number of interesting engineering challenges. While we run many different kinds of databases, we didn't want to write a matching number of different probers. We wanted a single prober that we could operate anywhere our users could operate. A good proportion of our users use Java and JDBC (Java Database Connectivity) for their database interaction, JDBC is the standard Java API for interacting with any relational database backend, and all of our relational databases have a JDBC driver available. Java and JDBC therefore seemed like the perfect choice given our expected scale. In theory, a standard API with well-supported drivers for each of our database types would be a great productivity boost: we could write the prober once and vary only which driver we loaded and which specific commands we sent to the database. However, as we began to implement this idea, the JDBC abstraction presented some unique challenges.
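
To make the shape of that idea concrete, a single JDBC probe might look something like the following sketch; the class name, connection details, probe table and SQL statements are placeholders rather than our production code:

// Illustrative sketch only: one JDBC probe that connects, writes, then reads.
// The class name, JDBC URL, credentials and probe table are placeholders.
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class JdbcProbe {
    public boolean probeOnce(String jdbcUrl, String user, String password) {
        // The same code path serves any database with a JDBC driver on the classpath;
        // only the URL and the probe statements vary per database type.
        try (Connection conn = DriverManager.getConnection(jdbcUrl, user, password);
             PreparedStatement write = conn.prepareStatement(
                     "UPDATE prober_heartbeat SET last_probe = CURRENT_TIMESTAMP WHERE id = 1");
             PreparedStatement read = conn.prepareStatement(
                     "SELECT last_probe FROM prober_heartbeat WHERE id = 1")) {
            write.executeUpdate();
            try (ResultSet rs = read.executeQuery()) {
                return rs.next();
            }
        } catch (SQLException e) {
            // Any failure to connect, authenticate, write or read counts against availability.
            return false;
        }
    }
}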

One challenge is that JDBC is a blocking, synchronous API. A single blocking call can bring an entire thread of execution to a screeching halt. Our prober has to monitor thousands of databases - we couldn't let one misbehaving database impact the prober's measurements of the others.

JDBC does try to provide some flexibility here. There are various ways to specify network-level timeouts, login-level timeouts, and even statement-level timeouts. However, there is no holistic way to define a timeout for the complete sequence of operations we want our prober to perform. We also found that different JDBC driver vendors had varying levels of support for timeouts, so there was no "perfect" solution.
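
For illustration, the partial controls JDBC does offer look roughly like the sketch below. The network-level timeout is set via a driver-specific connection property, so the property name shown here is a placeholder:

// Sketch of the timeout knobs JDBC exposes; none of them bounds the whole probe end to end.
// The JDBC URL, credentials and the "socketTimeout" property name are placeholders.
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;
import java.util.Properties;

public class JdbcTimeoutKnobs {
    public void probeWithTimeouts() throws Exception {
        // Login-level timeout: bounds connection establishment (seconds, and JVM-wide).
        DriverManager.setLoginTimeout(5);

        Properties props = new Properties();
        props.setProperty("user", "prober");
        props.setProperty("password", "secret");
        // Network-level timeout: a driver-specific connection property; the name varies by vendor.
        props.setProperty("socketTimeout", "5000");

        try (Connection conn = DriverManager.getConnection("jdbc:example://db-host/probe", props);
             Statement stmt = conn.createStatement()) {
            // Statement-level timeout: bounds a single query, but not connect, authenticate or commit.
            stmt.setQueryTimeout(5);
            stmt.execute("SELECT 1");
        }
    }
}

Even with all three in place, the connect-write-read sequence as a whole remains unbounded, which is why we needed supervision at a higher level.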

The naïve solution to this problem is to orchestrate supervision of each probe. One thread of execution can perform the probe while a distinct thread supervises its progress. If the probe succeeds, the supervisor can record a success. If the probe fails, or fails to terminate before a deadline is reached, the supervisor can record a probe failure and deal with any necessary cleanup. This approach is fairly complex and difficult to scale: a simple implementation requires at least 2N threads to monitor N databases, and managing the shared state between probe thread and supervisor thread is an unenviable task.
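
A minimal sketch of that naive pattern might look like the following, where a pool thread runs the probe while the calling thread blocks on its Future as the supervisor; the names are illustrative:

// Naive supervision sketch: the probe runs on a pool thread while the caller blocks on its
// Future as the supervisor. With a dedicated supervisor per target this needs roughly
// 2N threads for N databases. Illustrative only.
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

public class NaiveSupervisor {
    private final ExecutorService probePool = Executors.newCachedThreadPool();

    public boolean supervise(Callable<Boolean> probeTask, long timeoutMillis) {
        Future<Boolean> outcome = probePool.submit(probeTask);
        try {
            return outcome.get(timeoutMillis, TimeUnit.MILLISECONDS);
        } catch (TimeoutException deadlineMissed) {
            // Best effort: a thread blocked inside a JDBC call may ignore interruption entirely.
            outcome.cancel(true);
            return false;
        } catch (Exception probeFailed) {
            return false;
        }
    }
}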

Our approach was to use Reactive Programming techniques - a naturally asynchronous programming paradigm - to bridge the synchronous JDBC world to our prober's requirements. Reactive Programming provides an abstraction that lets us model the prober as a sequence of events and our reactions to them. Our prober is modeled as a stream of timer events, ticking every Y seconds. Our reaction is to start a new probe in response to each timer tick and also to start a timeout to handle the case where that probe does not complete in time. We used the RxJava framework to express this simplified version of the core probe loop:

Prober p = new Prober(database);
Flowable.interval(probeInterval, TimeUnit.MILLISECONDS)
        .flatMapCompletable( tick ->
                                     p.probe()
                                      .timeout(timeoutMilliseconds, TimeUnit.MILLISECONDS, timeoutScheduler)
                                      .doOnSuccess(result -> supervisor.recordSuccess(result))
                                      .doOnError(err -> supervisor.recordFailure(err))
                                      .ignoreElement().onErrorComplete()
                           )
        .subscribe();

On line 1, a new Prober is created. This encapsulates everything we need to know to connect to, authenticate to, and probe a specific database target. On line 2, we ask RxJava to fire a timer event every probeInterval milliseconds. On line 3, we're reacting to a stream of those timer events by computing a "completable" result. A Completable is a Reactive abstraction that eventually completes a task successfully or unsuccessfully. This is exactly what our database probes should do. Lines 4-8 define how we respond to each of the event ticks from Flowable.interval. Line 4 starts the database probe while line 5 composes in a supervisor to cancel the probe after a timeout. The timeoutScheduler is shared across many database targets and is smart enough to use only a small number of threads to schedule timeouts for hundreds of targets. Lines 6 and 7 deal with the probe (or timeout) outcomes and record the results for our SLI. Line 8 then discards the probe's value with ignoreElement(), giving flatMapCompletable the Completable it expects, and onErrorComplete() makes sure errors are not treated fatally, as otherwise they would cancel the flow of ticks from Flowable.interval. Finally, nothing runs until we subscribe() to the assembled pipeline, which starts the timer and, with it, the probes.

Our production implementation differs in a few ways. For example, we use a modified implementation of Flowable.interval that adds a little jitter, spreading the prober's operations across the wall clock to minimize the thundering herd problem.
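
One rough way to picture that jitter is to give each database target a random initial delay, as in the sketch below; our production variant differs:

// Illustrative jitter: stagger each target's schedule with a random initial delay so that
// thousands of probes do not all fire on the same tick. Not the production implementation.
import java.util.concurrent.ThreadLocalRandom;
import java.util.concurrent.TimeUnit;
import io.reactivex.rxjava3.core.Flowable;   // io.reactivex.Flowable in RxJava 2

public class JitteredTicks {
    public static Flowable<Long> ticks(long probeIntervalMillis) {
        long initialDelay = ThreadLocalRandom.current().nextLong(probeIntervalMillis);
        return Flowable.interval(initialDelay, probeIntervalMillis, TimeUnit.MILLISECONDS);
    }
}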

Although this can seem complex at first glance, the RxJava library and the reactive programming paradigm really paid off. The more efficient use of threads and the easier event-driven programming model delivered a very low-overhead prober that is cheap to run at scale. Our prober regularly performs the equivalent of many hundreds of probes per second from a modest dual-core virtual machine with less than a 2GB heap. This is despite spending relatively little engineering time on performance or memory optimization.

Of course, there was still plenty for us to learn on our journey. Probers can end up confronting some really interesting failure modes, and we quickly learned there are subtle nuances to consider when managing probe failures. One specific example: a database can become so unresponsive that a probe cannot be reliably cancelled. In these rare cases, the prober might open a new connection to the database when the time comes for the next probe. This could lead to more and more connections adding further pressure to an already degraded database instance, so the prober imposes a hard limit on its database connection use to prevent this kind of runaway degradation. There are also optimizations to reuse connections - while still periodically opening new ones to be certain we're measuring the entire "connect, authenticate, read and write" use case in a meaningful way. This "light touch" keeps the prober's overhead on the customer database as low as possible while still providing meaningful availability information.
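
As a loose illustration of that connection hygiene, a per-target policy might look like the sketch below; the hard cap and reuse limit shown are illustrative values rather than our production settings:

// Sketch of the connection hygiene described above: cap open connections per target and
// periodically discard a cached connection so the full connect-and-authenticate path is
// re-tested. The limit and reuse count are illustrative values, not the production policy.
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;
import java.util.concurrent.Semaphore;

public class ProbeConnectionPolicy {
    private static final int MAX_OPEN_CONNECTIONS = 2;  // hard cap per database target
    private static final int REUSE_LIMIT = 10;          // force a fresh connect after this many probes

    // Each open (or abandoned, un-closeable) connection holds one permit until it truly closes.
    private final Semaphore openPermits = new Semaphore(MAX_OPEN_CONNECTIONS);
    private Connection current;
    private int probesOnCurrent;

    public synchronized Connection acquire(String url, String user, String password) throws SQLException {
        // Reuse the current connection most of the time to keep overhead on the database low...
        if (current != null && !current.isClosed() && probesOnCurrent < REUSE_LIMIT) {
            probesOnCurrent++;
            return current;
        }
        // ...but periodically reconnect so connect and authenticate are exercised as well.
        closeCurrent();
        if (!openPermits.tryAcquire()) {
            // Earlier connections could not be cleaned up; refuse to add load to a degraded database.
            throw new SQLException("connection budget exhausted for this target");
        }
        current = DriverManager.getConnection(url, user, password);
        probesOnCurrent = 1;
        return current;
    }

    private void closeCurrent() {
        if (current == null) {
            return;
        }
        try {
            current.close();
            openPermits.release();  // the permit is only returned once the connection actually closes
        } catch (SQLException stillStuck) {
            // Could not close; keep the permit taken so we never exceed the hard cap.
        }
        current = null;
    }
}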

In production, we run multiple probers. These probers are sharded by database technology, as well as by region and data center. This assures us that, should there be an unexpected bug in any database driver or any unplanned physical infrastructure outage, a meaningful subset of probers will still remain operational. Rollouts of the prober are performed automatically on a shard-by-shard basis and are paused if the newly deployed probers do not appear healthy, so that there are never any interruptions in our observations.

A globally distributed and highly available time-series database is used to collect and store each prober's findings. This allows us to calculate the Availability SLI for any particular database instance, or even aggregate availability SLIs across our entire platform. We are also able to use these SLIs as sources for alerts, so that our engineers benefit from meaningful alerts with a very high signal-to-noise ratio. Putting it all together, for any database instance we can plot a near real-time status signal, the SLI as measured over a 4-week / 28-day period, and even an estimate of round-trip latency from the prober to that instance.

Our experience building the Prober has been incredibly valuable. Our objective was to understand our customers' happiness through data, and in that way it has been a huge success. It taught us a lot about some of the unique and subtly complex ways databases can fail and improved our problem detection and analysis capabilities. It has also unlocked a new data-driven approach to setting engineering reliability goals: the Service Level Objective, or SLO.


See https://www.gs.com/disclaimer/global_email for important risk disclosures, conflicts of interest, and other terms and conditions relating to this blog and your reliance on information contained in it.

© 2024 Goldman Sachs. All rights reserved.