TickTock Performance Evaluation: asynchronous vs synchronous writes

1. Introduction

TickTock is an open-source Time Series DataBase (TSDB) for DevOps, Internet of Things (IoT), and financial data. We have previously published three performance evaluation reports on TickTock:

  1. Performance Evaluation of TickTock (A new Time Series DB) — RaspberryPI Edition
  2. Performance comparison: InfluxDB, TimescaleDB, IoTDB, TDEngine, OpenTSDB v.s. TickTock
  3. Performance evaluation: scalability in terms of CPU number

TickTock supports both asynchronous (TCP & UDP) and synchronous (HTTP) writes.

  • Asynchronous write requests return immediately without waiting for a server response. A request may still fail to be processed even though the TickTock server received it. The supported protocols are TCP and UDP (disabled by default).
  • Synchronous write requests do not return until the TickTock server finishes applying them, whether they succeed or fail. The supported protocol is HTTP.
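As a hedged sketch of the two client-side write paths: we assume here that TickTock accepts OpenTSDB-style writes (the line protocol over TCP and a JSON body over HTTP); the metric and tag names below are invented for illustration.

```python
import json

def tcp_put_line(metric, ts_ms, value, tags):
    # Asynchronous path: the client sends this line over TCP and moves on
    # without reading a reply (e.g. sock.sendall(line.encode())).
    tag_str = " ".join(f"{k}={v}" for k, v in tags.items())
    return f"put {metric} {ts_ms} {value} {tag_str}\n"

def http_put_body(metric, ts_ms, value, tags):
    # Synchronous path: the client POSTs this JSON and blocks until the
    # server reports success or failure.
    return json.dumps({"metric": metric, "timestamp": ts_ms,
                       "value": value, "tags": tags})

line = tcp_put_line("wf.temperature", 1650000000000, 21.5, {"device": "d_0"})
body = http_put_body("wf.temperature", 1650000000000, 21.5, {"device": "d_0"})
print(line.strip())
print(body)
```

The semantic gap is visible in the shapes alone: the TCP writer never learns whether a line was applied, while the HTTP writer gets a status code per request.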

Clearly, the two have different semantics. In our previous performance evaluations we used TCP writes by default. TCP is very fast, with a P99 response time below 1 millisecond, but its P999 response time is about 3.5 seconds. In this report, we would like to understand how large the performance difference is between asynchronous (TCP) and synchronous (HTTP) writes.

A few quick notes:

  • We use docker to run TSDBs.
  • We use the IoTDB benchmark.
  • We use a mixed read-write workload (read:write = 1:9).

Please also note that performance is not the only dimension along which to compare TSDBs, though it may be the most important one. You may also need to consider ease of use, API adoption, reliability, community support, and cost. This report focuses only on performance.

2. IoTDB Benchmark

We use IoTDB-benchmark for performance evaluation. IoTDB-benchmark was developed by THULAB at Tsinghua University, Beijing. Please refer to the description in our previous performance evaluation.

3. Experiment Settings

3.1. Hardware

We run our experiments on an Ideapad Gaming laptop with the following specification:

  • CPU: AMD Ryzen 5 5600H, 6 cores / 12 hyper-threads, 3.3GHz
  • Memory: 20GB DDR4 3200MHz
  • Disk: 1TB 5400 RPM HDD
  • OS: Ubuntu 20.04.3 LTS

We run TickTock in a docker container with 2 vCPUs and 4GB memory.

docker run -d --privileged --name ticktock -h ticktock -p 6182:6182 -p 6181:6181 --cpuset-cpus 0-1 -m 4g ytyou/ticktock:0.4.0-beta --tsdb.timestamp.resolution millisecond --tcp.buffer.size 10mb --http.listener.count 10 --tcp.listener.count 10 --http.responders.per.listener 1 --tcp.responders.per.listener 1

To avoid network congestion under high ingestion rates, we run the benchmark on the same laptop instead of on a separate machine.

3.2. IoTDB Benchmark Settings

We use a mixed read-write scenario with a read:write ratio of 1:9. In our experience in DevOps, TSDBs mostly handle writes sent by the machines being monitored; query workload is relatively low, often below 10%.

As explained above, the IoTDB benchmark simulates a wind farm with a number of devices, each with multiple sensors (the sensor count is also the write batch size), driven by a number of clients. In our experiments we use 200 clients, each bound to one device with 10 sensors, so there are 2000 metrics (200 devices * 10 sensors).

We will compare TCP and HTTP writes in two scenarios.

  • The first lets benchmark clients send requests (TCP or HTTP) continuously (OP_INTERVAL = 0 in the IoTDB benchmark configuration). This setup fully saturates the TickTock server.
  • The second adds a 10-millisecond sleep between consecutive operations (OP_INTERVAL = 10), so the TickTock server is nearly, but not fully, saturated.
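For reference, the settings above correspond roughly to the following fragment of IoTDB-benchmark's config.properties. OP_INTERVAL comes from the text; the other key names are our recollection of the benchmark's configuration and should be checked against its documentation.

```properties
# 200 clients, each bound to one device with 10 sensors (2000 metrics total)
CLIENT_NUMBER=200
DEVICE_NUMBER=200
SENSOR_NUMBER=10
# Scenario 1: back-to-back requests; scenario 2 sets OP_INTERVAL=10 (ms)
OP_INTERVAL=0
```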

4. Scenario: CPU fully saturated

In this scenario, benchmark clients send requests continuously without sleeping between requests. The figures below show that the 2-vCPU TickTock docker container was already fully saturated at 200% CPU. Memory and I/O still had spare capacity.

4.1. Throughput comparison

The figure above shows that TCP writes have slightly higher throughput than HTTP writes (12.8% higher; more specifically, 3.43M vs 3.03M data points per second).

Read throughputs follow a similar pattern. Note that read requests always use HTTP, in both the TCP and HTTP write tests. When writes use TCP, read throughput is 12.8% higher than when writes use HTTP.

4.2. Response time comparison

4.2.1. Write response time comparison

Let’s look at the write response time data above. Note that one write operation carries a batch of data points; in our tests, there are 200 data points per operation.

TCP write requests have much lower response times than HTTP. TCP write requests are responded to within 1.62 milliseconds at P99, while the P99 HTTP response time is 21.59 milliseconds. P999 is quite a different story: the P999 TCP write response time spiked to 3590 milliseconds from 1.62 milliseconds at P99, while the P999 HTTP response time was 38.53 milliseconds, only 16.94 milliseconds higher than its P99.

We attribute the high P999 TCP response time to the nature of the asynchronous write protocol. Once the server’s network buffer is full, balancing between connections becomes crucial; any imbalance will cause certain connections to be blocked longer than usual.
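The blocking behavior itself is easy to reproduce outside TickTock. In this hedged sketch (plain Python sockets, nothing TickTock-specific), a writer keeps sending while the peer never reads; once the kernel buffers fill, the send can no longer complete and blocks, surfacing here as a timeout:

```python
import socket

# A writer that keeps sending while the peer never drains the buffer:
# once the kernel buffers fill, send() blocks. This mirrors an
# asynchronous TCP writer stalling behind a server whose network
# buffer is full.
writer, reader = socket.socketpair()
writer.settimeout(0.2)   # cap how long a single send may block
writer.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, 4096)
reader.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 4096)

sent, blocked = 0, False
try:
    while True:
        sent += writer.send(b"x" * 4096)   # peer never reads
except socket.timeout:
    blocked = True                         # backpressure finally hit

writer.close()
reader.close()
print(blocked, sent > 0)
```

A client with many such connections will see exactly the long-tail pattern above: most sends return instantly, but an unlucky connection can stall far longer than the P99.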

4.2.2. Read response time comparison

We also compared read request response times when writes used TCP versus HTTP. Please take a look at the figures below.

The figure above is for precise_point read requests. In general, read response times in the two cases are close, at least within the same order of magnitude. Below P95, reads in the TCP write test were faster than reads in the HTTP write test, but at P99 and P999 the opposite holds: TCP was slower than HTTP, and its read response time grew faster than the other’s.

Other read operations had very similar patterns. We list their figures for your reference.

5. Scenario: CPU not saturated

The section above shows that the P999 TCP write response time spiked to 3590 milliseconds, from 1.62 milliseconds at P99, when the CPUs were fully saturated. We brought CPU usage down a little by sleeping 10 milliseconds between consecutive operations. After all, no one would run servers at full saturation for long. The figure below shows CPU usage at about 190% to 195%, already very close to saturation. Let’s see how TickTock behaved.

5.1. Throughput comparison

Interestingly, both TCP and HTTP write throughputs at 190% CPU usage were better than their counterparts at 200% CPU usage. We suspect thrashing effects when the CPUs were fully saturated, so backing off CPU usage slightly actually improved throughput.

Read throughputs followed a very similar pattern to write throughputs. Please see the figure above.

5.2. Response time comparison


5.2.1. Write response time comparison

We first look at how TCP writes behave when the CPU is not saturated. The figure above shows that the P999 TCP write response time dropped to 1 millisecond, a significant improvement over the 3590 milliseconds seen when the CPUs were fully saturated. P99 dropped to 0.34 milliseconds from 1.62 milliseconds. Other percentiles were close.

Let’s put all TCP and HTTP write response time data for the two cases (190% and 200% CPU) together below. HTTP response times also improved when the CPU was not saturated. Comparing the two, TCP write requests were much faster than HTTP write requests, which is understandable since TCP writes are asynchronous and HTTP writes are synchronous in TickTock.

5.2.2. Precise point

Precise point query: select v1... from data where time=? and device in ?
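Since reads always go over HTTP in these tests, a precise-point lookup can be phrased as an OpenTSDB-style query body. This is a hedged sketch: we assume TickTock's HTTP query API mirrors OpenTSDB's /api/query, and the metric and tag names are invented.

```python
import json

# A precise-point lookup pins start == end to a single timestamp,
# so the server returns at most one data point per matching series.
ts_ms = 1650000000000
query = {
    "start": ts_ms,
    "end": ts_ms,                 # same instant -> a single point
    "queries": [{
        "aggregator": "none",     # no downsampling/aggregation
        "metric": "wf.temperature",
        "tags": {"device": "d_0"}
    }]
}
body = json.dumps(query)
print(body)
```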

In both the TCP and HTTP write test cases, PRECISE_POINT read operations responded faster when the CPUs were not saturated than when they were fully saturated, at all percentile levels. This is entirely expected. Other read operations show a similar pattern to PRECISE_POINT, so we skip their figures for brevity.

6. Conclusion

TickTock supports both asynchronous (TCP and UDP) and synchronous (HTTP) writes, which have different semantics. We tested asynchronous (TCP) against synchronous (HTTP) writes. It is worth noting that we verified there was not a single failed write in either the asynchronous or the synchronous write tests.

  1. Throughput:

  • Asynchronous (TCP) writes, and the corresponding reads, have 12.8% higher throughput than HTTP.

  2. Response time:

  • Asynchronous (TCP) writes have much faster response times (about 1 millisecond at P999), while synchronous (HTTP) write response time is about 20 milliseconds.
  • Reads in the asynchronous (TCP) write case also have better response times than reads in the synchronous (HTTP) write case, but they are close.

  3. CPU fully saturated vs. CPU not saturated:

  • Writes: If the CPUs are fully saturated, the P999 response time of asynchronous (TCP) writes is very bad (about 3.5 seconds). It improves dramatically, to just 1 millisecond, even with only 5% CPU headroom. Synchronous (HTTP) writes are slightly slower with fully saturated CPUs than with non-saturated CPUs.
  • Reads: Reads with fully saturated CPUs were slower than reads with non-saturated CPUs, in both the asynchronous and synchronous write test cases, but not by much.

We suggest using asynchronous (TCP) writes as a best practice.


