# Turtle
A C++17-based lightweight high-performance network library
<a href="https://github.com/YukunJ/Turtle/blob/main/LICENSE"><img src="https://badgen.net/github/license/YukunJ/Turtle?color=orange" alt="license"></a>
<a href="https://github.com/YukunJ/Turtle"><img src="https://img.shields.io/badge/Language-C++-red.svg"></a>
<a href="https://github.com/YukunJ/Turtle"><img src="https://badgen.net/badge/OS Support/Linux/cyan?list=1" alt="os"></a>
<a href="https://github.com/YukunJ/Turtle"><img src="https://badgen.net/badge/Database/MySQL/white?list=1" alt="os"></a>
<a href="https://github.com/YukunJ/Turtle/stargazers"><img src="https://badgen.net/github/stars/YukunJ/Turtle?color=yellow" alt="stars"></a>
<a href="https://github.com/YukunJ/Turtle/network/members"><img src="https://badgen.net/github/forks/YukunJ/Turtle?color=black" alt="forks"></a>
Turtle is a C++17-based lightweight network library for building web servers on Linux. It abstracts tedious socket manipulation into elegant, reusable classes and enables fast server-side setup: the custom business logic for each client TCP connection is supplied as a callback function. It now supports HTTP GET/HEAD requests and responses as well.
For any questions, feel free to raise an issue, open a pull request, or drop me an email.
## Highlight
- Uses non-blocking sockets and edge-triggered event handling to support highly concurrent workloads.
- Adopts Shuo Chen's 'one reactor per thread' philosophy, with thread-pool management.
- Provides a low-coupling, highly extensible framework.
- Lets users build a custom server by providing only 2 callback functions.
- Supports HTTP GET/HEAD requests & responses.
- Supports dynamic CGI requests & responses.
- Supports an LRU caching mechanism.
- Supports MySQL database interaction.
- Supports a timer that kills inactive client connections to save system resources.
- Supports asynchronous consumer-producer logging.
- Unit test coverage with the Catch2 framework.
## System Diagram
<img src="image/system_architecture_en.png" alt="System Architecture New" height="450">

The system architecture diagram above briefly shows how the Turtle framework works at a high level.
- The basic unit is a Connection, which contains a Socket and a Buffer for bytes in and out. Users register a callback function for each connection.
- The system starts with an Acceptor, which holds one acceptor connection. It builds a new connection for each incoming client and distributes the workload to one of the Loopers.
- Each Poller is associated with exactly one Looper. It does nothing but epoll, returning a collection of event-ready connections to its Looper.
- The Looper is the main brain of the system. It registers new client connections with its Poller and, when the Poller returns event-ready connections, fetches and executes their callback functions.
- The ThreadPool controls how many Loopers exist in the system, to avoid over-subscription.
- Optionally, there is a Cache layer using an LRU policy with a tunable storage-size parameter.
The Turtle core network part is around 1000 lines of code, and the HTTP+CGI module is another 700 lines.
## Docker
If you are not on a Linux system but still want to try out Turtle on Linux for fun, we provide a Vagrantfile to provision a Linux Docker container. Note that, as of now, Turtle builds on both Linux and macOS.
1. Install Vagrant and Docker. For macOS, you may use Homebrew to install Vagrant, but do not use Homebrew to install Docker; download Docker Desktop instead.

2. Start the Docker application in the background.

3. Drag out the `Vagrantfile` and place it in parallel with the `Turtle` project folder. For example, consider the following file structure:

   ```
   /Turtle_Wrapper
     - /Turtle
     - /Vagrantfile
   ```

4. `cd` to the `Turtle_Wrapper` folder and run the command `vagrant up --provider=docker`. This step should take a few minutes to build the environment and install all the necessary toolchains.

5. Enter the Docker environment with `vagrant ssh developer`.

6. `cd` to the directory `/vagrant/Turtle`. This directory is kept in sync with the original `./Turtle` folder; you may modify the source code locally and the changes will propagate to the Docker side as well.

7. Follow the steps in the next section to build the project.
## Build
You may build the project using CMake. From the root directory of the project, execute the following:

```console
# Set up the environment (Linux)
$ sh setup/setup.sh
$ sudo systemctl start mysql
$ sudo mysql < setup/setup.sql   # set up the default MySQL role for testing

# Build (multiple build options)
$ mkdir build
$ cd build
$ cmake ..                       # default: logging enabled, no timer
$ cmake -DLOG_LEVEL=NOLOG ..     # disable logging
$ cmake -DTIMER=3000 ..          # enable timer with 3000 ms expiration
$ make

# Format & style check & line count
$ make format
$ make cpplint
$ make linecount
```
## Performance Benchmark
To test the performance of the Turtle server under high concurrency, we adopt Webbench as the stress-testing tool. The source code of Webbench is stored under the ./webbench directory, along with a simple testing shell script.
The process is fully automated, so you can execute the benchmark in one command:
```console
$ make benchmark
# On Linux, the above command will:
# 1. build the webbench tool
# 2. run the HTTP server in the background on the default port 20080, serving the dummy index file
# 3. launch webbench with 10500 concurrent clients for 5 seconds
# 4. report the result back to the console
# 5. harvest the background HTTP server process and exit
```
We performed benchmark testing on an Amazon AWS EC2 instance. The details are as follows:

- Hardware: m5.2xlarge instance on Ubuntu 20.04 LTS with 8 vCPUs, 32 GiB memory, and a 50 GiB root storage volume. (Be careful: a vCPU is not a real CPU core; by experiment, `std::thread::hardware_concurrency() == 2` in this case.)
- QPS:
  - 62.3k (no logging, no timer)
  - 52.5k (logging, no timer)
  - 36.5k (no logging, timer)
  - 29.8k (logging, timer)
We see that asynchronous logging brings only a minor runtime performance penalty, while the timer functionality comes with a larger performance hit, since it requires synchronization.
To gain a better sense of comparative performance, we benchmarked a few other popular C++ network webservers, each with the best configuration known to us, in order to be fair.
To reiterate, different libraries should by no means be judged solely on benchmarks of limited scope, with possible misconfiguration by non-experts like us; this is only about getting the magnitude right.
All the benchmark statistics listed below were gathered on the same hardware, transferring the same dummy index file to 10500 concurrent clients using the webbench tool.
- TinyWebServer: best QPS = 38.5k

  ```console
  # We run TinyWebServer with the following configuration:
  # 1. listener fd and connection fd mode: -m 1 (LT + ET) | -m 3 (ET + ET)
  # 2. 8 threads, matching the 8-vCPU instance
  # 3. logging turned off
  # 4. -a 0 (Proactor) | -a 1 (Reactor)
  # 5. compiler optimization level set to -O3

  # Proactor LT + ET
  $ ./server -m 1 -t 8 -c 1 -a 0
  # QPS: 38.5k

  # Proactor ET + ET
  $ ./server -m 3 -t 8 -c 1 -a 0
  # QPS: 38.2k

  # Reactor LT + ET
  $ ./server -m 1 -t 8 -c 1 -a 1
  # QPS: 26.7k

  # Reactor ET + ET
  $ ./server -m 3 -t 8 -c 1 -a 1
  # QPS: 25.6k
  ```

- Muduo: best QPS = 48.3k

  ```console
  # We use 'muduo/net/http/tests/HttpServer_test.cc' as the test program,
  # set to run in benchmark mode with 8 threads in the pool
  # and with most of the logging disabled.
  ```

- libevent: single-thread best QPS = 29.0k

  We use the sample script for easy testing.

  ```console
  # Note: this test runs the libevent HTTP server single-threaded with I/O multiplexing.
  # It is already very performant, without fully utilizing the underlying 8-core hardware.
  # We might test it with pthreads and a work queue in a multi-threaded setting,
  # but that is too much work for the present benchmark purpose.
  ```
## API Style
The classes in the Turtle library are designed with decoupling firmly in mind. Most components, especially those in the network core module, can be taken out alone or in small groups and used independently.

As an example, take the most basic Socket class, and suppose we just want to borrow the Turtle library to avoid the cumbersome steps of socket establishment. First, let's look at the main interface of the Socket class:
```cpp
/**
 * This Socket class abstracts the operations on a socket file descriptor.
 * It can be used to build either a client or a server,
 * and is compatible with both IPv4 and IPv6.
 */
class Socket {
 public:
  Socket() noexcept;
  auto GetFd() const noexcept -> int;

  /* client: one step, directly connect */
  void Connect(NetAddress &server_address);

  /* server: three steps, bind + listen + accept */
  void Bind(NetAddress &server_address, bool set_reusable = true);

  /* enter listen mode */
  void Listen();

  // ... (remaining interface omitted)
};
```