This site provides the artifacts needed to run the experiments described in the paper "Massively Parallel Multi-Versioned Transaction Processing" accepted at OSDI 2024.
Please follow the five steps shown below: 1) Set up a (virtual machine) server, 2) Prepare the server for the experiments, 3) Run the experiments on the server, 4) Process the experiment outputs to generate figures, and 5) Verify the results.
- Use your browser to access FluidStack and then log in with the credentials provided on the hotcrp website. Click on the "Virtual Machines" tab to create a virtual machine.
- Select `Ubuntu 22.04 (Plain)` for the OS template.
- Select `RTX A6000 48GB` for the GPU server type. Select 4 GPUs per server. We suggest choosing the Norway server (for reasons described under the Problems section below).
- Add your SSH public key to access the server. If you have a GitHub public key, you can copy and paste it from `https://github.com/[gitusername].keys`. Also, name your server so that you can identify it.
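  If you want to double-check which public keys are associated with your GitHub account before pasting, you can fetch them directly (a quick sketch; replace `gitusername` with your GitHub username):

  ```bash
  # Print the public keys registered with your GitHub account; copy the
  # one you want to use into the FluidStack SSH key field.
  curl https://github.com/gitusername.keys
  ```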
- Now you are ready to deploy your server. Check the server configuration and then push the deploy button.
- Click on the "Your Servers" tab to see your server. Wait for your server to start running. You will see a green dot on the left when your server is running. This may take 1-2 minutes.
- Click on your server. To log in to the server via `ssh`, you will need to use the username `ubuntu` and the IP address shown on the right.
- When the server is not in use, stop the server to only pay the idle rate. You can restart the server at any time and continue using it.
- Make sure to delete the server after finishing the experiments to stop paying for the server. If you need to rerun the experiments, then you will need to redo all the steps shown here.
- Log in to the server.

  ```bash
  ssh ubuntu@server_ipaddr
  ```
- Clone this repo with submodules.

  ```bash
  git clone --recursive https://github.com/ShujianQian/epic-artifact.git
  ```
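  If the repository was cloned without `--recursive`, the submodules can still be fetched afterwards with a standard git command (a sketch, not part of the artifact scripts):

  ```bash
  # Fetch any submodules that were skipped during the initial clone.
  cd epic-artifact
  git submodule update --init --recursive
  ```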
- Install dependencies.

  ```bash
  cd epic-artifact
  sudo ./install_dependencies.sh
  ```

  This script installs all the dependencies required for the experiments, including the GPU driver. The script requires sudo privileges to install packages on your server. It will run for roughly 5-10 minutes, so get a coffee. At the end, the script will reboot the server (to start the GPU driver), so your ssh session will be disconnected.
- Reconnect to the server after it has rebooted and go to the artifact directory.

  ```bash
  ssh ubuntu@server_ipaddr
  cd epic-artifact
  ```
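  The reboot can take a minute or two. If you prefer not to retry the connection by hand, a small polling loop such as the following sketch (assuming `server_ipaddr` is the address from the FluidStack dashboard) waits until sshd is reachable again:

  ```bash
  # Keep probing the server until it accepts SSH connections again.
  until ssh -o ConnectTimeout=5 ubuntu@server_ipaddr 'echo server is back'; do
    sleep 10
  done
  ```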
- Build the executables for all systems.

  ```bash
  ./build_binaries.sh
  ```

  This script will run for roughly 2-4 minutes.
- Provide your email address. Since the review process is single blind, please ensure that your email address is anonymized. This is an optional step, but it will allow us to send you an email when the experiments are done, which take a while, as mentioned below. Create a file called `email.txt` in the `epic-artifact` directory containing the three lines described under the "Specific hardware" section of the hotcrp site. Then, test whether you receive an email from us by running the following script.

  ```bash
  ./mail.sh
  ```
- (Optional) We recommend first testing the server setup and compilation with a short test experiment.

  ```bash
  # in epic-artifact
  ./test_experiments.sh
  ```

  This script runs a subset of the experiments, allowing you to check for obvious errors. It will take approximately 6 minutes to complete.

- Run all the experiments.
  ```bash
  # in epic-artifact
  ./run_experiments.sh
  ```

  This script will run the experiments for roughly 5 hours. Assuming mail delivery works (as described above), you will receive an email when the experiments are done.
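  Because the run takes several hours, you may want to detach it from your SSH session so that a dropped connection does not interrupt it. This is not required by the artifact; a minimal sketch using `nohup`:

  ```bash
  # in epic-artifact: start the experiments in the background so they
  # survive an SSH disconnect, and log their output to run.log.
  nohup ./run_experiments.sh > run.log 2>&1 &
  tail -f run.log   # follow progress; Ctrl-C stops the tail, not the run
  ```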
The evaluation is almost complete! The following steps for processing the experiment outputs are relatively short.
- Parse the outputs of the experiments.

  ```bash
  # in epic-artifact
  ./parse_experiments.sh
  ```

  The parsed outputs will be stored under the `epic-artifact/data/` directory.

- Generate figures using the parsed outputs.
  ```bash
  # in epic-artifact
  ./plot.sh
  ```

  This script will create the following figures under the `epic-artifact/output/` directory.

  ```
  04_tpccfull_throughput.png
  05a_tpccnp_throughput.png
  05b_tpccnp_throughput_gacco_commutative.png
  06_ycsb_throughput.png
  07_cpu_throughput.png
  09_latency.png
  10_microbenchmark.png
  ```

  The file name for each figure has a number label (e.g., `04`) that is the same as the figure number (e.g., Figure 4) in the paper. You can use `scp` or `rsync` to download these files to your local machine.
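  For example, a single `scp` invocation from your local machine (a sketch; substitute the server IP address from the FluidStack dashboard) copies all of the generated figures:

  ```bash
  # Run on your local machine: download the generated figures from the server.
  scp 'ubuntu@server_ipaddr:epic-artifact/output/*.png' .
  ```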
Verify that the figures in the files above are similar to the figures in the paper.
Then make sure to delete your server.
- Aria's dependencies cannot be installed alongside those for Caracal. This is because the libunwind package required by Aria's google-glog package conflicts with the libc++ and libc++abi packages required by Caracal. Since we install all packages on the server before running all the experiments for this artifact evaluation, we are unable to generate the Aria outputs. For more information about the package conflicts, please see:
- The Caracal database pins memory pages (using `memlock`). We have found that memory pinning is unreliable on the VM servers. It sometimes fails on the Canada servers, but we have not seen this failure on the Norway servers, even though both servers appear to have the same configuration. Hence we suggest using the Norway servers. However, we do not understand the reason for this failure, so it is possible that the Caracal results may not be reproducible if this failure occurs during a run.
- We have occasionally seen problems with Epic where its GPU memory allocator hangs for a minute or two on the VM server. This significantly slows down the experimental runs. These problems are not consistently reproducible, and we believe they may be caused by virtualization (since they do not happen on our local machines). If this happens, we have found that deleting and recreating the server solves the issue.
- Run `nvidia-smi` to verify that the NVIDIA driver is running.

  ```
  ubuntu@recwt9dgzwtn8yqxuax3htpzv:~$ nvidia-smi
  Sat May 11 08:40:22 2024
  +-----------------------------------------------------------------------------------------+
  | NVIDIA-SMI 550.54.15              Driver Version: 550.54.15      CUDA Version: 12.4     |
  |-----------------------------------------+------------------------+----------------------+
  | GPU  Name                 Persistence-M | Bus-Id          Disp.A | Volatile Uncorr. ECC |
  | Fan  Temp   Perf          Pwr:Usage/Cap |           Memory-Usage | GPU-Util  Compute M. |
  |                                         |                        |               MIG M. |
  |=========================================+========================+======================|
  |   0  NVIDIA RTX A4000               Off |   00000000:00:05.0 Off |                  Off |
  | 41%   37C    P8              6W /  140W |       1MiB /  16376MiB |      0%      Default |
  |                                         |                        |                  N/A |
  +-----------------------------------------+------------------------+----------------------+

  +-----------------------------------------------------------------------------------------+
  | Processes:                                                                              |
  |  GPU   GI   CI        PID   Type   Process name                              GPU Memory |
  |        ID   ID                                                               Usage      |
  |=========================================================================================|
  |  No running processes found                                                             |
  +-----------------------------------------------------------------------------------------+
  ```
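  Since the server was deployed with 4 GPUs, it may also be worth confirming that the driver sees all of them (a quick check, not part of the artifact scripts):

  ```bash
  # List the GPUs visible to the driver; with the suggested 4-GPU
  # configuration this should print four lines.
  nvidia-smi -L
  ```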