For each benchmark, allow for specification of command-line arguments:
Options:
  -t, --compute-threads NUM   (default: 1)   Compute Threads
  -d, --compute-depth NUM     (default: 37)  Compute Depth
  -i, --iterations NUM        (default: 10)  Compute/GC Iterations
  -s, --compute-sleep NUM     (default: 1)   Compute Sleep
  -g, --gc-threads NUM        (default: 1)   GC Threads
  -e, --tree-depth NUM        (default: 10)  Maximum tree depth to allocate
  -m, --maxheap NUM           (default: 4)   Maximum heap to allocate (in MB)
  -S, --gc-stats                             Print GC stats
  -D, --debug                                Print debug output
  -h, --help                                 Print this help and exit
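For example, a run with two compute threads, one GC thread, and GC stats printed at exit might look like this (the executable name gcbench is a placeholder; each folder's README gives the exact command for that implementation):

./gcbench -t 2 -g 1 -d 37 -i 10 -S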
adjust output of the compute thread so that it prints, at the start of each iteration, compute:start:<id>:<iter>:<ts>, where <id> uniquely identifies the thread, <iter> identifies the iteration, and <ts> is the start time in milliseconds
adjust output of the compute thread so that it prints, at the end of each iteration, compute:stop:<id>:<iter>:<ts>, where the parameters are the same as above
adjust output of the GC thread so that it prints, at the start of each iteration, gc:start:<id>:<iter>:<ts>, where the parameters are the same as above
adjust output of the GC thread so that it prints, at the end of each iteration, gc:stop:<id>:<iter>:<ts>, where the parameters are the same as above
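As an illustration (every ID, iteration number, and timestamp below is made up), a run with one compute thread and one GC thread would emit interleaved lines such as:

compute:start:0:0:1456268023402
gc:start:0:0:1456268023403
gc:stop:0:0:1456268023575
gc:start:0:1:1456268023576
compute:stop:0:0:1456268029841
...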
add a README.md file to each folder detailing what the dependencies are and exactly how to compile and run the benchmark from the command line
take care to ensure that the units for the emitted timestamps are standardized across all implementations. We want the raw timestamps as output because we may want to perform calculations like "run time", "average time between iteration starts", or "average time between iteration ends", and it is simpler to have the raw stamps and do post-processing than it is to edit each benchmark to emit the right measurement as we think of them.
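To make the post-processing concrete, here is a minimal sketch (the script name postprocess.py and the statistics chosen are illustrative, not part of the spec) that reads the raw log lines defined above and derives per-thread run time and the average gap between iteration starts:

import sys
from collections import defaultdict

starts = defaultdict(list)  # (kind, thread id) -> start timestamps (ms)
stops = defaultdict(list)   # (kind, thread id) -> stop timestamps (ms)

for line in sys.stdin:
    parts = line.strip().split(':')
    if len(parts) != 5 or parts[1] not in ('start', 'stop'):
        continue  # skip debug output and any other noise
    kind, event, tid, _iteration, ts = parts
    (starts if event == 'start' else stops)[(kind, tid)].append(int(ts))

for key, ss in sorted(starts.items()):
    run_time = max(stops.get(key, ss)) - min(ss)  # first start to last stop
    gaps = [b - a for a, b in zip(ss, ss[1:])]    # successive start-to-start gaps
    avg_gap = sum(gaps) / len(gaps) if gaps else 0.0
    print("{}:{} run_time={}ms avg_start_gap={:.1f}ms".format(key[0], key[1], run_time, avg_gap))

Run it over a captured log, e.g. python3 postprocess.py < benchmark.log.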
Here is pseudo-Python code for what we are trying to achieve in the MT benchmarks:
# note: argparse supplies defaults and descriptions for the short args
# below, plus -h/--help for free; see the Clojure version for the
# complete argument specification:
# https://github.com/UBMLtonGroup/rt-benchmarks/blob/master/Clojure/gcbench/src/gcbench/core.clj#L13

import argparse
import gc
import threading
import time

def parse_args():
    # defaults and descriptions per the options table above; note that some
    # arguments take an int, others are boolean flags
    p = argparse.ArgumentParser(description='MT GC benchmark')
    p.add_argument('-t', '--compute-threads', dest='t', type=int, default=1, help='Compute Threads')
    p.add_argument('-d', '--compute-depth', dest='d', type=int, default=37, help='Compute Depth')
    p.add_argument('-i', '--iterations', dest='i', type=int, default=10, help='Compute/GC Iterations')
    p.add_argument('-s', '--compute-sleep', dest='s', type=int, default=1, help='Compute Sleep')
    p.add_argument('-g', '--gc-threads', dest='g', type=int, default=1, help='GC Threads')
    p.add_argument('-e', '--tree-depth', dest='e', type=int, default=10, help='Maximum tree depth to allocate')
    p.add_argument('-m', '--maxheap', dest='m', type=int, default=4, help='Maximum heap to allocate (in MB)')
    p.add_argument('-S', '--gc-stats', dest='S', action='store_true', help='Print GC stats')
    p.add_argument('-D', '--debug', dest='D', action='store_true', help='Print debug output')
    return p.parse_args()

def main():
    # now that command line args are parsed into 'args' we can make runtime
    # configuration decisions (argparse has already handled -h for us)
    args = parse_args()
    threads = []
    if args.g > 0:  # if we want any GC threads, then start them
        threads += start_gc_threads(num_threads=args.g, tree_depth=args.e,
                                    iterations=args.i, debug=args.D)
    if args.t > 0:  # if we want any compute threads, then start them
        threads += start_comp_threads(num_threads=args.t, comp_depth=args.d,
                                      iterations=args.i, comp_sleep=args.s, debug=args.D)
    for t in threads:  # wait for all threads to finish
        t.join()
    if args.S:
        print(gc.get_stats())  # may or may not be possible in a given language
    # we are done; args.m (maxheap) is left to the runtime, since Python
    # has no direct way to cap the heap

def now_ms():
    # raw timestamp in milliseconds -- the standardized unit for all output
    return int(time.time() * 1000)

# binary tree nodes, just like in java
class Node:
    def __init__(self, left=None, right=None):
        self.left = left
        self.right = right

def make_tree(depth):  # just like in java
    if depth <= 0:
        return Node()
    return Node(make_tree(depth - 1), make_tree(depth - 1))

def calc_fibonacci(n):
    # deliberately naive recursion: pure compute load with no allocation
    return n if n < 2 else calc_fibonacci(n - 1) + calc_fibonacci(n - 2)

# now the functions
def start_gc_threads(num_threads, tree_depth, iterations, debug):
    # long-lived data stays reachable for the whole run, just like in java
    global long_lived_array, long_lived_tree
    long_lived_array = [0.0] * 1000
    long_lived_tree = make_tree(tree_depth)
    threads = []
    for i in range(num_threads):
        if debug:
            print("starting GC thread #{}".format(i))
        t = threading.Thread(target=gc_func, args=(tree_depth, i, iterations, debug))
        t.start()
        threads.append(t)
    return threads

def gc_func(tree_depth, id, iterations, debug):
    for i in range(iterations):
        print("gc:start:{}:{}:{}".format(id, i, now_ms()))
        t = make_tree(tree_depth)
        t = None  # drop the tree so it becomes garbage, possibly triggering GC
        print("gc:stop:{}:{}:{}".format(id, i, now_ms()))

def start_comp_threads(num_threads, comp_depth, iterations, comp_sleep, debug):
    threads = []
    for i in range(num_threads):
        if debug:
            print("starting computation thread #{}".format(i))
        t = threading.Thread(target=comp_func,
                             args=(comp_depth, i, iterations, comp_sleep, debug))
        t.start()
        threads.append(t)
    return threads

def comp_func(depth, id, iterations, comp_sleep, debug):
    for i in range(iterations):
        print("compute:start:{}:{}:{}".format(id, i, now_ms()))
        calc_fibonacci(depth)  # eg depth=37
        print("compute:stop:{}:{}:{}".format(id, i, now_ms()))
        time.sleep(comp_sleep)  # pause between iterations

if __name__ == '__main__':
    main()