Wednesday, 7 August 2013

How to automate and/or simplify NSight benchmarking

I am benchmarking some algorithms from different libraries and comparing
the kernel timing results. The results come from running a series of
tests over varying array sizes.
With library A, I use NUnit, so when I profile the program with
NSight, the kernel timing results from each test iteration come back as a
list.
With library B, which is a C++ library, the only way we know how to get
the kernel timing results from each test iteration is to run each one
manually. So instead of running one test across 10 different array sizes
and getting 10 kernel timings back, we set the parameters for array
size 1, build/compile/run, collect the NSight results for test 1, then
set the parameters for array size 2, build/compile/run, and so on.
Is there a way to automate this in the C++ code, so that, as with
library A, I can build and run once and get back a list of the 10 kernel
timing results?
I thought about building all 10 instances of the app first and then
running them all via a script, but I figure there is an easier way to
do it.
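A minimal sketch of that scripted approach, assuming the C++ app is changed once to take the array size as a command-line argument instead of a compile-time parameter (`./bench` is a hypothetical binary name, and the size list is made up for illustration):

```shell
#!/bin/sh
# Sketch: drive one benchmark binary over all array sizes instead of
# rebuilding per size. Assumes ./bench reads the array size from argv.
n=0
for size in 1024 2048 4096 8192 16384 32768 65536 131072 262144 524288; do
    echo "profiling array size $size"
    # nvprof (the command-line profiler that ships with the CUDA toolkit)
    # could wrap each run here, writing one log per size; commented out
    # because it requires a CUDA setup:
    # nvprof --csv --log-file "timings_${size}.csv" ./bench "$size"
    n=$((n + 1))
done
echo "ran $n configurations"
```

This keeps a single build and moves the variation into the run step, so one invocation of the script covers all 10 configurations.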
And a second question: is there a way to get NSight to save its output
automatically in some form? Even with the list of 10 different kernel
launch stats, I would rather not export an XML file through the GUI. I'd
like to automate the output on both ends so I can compare and analyze the
results from one place in library A.
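If each profiled run writes its own CSV log (the `timings_<size>.csv` naming is a hypothetical convention, and the file contents below are stand-in data, not real profiler output), the per-run logs could be merged into one file for analysis like this:

```shell
#!/bin/sh
# Sketch: combine per-run CSV logs into a single file, keeping the
# header row only once. The two files created here are stand-ins for
# whatever the profiler actually writes.
set -e
mkdir -p demo && cd demo
printf 'kernel,time_ms\nsaxpy,0.12\n' > timings_1024.csv
printf 'kernel,time_ms\nsaxpy,0.48\n' > timings_4096.csv

head -n 1 timings_1024.csv > all_timings.csv    # header once
for f in timings_*.csv; do
    tail -n +2 "$f" >> all_timings.csv          # data rows from each run
done
cat all_timings.csv
```

The merged `all_timings.csv` could then be loaded alongside the library A results for comparison in one spot.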
