So after attending class last Wednesday (Friday was Good Friday) and learning how to create dummy files of a specific size for consistent benchmarking, I created a “zero” file, originally 100M in size. That did not give a run time long enough for my instructor’s preference, so I made the file 1000M with the command:
dd if=/dev/zero of=zero bs=1 count=0 seek=1000M
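One thing worth noting about this dd invocation: with count=0 it copies nothing and only seeks past the end of the file, so it creates a sparse file whose apparent size is 1000M while using essentially no disk blocks. A quick sketch of checking this (assuming GNU coreutils):

```shell
# count=0 with seek=1000M creates a sparse file: the apparent size is
# 1000M (1,048,576,000 bytes) but no data blocks are actually written
dd if=/dev/zero of=zero bs=1 count=0 seek=1000M

ls -l zero   # apparent size: 1048576000 bytes
du -h zero   # actual disk usage: ~0 (sparse)
```

Reads of the unwritten region still return zero bytes, so the file behaves like a gigabyte of zeros for compression purposes.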
I got the following results after compressing with bzip2, and then decompressing with bzip2 -d:
And now for the decompression:
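The numbers above came from screenshots, but the commands themselves were along these lines (a sketch, timing each run with the shell's time builtin):

```shell
# Compress the dummy file and time it; bzip2 replaces zero with zero.bz2
time bzip2 zero

# Decompress and time it; -d restores zero from zero.bz2
time bzip2 -d zero.bz2
```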
As we can see, decompression takes significantly less time to run, probably because the compressed file is much smaller. But all in all we are making some nice progress. I was also finally able to get gprof to behave and give me the display output.
While this does help show which functions call which, how many times each function runs, and so on, the runtime percentage shows 100% for every function, which is really puzzling. I tried to run the gprof command:
gprof bzip2 | less | gprof2dot | dot -Tps | display
(does not pass the file that was compressed, and the graph displays properly)

gprof bzip2 zero | less | gprof2dot | dot -Tps | display
(does pass the file that was compressed, but consistently throws: gprof: out of memory allocating 17179869144 bytes after a total of 688128 bytes, followed by error: unexpected end of file)
This is truly puzzling. I will ask classmates why this may be, and may have to keep fighting with gprof to get it working correctly. Worst case scenario, I may have to find a workaround or simply fall back on the time command, as that seems to give me an actual measurement.
So I found out why everything is showing 100%. At the top of the flat profile it says: “Each sample counts as 0.01 seconds. No time accumulated.” Apparently this is because Linux uses a libc compatibility routine to fake the profil() system call for gprof; this needs the ITIMER_PROF interval timer but should generally work. The same Stack Overflow post that explained this issue also recommends OProfile, and I saw Valgrind recommended as well. So I may switch to one of those two for a less fussy, more detail-oriented profiler to ensure I can properly benchmark all changes in the future.
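If I do go the Valgrind route, the usual entry point for profiling is its callgrind tool, which needs no -pg rebuild since it simulates the program's instruction stream. A sketch (callgrind_annotate ships with Valgrind):

```shell
# Count instructions per function; writes callgrind.out.<pid>
# in the current directory (much slower than a native run)
valgrind --tool=callgrind ./bzip2 -k zero

# Summarize the hottest functions from the output file
callgrind_annotate callgrind.out.*
```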