From cTuning.org
If you are interested in the current projects, would like to add a new project, or would like to help with the implementations, you are welcome to participate in the discussions below. You are also encouraged to send a summary email to the cTuning Discussions Mailing List (mail, web view/register) to keep the cTuning community informed about your feedback. You can also contact the cTuning steering committee if you have general questions. Finally, you can select the Wiki watch option to be notified of modifications to this page.
(simply self-register at this website to join our community and edit open Wiki pages)
Change and classify data directories
Description: Since more than one benchmark can use a given data set, classifying data sets by benchmark or group of benchmarks is not appropriate. The proposal is to restructure the data directories and group data sets by category, for example image, sound, etc. (possibly more fine-grained: jpeg, ppm, pgm, mp3, pcm, ...)
Who is interested?: Erven, Grigori, Yang
Who may have time to help?: Erven, Yang
How to proceed?:
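One possible starting point is a script that groups data files by category inferred from their file extensions. The sketch below is only an illustration: the category names and the extension-to-category mapping are assumptions to be agreed on by the community, not the actual cBench layout.

```python
import shutil
from pathlib import Path

# Hypothetical mapping from file extension to category directory;
# the real categories (image, sound, ...) would be decided by the community.
CATEGORIES = {
    ".jpeg": "image/jpeg", ".jpg": "image/jpeg",
    ".ppm": "image/ppm", ".pgm": "image/pgm",
    ".mp3": "sound/mp3", ".pcm": "sound/pcm",
}

def classify(src_dir: str, dst_dir: str) -> None:
    """Copy each data file into dst_dir/<category>/, grouped by extension."""
    for f in Path(src_dir).rglob("*"):
        if f.is_file():
            category = CATEGORIES.get(f.suffix.lower(), "other")
            target = Path(dst_dir) / category
            target.mkdir(parents=True, exist_ok=True)
            shutil.copy2(f, target / f.name)
```

Copying (rather than moving) the files would allow the old benchmark-based layout to coexist with the new category-based one during a transition period.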
Check output of benchmark
Description: The output of each run should systematically be checked against a known expected output, and the __run script should return a status value to the shell. Note that there might be a slight difficulty with floating-point benchmarks because of rounding effects or reordering of FP arithmetic; we might need a more "relaxed" comparison.
Who is interested?: Erven, Grigori, Yang
Who may have time to help?: Erven, Yang
How to proceed?:
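A "relaxed" comparison could, for instance, tokenize both outputs and compare numeric tokens within a tolerance. This is only a sketch: the tolerance values and the integration with the __run script are assumptions, not a decided design.

```python
import math

def outputs_match(expected: str, actual: str, rel_tol: float = 1e-5) -> bool:
    """Compare two benchmark outputs token by token.

    Non-numeric tokens must match exactly; numeric tokens are compared
    with a relative tolerance (rel_tol is a placeholder value) to absorb
    FP rounding and reordering effects.
    """
    exp_tokens, act_tokens = expected.split(), actual.split()
    if len(exp_tokens) != len(act_tokens):
        return False
    for e, a in zip(exp_tokens, act_tokens):
        try:
            fe, fa = float(e), float(a)
        except ValueError:
            if e != a:          # non-numeric token: require exact match
                return False
            continue
        if not math.isclose(fe, fa, rel_tol=rel_tol, abs_tol=1e-12):
            return False
    return True
```

The boolean result maps naturally onto the status value the __run script should return to the shell (0 on match, non-zero otherwise).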
Add execution time and profile info for cBench/MiDataSets for a variety of architectures to Collective Optimization Database
Description: I think it would be useful to the community to provide execution times and profiling information (for different architectures, compilers and optimizations) in our Collective Optimization Database ...
Who is interested?: Grigori, Abdul
Who may have time to help?: Grigori, Abdul
How to proceed?: We can add stats for a few architectures/compilers, but we would need some help from the community to add more stats for different architectures ...
Add cBench/MiDataSets program/dataset description
Description: I think it would be useful to the community to add program description and some analysis of the programs and datasets including info about hot functions ...
Who is interested?: Grigori, <please, add yourself>
Who may have time to help?: <please, add yourself>
How to proceed?: start discussion
Provide current cBench/MiDataSets description and basic analysis
Description: I think it would be useful to the community to add program description and some analysis of the programs and datasets including profiling and hot functions ...
Who is interested?: Grigori, <please, add yourself>
Who may have time to help?: <please, add yourself>
How to proceed?: start discussion
Profile/modify current programs to remove excessive IO when using loop wrapper
Description: I found that in some programs there is considerable IO overhead when using the loop wrapper, since it is either placed in the wrong place, or it was not actually possible to put it around the most time-consuming routines (due to streaming, for example).
Who is interested?: Grigori, <please, add yourself>
Who may have time to help?: <please, add yourself>
How to proceed?: start discussion
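The problem can be illustrated schematically: if IO sits inside the repetition loop, file reads inflate the measurement, so where the program structure allows it the IO should be hoisted out. The function and parameter names below are purely illustrative, not cBench code.

```python
def run_with_io_in_loop(repeat, load_input, kernel):
    """IO inside the loop wrapper: input loading dominates the measurement."""
    for _ in range(repeat):
        data = load_input()   # reloads the same input on every iteration
        kernel(data)

def run_with_io_hoisted(repeat, load_input, kernel):
    """IO hoisted out of the loop wrapper: only the kernel is repeated."""
    data = load_input()       # load once, outside the timed region
    for _ in range(repeat):
        kernel(data)
```

Profiling each program with the loop wrapper enabled should reveal which ones still pay the per-iteration IO cost and need restructuring.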
Add more programs/datasets
Description: It would be useful to add to cBench more open-source programs that are important to the IT community, e.g. video codecs, some numerical applications (Fortran, MATLAB, SciLab), etc.
Who is interested?: Grigori, Yang <please, add yourself>
Who may have time to help?: Yang <please, add yourself>
How to proceed?: start discussion
Parallelize current programs or add new open-source parallel programs
Description: There seems to be a lack of parallel benchmarks with multiple datasets. Such benchmarks would be very useful for research on adaptive parallelization and scheduling. For example, I would like to extend the work on predictive runtime code scheduling for heterogeneous architectures using statistical collective optimization and machine learning, or to provide an abstraction software layer able to adapt to any underlying heterogeneous multi-core at run-time, but I can't find parallel benchmarks with a large number of datasets. One solution is to parallelize the current cBench/MiDataSets. I am sure that would be very useful to our community.
Who is interested?: Grigori, <please, add yourself>
Who may have time to help?: <please, add yourself>
How to proceed?: start discussion
Characterize datasets
Description: Extend [1], [2] to characterize datasets in order to predict optimizations or scheduling
Who is interested?: Grigori, <please, add yourself>
Who may have time to help?: <please, add yourself>
How to proceed?: on-going
Extract codelets from benchmarks
Description: Extract small kernels with working data from programs to study optimizations and inter-procedural interactions
Who is interested?: Grigori, <please, add yourself>
Who may have time to help?: <please, add yourself>
How to proceed?: brainstorming stage
Dummy (add new project)
Description:
Who is interested?: <please, add yourself>
Who may have time to help?: <please, add yourself>
How to proceed?: start discussion