
Computational species "bw filter simplified less" (CID=45741e3fbcf4024b:1db78910464c9d05)

Notes

This computational species (kernel) is a threshold filter (CID=45741e3fbcf4024b:1db78910464c9d05). It is used in image processing and as a neuron activation function in artificial neural networks.
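For illustration, a minimal sketch of such a threshold filter is shown below, assuming an 8-bit grayscale buffer and a hypothetical threshold parameter; the actual kernel in the shared repository may differ in data types and structure.

  #include <stdint.h>
  #include <stddef.h>

  /* Minimal black/white threshold filter sketch: every element (pixel or
     neuron input) at or above the threshold maps to 255, everything else
     to 0. Buffer layout and threshold value are illustrative assumptions. */
  void bw_filter(const uint8_t *src, uint8_t *dst, size_t n, uint8_t threshold)
  {
      for (size_t i = 0; i < n; i++)
          dst[i] = (src[i] >= threshold) ? 255 : 0;
  }

Repeating such a loop R times over an N-element buffer gives totals of the order listed under "Total number of computations" below.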

Some cost-aware experiments (execution time, size, energy, compilation time) were performed by Grigori Fursin using the Collective Mind framework and artifacts from the public repository so that they can be reproduced. They support our collaborative research on continuous performance tracking, code optimization and compiler benchmarking (regression detection). If you find any mistakes or would like to extend this page, please help us!

Used artifacts

Datasets

Systems

  • S1 = Dell Laptop Latitude E6320, Processor=P1, Memory=8GB, Storage=256GB (SSD), Max power consumption=52W, Cost (year of purchase 2011) ~1200 euros (CID=cb7e6b406491a11c:0d84339816de0271)
  • S2 = Samsung Mobile Galaxy Duos GT-S6312, Processor=P2, Memory=0.8GB, Storage=4GB, Battery=1300 mAh / 3.9V / up to 250 hours, Max power consumption ~5W, Cost (year of purchase 2013) ~200 euros (CID=cb7e6b406491a11c:a9740acbe06bcd1e)
  • S3 = Polaroid Tablet Executive 9" MID0927, Processor=P3, Memory=1GB, Storage=16GB, Battery=3500 mAh / 3.9V / up to 80 hours, Max power consumption ~13W, Cost (year of purchase 2014) ~100 euros (CID=cb7e6b406491a11c:3419444faf22f3d0)
  • S4 = Semiconductor neural network, PSpice simulation (year of development = 1993-1997)

Processors

Processor mode

  • W1 = 32 bit
  • W2 = 64 bit

OSs

Compilers

  • X12 = NVidia CUDA Toolkit 5.0, number of optimization flags available=TBD, release date=2012 (CID=0247b19de472d7d0:89e947f8430eaa37, CID=cff49b38f4c2395d:48d5baa4569f59a8)
  • X13 = Intel Composer XE 2011, number of optimization flags available=TBD, release date=2011, cost ~800 euros (CID=0247b19de472d7d0:e985f0596b1b1d9e, CID=cff49b38f4c2395d:42eab7eefa890ddc)
  • X14 = Microsoft Visual Studio 2013, number of optimization flags available=TBD, release date=2013, cost = has a free minimal version (CID=cff49b38f4c2395d:5e35f4112bf996c5)

Compiler optimization level

  • Y1 = Performance (usually -O3)
  • Y2 = Size (usually -Os)
  • Y3 = -O3 -fmodulo-sched -funroll-all-loops
  • Y4 = -O3 -funroll-all-loops
  • Y5 = -O3 -fprefetch-loop-arrays
  • Y6 = -O3 -fno-if-conversion
  • Y7 = Auto-tuning with more than 6 flags (-fif-conversion)
  • Y8 = Auto-tuning with more than 6 flags (-fno-if-conversion)
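As a rough illustration of how flag combinations such as Y7/Y8 might be explored, the sketch below randomly enables flags from a small pool, rebuilds the kernel and measures wall-clock time. The file name filter.c, the flag pool and the direct use of GCC via system() are assumptions made for illustration; the actual experiments were driven by the Collective Mind framework.

  /* Hedged sketch of random compiler-flag auto-tuning (in the spirit of Y7/Y8). */
  #define _POSIX_C_SOURCE 199309L
  #include <stdio.h>
  #include <stdlib.h>
  #include <string.h>
  #include <time.h>

  int main(void)
  {
      /* Illustrative flag pool; the real experiments explored more flags. */
      const char *pool[] = {
          "-funroll-all-loops", "-fmodulo-sched", "-fprefetch-loop-arrays",
          "-fno-if-conversion", "-ftree-vectorize", "-fomit-frame-pointer"
      };
      const int n = (int)(sizeof(pool) / sizeof(pool[0]));
      srand((unsigned)time(NULL));

      for (int iter = 0; iter < 20; iter++) {
          char cmd[1024];
          strcpy(cmd, "gcc -O3");
          for (int i = 0; i < n; i++) {
              if (rand() & 1) {               /* enable each flag with probability 1/2 */
                  strcat(cmd, " ");
                  strcat(cmd, pool[i]);
              }
          }
          strcat(cmd, " -o filter filter.c"); /* hypothetical kernel source */
          printf("[%2d] %s\n", iter, cmd);
          if (system(cmd) != 0) continue;     /* skip combinations that fail to build */

          struct timespec t0, t1;
          clock_gettime(CLOCK_MONOTONIC, &t0);
          if (system("./filter") != 0) continue;
          clock_gettime(CLOCK_MONOTONIC, &t1);
          double s = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
          printf("     wall-clock time: %.3f s\n", s);
      }
      return 0;
  }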

Number of run-time code repetitions (for example, processing steps in neural networks)

  • R1 = 4000
  • R2 = 1000
  • R3 = 400

Total number of computations (processed neurons or pixels)

  • T1 ~ 9.6E9
  • T2 ~ 2.4E9
  • T3 ~ 1.0E9
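These totals appear consistent with T ~ R × N, assuming a dataset of roughly N ~ 2.4E6 pixels or neurons per repetition (an inference from the numbers above, not a figure from the original experiments): 4000 × 2.4E6 ~ 9.6E9, 1000 × 2.4E6 ~ 2.4E9, 400 × 2.4E6 ~ 1.0E9.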

Costs

  • C1 = Execution time
  • C2 = Energy
  • C3 = Code size
  • C4 = Compilation time
  • C5 = System size
  • C6 = Hardware price
  • C7 = Software price
  • C8 = (Auto-)tuning price
  • C9 = Development time
  • C10 = Validation and testing time

Evolving advice (combination of decision trees and models)

TBA


Notes

Energy: 1 Wh = 3600 joules

Battery capacity: Wh = mAh * V / 1000 = 1300 * 3.9 / 1000 ~ 5 Wh (~ 18,000 joules)


(C) 2011-2014 cTuning foundation