Hello,
I am using the code below to capture the runtime of model inference:
// Import the TFLite model. Note: use ITfParser::CreateNetworkFromBinaryFile for .pb files.
armnnTfLiteParser::ITfLiteParserPtr parser = armnnTfLiteParser::ITfLiteParser::Create();
armnn::INetworkPtr network = parser->CreateNetworkFromBinaryFile("model_latest.tflite");
// Find the binding points for the input and output nodes.
armnnTfLiteParser::BindingPointInfo inputBindingInfo = parser->GetNetworkInputBindingInfo(0, "conv2d_input");
armnnTfLiteParser::BindingPointInfo outputBindingInfo = parser->GetNetworkOutputBindingInfo(0, "Identity");
// Create the Arm NN runtime with default options.
armnn::IRuntime::CreationOptions options;
armnn::IRuntimePtr runtime = armnn::IRuntime::Create(options);
// Optimize the network for the CpuAcc backend and load it into the runtime.
armnn::IOptimizedNetworkPtr optNet = armnn::Optimize(*network, {armnn::Compute::CpuAcc}, runtime->GetDeviceSpec());
armnn::NetworkId networkIdentifier;
runtime->LoadNetwork(networkIdentifier, std::move(optNet));
// Get the network's profiler for the current thread and enable profiling.
std::shared_ptr<armnn::IProfiler> profiler = runtime->GetProfiler(networkIdentifier);
profiler->EnableProfiling(true);
// Run inference.
armnn::InputTensors inputTensor = MakeInputTensors(inputBindingInfo, &input[0]);
armnn::OutputTensors outputTensor = MakeOutputTensors(outputBindingInfo, &output[0]);
armnn::Status ret = runtime->EnqueueWorkload(networkIdentifier, inputTensor, outputTensor);
// Print output
profiler->Print(std::cout);
With this I am able to see the per-layer profiler results in JSON format.
Problem: when running the .tflite model on an Arm Cortex-A78 core with the CpuAcc backend, the measured runtime differs on each run of the same model. For one model it varies from 0.8 to 1.2 ms.
I need to know how the runtime is measured: using the system clock, or using Arm hardware registers?