Are the default metrics actually useful?
We've got a bunch of automated data collection that isn't part of the existing MVP metrics, and the metrics that are provided only become useful with manual button pushing.
Since we can already calculate haystack size and learning curve metrics in relation to the work, could we incorporate some of these into the defaults to provide interesting feedback as a starting point? A rough sketch of how a few of them could be computed follows the list below.
• WTF Density: The frequency of WTF events relative to the total session duration.
• Needle Density: The number of YAY events relative to the total session duration.
• Haystack Size: The magnitude of change built up prior to this troubleshooting session.
• Change Size: The magnitude of changes made during this troubleshooting session.
• Journey Cycle Time: The average/max/stddev of the durations of all troubleshooting journeys.
• Experiment Cycle Time: The average/max/stddev of time spent between execution events.
• Experiment Frequency: The number of times the code had to be executed before a solution was found.
• File Scope: The number of distinct contexts (e.g. filename, browser url) involved in troubleshooting the problem.
• Execution Scope: The number of distinct execution contexts (e.g. Unit test A, Unit test B).
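To make the idea concrete, here's a minimal sketch of how a few of these defaults could be rolled up from a session's raw event stream. None of this is the project's actual API: the `Event` type, the kind strings (`"WTF"`, `"YAY"`, `"EXECUTE"`, `"EDIT"`), and the function names are all hypothetical placeholders for whatever the automated collection actually records.

```python
# Hypothetical sketch only: assumes a troubleshooting session is available as a
# flat, time-ordered list of events, each tagged with a kind and an optional
# context (filename, browser URL, test name, ...).

from dataclasses import dataclass
from datetime import datetime
from statistics import mean, pstdev


@dataclass
class Event:
    timestamp: datetime
    kind: str              # "WTF", "YAY", "EXECUTE", "EDIT", ...
    context: str = ""      # e.g. filename, browser URL, or test name


def session_duration_minutes(events: list[Event]) -> float:
    """Total session duration, measured from the first event to the last."""
    return (events[-1].timestamp - events[0].timestamp).total_seconds() / 60


def density(events: list[Event], kind: str) -> float:
    """Events of a given kind per minute of session time (WTF / Needle density)."""
    duration = session_duration_minutes(events)
    count = sum(1 for e in events if e.kind == kind)
    return count / duration if duration else 0.0


def experiment_cycle_times(events: list[Event]) -> list[float]:
    """Minutes between consecutive execution events (experiment cycle time)."""
    runs = [e.timestamp for e in events if e.kind == "EXECUTE"]
    return [(b - a).total_seconds() / 60 for a, b in zip(runs, runs[1:])]


def default_metrics(events: list[Event]) -> dict:
    """Roll up a handful of the proposed default metrics for one session."""
    cycles = experiment_cycle_times(events)
    return {
        "wtf_density_per_min": density(events, "WTF"),
        "needle_density_per_min": density(events, "YAY"),
        "experiment_frequency": sum(1 for e in events if e.kind == "EXECUTE"),
        "experiment_cycle_avg_min": mean(cycles) if cycles else 0.0,
        "experiment_cycle_max_min": max(cycles) if cycles else 0.0,
        "experiment_cycle_stddev_min": pstdev(cycles) if len(cycles) > 1 else 0.0,
        "file_scope": len({e.context for e in events if e.kind == "EDIT" and e.context}),
        "execution_scope": len({e.context for e in events if e.kind == "EXECUTE" and e.context}),
    }
```

Haystack Size and Change Size would need diff stats from version control rather than the event stream, so they're left out of the sketch; the rest falls out of data that's already being collected automatically.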