>* Meeting Wed Apr 4 11:28:24 GMT 2018 with Thor
The meeting resulted in a plan to modularize and test specific parts of the system. We can use various data from the demo dataset, but we require specific runs to get specific values.
Run demo 3 three times with the current setup. Information to retrieve from the system: we have defined when we change the speaker, so we can look at the events and see who the system thinks is speaking (and whether it changes).
Definitions of the other information (the columns of the table below):
* Speed: time difference between event start and the timestamp of the “success” message
* Effort: N of accumulated CPU secs over Speed
* Success rate: correct “success” msgs / tot “success” msgs
* Error rate: incorrect “success” msgs / tot “success” msgs = 1 - success rate
* Wasted effort: %CPU with incorrect conclusions (false positives), i.e. the ratio of correct to incorrect
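As a quick sanity check, the definitions reduce to a few one-liners; a minimal sketch, where the function signature and all input values are made up for illustration and do not come from any real run:

```python
# Hypothetical sketch of the KPI definitions above; no real run data.
def kpis(correct_msgs, incorrect_msgs, event_start, success_ts, cpu_secs):
    total = correct_msgs + incorrect_msgs
    speed = success_ts - event_start              # secs from event start to "success" msg
    effort = cpu_secs / speed if speed else None  # accumulated CPU secs over Speed
    success_rate = correct_msgs / total
    error_rate = incorrect_msgs / total           # = 1 - success_rate
    return speed, effort, success_rate, error_rate

# e.g. 9 correct and 1 incorrect "success" msg, 2.5 s to the msg, 1.2 CPU s
print(kpis(9, 1, event_start=0.0, success_ts=2.5, cpu_secs=1.2))
```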
Event | Speed | Effort | Success rate | Error rate | Wasted effort
---|---|---|---|---|---
Human Detected | | | | |
Person Identified | | | | |
Person Identified : Collab | | | | |
Human Leaves | | | | |
Search : Collab | | | | |
Human Leaves : Collab | | | | |
Info Extraction : Collab | | | | |
Role Negotiation | | | | |
Leg Detection | | | | |
Speed definitions per event:
* Human Detected: interval between the timestamp of the “human detected” posting and the timestamp marking when the human enters the area where the robots can detect humans
* Person Identified: interval between the timestamp of the “human identified” msg and the timestamp of the “human detected” msg
* Person Identified : Collab: interval between the timestamp when the person’s identity is stored in the CCMCatalog and the timestamp of the “human detected” msg
* Human Leaves: measured from the time the human leaves the scene (ground truth) until either robot posts the “human [leaves]” msg
* Search : Collab: measured from the time a robot decides it is time for it to move until the robot has successfully negotiated where to go via the CCMCatalog
* Human Leaves : Collab: interval between when the human leaves the scene (ground truth) and when the event is logged in the shared data structure (CCMCatalog)
* From TDM inception to CCMMaster Task ID accepted, in Demo3
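All of these intervals are differences between a pair of logged timestamps, so one small helper covers them. A sketch, assuming a hypothetical event-name-to-timestamp map; the event names and the time format are illustrative, the real msgs live in the system logs and the CCMCatalog:

```python
# Hedged sketch: intervals between pairs of logged events (names hypothetical).
from datetime import datetime

def interval(events, start_event, end_event):
    """Seconds between two logged events; None if either is missing."""
    if start_event not in events or end_event not in events:
        return None
    return (events[end_event] - events[start_event]).total_seconds()

ts = lambda s: datetime.strptime(s, "%H:%M:%S.%f")
events = {  # hypothetical timestamps for a single run
    "human enters area (ground truth)": ts("11:28:24.000"),
    "human detected": ts("11:28:24.350"),
    "human identified": ts("11:28:25.100"),
}
print(interval(events, "human enters area (ground truth)", "human detected"))
print(interval(events, "human detected", "human identified"))
```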
– Thu Apr 5 09:22:26 GMT 2018 Plan of attack - David
Apparently the .xml file used during recording wasn’t set up correctly. This means we have to redo a run for demo 2, as we require the data to finish the KPI tables for the reports. NOTE: this could have been prevented by using a solid method of implementation for the .xml files, e.g. writing a script and maintaining a db of the values to be used (a sketch follows below). The write-on-the-fly method has proven to be more than a little inefficient and unprofessional.
- Pierre finally got through to Thor on this subject
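A minimal sketch of that suggestion, generating the parameter .xml from a maintained table of values; the file names and CSV columns (name, type, value) are assumptions for illustration, not our actual setup:

```python
# Sketch: build the recording .xml from a maintained table instead of by hand.
import csv
import xml.etree.ElementTree as ET

def build_xml(csv_path, out_path):
    root = ET.Element("parameters")
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):  # assumed columns: name, type, value
            ET.SubElement(root, "parameter",
                          name=row["name"], type=row["type"], value=row["value"])
    ET.ElementTree(root).write(out_path, xml_declaration=True, encoding="utf-8")

build_xml("recording_values.csv", "demo3_recording.xml")
```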
Once I have re-configured the TDM_psyclone.py function, I will test it to ensure that the system works. Hopefully it will work as requested.
Since Kris requested a new demo, and it looks like [yet to be confirmed] the .xml file wasn’t set up to record all the data, we need to do another video recording run of demo 3. Thu Apr 5 13:46:30 GMT 2018
What is the actual worth of our current dataset in Dropbox? I.e., should we scrap all of it, since we only have a few minor information points anyway? [Assuming that the .xml file wasn’t set up to record everything]
The datasets, or at least the .csv files, are worthless. We might be able to retrieve minor data, but it is probably not worth the work.
KPI | ID | Trial Run
---|---|---
Human Detected | 1) | A
Person Identified | 2) | A
Human Leaves | 3) | A
Role Negotiation | 4) | A
Emotional Reading | 7) | B
Task Negotiation | 5) | C
Turn Taking | 6) | C
Dialog Understanding | 8) | D
Collaborative KPI | ID | Trial Run
---|---|---
Person Identified | 9) | A
Human Leaves | 10) | A
Information Extraction | 11) | C
Trial Runs:
Both robots are active but static. The static behaviour is performed by <parameter name="simulatemoving" type="String" value="%SimulateSystem%" />, with %SimulateSystem% in system.inc and system2.inc set to Yes. Robot Slave has its camera covered. We mark out the TDM module and run the system using DEMO3 specs.
David sits at the computer; Pierre walks in front of the camera. David presses <Human enters> accordingly, and presses <Human enters> again when Pierre exits the screen. [<Human leaves> only appears on the screen after the person has been identified, so if there is no identification we wouldn’t be able to mark human leaves.] Pierre enters x 10.
Repeat for David x 10
* Human detection is measured from <Human enters> (marked by us) until <FaceFound>
* <Human enters> - <HumanAppeardSelf>
* <RoleAssigned> - <NowPrimaryRole> or <NowSecondaryRole> (depending on which robot gets assigned)
* <Human enters> - <HumanAppeardSelf>
* <Human enters (second press)> - <HumanLeaves>
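Each of these pairs a manually marked <Human enters> press with the next system event of interest. A sketch of that pairing logic; the (seconds, event-name) log format and the example values are assumptions for illustration:

```python
# Sketch: match each manual <Human enters> mark with the next target event.
def detection_latencies(log, mark="Human enters", target="FaceFound"):
    latencies, pending = [], None
    for t, event in log:
        if event == mark:
            pending = t                    # remember the manual mark
        elif event == target and pending is not None:
            latencies.append(t - pending)  # <Human enters> until <FaceFound>
            pending = None
    return latencies

log = [(0.0, "Human enters"), (0.4, "FaceFound"),
       (12.0, "Human enters"), (12.5, "FaceFound")]
print(detection_latencies(log))            # [0.4, 0.5]
```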
The system needs to recognize the person so that the function actually works.
Pierre steps in front of the camera and stays sad for 10 seconds. David steps in front of the camera and stays sad for 10 seconds.
Pierre steps in front of the camera and stays happy for 10 seconds. David steps in front of the camera and stays happy for 10 seconds.
Record a simple panel navigation discussion. The Communicator is static; the Controller is in motion. All modules, including TDM, are active. We need to ensure that the system can still perform actions.
Ask the panel to push a button. Give the wrong PIN and repeat the process x 10 times.
From <RoleAssigned> to either <NowDefaultRole> || <NowPrimaryRole> || <NowSecondaryRole> (see the sketch below)
From #TDM : Created Object to [CCMMaster] Task ID No accepted
Count times, in current demo videos, that instructions lead to actions.
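For the negotiation measurements above, the interval ends at whichever of several alternative end events arrives first (<NowDefaultRole> || <NowPrimaryRole> || <NowSecondaryRole>). A sketch under the same hypothetical (time, event) log format as before:

```python
# Sketch: interval from a start event to the first of several end events.
def until_first_of(log, start, end_events):
    t0 = None
    for t, event in log:
        if event == start:
            t0 = t  # (re)start the clock at the latest start event
        elif t0 is not None and event in end_events:
            return t - t0
    return None  # negotiation never concluded in this log

log = [(3.0, "RoleAssigned"), (3.5, "NowSecondaryRole")]
print(until_first_of(log, "RoleAssigned",
                     {"NowDefaultRole", "NowPrimaryRole", "NowSecondaryRole"}))  # 0.5
```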