10. Topic A: Detecting Object Grasp/Release (1)
1) R. Yasuoka, A. Hashimoto et al., "Detecting Start and End Times of Object-Handlings on a Table by Fusion of Camera and Load Sensors," CEA2013.
14. Use of a Load Sensing System (5)
[Figure: four load cells at the table corners — F0 at (0,0), F1 at (1,0), F2 at (0,1), F3 at (1,1) — with a weight W at (x, y).]
W = F0 + F1 + F2 + F3
x = (F1 + F3) / W
y = (F2 + F3) / W
5) A. Schmidt et al., "Context acquisition based on load sensing," UbiComp '02.
• Uses changes in the load on the work table
• Single-object case:
– centroid of the load change = the object's position
– increase/decrease of the load → distinguishes "take" vs. "place" (see the sketch below)
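
A minimal sketch of this single-object computation in Python, assuming the unit-square sensor layout from the figure above; the function name and values are illustrative, not the paper's implementation:

def load_event(before, after):
    """Classify a single-object load change from four corner load cells.

    before/after: (F0, F1, F2, F3) readings at corners (0,0), (1,0),
    (0,1), (1,1) of a unit-square table.
    """
    d = [a - b for a, b in zip(after, before)]  # per-cell load change
    dW = sum(d)                                 # total weight change
    action = "place" if dW > 0 else "take"      # sign of the change
    x = (d[1] + d[3]) / dW                      # centroid of the change
    y = (d[2] + d[3]) / dW                      # = object position
    return action, (x, y)

# Placing a 2 N object at (0.5, 0.25):
print(load_event((0, 0, 0, 0), (0.75, 0.75, 0.25, 0.25)))
# -> ('place', (0.5, 0.25))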
15. Use of a Load Sensing System (5)
[Figure: same load-cell layout and equations as slide 14.]
• Uses changes in the load on the work table
• Multiple-object case:
– centroid of the load change ≠ the object's position
– increase/decrease of the load → depends on the combination of "take"/"place" actions (a worked example follows below)
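
A small worked example of this ambiguity, reusing the centroid formula from the sketch above (illustrative values): taking one object while placing another produces a centroid that matches neither object.

# Take 1 N at (0,0) while placing 3 N at (1,0):
d = [-1.0, 3.0, 0.0, 0.0]   # per-cell load changes (F0, F1, F2, F3)
dW = sum(d)                 # +2 N: in total it looks like one "place"
x = (d[1] + d[3]) / dW      # 1.5 -> a position off the table entirely
print(dW, x)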
20. Building the Load Sensing System (3)
[Figure: same load-cell layout and equations as slide 14.]
3) A. Schmidt et al., "Context acquisition based on load sensing," UbiComp '02.
Data logger: HBM QuantumX 440A
Load cell: HBM C9B
45. Results
Subject                           A      B      C      D      E
a) Task-related object contacts   99     101    72     98     77
b) Task-unrelated contacts        92     106    73     76     95
c) Correct predictions            128    162    108    135    121
1. Prediction accuracy            67.0%  78.3%  74.5%  77.6%  70.3%
2. Manual operations              2      2      9      1      2
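
As a consistency check (our reading of the table, not stated on the slide): the prediction-accuracy row matches c / (a + b), verified below.

a = [99, 101, 72, 98, 77]
b = [92, 106, 73, 76, 95]
c = [128, 162, 108, 135, 121]
print([f"{ci / (ai + bi):.1%}" for ai, bi, ci in zip(a, b, c)])
# -> ['67.0%', '78.3%', '74.5%', '77.6%', '70.3%']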
• A conventional recipe-presentation system would require at least 29 operations
• The number of operations decreased to about 1/3 for subject C and about 1/10 for the others
• The effect exceeded expectations given the prediction accuracy
• Some mistakes can be tolerated (depending on how the information is presented)
46. Research Outcomes Supported by This Grant
Theme: Detecting interactions between people and tabletop objects by integrating a camera and load sensors
A. Detecting object grasp/release
B. Object recognition
Publications:
– [1] Ryuta Yasuoka, Atsushi Hashimoto, Takuya Funatomi, and Michihiko Minoh. Detecting start and end times of object-handlings on a table by fusion of camera and load sensors. In Proceedings of the 5th International Workshop on Multimedia for Cooking & Eating Activities, pages 51–56. ACM, 2013.
– [2] 井上仁, 橋本敦史, 中村和晃, 舩冨拓哉, 山肩洋子, 上田真由美, and 美濃導彦. Use of images, vibration sounds during cutting, and load for food-ingredient recognition (in Japanese). IEICE Transactions on Information and Systems D, 97(9), 2014.
– [3] 安岡竜太, 橋本敦史, 舩冨卓哉, and 美濃導彦. Detection of handling start/end for tabletop objects by integration of a camera and load sensors (in Japanese). IEICE Technical Report MVE, 112(474):69–74, 2013.
Editor's Notes
・There are many systems that support food preparation.
・We target cooks who need to try an unfamiliar recipe: if you follow the recipe's workflow, you will get a very nice dinner.
We provide a recipe format, and a web-based UI that works in collaboration with HWML.
We propose a recipe navigation system driven by two layers:
- The step layer describes the structure of the workflow.
- Each step is divided into sub-steps in the finer layer; the sub-steps are totally ordered.
The recipe element has a materials element and a "directions" element.
The directions element consists of steps; each step has a "parents" attribute, which describes the graph structure of the recipe.
- The materials element consists of objects and their groups. This element has essentially no important difference from traditional formats.
A step element consists of sub-steps, which give concrete directions to cooks via hypertext, audio, and video.
Each sub-step has a trigger element, which links the sub-step to an event (see the sketch below).
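
A minimal sketch of what such a recipe document might look like, parsed with Python's xml.etree.ElementTree. The tag and attribute names ("recipe", "materials", "directions", "step", "parents", "trigger") follow the description above, but the exact schema is an assumption, not the published format.

import xml.etree.ElementTree as ET

# Hypothetical recipe document following the structure described above.
RECIPE_XML = """\
<recipe>
  <materials>
    <group name="vegetables">
      <object id="onion"/>
    </group>
  </materials>
  <directions>
    <!-- "parents" encodes the workflow graph: s2 depends on s1 -->
    <step id="s1" parents="">
      <substep id="s1a">
        <direction type="hypertext">Peel and dice the onion.</direction>
        <trigger event="access-to-object" object="onion"/>
      </substep>
    </step>
    <step id="s2" parents="s1">
      <substep id="s2a">
        <direction type="video">fry_onion.mp4</direction>
      </substep>
    </step>
  </directions>
</recipe>
"""

root = ET.fromstring(RECIPE_XML)
# Recover the workflow graph from the "parents" attributes.
for step in root.iter("step"):
    parents = [p for p in step.get("parents", "").split() if p]
    print(step.get("id"), "<-", parents or "no parents")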
This is a web-based user interface, so it works on, for example, any tablet device.
The contents of the sub-step are displayed here.
The graph representation is too complicated, so we display a sequentialized plan of cooking.
The user can control the system via "completion checkboxes" beside the steps and sub-steps.
Three modules:
The Viewer is the front end of the system, shown in the previous slide.
The Recognizer represents any kind of input system other than the touch display, and is extensible by any developer.
CLICK!
The Navigator is responsible for plan recommendation, which decides the order of steps shown in the Viewer's navigation menu area on the right side, and for deciding which information to display for the user's next action.
In our implementation, we developed a system that recognizes access-to-object, and an intention-forecasting algorithm based on it (a sketch follows below).
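
A minimal sketch of how the three modules might fit together; the class and method names, and the forecasting rule, are assumptions for illustration, not the actual implementation.

from dataclasses import dataclass
from typing import Protocol

@dataclass
class Event:
    kind: str     # e.g. "access-to-object"
    target: str   # id of the touched object

class Recognizer(Protocol):
    # Any input system other than the touch display; pluggable by developers.
    def poll(self) -> list[Event]: ...

class Navigator:
    # Plans the recommended step order and forecasts the user's intention.
    def __init__(self, steps: list[str]):
        self.steps = steps

    def recommend(self, events: list[Event]) -> list[str]:
        # Hypothetical forecasting rule: steps whose object was just
        # accessed move to the front of the recommendation list.
        touched = {e.target for e in events if e.kind == "access-to-object"}
        return sorted(self.steps, key=lambda s: s not in touched)

class Viewer:
    # Front end: renders the plan in the navigation menu area.
    def render(self, plan: list[str]) -> None:
        for s in plan:
            print("•", s)

Viewer().render(Navigator(["slice onion", "boil water"])
                .recommend([Event("access-to-object", "boil water")]))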