ABSTRACT
We propose a 2.5D video editing system called DDMixer2.5D. 2.5D video contains not only color channels but also a depth channel, which can be recorded easily using recently available depth sensors such as the Microsoft Kinect. Our system employs this depth channel to let a user quickly and easily edit video objects with simple drag-and-drop gestures. For example, a user can copy a video object of a dancing figure from one video to another simply by dragging and dropping it with a finger on the touch screen of a mobile phone. In addition, the user can drag the object to adjust its 3D position in the new video so that contact between foot and floor is preserved, and the size of the body is automatically adjusted according to the depth. DDMixer2.5D also provides other functions required for practical use, including object removal, 3D camera path editing, and anaglyph 3D video creation, as well as a timeline interface.
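The depth-aware size adjustment described above can be sketched with a simple pinhole-camera assumption (this is an illustration, not the authors' implementation): an object's apparent on-screen size is inversely proportional to its depth, so placing an object at a new depth rescales it by the ratio of source to destination depth. The function name and parameters below are hypothetical.

```python
# Minimal sketch of depth-based rescaling under a pinhole camera model
# (an assumption for illustration; not DDMixer2.5D's actual code).

def rescale_for_depth(width_px, height_px, z_src, z_dst):
    """Return the new on-screen size when an object captured at depth
    z_src is dropped at depth z_dst in the target video."""
    if z_src <= 0 or z_dst <= 0:
        raise ValueError("depths must be positive")
    scale = z_src / z_dst  # smaller depth (closer) -> larger on screen
    return width_px * scale, height_px * scale

# Example: a 100x200 px figure captured at 2 m, dropped at 4 m,
# appears half as large in each dimension.
print(rescale_for_depth(100, 200, 2.0, 4.0))  # -> (50.0, 100.0)
```

The same ratio can drive the foot-floor contact constraint: once the drop point's depth is read from the target video's depth channel, the object is both translated and rescaled consistently.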