DOI: 10.1145/1809939.1809955
research-article

Video stylization for digital ambient displays of home movies

Published: 07 June 2010

ABSTRACT

Falling hardware costs have prompted an explosion in casual video capture by domestic users. Yet, this video is infrequently accessed post-capture and often lies dormant on users' PCs. We present a system to breathe life into home video repositories, drawing upon artistic stylization to create a "Digital Ambient Display" that automatically selects, stylizes and transitions between videos in a semantically meaningful sequence. We present a novel algorithm based on multi-label graph cut for segmenting video into temporally coherent region maps. These maps are used to both stylize video into cartoons and paintings, and measure visual similarity between frames for smooth sequence transitions. We demonstrate coherent segmentation and stylization over a variety of home videos.
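The abstract states that segmentation is posed as a multi-label graph cut but gives no formulation. As a hedged sketch only, and not the paper's exact energy, temporally coherent region labeling of this kind is commonly posed as minimizing a pairwise Markov random field energy over pixel sites p with labels L_p; the data term D_p, smoothness term V_pq, neighborhood systems N_s (spatial, within a frame) and N_t (temporal, across frames), and weight \lambda below are illustrative notation introduced here, not taken from the paper:

    E(L) = \sum_{p} D_p(L_p) + \lambda \sum_{(p,q) \in N_s \cup N_t} V_{pq}(L_p, L_q)

Here D_p(L_p) scores how well pixel p fits region label L_p (for example, a negative log-likelihood under a per-region color model), and V_{pq} is a contrast-sensitive Potts penalty discouraging neighboring pixels, both within and across frames, from taking different labels; the temporal links are what encourage coherent region maps over time. Energies of this form can be approximately minimized with standard multi-label graph-cut moves such as alpha-expansion.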

Published in

NPAR '10: Proceedings of the 8th International Symposium on Non-Photorealistic Animation and Rendering
June 2010, 183 pages
ISBN: 9781450301251
DOI: 10.1145/1809939

Copyright © 2010 ACM. Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]

Publisher: Association for Computing Machinery, New York, NY, United States
