ABSTRACT
Molecular biologists seek to explain why similar proteins bind different molecular partners. Visual examination and comparison of binding cavities in protein structures can reveal information about the molecular partners a protein binds: similarities point to regions that accommodate similar molecular fragments, while differences in binding preferences can arise from regions where binding sites differ. Comparing the binding cavities of multiple proteins can reveal further information about the specificity of each protein. But visual examination of protein structure is a demanding cognitive task that requires persistence and quantitative precision. Software supports these efforts, but software for analyzing structure is difficult to use for investigators without computational backgrounds. By enabling non-computational users to make better use of analytical software, we hope to support progress in structural biology. Here we present LeapRenderer, a three-dimensional gesture-driven interface for visualizing protein surface models, controlled primarily by the Leap Motion Controller. LeapRenderer serves exploration and classification functions: it allows researchers to explore protein structures by rotating and scaling 3D renderings, and it helps researchers categorize similar proteins into groups through a simple interface for comparing and sorting protein surfaces. These capabilities support the discovery and classification of protein binding sites.
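The abstract describes rotating and scaling protein renderings via hand gestures. As a minimal illustrative sketch (not the paper's actual implementation, and deliberately independent of the real Leap Motion SDK), the mapping from tracked hand data to model transforms might look like the following, where palm displacement drives a trackball-style rotation and the thumb-index pinch distance drives zoom; all function names and the `sensitivity` parameter are assumptions for illustration:

```python
def rotation_from_palm_motion(prev, curr, sensitivity=0.01):
    """Map palm displacement (x, y) to yaw/pitch angles in radians.

    Horizontal hand motion rotates the model about its vertical axis,
    vertical motion about its horizontal axis (a common trackball mapping).
    `prev` and `curr` are (x, y, z) palm positions from successive frames.
    """
    dx = curr[0] - prev[0]
    dy = curr[1] - prev[1]
    yaw = dx * sensitivity      # rotate left/right
    pitch = -dy * sensitivity   # rotate up/down (sensor y grows upward)
    return yaw, pitch


def scale_from_pinch(prev_dist, curr_dist, scale=1.0,
                     min_scale=0.1, max_scale=10.0):
    """Map the change in thumb-index distance to a multiplicative zoom.

    Spreading the fingers apart enlarges the model; pinching shrinks it.
    The result is clamped so the model stays at a usable size.
    """
    if prev_dist <= 0:
        return scale  # no valid previous measurement; keep current scale
    factor = curr_dist / prev_dist
    return max(min_scale, min(max_scale, scale * factor))
```

In a real renderer these values would be applied per frame to the model-view matrix; the clamping in `scale_from_pinch` is one way to keep noisy tracking data (a known issue with optical hand trackers) from flinging the model off screen.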
A gesture-based interface for the exploration and classification of protein binding cavities