
Cohen, P. R., Dalrymple, M., Moran, D. B., Pereira, F. C. N., Sullivan, J. W., Gargan, R. A., Schlossberg, J. L., & Tyler, S. W. (1989). Synergistic use of direct manipulation and natural language. Proceedings of Human Factors in Computing Systems (CHI ’89). New York: ACM Press, 227–34.
Cohen, P. R., Johnston, M., McGee, D. R., Oviatt, S. L., Pittman, J., Smith, I. A., et al. (1997).
QuickSet: Multimodal interaction for distributed applications. Paper presented at the 5th
ACM International Conference on Multimedia, Seattle.
Cohen, P. R., & McGee, D. R. (2004). Tangible multimodal interfaces for safety-critical applications. Communications of the Association for Computing Machinery 47(1):41–46.
Cohen, P. R., McGee, D., & Clow, J. (2000). The efficiency of multimodal interaction for
a map-based task. Paper presented at the 6th Applied Natural Language Processing
Conference, Seattle.
Cohen, P. R., & Oviatt, S. L. (1995). The role of voice input for human–machine communication. Proceedings of the National Academy of Sciences of the United States of America 92(22):9921–27.
Dalal, M., Feiner, S. K., McKeown, K. R., Pan, S., Zhou, M. X., Hollerer, T., et al. (1996). Negotiation for automated generation of temporal multimedia presentations. Paper presented at the Fourth Annual ACM International Conference on Multimedia, Boston.
Danninger, M., Flaherty, G., Bernardin, K., Ekenel, H., Kohler, T., Malkin, R., et al. (2005).
The connector: Facilitating context-aware communication. Paper presented at the 7th
International Conference on Multimodal Interfaces, Trento, Italy.
Demirdjian, D., Ko, T., & Darrell, T. (2003). Constraining human body tracking. Paper presented at the Ninth IEEE International Conference on Computer Vision, Nice.
Deng, L., Wang, K., Acero, A., Hon, H.-W., Droppo, J., Boulis, C., et al. (2002). Tap-to-talk in a specific field: Distributed speech processing in miPad’s multimodal user interface. IEEE Transactions on Computer Speech and Audio Processing 10(8):605–19.
Duncan, L., Brown, W., Esposito, C., Holmback, H., & Xue, P. (1999). Enhancing virtual maintenance environments with speech understanding. Technical Report TECHNET-9903, Boeing Mathematics and Computing Technology.
Dupont, S., & Luettin, J. (2000). Audio-visual speech modeling for continuous speech recognition. IEEE Transactions on Multimedia 2(3):141–51.
Ehlen, P., Purver, M., & Niekrasz, J. (2007). A meeting browser that learns. Paper presented at the AAAI 2007 Spring Symposium: Interaction Challenges for Artificial Assistants, Stanford, CA.
Ellis, C. A., & Barthelmess, P. (2003). The Neem dream. Paper presented at the 2nd Tapia
Conference on Diversity in Computing, Atlanta.
Epps, J., Oviatt, S. L., & Chen, F. (2004). Integration of speech and gesture inputs during multimodal interaction. Paper presented at the 2004 Australian International Conference on Computer–Human Interaction. Available at http://www.cse.unsw.edu.au/~jepps/ozchi04.pdf.
Falcon, V., Leonardi, C., Pianesi, F., Tomasini, D., & Zancanaro, M. (2005). Co-located support for small group meetings. Paper presented at the 2005 SIGCHI Conference on Human Factors in Computing Systems, Workshop: The Virtuality Continuum Revisited. Available at http://portal.acm.org/citation.cfm?id=1057123&coll=portal&dl=ACM&CFID=6362723&CFTOKEN=90890358.
Faure, C., & Julia, L. (1994). An agent-based architecture for a multimodal interface. Paper
presented at the 1994 AAAI Spring Symposium, Palo Alto, CA.