Proceedings of DETC'00 ASME 2000 Design Engineering Technical Conferences and Computers and Information in Engineering Conference Baltimore, Maryland, September 10-13, 2000
GEOMETRIC MODELING AND COLLABORATIVE DESIGN IN A MULTI-MODAL MULTI-SENSORY VIRTUAL ENVIRONMENT
Rajarathinam Arangarasan Ph.D. Student Department of Mechanical Engineering University of Wisconsin
Madison Madison, WI-53706 Tel: (608) 265 3125 Fax: (608) 262 4814 Email: [email protected]
Rajit Gadh Professor Department of Mechanical Engineering University of Wisconsin Madison Madison, WI-53706 Tel: (608) 262 9058 Fax: (608) 265 2316 Email: [email protected]
KEYWORDS: Virtual environment, virtual prototyping, geometric modeling, multi-modal multi-sensory interface, collaborative virtual environment, distributed virtual environment
ABSTRACT
Shape modeling plays a vital role in the design process, but it is often the most tedious task in the whole design cycle. In recent years the Computer Aided Design (CAD) industry has evolved through a number of advances and developments in design methodology. However, modeling in these CAD systems requires expertise and an in-depth understanding of the modeling process, the user interface and the CAD system itself, resulting in increased design cycle time. To overcome these problems, a new methodology and a system called the "Detailed Virtual Design System" (DVDS) have been developed for detailed shape modeling in a multi-modal, multi-sensory Virtual Environment (VE). This system provides an intuitive and natural way of designing using hand motions, gestures and voice commands. Due to the lack of effective collaborative design, visualization and analysis tools, designers spend a considerable amount of time and effort in group discussion during the design process. To enable multiple designers to collaborate effectively and efficiently in a design environment, the framework of a collaborative virtual environment, called "Virtual Environment to Virtual Environment" (V2V), is discussed. This framework allows a same-site and remote-site multi-modal, multi-sensory immersive interface between designers.

INTRODUCTION
Geometric modeling is central and vital to the design process. Rapid geometry creation reduces the design process
time and thus the product cycle time considerably. Most conventional Computer Aided Design (CAD) systems use 1D and 2D input devices (i.e., keyboard and mouse, respectively) and a 2D output device (the monitor) for 3D shape modeling. Since the modeling space is usually 3D, these 1D and 2D devices are mapped onto a 3D modeling space, which results in a non-intuitive design interface, unnecessary modeling steps, and difficulty in visualizing complex 3D and higher-dimensional shapes. This mapping makes the modeling interface complex and increases the modeling time considerably. Due to the nature of the conventional CAD interface, only one input device (either keyboard or mouse) can be used effectively at any time. Although the designer has the built-in capability to communicate through multiple modalities and senses, the restrictions inherent in CAD systems lead to a considerable waste of this human capability. Moreover, most CAD systems are designed such that only one person can use them effectively at a time, making collaboration a tedious task.

Copyright © 2000 by ASME

The aim of this paper is twofold. First, it describes a new methodology and the Detailed Virtual Design System (DVDS) for detailed shape modeling in a multi-modal, multi-sensory virtual environment (VE). Second, it describes the framework of a collaborative virtual environment (V2V), which allows same-site and remote-site multi-modal collaboration between multiple designers, primarily focused on collaborative engineering design, visualization of complex prototypes, and simulation and analysis results. Overall, this paper addresses the following issues: elimination of the mapping process at the input/output (I/O) interface level; synchronization of multi-modal, multi-sensory user interactions; combination of the industry standard CAD representation with virtual reality
(VR) technology for geometric modeling; a framework for a same-site and remote-site multi-modal collaborative virtual environment; and the use of a hybrid network topology and hybrid communication for real-time interaction.

RELATED RESEARCH
Related work falls into three categories: interaction and navigation in a VE, geometric modeling and data representation in a VE, and collaborative virtual environments.

Interaction and navigation in a VE
Mine (1997) described the Immersive Simulation Animation and Construction Program (ISAAC), which discusses several interaction techniques and the direct manipulation of objects in a VE. These interaction techniques are well suited to visualization and navigation in a VE but less suited to shape creation and modification operations. Shai (1998) studied several 3D input devices and analyzed their effectiveness in 3D interaction. Chu et al. (1997) analyzed the multi-modal interface for a virtual reality based CAD system and studied several interface issues in a VE; that work focused primarily on concept shape design and a single-user multi-modal system.

Modeling and data representation
Deering (1995) described sketching and animation in a VE in the 'HoloSketch' system. Though several aspects of sketching in a VE were discussed, the system is better suited to graphics and animation applications than to CAD (solid modeling) applications. Trika et al. (1997) analyzed efficient feature addition in a VE by maintaining knowledge of part cavities, their adjacencies, and a triangulated boundary representation of an approximating polyhedron. Gupta et al. (1997) investigated the ease of part handling, part insertion and assembly analysis in a VE using multi-modal input. They also experimented with various setup configurations, but the work was limited to rigid 2D polyhedral objects. Dani and Gadh (1997a) presented a framework for a virtual reality based CAD system for conceptual shape design (COVIRDS).
Dani and Gadh (1997b) discussed a dual graph based geometric representation for concept shape modeling in a VE. Although separate research has been done on interfaces and on concept shape modeling in a VE, there is no single system that provides a VE for detailed shape modeling through a synchronized multi-modal, multi-sensory user interface, enables multiple designers to collaborate in the same virtual environment, and uses the well developed industry standard CAD knowledge and representation in a VE for geometric modeling.

Collaborative Virtual Environment
Das et al. (1997) described a highly scalable architecture called 'NetEffect' for developing, supporting and managing large, media-rich 3D virtual worlds. It partitions the virtual world into communities, which are then distributed among a set of servers, and migrates clients from one server to another as they move through the communities. This architecture is more of a multimedia-enabled chat program, in which small data sets are shared and communicated during collaboration. Frecon and Stenius (1998) described the 'DIVE' system, whose applications operate solely on the world abstraction and do not communicate directly with each other, allowing a clean separation between application and network interfaces. The DIVE architecture is based on active replication of (parts of) the database. When editing or modifying the virtual world, DIVE first modifies the local copy and then updates the other peer systems. Database modifications are sent using a reliable multicast protocol, and continuous data streams (audio/video) are sent using unreliable multicast. DIVE does not rely on any central service. Greenhalgh and Benford (1995a, 1995b) described a prototype virtual reality teleconferencing system, 'MASSIVE', which allows multiple users to communicate using arbitrary combinations of audio, graphics, and text media over local and wide area networks (WANs). A spatial model of interaction is used to control the communication, and several adapters help the user control awareness levels in the VE. A combination of client-server and peer-to-peer interaction is used. Snowdon and West (1994) described an overview of the AVIARY prototype and an application implemented using it.
Kessler and Hodges (1996) described a network communication protocol for distributed virtual environment systems. Funkhouser (1996) described several network topologies for a scalable multi-user virtual environment. Research on SHASTRA, a web based collaborative design environment, can be found in (Bajaj and Cutchin, 1997), and research on CAVERN, a networked virtual environment system, in (Leigh et al., 1997a). Though the above collaborative, distributed virtual environment systems focus on scalability, accommodating large numbers of users, effective data sharing, coherency and network protocols, they place little emphasis on the richness and quality of the multi-modal interface and connection between the designers. Some of these systems can handle very large data sets for visualization, but they require the clients to be high-end workstations to perform real-time simulation, which is not the case in many real world situations. In short, they usually trade off the number of users against the quality of the interaction and communication between the designers. The current research focuses on the richness of the communication and on how multi-modal, multi-sensory interaction can be extended to same-site and remote-site collaboration, enabling high throughput between the designers. It also discusses a new methodology that allows mid-range systems to be used effectively for large data visualization in a real-time collaborative virtual environment.
CONVENTIONAL CAD SYSTEMS VS VR-CAD SYSTEMS
Figure 1 I/O interface in conventional CAD: the keyboard (1D, fingers), mouse (2D, hand and fingers) and monitor (2D, eyes) are mapped onto the 1D, 2D and 3D modeling spaces. Human sensors used: hand, fingers, and eyes.

Figure 2 I/O interface in VR-CAD systems: voice via the mouth (speech), gestures via the fingers (haptic), 3D motion via the hands, head and body (3D position tracking), stereoscopic display via the eyes (visual), force feedback via the skin (tactile), and synthesized 3D sound via the ears (auditory). Human sensors used: mouth, fingers, hands, head, body, eyes, skin, and ears.

Figure 1 and Fig. 2 highlight the differences between conventional CAD systems and multi-modal, multi-sensory VR-CAD systems. Conventional CAD systems need a mapping process between the I/O devices and the 3D geometric space. Also
they use only a few human sensors for the modeling process. In VR-CAD systems, on the other hand, the mapping processes are eliminated, which makes shape manipulation direct and intuitive. Moreover, such systems effectively use several different human sensors for high throughput between the designer and the system. Since COVIRDS (Dani and Gadh, 1997a) is similar to the current research, DVDS is compared below with both conventional CAD systems and COVIRDS.

Table 1 compares conventional CAD systems, COVIRDS and DVDS on the following parameters: detailed design, parametric modeling (Yes / Limited1 / Yes), intuitive design steps, industry standard CAD representation (Yes / Limited2 / Yes3), multi-sensory interface, immersive display, and collaborative design.

1 Primitive based parametric modeling
2 Maintains its own representation and file format
3 Uses the same representation as that of the underlying CAD system
4 Allows only for viewing, not in editing mode
Table 1 Feature comparison between conventional CAD systems, COVIRDS and DVDS
The next section of the paper discusses the architecture of the Detailed Virtual Design System (DVDS), interaction and navigation in DVDS, modeling and data representation in DVDS, and bi-directional data translation between DVDS and commercial CAD systems.
DETAILED VIRTUAL DESIGN SYSTEM (DVDS)
Detailed design involves the development of a detailed model of the product. This includes defining the features and determining their dimensions, tolerances, materials, manufacturing processes, etc. Almost all CAD systems are developed for detailed design, but their interface and interaction techniques, based on 1D and 2D I/O devices, make shape modeling a time-consuming, tedious and non-intuitive task. The architecture of DVDS provides higher-dimensional, multi-modal, multi-sensory interaction between the designer and the system, thus making the design process faster, easier and more intuitive.
DVDS architecture
Figure 3 and Fig. 4 show the system architecture of DVDS. DVDS is an intermediate software layer that resides between the hardware and the commercial CAD system. The command parser module of DVDS synchronizes the multi-modal inputs, which arrive simultaneously from different kinds of input devices. It parses the input commands and redirects them to the CAD system, to activate geometric manipulations, and/or to the graphics engine, for display and navigation operations. Since DVDS shares CAD data directly with the underlying CAD system, any modifications performed directly on the CAD system are dynamically reflected in DVDS.

Figure 3 Layout of DVDS architecture (designer(s) space, hardware layer, DVDS, CAD application)

Table 2 shows the generic sketch entities supported in DVDS. Each sketch entity has a set of control points, which help to change the dimensions and modify the shape of the sketch, and a handle, which is used to transform the sketch entity. The handle is mostly at the geometric center of the sketch entity, as shown in Table 2.
Sketch primitives | Control points
Rectangle | Two extreme points
Circle | Center and radius
Circle | 3 points on circumference
Ellipse | Center, and points along the major and minor axes
Arc | Center, starting point and ending point on arc
Arc | Start, end and another point on arc
Polyline | End points of each line segment
Spline | Spline along the control points
Line | End points of line segment
Table 2 Generic sketch entities with their control points and handles
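The synchronization performed by the command parser described above can be sketched as an event merger: inputs from different devices that arrive within a short time window are treated as one multi-modal command and routed to either the CAD system or the graphics engine. This is a hedged illustration only; the class names, command sets and the 0.5 s window are assumptions, not the actual DVDS implementation.

```python
import time
from dataclasses import dataclass, field

@dataclass
class InputEvent:
    modality: str              # e.g. "voice", "gesture", "tracking"
    payload: str               # recognized word or gesture name
    timestamp: float = field(default_factory=time.time)

class CommandParser:
    """Merges near-simultaneous events from different input devices into
    one command and routes it to the CAD system or the graphics engine."""

    WINDOW = 0.5  # seconds: events closer than this form one multi-modal command

    # Illustrative routing tables (not the actual DVDS command set).
    GEOMETRY = {"extrude", "revolve", "fillet", "ring"}
    NAVIGATION = {"zoom", "pan", "grasp"}

    def __init__(self):
        self.pending = []

    def feed(self, event):
        # Keep only events that fall inside the synchronization window.
        self.pending = [e for e in self.pending
                        if event.timestamp - e.timestamp <= self.WINDOW]
        self.pending.append(event)

    def dispatch(self):
        """Return (target, payloads) for the current synchronized event set."""
        payloads = [e.payload for e in self.pending]
        target = ("cad_system" if any(p in self.GEOMETRY for p in payloads)
                  else "graphics_engine")
        return target, payloads
```

For example, a voice command "extrude" arriving together with a "ring" gesture would be dispatched as a single geometry command, while an isolated "zoom" goes to the graphics engine.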
Figure 4 System architecture of DVDS (designer(s) space; hardware layer; DVDS with gesture recognition, voice recognition, command parser engine and database; auditory feedback; CAD kernel/application)

Figure 5 Sequence of feature manipulation operations in conventional CAD systems (add feature definition, add feature options, update part model; uni-directional operations)

Figure 6 Sequence of feature manipulation operations in DVDS (add/modify sketch plane and sketches, add/modify feature definition, add/modify feature options, update part model; bi-directional operations between the user interaction space and the geometric space)

Due to the architectural design of the conventional CAD interface, the sequence of operations in CAD systems is uni-directional, as shown in Fig. 5. In DVDS, on the other hand, the sequence of operations is bi-directional (refer to Fig. 6). For example, this allows the designer to change the feature definition and sketch parameters dynamically and simultaneously using both hands, while changing the viewpoint orientation using a different mode of input.
Figure 7 Liquid features of a sketched extrusion

Though computing speed has increased considerably in recent years, solid modeling kernels are still not fast enough to allow real-time data updating, which is a key issue for real-time geometric manipulation. To overcome this problem a new concept called "Liquid Features" is introduced. During geometry manipulation, usually only a small set of features needs to be dynamically updated. Since most features are sweep based, a feature definition is easily represented in an approximate faceted format and manipulated in real time. Once the feature definition is completed, the database in the modeling kernel is updated, as shown in Fig. 6. The liquid features are shown in Fig. 7.
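The liquid-feature idea can be sketched as follows: while the designer drags a dimension, only a cheap faceted approximation is recomputed each frame, and the expensive exact kernel is updated exactly once at the end. The class, the `FakeKernel` stand-in and its `extrude` method are hypothetical names for illustration; the real system delegates to the underlying commercial CAD kernel.

```python
class FakeKernel:
    """Stand-in for the exact solid-modeling kernel (illustrative)."""
    def __init__(self):
        self.updates = 0
    def extrude(self, profile, depth):
        self.updates += 1  # in reality, an expensive exact B-rep update

class LiquidFeature:
    """Sketch of a 'liquid feature': per-frame manipulation touches only
    a faceted preview; the kernel is updated once, on completion."""

    def __init__(self, kernel, profile, depth=1.0):
        self.kernel = kernel
        self.profile = profile  # 2D sketch section: list of (x, y)
        self.depth = depth

    def drag(self, new_depth):
        # Real-time path: update the preview only, never the kernel.
        self.depth = new_depth
        return self.faceted_preview()

    def faceted_preview(self):
        # Crude faceted extrusion: the profile copied at z=0 and z=depth.
        bottom = [(x, y, 0.0) for x, y in self.profile]
        top = [(x, y, self.depth) for x, y in self.profile]
        return bottom + top

    def commit(self):
        # Single exact kernel update once the definition is complete (Fig. 6).
        self.kernel.extrude(self.profile, self.depth)
```

Many `drag` calls thus cost only a list comprehension each, while `commit` triggers the one exact update.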
Figure 8 Partial list of features supported in DVDS (swept features: positive/negative extrusion, revolution, blend, loft, etc.; direct features: fillet, chamfer, shell, etc.; library features: sets of predefined parametric features)

Figure 9 Hand gestures and their actions in DVDS (grasp, release, point, ring, okay; used for resizing, orientation and transformation of parts and the viewpoint)
Figure 8 shows a partial list of the features supported in DVDS. Sketched sections and a sweep profile define swept features (e.g., positive/negative extrusion, positive/negative revolution, lofting, blending, etc.). Direct features are simple features created with simple commands, without sketches or sweep profiles (e.g., fillets, chamfers, shell features, etc.). Library features are sets of parametrically defined swept and direct features; for example, an extruded boss with a concentric hole and a fillet on its outer edge can be a library feature.
Interface and navigation in DVDS
Figure 9 shows the basic hand gestures and some of their corresponding actions in DVDS. The basic gestures are grasp, release, point, ring and okay. Grasp is used to grab an object or the viewpoint and orient it in six degrees of freedom (DOF). Release signals the end of a grasp gesture. The point gesture is used to relocate the model or select a small point by pointing at it; this gesture is useful for manipulating tiny features. The ring gesture is used to reshape or modify a feature by pulling, pushing or twisting its control points. The okay gesture confirms the action. The primary operations are free-form and constrained transformation of the part or the viewpoint. Usually the viewpoint, rather than the part, is transformed so that the accuracy of the geometry is maintained. Zoom in and zoom out are further operations. These operations are driven by direct hand motion and gestures. Apart from gestures, voice input is also used for invoking commands, and a 3D menu and 3D toolbars (refer to Fig. 7 and Fig. 12) are used as additional input widgets.
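The mapping from the five basic gestures to actions can be pictured as a simple dispatch table. The `Scene` stub and its method names are hypothetical stand-ins for whatever the actual system invokes; only the five gesture names come from the paper.

```python
class Scene:
    """Hypothetical stand-in for the modeling scene; records actions."""
    def __init__(self): self.log = []
    def begin_grab(self):   self.log.append("grab");    return "grab"
    def end_grab(self):     self.log.append("end");     return "end"
    def pick(self):         self.log.append("pick");    return "pick"
    def drag_control(self): self.log.append("drag");    return "drag"
    def confirm(self):      self.log.append("confirm"); return "confirm"

def handle_gesture(gesture, scene):
    """Dispatch one of the five basic gestures to a scene action."""
    actions = {
        "grasp":   scene.begin_grab,    # grab object/viewpoint (6 DOF)
        "release": scene.end_grab,      # end of the grasp gesture
        "point":   scene.pick,          # relocate/select tiny features
        "ring":    scene.drag_control,  # push/pull/twist control points
        "okay":    scene.confirm,       # confirm the action
    }
    if gesture not in actions:
        raise ValueError(f"unknown gesture: {gesture}")
    return actions[gesture]()
```

A table like this keeps the gesture recognizer decoupled from the modeling operations it triggers.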
Data translation
It has been observed that effective data translation between CAD systems remains an ongoing issue. To eliminate data translation and to make use of well developed CAD knowledge, DVDS uses the industry standard CAD representation of the underlying CAD system, e.g. SolidWorks™. The major file formats supported for low-level geometric editing and export are listed in Table 3.
Types | File formats | Properties
Native geometry | SolidWorks "PRT" file | Editable; full feature definition; restores history
Imported model | Parasolid, SAT, VDAFS, STEP, IGES | Editable; no past features or history
Imported model | VRML, STL, PRO/E Render, etc. | Non-editable; triangulated
Table 3 Data translation in DVDS
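Table 3 amounts to a format-to-capability mapping, which might be restated as below. The extension sets and function names are assumptions for illustration; the three capability classes are from the table.

```python
# Illustrative restatement of Table 3: the file format determines what
# DVDS can do with the model (extension sets are assumptions).
FORMAT_CLASSES = {
    "native":       {"formats": {"sldprt", "prt"},
                     "editable": True,  "history": True},
    "imported":     {"formats": {"x_t", "sat", "vda", "step", "stp",
                                 "igs", "iges"},
                     "editable": True,  "history": False},
    "triangulated": {"formats": {"wrl", "stl"},
                     "editable": False, "history": False},
}

def classify_model(filename):
    """Return the Table 3 category for a file, judged by its extension."""
    ext = filename.rsplit(".", 1)[-1].lower()
    for kind, info in FORMAT_CLASSES.items():
        if ext in info["formats"]:
            return kind
    raise ValueError(f"unsupported format: {filename}")
```

A triangulated import (e.g. STL) would then be loaded for viewing only, while a STEP import remains editable but without feature history.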
COLLABORATIVE VIRTUAL ENVIRONMENT (V2V)
This section describes the framework of a multi-modal, multi-sensory collaborative virtual environment called "Virtual Environment to Virtual Environment" (V2V). It discusses network topologies and session management, the types and kinds of data shared, policy control, and the Clients Connectivity and Interface Modalities (CIM) table. V2V enables collaborative design and visualization between multiple designers in same-site and remote-site virtual environments. The objective of V2V is to provide quality of service (QoS) in terms of a multi-modal, multi-sensory interface in a CVE, to enable mid-range systems to be used effectively in a real-time collaborative virtual environment, to enable designers to visualize and design with complex and large data sets, and to enable effective same-site and remote-site collaboration between multiple designers.
Shared data types and data management
V2V covers a wide range of data types. The generic data sets are classified by type (static, partially dynamic and highly dynamic), by size (small, medium, huge), and by kind (shared data vs. interface communications). Even though there is no precise definition of small, medium, and large data sets, the size classification is based on current computing and network facilities and capabilities. Though the boundaries are blurred, this classification helps to organize the data sets in a general way. Table 4 shows the different data types and their influence in V2V.
Data type | Amount of data at session | Amount of data during application | Network bandwidth | Example application
DD | Varies | Varies | > 100 Mbps | Simulation
PD | Varies from SS | Varies from MS | 10 Mbps ~ 100 Mbps | Geometric modeling
SD | SS | HS | 24 Kbps ~ 10 Mbps | Visualizing static objects
IC | Varies | Varies | 24 Kbps ~ 10 Mbps | Interaction between clients
Table 4 Data types and management
DD - Highly dynamic data; PD - Partially dynamic data; SD - Static data; IC - Interface communications; HS - Huge data sets; MS - Medium data sets; SS - Small data sets
For small and medium sized data sets, depending upon the client's configuration and request, the data is either replicated on the client, or the server performs the rendering for that client and sends it the result. If the data set is huge, the clients usually cannot handle it, so they rely on the server for rendering.
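The placement rule above can be summarized as a small decision function. The 500 MB "huge" threshold is an assumption for illustration; the paper deliberately leaves the size classes fuzzy.

```python
def choose_renderer(size_mb, client_can_render, client_requests_replica=True):
    """Decide where rendering happens for one client (a sketch of the
    policy described above; the threshold value is an assumption)."""
    HUGE_MB = 500
    if size_mb >= HUGE_MB:
        return "server"      # huge sets: clients cannot handle them
    if client_can_render and client_requests_replica:
        return "replicate"   # small/medium: copy the data to the client
    return "server"          # otherwise the server renders and streams
```

This is what lets mid-range clients participate: they never receive data sets they cannot render, only the rendered result.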
Network topology and session management
V2V uses a hybrid approach combining client-server and peer-to-peer network topologies. Two things are shared in V2V: data and interface modalities. Data is fully maintained by a centralized server or, for static data, may be replicated by clients. Data is transferred only between clients and the server, never directly between clients. Interface modalities, on the other hand, are shared directly between clients and/or with the help of the server for multicasting. Data sharing is performed over reliable connections; interface modalities are transferred through reliable and/or unreliable unicast and/or multicast, as required. The server is always running, and clients log in (join) and log out (leave) a session at any time. During login the server authenticates the client, establishes the connection and stores the client's interface modalities and connectivity information in the centralized database. Once the connection is established, the client is allowed to communicate and interact with the other clients. Parallel processing is used extensively on both the server and client sides, for communication and computation, to achieve real-time interaction.
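The login/logout flow above can be sketched as follows. The class name, the token-based authentication and the record layout are assumptions for illustration; the paper only states that the server authenticates clients and stores their connectivity and modality information.

```python
class V2VServer:
    """Minimal sketch of V2V session management: authenticate at login,
    register the client's connection and modalities, drop the record at
    logout. All names and the token scheme are assumptions."""

    def __init__(self, valid_tokens):
        self.valid_tokens = set(valid_tokens)
        self.sessions = {}   # client id -> connectivity/modality record

    def login(self, client_id, token, modalities, connection="client-server"):
        if token not in self.valid_tokens:
            return False     # authentication failed
        self.sessions[client_id] = {"modalities": set(modalities),
                                    "connection": connection}
        return True

    def logout(self, client_id):
        self.sessions.pop(client_id, None)

    def peers(self, client_id):
        """Clients the given client may now communicate with."""
        return sorted(c for c in self.sessions if c != client_id)
```

Once `login` succeeds, the stored record is what later feeds the CIM table and the policy checks.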
Interface modalities
As discussed in the DVDS system architecture, each client has its own set of interface modalities (voice input, gesture recognition, 3D position/orientation tracking, stereoscopic display, 3D synthesized sound output, etc.). Figure 10 shows the interface modalities from the sender's and receiver's perspectives.
Figure 10 Interface modalities from the sender's and receiver's perspectives (voice: convert to text plus emotional information, stream, recombine and play; 3D tracking: stream 3D position/orientation and viewpoint information, perform the appropriate action or render the scene for that viewpoint; video: stream the image series for both eyes and use both images for stereo display)

The modality connection between clients is made in two ways: (i) a client can request another client to share one of its modalities (getting a modality connection; for example, a client who wants to see exactly what another client is viewing can request to share that client's viewpoint); or (ii) a client can notify another client that it wants to connect one of its own modalities (giving a modality connection; for example, a client who wants to show what he is viewing can share his viewpoint with the other client). The kind of media on the sender's and receiver's sides need not be the same. For example, voice input can be converted to text and either converted back to voice or displayed directly as text output. Similar variants are used for the other interface modalities, based on the available I/O hardware and the setup of the clients' interface modalities. Since the same words make different meanings on different occasions, the emotion (joy, sadness, anger, etc.) in the speaker's voice is also captured and transmitted to the receiving clients.
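The media-conversion idea above can be illustrated in a few lines: speech travels as text plus an emotion tag and is re-rendered in whichever medium the receiver's hardware supports. The function and the returned action tuples are hypothetical; only the text-plus-emotion scheme comes from the paper.

```python
def deliver_speech(text, emotion, receiver_output):
    """Sketch of media conversion: the sender's voice is carried as text
    plus an emotion tag and rendered to fit the receiver's hardware."""
    if receiver_output == "audio":
        # Re-synthesize voice, colored by the transmitted emotion.
        return ("synthesize", text, emotion)
    # No audio output available: show the text with its emotion tag.
    return ("display", f"[{emotion}] {text}", None)
```

The same pattern applies per modality, so two clients with entirely different hardware can still share a connection.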
Policies
Each client sets the rights and policies for its own I/O interface modalities, i.e., how other clients may share them. For example, client C2 can specify that its voice not be sent to clients C5 and C6; even if C5 and C6 request the voice connection from C2, they do not have the privilege to establish that interface connection. This feature becomes important when one is sharing sensitive information with a specific client. Similarly, any client can specify that it does not want to be connected to by certain clients, or by any clients at all, for example because it is busy and does not want to be disturbed. If some information needs to be sent to many clients, then instead of the client sending it to every other client, the server can multicast it to all the required clients. Policies can be very complex, since they finely control the access privileges for each client's individual interface modalities.
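A minimal policy store matching the description above might look like this; the class and field names are assumptions, since the paper does not give a concrete data structure.

```python
class ModalityPolicy:
    """Per-client sharing rules for each interface modality (a sketch;
    names and layout are assumptions)."""

    def __init__(self):
        self.denied = set()     # (modality, client) pairs that are denied
        self.closed = set()     # modalities closed to everyone

    def deny(self, modality, client=None):
        """Deny one client, or, with client=None, all clients."""
        if client is None:
            self.closed.add(modality)
        else:
            self.denied.add((modality, client))

    def allows(self, modality, client):
        return (modality not in self.closed
                and (modality, client) not in self.denied)
```

The paper's example maps directly: C2 calls `deny("voice", "C5")` and `deny("voice", "C6")`, after which requests from C5 and C6 fail the `allows` check while other clients still pass it.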
Clients Connectivity and Interface Modalities (CIM) table
At the time of login, the server collects information about the client's network connection, interface modalities, etc. and registers it in the CIM table, as shown in Table 5.
Client | Type of connection with server | Connected from/to | Interface modalities | Policies
C1 | CS | C2, C4 / C2, C3 | V, G, T, S, A | A-
C2 | | | V, T, M |
C3 | | | G, T, S, A |
C4 | | | V, G, T, S |
CS - Centralized data; RP - Replicated data; V - Voice input; G - Gesture; M - Monocular display; T - 3D tracking; S - Stereoscopic display; A - Audio output
Table 5 Clients Connectivity and Interface Modalities (CIM) table maintained by the server
The lookup table contains the status of each client, the type of connection established with the server, bi-directional references to other clients, the available I/O interface modalities, policies for sharing the I/O interface modalities, etc. In Table 5, "A-" in the policy column for client C1 means that client C1 is not sharing its audio output; a minus sign (-) in the policy column indicates that the policy for that specific interface modality is restricted. Table 5 shows the policies for a simple case; they become highly complicated when the policy for each modality of each client is defined.
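One row of the CIM table might be represented as a record like the following. The column names follow Table 5, but the exact record layout, field names and the `shares` helper are assumptions for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class CIMEntry:
    """One row of the CIM table (sketch; layout is an assumption).
    Modality codes as in Table 5: V, G, T, S, M, A."""
    client: str
    connection: str                                 # "CS" or "RP"
    linked_clients: list                            # bi-directional refs
    modalities: set                                 # available I/O modalities
    restricted: set = field(default_factory=set)    # e.g. {"A"} for "A-"

    def shares(self, modality):
        """Available AND not policy-restricted?"""
        return modality in self.modalities and modality not in self.restricted
```

With Table 5's first row, a request for C1's audio output would be refused even though the hardware exists, because the policy column marks it "A-".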
EXAMPLE MODELS
In Fig. 11, three virtual hands of different designers can be seen in the same design environment.
Figure 11 Sample model created in DVDS
Figure 12 Sample model created in DVDS
Figure 13 Designer in the process of geometric modeling in DVDS
This enables multiple designers to interact and be involved in the design process. In Fig. 12 the 3D toolbar can be observed. Figure 13 shows the immersive display system, with a designer in the middle of the modeling process.

SYSTEM CONFIGURATION FOR DVDS AND V2V
For DVDS
HARDWARE:
- Intergraph Realizm II (rendering engine)
- Ascension Flock of Birds (3D tracking)
- 5th Dimension data glove (hand gestures)
- IBM microphone set (voice input)
- Electrohome projector or ImmersaDesk (stereoscopic projection)
- CrystalEyes glasses (stereoscopic viewing)
- Sony speakers (synthesized 3D sound output)
SOFTWARE:
- WorldToolKit library (graphics library)
- Visual C++ (compiler)
- SolidWorks-99™ (commercial CAD system)
- IBM ViaVoice (voice recognition)
For V2V
HARDWARE:
Server configuration:
- SGI Onyx Reality Engine 2
- ImmersaDesk (stereoscopic projection)
Client configuration:
- SGI Octane
- Sun UltraSPARC multiprocessor workstations
- Intergraph multiprocessor workstations
- Electrohome projector or ImmersaDesk (stereoscopic projection)
Network configuration:
- Gigabit LAN connection for some clients and for the server connection
- 10/100 Mbps LAN connections between some clients

FUTURE WORK
Research is needed in the following areas to enhance and extend the system's features and applications: develop a data management system that allows efficient interaction and collaboration with highly dynamic data sets and makes use of existing database systems; and investigate how to extend the existing quality of service (QoS), in terms of the multi-modal interface, to a scalable collaborative virtual environment. Currently the system is deployed on a high-bandwidth, low-latency network; further research is needed to use it effectively on low-bandwidth, high-latency networks.
SUMMARY
This paper discussed two research topics. First, it discussed a new methodology and the Detailed Virtual Design System (DVDS) for detailed geometric modeling in a multi-modal, multi-sensory virtual environment, how the multi-modal, multi-sensory interface enables designers to build detailed geometric models intuitively and rapidly, and how the well developed industry standard CAD representation has been integrated with DVDS, thus eliminating the tedious bi-directional data translation between commercial CAD systems. Second, a framework for a collaborative virtual environment (V2V) was described. It discussed the different types of data used in a collaborative virtual environment and the multi-modal interface between several clients and the server for effective communication
, explained how the Clients Connectivity and Interface Modalities (CIM) table supports real-time interaction between the clients and the server, and described the policy control for interface modalities between clients.

REFERENCES

Arangarasan R., Dani T. H., Chu C. C., Gadh R., 1999, "Virtual Design Technology and Applications," SME Conference at U.S. Army Tank-Automotive and Armaments Command (TACOM) Facility, Warren (Detroit), Michigan.

Arangarasan R., Chu C. C., Dani T. H., Liu X., Gadh R., 2000, "Geometric Modeling in Multi-Modal, Multi-Sensory Virtual Environment," Proceedings of the 2000 NSF Design and Manufacturing Research Conference, Vancouver, Canada.

Bajaj C. L., Bernardini F., 1995, "Distributed and Collaborative Synthetic Environments," Human-Computer Interaction and Virtual Environments, NASA Conference Publication 3320, Hampton, Virginia, April 1995.

Bajaj C., Cutchin S., 1997, "Web Based Collaboration-Aware Synthetic Environments," Proceedings of the 1997 GVU/NIST TEAMCAD Workshop, Atlanta, GA, pp. 143-150.

Bolt R. A., Herranz E., 1992, "Two-Handed Gesture in Multi-Modal Natural Dialog," Proceedings of the ACM Symposium on UIST, pp. 7-14.

Capin T. K., Thalmann D., 1999, "A Taxonomy of Networked Virtual Environments," Proceedings of IWSNHC3DI'99, Santorini, Greece.

Chu C. C., Dani T. H., Gadh R., 1997, "Multi-modal Interface for a Virtual Reality Based CAD System," CAD Journal, Elsevier Publications, Vol. 29, No. 10, pp. 709-725.

Chu C. C., Dani T. H., Gadh R., 1998, "Evaluation of Virtual Reality Interface for Product Shape Design," IIE Transactions Special Issue on Virtual Reality, Vol. 30, No. 7, pp. 629-643.

Dani T. H., Gadh R., 1997a, "Creation of Concept Shape Designs via a Virtual Reality Interface," CAD Journal, Elsevier Publications, Vol. 29, No. 8, pp. 555-563.

Dani T. H., Gadh R., 1997b, "A Dual Graph Representation within a Geometric Framework Supporting Shape Design in a Virtual Reality Environment," Technical Report, I-CARVE Lab, Mechanical Engineering Department, University of Wisconsin-Madison.

Dani T. H., Gadh R., 1998, "Virtual Reality: A New Technology for the Mechanical Engineer," Chapter 14 in The Mechanical Engineers' Handbook, 2nd Edition, Myer Kutz (Ed.), John Wiley & Sons.

Das T. K., Singh G., Mitchell A., Kumar P. S., McGee K., 1997, "NetEffect: A Network Architecture for Large-Scale Multi-User Virtual Worlds," Proceedings of the ACM Symposium on Virtual Reality Software and Technology.

Deering M. F., 1995, "HoloSketch: A Virtual Reality Sketching/Animation Tool," ACM Transactions on Computer-Human Interaction, Vol. 2, No. 3, pp. 220-238.

Frecon E., Stenius M., 1998, "DIVE: A Scaleable Network Architecture for Distributed Virtual Environments," Distributed Systems Engineering Journal (Special Issue on Distributed Virtual Environments), Vol. 5, No. 3, pp. 91-100.

Funkhouser T. A., 1996, "Network Topologies for Scalable Multi-User Virtual Environments," Proceedings of VRAIS '96, pp. 222-228.

Greenhalgh C., Benford S., 1995a, "MASSIVE: A Distributed Virtual Reality System Incorporating Spatial Trading," Proceedings of the 15th International Conference on Distributed Computing Systems (DCS '95), Vancouver, Canada, pp. 27-34.

Greenhalgh C., Benford S., 1995b, "MASSIVE: A Collaborative Virtual Environment for Teleconferencing," ACM Transactions on Computer-Human Interaction, Vol. 2, No. 3, pp. 239-261.

Gupta R., Whitney D., Zeltzer D., 1997, "Prototyping and Design for Assembly Analysis Using Multimodal Virtual Environments," CAD Journal, Elsevier Publications, Vol. 29, No. 8, pp. 585-597.

Hoffmann C. M., 1989, "Geometric and Solid Modeling," Morgan Kaufmann Publishers Inc.

Hotz G., Kerzmann A., Lennerz C., Schmid R., Schomer E., Warken T., 1999, "SiLVIA: A Simulation Library for Virtual Reality Applications," Proceedings of IEEE Virtual Reality, pp. 82.

Jayaram S., Wang Y., Jayaram U., 1999, "A Virtual Assembly Design Environment," Proceedings of IEEE Virtual Reality, pp. 172-179.

Johnson G., 1998, "Collaborative Visualization 101," ACM SIGGRAPH Computer Graphics, Vol. 32, No. 2, pp. 8-11.

Kessler G. D., Hodges L. F., 1996, "A Network Communication Protocol for Distributed Virtual Environment Systems," Proceedings of VRAIS '96, pp. 214-221.

Leigh J., Johnson A. E., DeFanti T. A., 1997a, "Issues in the Design of a Flexible Distributed Architecture for Supporting Persistence and Interoperability in Collaborative Virtual Environments," Proceedings of Supercomputing '97, San Jose, California.

Leigh J., DeFanti T. A., Johnson A. E., Brown M. D., Sandin D. J., 1997b, "Global Tele-Immersion: Better Than Being There," Proceedings of ICAT '97, Tokyo, Japan.

Mantyla M., 1988, "An Introduction to Solid Modeling," Computer Science Press.

Mine M. R., 1997, "ISAAC: A Meta-CAD System for Virtual Environments," CAD Journal, Elsevier Publications, Vol. 29, No. 8, pp. 547-553.

Mortenson M. E., 1985, "Geometric Modeling," John Wiley & Sons.

Schwartz P., Bricker L., Campbell B., Furness T., Inkpen K., Matheson L., Nakamura N., Shen L.-S., Tanney S., Yeh S., 1998, "Virtual Playground: Architectures for a Shared Virtual World," Proceedings of the ACM Symposium on Virtual Reality Software and Technology, pp. 43-50.

Snowdon D. N., West A. J., 1994, "The AVIARY VR-System: A Prototype Implementation," 6th ERCIM Workshop, Stockholm.

Stork A., Maidhof M., 1997, "Efficient and Precise Solid Modeling Using a 3D Input Device," Proceedings of the Fourth Symposium on Solid Modeling and Applications, Atlanta, GA, pp. 181-194.

Trika S. N., Banerjee P., Kashyap R. L., 1997, "Virtual Reality Interfaces for Feature-Based Computer-Aided Design Systems," CAD Journal, Elsevier Publications, Vol. 29, No. 8, pp. 565-574.

Wheless G. H., Lascara C. M., Leigh J., Kapoor A., Johnson A. E., DeFanti T. A., 1998, "CAVE6D: A Tool for Collaborative Immersive Visualization of Environmental Data," Proceedings of IEEE Visualization.

Zhai S., 1998, "User Performance in Relation to 3D Input Device Design," Computer Graphics, pp. 50-54.
Copyright © 2000 by ASME