
engage3D Video Conferencing

Submitted on April 03, 2013

What problem are you intending to solve?

We are going to extend current video conferencing technology to 3D using the Microsoft Kinect, WebGL, and WebRTC.

What is the technological approach, or development roadmap?

Two common 3D graphics representations are triangle meshes and point clouds. A triangle mesh is a set of triangles that roughly approximates an object's surface. Meshes are prevalent in video games, since relatively few triangles are needed to represent in-game objects; this allows the fast frame rates that gamers demand.

Another method of representing 3D objects uses a set of colored points known as a point cloud. Point clouds can be thought of as complementary to triangle meshes: whereas meshes contain relatively few triangles and render quickly, point clouds are dense, more accurate, and slower to render.

Unlike triangle meshes, which can be animated, point clouds do not deform. Users may need to wait several seconds for a point cloud to be downloaded and processed before it is ready to view. Once ready, the user can interact with the cloud, but it remains a rigid body.
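To illustrate how a point cloud is built from Kinect data, the sketch below back-projects depth pixels into 3D points using a standard pinhole camera model. The intrinsics (focal lengths `FX`/`FY`, principal point `CX`/`CY`) are assumed placeholder values for a 640x480 Kinect, not calibrated constants from our pipeline:

```javascript
// Assumed camera intrinsics for an original Kinect at 640x480;
// real values come from calibration.
const FX = 594.2, FY = 591.0; // focal lengths in pixels (assumed)
const CX = 320, CY = 240;     // principal point (assumed)

// Back-project one depth pixel (u, v, depth in millimetres) into a 3D point.
function depthPixelToPoint(u, v, depthMm) {
  const z = depthMm / 1000; // metres
  return {
    x: (u - CX) * z / FX,
    y: (v - CY) * z / FY,
    z: z,
  };
}

// A full depth frame becomes a point cloud: one point per valid depth pixel.
function depthFrameToCloud(depth, width, height) {
  const points = [];
  for (let v = 0; v < height; v++) {
    for (let u = 0; u < width; u++) {
      const d = depth[v * width + u];
      if (d > 0) points.push(depthPixelToPoint(u, v, d)); // 0 = no reading
    }
  }
  return points;
}
```

Each point produced this way can then be colored with the corresponding RGB pixel before rendering.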

The introduction of fast gigabit connections opens up an entirely new medium: real-time 3D data in a web browser. With such bandwidth, 3D information can be transferred as quickly as a standard video conferencing stream, enabling real-time, interactive 3D telepresence applications. To achieve this, Microsoft Kinect sensors capture the data, which is transmitted to the browser over WebSockets. Once the data arrives, the client application renders the visual data using WebGL (three.js) and handles the audio using WebRTC (easyRTC). This allows real-time viewing of information captured by the Kinect anywhere these high-bandwidth networks are available. Most of this work has been research until this point, but as of last week we have begun testing over the Chattanooga gigabit network, streaming live content from the TN Aquarium.
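One way to move point cloud frames over a WebSocket is as compact binary messages. The layout sketched below (a point count followed by xyz floats and rgb bytes per point) is an illustrative assumption, not engage3D's actual wire format; the project repository contains the real implementation:

```javascript
// Illustrative wire format (an assumption, not engage3D's actual one):
// [uint32 point count][per point: 3 x float32 xyz + 3 x uint8 rgb].
const STRIDE = 12 + 3; // bytes per point

function packFrame(points) {
  const buf = new ArrayBuffer(4 + points.length * STRIDE);
  const view = new DataView(buf);
  view.setUint32(0, points.length, true); // little-endian count
  let off = 4;
  for (const p of points) {
    view.setFloat32(off, p.x, true);
    view.setFloat32(off + 4, p.y, true);
    view.setFloat32(off + 8, p.z, true);
    view.setUint8(off + 12, p.r);
    view.setUint8(off + 13, p.g);
    view.setUint8(off + 14, p.b);
    off += STRIDE;
  }
  return buf; // send with socket.send(buf) on a binary WebSocket
}

function unpackFrame(buf) {
  const view = new DataView(buf);
  const count = view.getUint32(0, true);
  const points = [];
  let off = 4;
  for (let i = 0; i < count; i++) {
    points.push({
      x: view.getFloat32(off, true),
      y: view.getFloat32(off + 4, true),
      z: view.getFloat32(off + 8, true),
      r: view.getUint8(off + 12),
      g: view.getUint8(off + 13),
      b: view.getUint8(off + 14),
    });
    off += STRIDE;
  }
  return points;
}
```

On the receiving end, the unpacked positions and colors can be copied into typed arrays and handed to a three.js geometry for rendering.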

How will end users interact with it, and how will they benefit?

Users will engage with each other through multiple Microsoft Kinect sensors, much as in a standard video conferencing application; the major distinction is that the video stream will be rendered in WebGL. This can later be extended to advanced projection technologies such as holograms and 3D televisions. It is also rumored that an HD Kinect is in the works, which would provide users with much higher resolution images and higher-fidelity depth data. This has tremendous implications for education and health. For example, physicians working remotely could get real-time 3D views of their patients, and educators could record instruction in 3D and let students play it back interactively.

How will your app leverage the 1Gbps, sliceable and deeply programmable network?

Initially, the network will be leveraged for its high bandwidth. Network slicing can further guarantee a high quality of service. Also, when necessary, processing can be done in the cloud to synchronize multiple cameras at one location, relieving the front end of that overhead.
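To make the bandwidth requirement concrete, here is a back-of-the-envelope estimate for a single uncompressed Kinect stream. The figures are assumptions for illustration: 640x480 resolution, 30 fps, 2 bytes of depth plus 3 bytes of color per pixel:

```javascript
// Rough uncompressed bandwidth for one Kinect stream (assumed figures:
// 640x480 @ 30 fps, 2 bytes depth + 3 bytes color per pixel).
function bitsPerSecond(width, height, fps, bytesPerPixel) {
  return width * height * fps * bytesPerPixel * 8;
}

const raw = bitsPerSecond(640, 480, 30, 5);
console.log((raw / 1e6).toFixed(1) + " Mbps"); // 368.6 Mbps
```

Under these assumptions, a single raw stream consumes roughly a third of a gigabit link, and multiple cameras quickly exceed what typical broadband connections can carry, which is why gigabit connectivity is central to this application.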

Further application information

Additional supporting information, materials and resources

Read about project updates - project blog

Take a look at the existing code - project repository

Will your work be beta-ready by the end of the Development Challenge?


How much effort do you expect this work to take?

4-8 hours a week per person until the end of the Mozilla Ignite Challenge. The team is growing, but so is the proposed work. We are very excited about this opportunity.

Do you need help?

The team is shaping up nicely and we have a diverse set of skills at our disposal. We can always use advice in the areas of WebRTC, WebSockets, Three.js, and Kinect hacking in general.


Bill Brock

I work as a Computational Engineer with SimCenter Enterprises, a non-profit that supports the University of Tennessee at Chattanooga's SimCenter. I hold an MS in Computational Engineering and am currently pursuing a PhD on a part-time basis. My interests and experience lie in Python/C++ development and high-performance computing. Most of my HPC work has involved computational fluid dynamics or mesh generation.

and team members

Andor Salga graduated from the Bachelor of Software Development program at Seneca College in Toronto. He holds a diploma in Computer Programming and Analysis, specializing in 3D game development using C/C++. Andor worked as a research assistant and technical advisor at Seneca's Centre for Development of Open Technology, developing open source WebGL libraries such as Processing.js, XB PointStream, and C3DL. He is currently a junior game developer at BNOTIONS.

Forrest Pruitt is a junior at the University of Tennessee at Chattanooga. He is pursuing a BSc in Computer Science with a concentration in Information Security and Assurance, as well as a minor in Theatre. He spends his spare time solving Rubik's Cubes, mastering Tetris, and traveling.

James McNutt holds an MS in Teacher Education with a concentration in Secondary Mathematics Education from the University of Tennessee (2012), as well as a BS in Honors Mathematics with a minor in Physics (2009). James is responsible for working with local institutions and for evaluating our application for educational purposes.

Craig Tanis is a Lecturer at the University of Tennessee at Chattanooga. He holds an MS in Computer Science from Tulane University and teaches classes on Game Design and Parallel Programming for the University. He is also a PhD candidate in Computational Engineering.

It should also be mentioned that the Chattanooga community is very supportive of this application. Gigabit connectivity has been provisioned through local non-profits (co.lab) and academic institutions (the University of Tennessee at Chattanooga) to expedite the development of engage3D. All code developed for this project will remain open, since nearly all of it leverages one open source tool or another.
