Textureshop is a new photo-editing tool that combines existing techniques for shape-from-shading and texture synthesis to enable a user to texture objects in a photograph.
The Textureshop tool allows a user to give an object in a photo the appearance of a different texture, and it supports more patch deformations than previous models. Instead of selecting patch-like areas directly from the source texture, the program distorts the texture coordinates of each pixel in a patch according to the surface normal and then samples from the texture, so that the texture conforms to the underlying surface.
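The normal-guided sampling step can be sketched in a few lines. This is a minimal illustration, assuming per-pixel surface normals have already been recovered by shape-from-shading; the data layout and helper name are hypothetical, not Textureshop's actual implementation.

```python
# Minimal sketch (hypothetical layout): offset each pixel's texture
# coordinate by the tangential part of its recovered surface normal
# before sampling, so the pasted texture appears to follow the shape.

def distort_and_sample(texture, normals, width, height, scale=4.0):
    """texture: 2-D list of texels; normals: dict (x, y) -> (nx, ny, nz)."""
    th, tw = len(texture), len(texture[0])
    out = []
    for y in range(height):
        row = []
        for x in range(width):
            nx, ny, _nz = normals[(x, y)]
            u = int(x + scale * nx) % tw  # distorted texture coordinate
            v = int(y + scale * ny) % th
            row.append(texture[v][u])
        out.append(row)
    return out
```

On a flat surface (normals pointing straight at the camera) the sampling is undistorted; as the normal tilts, the texture coordinates shear accordingly.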
Easier to use - Existing professional photo-editing programs require a significant time commitment and appreciable skill to edit a feature of a photo while keeping it on the surface of an object in the scene.
Operates on a single image - Methods previously employed for texture synthesis have required a series of images from different viewpoints.
This application avoids the need to reconstruct a global 3-D mesh and instead synthesizes the texture on a network of individually parameterized surface patches, allowing it to operate on a single image.
A number of scientific animations have been developed by the University of Illinois National Center for Supercomputing Applications (NCSA) Advanced Visualization Laboratory (AVL) under the direction of Professor Kalina Borkiewicz. Examples of available animations include the Milky Way, black holes, tornado super-twisters, solar super-storms, and many more. AVL's animations are cinematic virtual tours through astrophysics, earth sciences, engineering, and other data domains. Their work has been shared through venues such as international digital-dome museum shows, high-definition documentary television programs, and IMAX movies. Their work has also been featured in productions such as The Science of Interstellar documentary, The Tree of Life and Europa Report, and the Dynamic Earth IMAX show. For more information, please visit the AVL website.
NCSA AVL animations are available non-exclusively for commercial use under the two license agreements provided in the sidebar. Note that the financial terms, statement of attribution, and copyright notice need to be determined and finalized before the license is ready for execution. Please contact OTM at firstname.lastname@example.org or Professor Kalina Borkiewicz at email@example.com.
Commercial Use License Producer: a three-party license that is intended for use when an organization engages a separate producer or distributor.
Commercial Use License: a two-party license between the University and an organization.
Thomas Huang has been recognized worldwide as a pioneer in computer vision and signal processing, winning numerous honors and awards for his seminal contributions to these fields.
One notable invention is Huang's age-estimation software. Consisting of three modules - face detection, discriminative manifold learning, and multiple linear regression - the software was trained on a database containing photos of 1,600 faces.
The software can estimate ages from 1 year to 93 years. The software’s accuracy ranges from about 50 percent when estimating ages to within 5 years, to more than 80 percent when estimating ages to within 10 years. The accuracy has the potential to be improved through additional training on larger databases of faces.
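The final regression stage can be sketched as ordinary least squares. This is a simplified illustration under the assumption that manifold learning has reduced each face to a single scalar coordinate; the actual system uses multiple linear regression over a higher-dimensional embedding.

```python
# Minimal sketch: fit age ~ a * feature + b by ordinary least squares,
# where "feature" stands in for a (hypothetical) 1-D manifold coordinate
# produced by the discriminative manifold learning module.

def fit_line(features, ages):
    n = len(features)
    mx = sum(features) / n
    my = sum(ages) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(features, ages))
    var = sum((x - mx) ** 2 for x in features)
    a = cov / var
    return a, my - a * mx  # slope, intercept

def predict_age(model, feature):
    a, b = model
    return a * feature + b
```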
In addition to performing tasks such as security control and surveillance monitoring, age-estimation software also could be used for electronic customer relationship management.
Through facial- and emotion-recognition algorithms, Huang and his team have also been able to reconstruct a 3D face from an input image and then personalize the 3D face model. This opens the possibility of having synthetic talking faces, rather than text, deliver messages.
A method for detecting an object in an image includes calculating a log-likelihood for pairs of pixels at selected positions in the image, where the likelihoods are derived from training images. The calculated log-likelihood of the pixel pairs is compared with a threshold value, and the object is detected when the calculated log-likelihood exceeds the threshold.
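The detection rule above amounts to summing learned log-likelihoods and thresholding. A minimal sketch follows; the pair positions and the log-likelihood lookup table are hypothetical stand-ins for values that would be learned from training images.

```python
import math

# Minimal sketch: sum the learned log-likelihoods of pixel-value pairs
# at selected positions, then detect when the total exceeds a threshold.

def detect(image, pair_positions, pair_loglik, threshold):
    """image: dict position -> pixel value;
    pair_positions: list of (position_a, position_b);
    pair_loglik: dict (value_a, value_b) -> log-likelihood (learned)."""
    score = 0.0
    for p, q in pair_positions:
        pair = (image[p], image[q])
        # unseen pairs get a small floor probability
        score += pair_loglik.get(pair, math.log(1e-6))
    return score > threshold
```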
This technology is a robust speech recovery algorithm that rejects noises and enhances speech in a real-world noisy environment. The invention works with any type, number, or location of microphones and background noise sources.
Speech is separated from background noise blindly.
No information about the speech, background noise, or environment is required beforehand.
The only preliminary assumption is that speech has high kurtosis and most background noises have low kurtosis.
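The kurtosis assumption can be made concrete with a short sketch: speech tends to be spiky (high kurtosis), while many background noises are closer to Gaussian or uniform (low kurtosis). The threshold below is a hypothetical choice for illustration, not a value from the invention.

```python
# Minimal sketch of the kurtosis assumption behind the separation.

def excess_kurtosis(samples):
    """Fisher's excess kurtosis: 0 for a Gaussian, negative for uniform,
    large and positive for spiky, speech-like signals."""
    n = len(samples)
    mean = sum(samples) / n
    var = sum((s - mean) ** 2 for s in samples) / n
    m4 = sum((s - mean) ** 4 for s in samples) / n
    return m4 / (var ** 2) - 3.0

def looks_like_speech(samples, threshold=1.0):
    # hypothetical threshold; the real algorithm separates blindly
    return excess_kurtosis(samples) > threshold
```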
A three-dimensional virtual reality creation, manipulation, and editing system including a voice and three-dimensional gesture input interface. An operator immersed in a data structure, preferably presented in a three-dimensional immersion environment, interacts with the data structure and performs operations on it through the voice and gesture input interface. A virtual director receives input from the voice and gesture interface.
The director records the input as keyframe values, the time-based spline data points needed to command the display environment to redisplay a recorded spline path for viewing or operational purposes. The voice recognition system accepts commands according to a menu displayed in the three-dimensional environment, enabling various operational and display functions through the gesture input portion of the interface.
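Replaying recorded keyframes as a smooth path is typically done with spline interpolation. The description above only specifies time-based spline data points, so Catmull-Rom interpolation is an assumed, commonly used choice here; it has the convenient property of passing through every recorded keyframe.

```python
# Minimal sketch: evaluate one Catmull-Rom spline segment between
# recorded keyframes p1 and p2 (p0 and p3 are the neighbouring
# keyframes) at parameter t in [0, 1].

def catmull_rom(p0, p1, p2, p3, t):
    return 0.5 * ((2.0 * p1)
                  + (-p0 + p2) * t
                  + (2.0 * p0 - 5.0 * p1 + 4.0 * p2 - p3) * t * t
                  + (-p0 + 3.0 * p1 - 3.0 * p2 + p3) * t * t * t)
```

Applied per coordinate of a camera position or orientation, this turns a sparse set of recorded keyframes into a continuous playback path.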
In accordance with one or more aspects of a match, expand, and filter technique for multi-view stereopsis, features across multiple images of an object are matched to obtain a sparse set of patches for the object. The sparse set of patches is expanded to obtain a dense set of patches for the object, and the dense set of patches is filtered to remove erroneous patches. Optionally, reconstructed patches can be converted into 3D mesh models.
Patch-Based Multi-View Stereo Software (PMVS) is multi-view stereo software that takes a set of images and camera parameters and reconstructs the 3D structure of an object or scene visible in the images. Only rigid structure is reconstructed; in other words, the software automatically ignores non-rigid objects such as pedestrians in front of a building. The software outputs a set of oriented points rather than a polygonal (mesh) model, with both the 3D coordinate and the surface normal estimated at each oriented point.
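The match-expand-filter flow can be outlined with a toy skeleton. The "patches" here are hypothetical 1-D stand-ins carrying only a position and a confidence score; the real pipeline operates on oriented 3-D patches matched across images.

```python
# Toy skeleton of the match -> expand -> filter flow (hypothetical data).

def match(features, seed_score=0.8):
    """Seed a sparse patch set from confidently matched features."""
    return [f for f in features if f["score"] > seed_score]

def expand(patches, step=1.0):
    """Densify: grow each patch into a neighbour at lower confidence."""
    grown = list(patches)
    for p in patches:
        grown.append({"pos": p["pos"] + step, "score": p["score"] * 0.9})
    return grown

def filter_patches(patches, min_score=0.5):
    """Drop weakly supported (likely erroneous) patches."""
    return [p for p in patches if p["score"] >= min_score]
```

Running the three stages in order mirrors the pipeline in the description: a sparse, reliable seed set is grown into a dense set, and the filtering pass removes the erroneous patches introduced during expansion.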