
Rigid Body Physics and Collisions

Unity's built-in physics engine (PhysX) handles rigid body physics, including collisions between rigid bodies.

API commands can alter the physics time step, trading the accuracy of physics behavior against real-time performance, and can modify behavior per-object at runtime by adjusting properties such as mass and friction.
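For example, a minimal sketch using TDW's JSON command API (the commands shown follow TDW's documentation, but exact names and defaults may vary across versions; the model "iron_box" is simply an illustrative choice from the model library):

    from tdw.controller import Controller
    from tdw.tdw_utils import TDWUtils

    c = Controller()
    object_id = c.get_unique_id()
    c.communicate([TDWUtils.create_empty_room(12, 12),
                   # A smaller time step: more accurate physics, slower wall-clock.
                   {"$type": "set_time_step", "time_step": 0.01},
                   c.get_add_object(model_name="iron_box",
                                    object_id=object_id,
                                    position={"x": 0, "y": 3, "z": 0}),
                   # Per-object physics overrides, applied at runtime.
                   {"$type": "set_mass", "id": object_id, "mass": 5.0},
                   {"$type": "set_physic_material", "id": object_id,
                    "dynamic_friction": 0.4, "static_friction": 0.4,
                    "bounciness": 0.7}])
    c.communicate({"$type": "terminate"})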


NVIDIA Flex Uniform Particle Representation

Flex uses a uniform particle-based object representation that allows rigid bodies, soft bodies, cloth objects and fluids to interact.

In one example, the cloth simulation drops a rubbery sheet that collides with a rigid body object; in another, balls of increasing mass are dropped into a pool of water, causing progressively greater displacement and splashing.

This type of unified representation can help machine learning models combine the underlying physics with rendered images, learning a joint physical and visual representation of the world through interaction with its objects.
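As a hedged sketch of how the unified representation is set up (the Flex commands and parameters here follow TDW's documented Flex API, but names, defaults, and the asset/library choices are assumptions that may vary by version; Flex also requires an NVIDIA GPU):

    from tdw.controller import Controller
    from tdw.tdw_utils import TDWUtils

    c = Controller()
    solid_id = c.get_unique_id()
    cloth_id = c.get_unique_id()
    c.communicate([TDWUtils.create_empty_room(12, 12),
                   {"$type": "create_flex_container"},
                   # A rigid object, represented as Flex solid particles.
                   c.get_add_object("cube", object_id=solid_id,
                                    position={"x": 0, "y": 0, "z": 0},
                                    library="models_flex.json"),
                   {"$type": "set_flex_solid_actor", "id": solid_id,
                    "mass_scale": 5, "particle_spacing": 0.05},
                   {"$type": "assign_flex_container", "id": solid_id,
                    "container_id": 0},
                   # A cloth sheet dropped onto it; both objects share the
                   # same particle system, so they interact directly.
                   c.get_add_object("cloth_square", object_id=cloth_id,
                                    position={"x": 0, "y": 2, "z": 0},
                                    library="models_special.json"),
                   {"$type": "set_flex_cloth_actor", "id": cloth_id,
                    "mass_scale": 1, "mesh_tesselation": 1,
                    "tether_stiffness": 0.5, "bend_stiffness": 1.0,
                    "stretch_stiffness": 1.0},
                   {"$type": "assign_flex_container", "id": cloth_id,
                    "container_id": 0},
                   # Per-particle state each frame, for learning applications.
                   {"$type": "send_flex_particles", "frequency": "always"}])
    for _ in range(100):
        c.communicate([])
    c.communicate({"$type": "terminate"})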

Advanced Physics Benchmark Dataset

Using the TDW platform, we have created a comprehensive benchmark for training and evaluation of physically-realistic forward prediction algorithms, which will be released as part of the TDW package.

Once completed, this dataset will contain a large and varied collection of physical scene trajectories, including all data from visual, depth, and force sensors, high-level semantic labels for each frame, and the latent generative parameters and controller code for every scenario.

This dataset goes well beyond existing related benchmarks, providing scenarios with large numbers of complex real-world object geometries and photo-realistic textures, as well as a variety of rigid, soft-body, cloth, and fluid materials.

The codebase for generating the dataset will be made publicly available in conjunction with the TDW platform.

Indirect Object Interaction Through Avatars

In TDW, avatars are the embodiment of AI agents within a scene.

Avatars can take the form of simple disembodied cameras, used to generate egocentric rendered images, segmentation maps, depth maps, etc.

Avatars using simple geometric primitives such as cubes or spheres can move around the environment, acting as basic embodied agents. These avatars are well-suited to basic algorithm prototyping; a minimal camera-avatar example is sketched below.
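A minimal sketch of creating a disembodied camera avatar and requesting image, segmentation, and depth passes (the commands follow TDW's documented API; the image-saving helper's exact signature may differ across versions):

    from tdw.controller import Controller
    from tdw.tdw_utils import TDWUtils
    from tdw.output_data import OutputData, Images

    c = Controller()
    resp = c.communicate([TDWUtils.create_empty_room(12, 12),
                          # A disembodied camera; "A_Simple_Body" would create
                          # an embodied geometric-primitive agent instead.
                          {"$type": "create_avatar",
                           "type": "A_Img_Caps_Kinematic", "id": "a"},
                          {"$type": "teleport_avatar_to", "avatar_id": "a",
                           "position": {"x": 0, "y": 1.5, "z": -2}},
                          # Request rendered, segmentation, and depth passes.
                          {"$type": "set_pass_masks", "avatar_id": "a",
                           "pass_masks": ["_img", "_id", "_depth"]},
                          {"$type": "send_images", "frequency": "once"}])
    for r in resp[:-1]:
        if OutputData.get_data_type_id(r) == "imag":
            TDWUtils.save_images(Images(r), filename="frame_0",
                                 output_directory="output")
    c.communicate({"$type": "terminate"})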

More complex embodied avatars are possible, with user-defined physical structures and physically-mapped action spaces.

The Magnebot robot's mobility and arm articulation actions are driven by physics, as opposed to any form of pre-scripted animation, and controlled using high-level API commands. Here Magnebot uses its "magnet" end-effector to remove an object from a table. It also picks up a series of objects and places them into a container held by its other magnet; it then carries them to a different room and pours them out again.
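The Magnebot API ships as a separate Python package built on TDW. Below is a hedged sketch of its high-level action interface (the class and method names follow the Magnebot documentation but may differ between versions, and object setup is elided):

    from magnebot import MagnebotController

    c = MagnebotController()
    c.init_scene()                 # an empty room; object setup elided here
    # Each action drives the wheels or arm joints through physics and steps
    # the simulation until the action resolves as success or failure.
    c.move_by(distance=2)
    c.turn_by(angle=45)
    # With a graspable object in the scene (setup not shown), the magnet
    # end-effector can pick it up and release it, e.g.:
    #   from magnebot import Arm
    #   c.grasp(target=object_id, arm=Arm.right)
    #   c.drop(target=object_id, arm=Arm.right)
    c.end()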

Research Use Cases

TDW has been used in a number of labs at MIT and Stanford, as well as at IBM.

Visual Recognition Transfer

A learned visual feature representation, trained on a TDW image classification dataset comparable to ImageNet, was transferred to fine-grained image classification and object detection tasks.

Multi-modal Physical Scene Understanding

TDW's audio impact synthesis was used to generate a synthetic dataset of impact sounds for testing material and mass classification.

Learnable Physics Models

Using TDW's ability to handle complex physical collisions and non-rigid deformations, agents learn to predict physical dynamics in novel settings.

Visual Learning in Curious Agents

Intrinsically-motivated agents based on TDW's high-quality rendering and flexible avatar models exhibit rudimentary self-awareness and curiosity.

Social Agents and Virtual Reality

In experiments on animate attention, both human observers in VR and a neural network agent embodying concepts of intrinsic curiosity found animacy to be more "interesting".


Frequently Asked Questions

Find answers to frequently asked questions about TDW.

  • Fast! Here are some basic benchmarks:

    Benchmark                                    Quality  Image Size  FPS
    Object transform data, 100 objects           N/A      N/A         761
    Image capture                                Low      256x256     380
    Image capture                                High     1024x1024    41
    Move avatar per frame                        Low      256x256     160
    Flex benchmark (Windows): FlexParticles,
      Transform, CameraMatrices, and Collisions  N/A      N/A         204

    Full benchmark details

  • If you want to contribute code, you can create a new branch and then open a PR from your fork of the TDW repo. Please note, however, that the code for the simulation binary (the "build") is still closed-source, meaning that you won't be able to directly modify the API, fix bugs in the build, etc. If you have suggestions, feature requests, bug reports, etc., please add them as GitHub Issues.

    However, if you believe that your particular use case absolutely requires access to the backend source code, please refer to the discussion on our repo: Requesting access to TDW C# source code

  • Maybe! See our README: ThreeDWorld (TDW)

    • Windows, OS X, or Linux.
    • For high-fidelity rendering and particle-based physics simulations, an NVIDIA GPU.
    • Python 3.6+
  • TDW's team is working full-time on the project, so expect feature updates every few weeks or so.

  • Yes. You can optionally run your Python controller code on a different machine from the simulation build. Additionally, the repo contains a Dockerfile for TDW; see the Docker container documentation in the repo for further details. A sketch of the remote setup follows this list.
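A minimal sketch of that remote setup, assuming the build binary is started separately (on another machine or inside the Docker container) and connects to the controller over TCP; the launch_build flag is part of TDW's documented Controller API:

    from tdw.controller import Controller

    # Don't launch a local simulation build; one is expected to connect
    # over TCP (e.g. from another machine or a Docker container).
    c = Controller(port=1071, launch_build=False)
    c.communicate({"$type": "do_nothing"})   # round-trip sanity check
    c.communicate({"$type": "terminate"})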

Our Team

Development Team


Jeremy Schwartz

Project Lead, MIT BCS

Seth Alter

Lead Developer, MIT BCS

Principal Investigators


Jim DiCarlo

MIT BCS

Josh McDermott

MIT BCS

Josh Tenenbaum

MIT BCS

Dan Yamins

Stanford NeuroAILab

Dan Gutfreund

MIT-IBM Watson AI Lab

Chuang Gan

MIT-IBM Watson AI Lab

Contributors


James Traer

MIT BCS

Jonas Kubilius

MIT BCS

Martin Schrimpf

MIT BCS

Abhishek Bhandwaldar

MIT-IBM Watson AI Lab

Julian DeFreitas

Vision Sciences Lab, Harvard

Damian Mrowca

Stanford NeuroAILab

Michael Lingelbach

Stanford NeuroAILab

Megumi Sano

Stanford NeuroAILab

Dan Bear

Stanford NeuroAILab

Kuno Kim

Stanford NeuroAILab

Nick Haber

Stanford NeuroAILab

Chaofei Fan

Stanford NeuroAILab

Brain and Cognitive Sciences, MIT

If you are interested in using TDW in your research, please contact:

Jeremy Schwartz,
TDW Project Lead

43 Vassar St
Cambridge, MA 02139

 