DiffuseBot: Breeding Soft Robots With Physics-Augmented Generative Diffusion Models
Authors: Tsun-Hsuan Johnson Wang, Juntian Zheng, Pingchuan Ma, Yilun Du, Byungchul Kim, Andrew Spielberg, Josh Tenenbaum, Chuang Gan, Daniela Rus
NeurIPS 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We showcase a range of simulated and fabricated robots along with their capabilities. In summary, we contribute: Extensive experiments in simulation to verify the effectiveness of DiffuseBot, extensions to text-conditioned functional robot design, and a proof-of-concept physical robot as a real-world result. |
| Researcher Affiliation | Collaboration | Tsun-Hsuan Wang1, Juntian Zheng2,3, Pingchuan Ma1, Yilun Du1, Byungchul Kim1, Andrew Spielberg1,4, Joshua B. Tenenbaum1, Chuang Gan1,3,5, Daniela Rus1; 1MIT, 2Tsinghua University, 3MIT-IBM Watson AI Lab, 4Harvard, 5UMass Amherst |
| Pseudocode | Yes | Algorithm 1 Training: Embedding Optimization. Algorithm 2 Sampling: Diffusion As Co-design. |
| Open Source Code | No | The paper mentions a project page (https://diffusebot.github.io/) for 'more results' and 'demo videos', but it does not explicitly state that the source code for the methodology is provided on this page or elsewhere. The text does not contain phrases like 'We release our code...' or a direct link to a code repository. |
| Open Datasets | No | The paper states it uses Point-E [39] as a pre-trained diffusion model which was trained on 'Point-E's curated dataset of several million 3D models'. However, the paper does not specify the train/test/validation splits for any datasets used in *their* experiments, nor does it provide concrete access (link, DOI, formal citation for public dataset with author/year) to a dataset used for their specific experimentation. The authors generate samples for evaluation rather than using a pre-defined dataset with splits. |
| Dataset Splits | No | The paper describes experimental setups, but it does not specify dataset splits for training, validation, or testing (e.g., '80/10/10 split' or specific sample counts for each split). The experiments involve generating samples for evaluation, rather than typical dataset partitioning. |
| Hardware Specification | Yes | The soft gripper was 3D-printed using a digital light projection (DLP) type 3D printer (Carbon M1 printer, Carbon Inc.) and commercially available elastomeric polyurethane (EPU 40, Carbon Inc.). |
| Software Dependencies | No | The paper mentions using 'Soft Zoo [61]' and 'Material Point Method for simulation', and Point-E [39], but it does not provide specific version numbers for any software, libraries, or environments (e.g., 'Python 3.8', 'PyTorch 1.9'). |
| Experiment Setup | Yes | In Table 4, we list the configurations of the embedding optimization. In Table 5, we list the configurations of diffusion as co-design. For baselines, we use learning rates for control optimization following γ in Table 5; for particle-based and voxel-based approaches, we use learning rate 0.01 for design optimization; for implicit function and diff-CPPN, we use learning rate 0.001 for design optimization. For the network architecture of implicit function, we use a 2-layer multilayer perceptron with hidden size 32 and Tanh activation. |
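The implicit-function baseline in the Experiment Setup row is specified as a 2-layer multilayer perceptron with hidden size 32 and Tanh activation. A minimal sketch of that architecture is below; the input dimension (a 3D query point) and scalar output are illustrative assumptions, since the paper excerpt does not state them.

```python
import numpy as np

rng = np.random.default_rng(0)

def init_mlp(in_dim=3, hidden=32, out_dim=1):
    """Initialize a 2-layer MLP (hidden size 32), per the paper's
    stated baseline configuration. Dimensions are assumptions."""
    return {
        "W1": rng.standard_normal((in_dim, hidden)) * 0.1,
        "b1": np.zeros(hidden),
        "W2": rng.standard_normal((hidden, out_dim)) * 0.1,
        "b2": np.zeros(out_dim),
    }

def mlp_forward(params, x):
    """Forward pass: Linear -> Tanh -> Linear."""
    h = np.tanh(x @ params["W1"] + params["b1"])  # Tanh activation, as in the paper
    return h @ params["W2"] + params["b2"]

params = init_mlp()
points = rng.standard_normal((5, 3))  # five hypothetical 3D query points
out = mlp_forward(params, points)
print(out.shape)  # (5, 1)
```

In an actual reproduction this network would be trained with the stated learning rate of 0.001 for design optimization; the optimizer itself is not specified in the excerpt.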