This article provides the technical implementation details of the Tree-Diffusion architecture using PyTorch and NF-ResNet.

Implementation Details of Tree-Diffusion: Architecture and Training for Inverse Graphics

2025/09/27 09:23

Abstract and 1. Introduction

  2. Background & Related Work

  3. Method

    3.1 Sampling Small Mutations

    3.2 Policy

    3.3 Value Network & Search

    3.4 Architecture

  4. Experiments

    4.1 Environments

    4.2 Baselines

    4.3 Ablations

  5. Conclusion, Acknowledgments and Disclosure of Funding, and References


Appendix

A. Mutation Algorithm

B. Context-Free Grammars

C. Sketch Simulation

D. Complexity Filtering

E. Tree Path Algorithm

F. Implementation Details

F Implementation Details

We implement our architecture in PyTorch [1]. For our image encoder we use the NF-ResNet26 [4] implementation from the open-source library by Wightman [38]. Images are of size 128 × 128 × 1 for CSG2D and 128 × 128 × 3 for TinySVG. We pass the current and target images into the image encoder as a stack of image planes, and additionally provide the absolute difference between the current and target images as extra planes.
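As a minimal sketch of the encoder input described above (assuming single-channel CSG2D images; the function name `make_encoder_input` is illustrative, not from the original code), the current image, target image, and their absolute difference are concatenated along the channel dimension before being fed to NF-ResNet:

```python
import torch

def make_encoder_input(current: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
    """Stack current, target, and |current - target| as image planes.

    current, target: (B, C, 128, 128) tensors; C = 1 for CSG2D, 3 for TinySVG.
    Returns a (B, 3C, 128, 128) tensor for the image encoder.
    """
    diff = (current - target).abs()
    return torch.cat([current, target, diff], dim=1)

# Example for CSG2D (C = 1): the encoder sees 3 input planes per example.
cur = torch.rand(2, 1, 128, 128)
tgt = torch.rand(2, 1, 128, 128)
x = make_encoder_input(cur, tgt)
print(x.shape)  # torch.Size([2, 3, 128, 128])
```

For TinySVG (C = 3), the same stacking yields 9 input planes; the encoder's first convolution simply takes the correspondingly larger channel count.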


For the autoregressive (CSGNet) baseline, we trained the model to output ground-truth programs from target images, providing a blank current image. For tree diffusion methods, we initialized the search and rollouts with the output of the autoregressive model, which counted as a single node expansion. For our re-implementation of Ellis et al. [11], we flattened the CSG2D tree into a sequence of shapes added from left to right. We then randomly sampled a position in this shape array, compiled the output up to the sampled position, and trained the model to output the next shape using constrained grammar decoding.
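The prefix-sampling scheme for the Ellis et al. re-implementation can be sketched as follows. This is a hypothetical illustration, assuming a flattened shape list and a renderer; the names `make_training_example`, `shapes`, and `render` are stand-ins, and the string "shapes" and stand-in renderer are purely for demonstration:

```python
import random

random.seed(0)  # for reproducibility of this sketch

def make_training_example(shapes, render):
    """Sample a prefix of the flattened shape array as a training example.

    shapes: list of shape tokens, in left-to-right addition order.
    render: function mapping a shape list to a "current" canvas.
    Returns (current canvas, next shape to predict).
    """
    k = random.randrange(len(shapes))   # sampled position in the array
    current = render(shapes[:k])        # compile output up to position k
    next_shape = shapes[k]              # supervision target for the model
    return current, next_shape

shapes = ["circle(3,4,2)", "rect(1,1,5,5)", "circle(8,8,1)"]
render = lambda s: "|".join(s)          # stand-in for the CSG2D compiler
cur, nxt = make_training_example(shapes, render)
```

At decode time, constrained grammar decoding would then mask the model's output distribution so that only tokens forming a valid next shape under the CSG2D grammar can be emitted.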

This is a departure from the pointer network architecture in their work. We believe that the lack of prior shaping, the departure from a graphics-specific pointer network, and the absence of reinforcement learning fine-tuning account for the performance difference between their results and our re-implementation. We note that our method does not require any of these additional features, and thus the comparison is fairer. For tree diffusion search, we used a beam size of 64, with a maximum node expansion budget of 5000 nodes.
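The search budget described above can be sketched as a generic beam search with an expansion cap. This is an illustrative skeleton only, assuming hypothetical `expand` (policy-proposed mutations of a program node) and `score` (value-network estimate) callables; it is not the paper's actual search code:

```python
import heapq

def beam_search(root, expand, score, beam_size=64, max_expansions=5000):
    """Beam search over program-edit nodes with a node-expansion budget.

    expand: node -> list of child nodes (policy mutation proposals).
    score: node -> float, higher is better (value-network stand-in).
    Stops once `max_expansions` nodes have been expanded.
    """
    beam = [root]
    expansions = 0
    best = max(beam, key=score)
    while beam and expansions < max_expansions:
        candidates = []
        for node in beam:
            if expansions >= max_expansions:
                break
            candidates.extend(expand(node))
            expansions += 1
        if not candidates:
            break  # all beam nodes are terminal
        beam = heapq.nlargest(beam_size, candidates, key=score)
        best = max(best, beam[0], key=score)
    return best

# Toy usage: nodes are integers, expansion adds 1 or 2, and the score
# rewards proximity to 10, so the search converges on the node 10.
best = beam_search(0,
                   lambda n: [n + 1, n + 2] if n < 10 else [],
                   lambda n: -abs(10 - n))
print(best)  # 10
```

Initializing the beam with the autoregressive model's output, as described above, corresponds to setting `root` to that program and counting the initialization as one expansion against the 5000-node budget.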


:::info Authors:

(1) Shreyas Kapur, University of California, Berkeley (srkp@cs.berkeley.edu);

(2) Erik Jenner, University of California, Berkeley (jenner@cs.berkeley.edu);

(3) Stuart Russell, University of California, Berkeley (russell@cs.berkeley.edu).

:::


:::info This paper is available on arxiv under CC BY-SA 4.0 DEED license.

:::

