UnMix-NeRF
Spectral Unmixing Meets Neural Radiance Fields
ICCV 2025

Fabian Perez1,2, Sara Rojas2, Carlos Hinojosa2, Hoover Rueda-Chacón1, Bernard Ghanem2
1Universidad Industrial de Santander      2King Abdullah University of Science and Technology (KAUST)
Paper · Supplementary · Code · Data
Introduction Image

TL;DR: We propose UnMix-NeRF, the first method integrating spectral unmixing into NeRF, enabling hyperspectral view synthesis, accurate unsupervised material segmentation, and intuitive material-based scene editing, significantly outperforming existing methods.


Teaser panels: RGB / Unsupervised Material Segmentation / Scene Editing, and RGB / Unsupervised Material Segmentation / PCA Visualization.

Abstract

Neural Radiance Field (NeRF)-based segmentation methods focus on object semantics and rely solely on RGB data, lacking intrinsic material properties. This limitation restricts accurate material perception, which is crucial for robotics, augmented reality, simulation, and other applications. We introduce UnMix-NeRF, a framework that integrates spectral unmixing into NeRF, enabling joint hyperspectral novel view synthesis and unsupervised material segmentation. Our method models spectral reflectance via diffuse and specular components, where a learned dictionary of global endmembers represents pure material signatures, and per-point abundances capture their distribution. For material segmentation, we use spectral signature predictions together with the learned endmembers, allowing unsupervised material clustering. Additionally, UnMix-NeRF enables scene editing by modifying learned endmember dictionaries for flexible material-based appearance manipulation. Extensive experiments validate our approach, demonstrating superior spectral reconstruction and material segmentation compared to existing methods.


Method

UnMix-NeRF Framework

Spectral Unmixing Field

We extend standard NeRF by incorporating the Extended Linear Mixing Model (ELMM) at each 3D point. An MLP predicts both a density and a latent feature, from which two heads produce (i) scaling factors via a sigmoid to model local spectral variability, and (ii) raw abundances plus a tint factor normalized with softmax to enforce non-negativity and sum-to-one. Diffuse reflectance is then computed by mixing learned global endmember spectra with these abundances.
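For concreteness, the two heads and the ELMM mixing step can be sketched in a few lines of PyTorch below; the dimensions (feature size, number of endmembers, number of bands) and layer names are illustrative assumptions, not the exact implementation.

import torch
import torch.nn as nn

class SpectralUnmixingHead(nn.Module):
    """Sketch of the unmixing heads: scaling factors, abundances + tint, ELMM diffuse mix."""
    def __init__(self, feat_dim=16, num_endmembers=6, num_bands=128):
        super().__init__()
        # Learned dictionary of global endmembers (pure material spectra).
        self.endmembers = nn.Parameter(torch.rand(num_endmembers, num_bands))
        # Head (i): per-endmember scaling factors modeling local spectral variability (ELMM).
        self.scale_head = nn.Linear(feat_dim, num_endmembers)
        # Head (ii): raw abundances plus one extra logit used as the specular tint.
        self.abundance_head = nn.Linear(feat_dim, num_endmembers + 1)

    def forward(self, feat):
        scales = torch.sigmoid(self.scale_head(feat))                      # (N, M)
        probs = torch.softmax(self.abundance_head(feat), dim=-1)           # non-negative, sums to one
        abundances, tint = probs[..., :-1], probs[..., -1:]                # (N, M), (N, 1)
        # ELMM diffuse reflectance: abundance-weighted, locally scaled endmembers.
        diffuse = (abundances * scales) @ self.endmembers.clamp(0.0, 1.0)  # (N, B)
        return diffuse, abundances, tint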

Specular Field

To capture view-dependent effects, we add a specular branch inspired by Ref-NeRF. It takes the same latent feature and viewing direction to predict a specular spectrum, modulated by the tint factor, and blends it with the diffuse component under the dichromatic model.
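A minimal sketch of such a specular branch and the dichromatic blend is given below, assuming the diffuse spectrum and tint come from the unmixing head above; the hidden width and activation choices are placeholders, not the exact Ref-NeRF-style design.

import torch
import torch.nn as nn

class SpecularHead(nn.Module):
    def __init__(self, feat_dim=16, dir_dim=3, num_bands=128, hidden=64):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(feat_dim + dir_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, num_bands), nn.Softplus(),   # non-negative specular spectrum
        )

    def forward(self, feat, view_dir, diffuse, tint):
        specular = self.mlp(torch.cat([feat, view_dir], dim=-1))   # view-dependent spectrum
        # Dichromatic model: diffuse component plus tint-modulated specular component.
        return diffuse + tint * specular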

Volume Rendering and Camera Response

We integrate spectral radiance and per-point abundances along camera rays via volumetric rendering. The resulting hyperspectral signature is mapped to sRGB using a camera spectral response matrix, ensuring physically accurate color reproduction.
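The per-ray compositing and spectral-to-RGB projection can be sketched as follows, using standard NeRF alpha compositing; the 3×B camera response matrix is assumed to be known for the sensor, and all names are illustrative.

import torch

def render_ray(sigmas, spectra, abundances, deltas, response_matrix):
    """sigmas: (S,), spectra: (S, B), abundances: (S, M), deltas: (S,), response_matrix: (3, B)."""
    alphas = 1.0 - torch.exp(-sigmas * deltas)                 # per-sample opacity
    ones = torch.ones(1, dtype=alphas.dtype, device=alphas.device)
    trans = torch.cumprod(torch.cat([ones, 1.0 - alphas + 1e-10], dim=0), dim=0)[:-1]
    weights = (alphas * trans).unsqueeze(-1)                   # (S, 1) rendering weights
    hyperspectral = (weights * spectra).sum(dim=0)             # (B,) rendered spectral signature
    abundance_map = (weights * abundances).sum(dim=0)          # (M,) rendered abundances
    rgb = response_matrix @ hyperspectral                      # (3,) sRGB via camera response
    return hyperspectral, abundance_map, rgb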

Material Segmentation via Cluster Probe

Unsupervised material segmentation is obtained by treating the learned endmembers as cluster centers: each rendered spectral signature is assigned to the material whose endmember maximizes the normalized inner product (cosine similarity), yielding pixel-accurate clusters without external supervision.
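A compact sketch of this cluster probe, assuming per-pixel rendered spectra of shape (H, W, B) and the learned endmember dictionary of shape (M, B):

import torch
import torch.nn.functional as F

def segment_materials(rendered_spectra, endmembers):
    pixels = rendered_spectra.reshape(-1, rendered_spectra.shape[-1])        # (H*W, B)
    # Normalized inner product (cosine similarity) between each pixel spectrum and each endmember.
    sims = F.normalize(pixels, dim=-1) @ F.normalize(endmembers, dim=-1).T   # (H*W, M)
    labels = sims.argmax(dim=-1)                                             # material index per pixel
    return labels.reshape(rendered_spectra.shape[:-1])                       # (H, W) segmentation map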

Loss Function

We jointly optimize hyperspectral and RGB reconstruction with an L2 loss on predicted spectra and an RGB loss after spectral-to-RGB projection. Physical consistency is enforced via non-negativity and sum-to-one constraints on abundances, balanced by tunable weights.
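A hedged sketch of the combined objective is shown below; the loss weights are placeholder hyperparameters, not the values used in the paper.

import torch

def unmix_nerf_loss(pred_spectra, gt_spectra, pred_rgb, gt_rgb, abundances,
                    lambda_rgb=1.0, lambda_abund=0.01):
    spectral_loss = ((pred_spectra - gt_spectra) ** 2).mean()      # L2 on hyperspectral bands
    rgb_loss = ((pred_rgb - gt_rgb) ** 2).mean()                   # after spectral-to-RGB projection
    # Physical consistency: abundances should be non-negative and sum to one.
    nonneg = torch.relu(-abundances).mean()
    sum_to_one = (abundances.sum(dim=-1) - 1.0).abs().mean()
    return spectral_loss + lambda_rgb * rgb_loss + lambda_abund * (nonneg + sum_to_one)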

Implementation Details

Our model is built in Nerfstudio on top of Nerfacto, using Nerfacc for fast rendering. We employ a 16-dimensional hidden feature MLP, gradient scaling by squared ray distance, and optimize with Adam plus an exponential LR scheduler on an NVIDIA A100 GPU. Endmembers are clamped to valid spectral ranges and chosen per scene according to the number of distinct materials. For more details and code, please refer to our GitHub repo.
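For illustration only, a minimal sketch of the optimization setup described above (Adam with an exponential learning-rate schedule); the stand-in module, learning rate, and decay factor are placeholders rather than the actual training configuration.

import torch
import torch.nn as nn

model = nn.Linear(16, 128)   # stand-in for the UnMix-NeRF field network
optimizer = torch.optim.Adam(model.parameters(), lr=1e-2)
scheduler = torch.optim.lr_scheduler.ExponentialLR(optimizer, gamma=0.999)

for step in range(100):      # placeholder training loop
    optimizer.zero_grad()
    loss = model(torch.randn(4, 16)).pow(2).mean()
    loss.backward()
    optimizer.step()
    scheduler.step()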


Visual Results

Scene: Hotdog — spectral reflectances at 450, 500, 550, 600, and 650 nm; visualization of learned material abundances; specular reflectances.

Scene: Ajar — visualization of learned material abundances.


Quantitative Results


SURF Optics Dataset

SURF Optics Dataset Results

BaySpec Dataset

BaySpec Dataset Results

On the SURF Optics dataset (left), UnMix-NeRF outperforms other methods in both scenes. In the Rosemary scene, we achieve a PSNR of 28.91 dB, a 2.14 dB improvement over the next best method, HyperGS (26.77 dB), while cutting RMSE from 0.0445 to 0.0332 (a 25% reduction) and SAM from 0.0021° to 0.0019° (a 10% reduction). Although HyperGS records a higher SSIM (0.9845 vs. our 0.9355), our approach delivers the lowest spectral and intensity errors. In the Basil scene, UnMix-NeRF records 29.21 dB (+3.91 dB), an RMSE of 0.0364 (36% lower), and a SAM of 0.0043° (16% lower), while also surpassing HyperGS in SSIM (0.9584 vs. 0.9503). This shows that explicitly modeling endmembers and abundances markedly improves spectral fidelity without sacrificing structural quality.

On the BaySpec dataset (right), UnMix-NeRF consistently outperforms HyperGS across all three scenes. For Pinecone, we reach 27.13 dB (+0.13 dB), reduce RMSE by 4% (0.0429 vs. 0.0447) and SAM by 7% (0.0287° vs. 0.0309°), and improve SSIM by 0.066. In Caladium, our method delivers 30.08 dB (+2.38 dB) and cuts RMSE by 25% (0.0312 vs. 0.0414) and SAM by 14% (0.0237° vs. 0.0277°). In the Anacampseros scene, we achieve 28.20 dB (+2.0 dB), raise SSIM by 0.037, and reduce the spectral angle by 56% (0.0154° vs. 0.0353°).


More Visual Results

Endmember Learning During Training

Endmember Learning

Citation

If you find this work useful, please cite our paper:

@inproceedings{perez2025unmix,
  title={UnMix-NeRF: Spectral Unmixing Meets Neural Radiance Fields},
  author={Perez, Fabian and Rojas, Sara and Hinojosa, Carlos and Rueda-Chac{\'o}n, Hoover and Ghanem, Bernard},
  booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision},
  year={2025}
}

Contact

For any questions, please feel free to contact:


This website is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.

The source code for this website is borrowed from the DreamFusion project page; we sincerely appreciate the DreamFusion authors for their awesome template.