
Comments


furrypony

hopelessly sad filly
NeRF synthesizes novel views of a scene with unprecedented quality by fitting a neural radiance field to RGB images. However, NeRF requires querying a deep Multilayer Perceptron (MLP) millions of times, leading to slow rendering times even on modern GPUs. In this paper, we demonstrate that real-time rendering is possible by utilizing thousands of tiny MLPs instead of one single large MLP. In our setting, each individual MLP only needs to represent parts of the scene, so smaller, faster-to-evaluate MLPs can be used. By combining this divide-and-conquer strategy with further optimizations, rendering is accelerated by three orders of magnitude compared to the original NeRF model without incurring high storage costs. Further, using teacher-student distillation for training, we show that this speed-up can be achieved without sacrificing visual quality.
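
A minimal sketch of the divide-and-conquer idea, assuming a PyTorch-style setup: each 3D sample point is routed to a tiny MLP that owns its cell of a coarse grid, rather than one large MLP handling every point. The grid resolution, layer widths, and 4-channel output (RGB + density) here are illustrative, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class TinyMLPGrid(nn.Module):
    """Route each point to the small MLP owning its grid cell (illustrative)."""
    def __init__(self, grid_res=4, hidden=32):
        super().__init__()
        self.grid_res = grid_res
        # One small MLP per grid cell; each maps (x, y, z) -> (RGB, density).
        self.cells = nn.ModuleList([
            nn.Sequential(nn.Linear(3, hidden), nn.ReLU(),
                          nn.Linear(hidden, hidden), nn.ReLU(),
                          nn.Linear(hidden, 4))
            for _ in range(grid_res ** 3)
        ])

    def forward(self, pts):  # pts: (N, 3) with coordinates in [0, 1)^3
        idx = (pts * self.grid_res).long().clamp(0, self.grid_res - 1)
        flat = (idx[:, 0] * self.grid_res + idx[:, 1]) * self.grid_res + idx[:, 2]
        out = torch.empty(pts.shape[0], 4, device=pts.device)
        for cell in flat.unique():  # evaluate each tiny MLP only on its own points
            mask = flat == cell
            out[mask] = self.cells[int(cell)](pts[mask])
        return out

pts = torch.rand(1024, 3)                 # random sample points in the unit cube
print(TinyMLPGrid()(pts).shape)           # torch.Size([1024, 4])
```

Because every network in the grid is tiny, each query touches far fewer weights than a single large MLP would, which is where the speed-up comes from.
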
furrypony

hopelessly sad filly
Convolutional Neural Networks (CNNs) are the go-to model for computer vision. Recently, attention-based networks, such as the Vision Transformer, have also become popular. In this paper we show that while convolutions and attention are both sufficient for good performance, neither of them is necessary. We present MLP-Mixer, an architecture based exclusively on multi-layer perceptrons (MLPs). MLP-Mixer contains two types of layers: one with MLPs applied independently to image patches (i.e. “mixing” the per-location features), and one with MLPs applied across patches (i.e. “mixing” spatial information). When trained on large datasets, or with modern regularization schemes, MLP-Mixer attains competitive scores on image classification benchmarks, with pre-training and inference cost comparable to state-of-the-art models. We hope that these results spark further research beyond the realms of well-established CNNs and Transformers.
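
A minimal sketch of one Mixer block under the same PyTorch-style assumptions: a token-mixing MLP applied across patches, then a channel-mixing MLP applied within each patch; the patch count, channel width, and hidden sizes are illustrative, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class MixerBlock(nn.Module):
    """One MLP-Mixer block: token-mixing across patches, channel-mixing per patch."""
    def __init__(self, num_patches, channels, tokens_hidden=64, channels_hidden=256):
        super().__init__()
        self.norm1 = nn.LayerNorm(channels)
        self.token_mlp = nn.Sequential(      # mixes spatial information across patches
            nn.Linear(num_patches, tokens_hidden), nn.GELU(),
            nn.Linear(tokens_hidden, num_patches))
        self.norm2 = nn.LayerNorm(channels)
        self.channel_mlp = nn.Sequential(    # mixes per-location features within each patch
            nn.Linear(channels, channels_hidden), nn.GELU(),
            nn.Linear(channels_hidden, channels))

    def forward(self, x):                    # x: (batch, patches, channels)
        y = self.norm1(x).transpose(1, 2)    # (batch, channels, patches) for token mixing
        x = x + self.token_mlp(y).transpose(1, 2)
        return x + self.channel_mlp(self.norm2(x))

x = torch.randn(2, 196, 512)                 # e.g. 14x14 patches, 512 channels per patch
print(MixerBlock(num_patches=196, channels=512)(x).shape)  # torch.Size([2, 196, 512])
```

The transpose is the whole trick: the same Linear layer machinery mixes across the patch dimension in one step and across the channel dimension in the other, with no convolutions or attention involved.
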