A Neural Algorithm of Artistic Style
In fine art, especially painting, humans have mastered the skill to create unique visual experiences through composing a complex interplay between the content and style of an image. Thus far the algorithmic basis of this process is unknown and there exists no artificial system with similar capabilities. However, in other key areas of visual perception such as object and face recognition near-human performance was recently demonstrated by a class of biologically inspired vision models called Deep Neural Networks. Here we introduce an artificial system based on a Deep Neural Network that creates artistic images of high perceptual quality. The system uses neural representations to separate and recombine content and style of arbitrary images, providing a neural algorithm for the creation of artistic images. Moreover, in light of the striking similarities between performance-optimised artificial neural networks and biological vision, our work offers a path forward to an algorithmic understanding of how humans create and perceive artistic imagery.
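The published method behind this abstract (Gatys et al.) represents content by a network layer's feature maps and style by the Gram matrices of those maps, then synthesises an image that jointly matches both. A minimal NumPy sketch of the two loss terms — the feature-map shape `(channels, height, width)` and the normalisation constants follow the paper, everything else is illustrative:

```python
import numpy as np

def gram_matrix(features):
    """Gram matrix of a feature map of shape (channels, height, width).

    Entry (i, j) is the inner product between the vectorised activations of
    channels i and j; it captures texture statistics (style) while
    discarding spatial arrangement (content).
    """
    c, h, w = features.shape
    f = features.reshape(c, h * w)
    return f @ f.T

def content_loss(gen_feats, content_feats):
    """Squared error between generated and content feature maps."""
    return 0.5 * np.sum((gen_feats - content_feats) ** 2)

def style_loss(gen_feats, style_feats):
    """Squared error between Gram matrices, normalised by channel count
    and spatial size as in the paper's per-layer style loss."""
    c, h, w = gen_feats.shape
    g_gen = gram_matrix(gen_feats)
    g_style = gram_matrix(style_feats)
    n = h * w
    return np.sum((g_gen - g_style) ** 2) / (4.0 * c ** 2 * n ** 2)
```

In the full algorithm these losses are evaluated on the activations of a pretrained convolutional network (VGG in the paper), and the generated image's pixels are optimised by gradient descent on a weighted sum of the two terms.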
Forward citations
Cited by 5 Pith papers
- WILD SAM: A Simulated-and-Real Data Augmentation for Autonomous Driving Perception under Challenging Weather
  WILD SAM combines denoised pseudo-labels from real adverse-weather images with simulation-based training to improve object detection AP by up to 13% on the Four Seasons dataset for rain and snow.
- Defining Robust Ultrasound Quality Metrics via an Ultrasound Foundation Model
  TinyUSFM-uLPIPS and TinyUSFM-NRQ provide task-linked, cross-organ, and clinically predictive quality assessment for ultrasound images that outperforms conventional metrics in calibration with segmentation performance ...
- Gram-MMD: A Texture-Aware Metric for Image Realism Assessment
  Gram-MMD is a texture-aware realism metric that computes MMD on upper-triangular Gram matrices from backbone activations, providing information complementary to semantic distributional metrics.
- VBench-2.0: Advancing Video Generation Benchmark Suite for Intrinsic Faithfulness
  VBench-2.0 is a benchmark suite that automatically evaluates video generative models on five dimensions of intrinsic faithfulness: Human Fidelity, Controllability, Creativity, Physics, and Commonsense using VLMs, LLMs...
- Lost in the Tower of Babel: The Adverse Effects of Incidental Multilingualism in LLMs
  Incidental multilingualism from uneven web training makes LLMs unequal, brittle, and opaque across languages.