Coherent Mark-based Stylization of 3D Scenes at the Compositing Stage

This article describes the mark-based stylization technique that we developed at Inria with Romain Vergne, Mohamed-Amine Farhat, Pierre Bénard, and Joëlle Thollot. This work will be presented as a full paper at the Eurographics conference and published in the Computer Graphics Forum journal. Our author version of the article can be found here. Our technique guarantees temporal coherence and operates at the compositing stage using G-Buffers as input, which eases its integration into existing pipelines. The source code is available here and requires Gratin, an open-source node-based compositing software that you can find here. For Windows users, we created an archive containing an executable that you can use right out of the box.

This post is divided into three parts. First, I will describe the main principle behind our technique, then I will explain how to compile and use the application, and finally I will detail how it works in the code.

Main principle

In the ocean of NPR techniques, we can distinguish two families when it comes to stylizing color regions: texture-based and mark-based approaches. The former embeds marks in an image that is then mapped or patched either onto the 3D scene or over the entire screen. Here are some references to previous research works falling into that category. More concretely, this kind of technique is quite common in 3D software like Maya or Blender. The latter consists in attaching 2D or 3D marks (brush strokes) to anchor points distributed over the 3D scene.

Our method belongs to the family of mark-based methods but borrows ideas from texture-based ones. It consists in dynamically generating a motion-coherent distribution of 3D anchor points that are drawn as billboard brush strokes. To ensure temporal continuity as well as motion coherence at the compositing stage, the anchor point generation is based on an implicit Voronoi noise computed from the G-Buffers. This allows our technique to operate regardless of the geometric representation or animation technique. A minimal sketch of this cell-noise building block is given below.
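To make the idea of an implicit Voronoi noise concrete, here is a small C++ sketch written from the description above rather than taken from our shaders; the names (`hash3`, `cellFeaturePoint`) and the hashing constants are illustrative choices, not our exact implementation. The key property is that each 3D position read from the G-Buffers deterministically maps to the jittered feature point of the virtual cell containing it, so every pixel covering the same cell agrees on the same candidate anchor, whatever the camera or object motion.

```cpp
#include <cmath>
#include <cstdint>

struct Vec3 { float x, y, z; };

// Simple integer hash mapped to [0,1), used to jitter one coordinate of a
// cell's feature point deterministically from the cell index. The constants
// are arbitrary mixing primes, not values from the paper.
static float hash3(int ix, int iy, int iz, std::uint32_t seed) {
    std::uint32_t h = seed;
    h ^= std::uint32_t(ix) * 0x8da6b343u;
    h ^= std::uint32_t(iy) * 0xd8163841u;
    h ^= std::uint32_t(iz) * 0xcb1ab31fu;
    h = (h ^ (h >> 13)) * 0x9e3779b9u;
    return float(h & 0xffffffu) / float(0x1000000u);
}

// Given a world-space position read from the G-Buffer and a noise frequency,
// return the feature point of the virtual Voronoi/Worley cell containing it.
// All pixels whose positions fall in the same cell get the same feature
// point, which is what keeps the anchor distribution coherent under motion.
Vec3 cellFeaturePoint(const Vec3& p, float freq) {
    const int ix = int(std::floor(p.x * freq));
    const int iy = int(std::floor(p.y * freq));
    const int iz = int(std::floor(p.z * freq));
    return { (ix + hash3(ix, iy, iz, 1u)) / freq,
             (iy + hash3(ix, iy, iz, 2u)) / freq,
             (iz + hash3(ix, iy, iz, 3u)) / freq };
}
```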

Anchor Point Generation

Our anchor point generation algorithm is twofold: a first process generates anchor points around the objects based on a 3D Worley noise that takes the 3D positions stored in the G-Buffers as input. Then, we group together all the pixels that generated the same anchor point and compute its final position in screen space as the average of the screen-space positions of these pixels.

To ensure a quasi-constant density of points, especially when zooming in or out, we introduce a fractalization process that generates several layers of points. More precisely, at each pixel of the position G-Buffer, we generate several points using the same process as above, but with a different frequency for each layer. For each layer, the sampling frequency is related to the gradient of the position at the considered pixel: when the gradient is low, meaning that the surface is flat, we increase the sampling frequency, and we decrease it otherwise. The intuition behind this process is that, at a given frequency of the Worley noise, more points are generated where the position varies quickly, the surface of the object intersecting more virtual 3D cells than where it is flat; increasing the frequency on flat regions compensates for this and keeps the point density roughly constant. A sketch of this per-layer process is given below.
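The following C++ sketch illustrates one layer of this process on the CPU (the actual technique runs on the GPU at the compositing stage). It reuses `Vec3` and `cellFeaturePoint` from the snippet above; the remaining names (`anchorsForLayer`, `layerFrequency`) and the 1/(1+g) frequency falloff are illustrative assumptions, not our exact implementation.

```cpp
#include <cmath>
#include <unordered_map>
#include <vector>
// Assumes Vec3 and cellFeaturePoint from the previous snippet.

struct Vec2 { float x, y; };

// Cell index used to group all pixels whose G-Buffer position falls in the
// same Worley cell: they all generate the same anchor point.
struct CellKey {
    int x, y, z;
    bool operator==(const CellKey& o) const {
        return x == o.x && y == o.y && z == o.z;
    }
};
struct CellKeyHash {
    std::size_t operator()(const CellKey& k) const {
        return std::size_t(k.x) * 73856093u
             ^ std::size_t(k.y) * 19349663u
             ^ std::size_t(k.z) * 83492791u;
    }
};

// Fractalization idea: modulate a layer's base frequency by the magnitude of
// the position gradient at a pixel. Flat regions (low gradient) intersect
// fewer virtual cells, so they get a higher frequency; the exact falloff
// here is an illustrative choice.
float layerFrequency(float baseFreq, float gradMag) {
    return baseFreq / (1.f + gradMag);
}

// One layer of anchor points: every pixel votes for its cell, and each
// anchor lands at the average screen-space position of the pixels that
// generated it.
std::vector<Vec2> anchorsForLayer(const std::vector<Vec3>& positions, // G-Buffer
                                  const std::vector<Vec2>& screenPos, // pixel coords
                                  float freq) {
    struct Accum { float sx = 0.f, sy = 0.f; int n = 0; };
    std::unordered_map<CellKey, Accum, CellKeyHash> cells;
    for (std::size_t i = 0; i < positions.size(); ++i) {
        CellKey k{ int(std::floor(positions[i].x * freq)),
                   int(std::floor(positions[i].y * freq)),
                   int(std::floor(positions[i].z * freq)) };
        Accum& a = cells[k];
        a.sx += screenPos[i].x;
        a.sy += screenPos[i].y;
        ++a.n;
    }
    std::vector<Vec2> anchors;
    anchors.reserve(cells.size());
    for (const auto& kv : cells) {
        const Accum& a = kv.second;
        anchors.push_back({ a.sx / a.n, a.sy / a.n });
    }
    return anchors;
}
```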

Let me show you the difference between our anchor point generation process with and without the fractalization:

Improving Temporal Continuity

Until this stage, we focused on the spatial distribution of the anchor points.

Using the application

Understanding the code
