# Creating carved surfaces using three.js

How I achieved the 2D-to-3D textured surface effect in Ceramics, my first generative art NFT project.

Welcome to the first deep-dive technical blog post on how I created Ceramics, my first NFT generative art project.

A little ask before we begin: I’m going to be very candid about the technical aspects of this project, far more than you’d be able to decipher from studying the source code. You’re welcome to build on these techniques for your own projects, but please develop them enough that they don’t resemble Ceramics. Copycat projects aren’t cool, and I’m sharing this information in the good faith that readers will learn and understand rather than copy. Onwards!

## Technical foundations

My initial concept for Ceramics was to have a flat surface that was somehow carved into, but I had no idea how I’d practically achieve that 3D effect with code.

Vertex shaders seemed like an obvious area of exploration because they’re specifically for constructing 3D surfaces, but then I’d have to define the depth of the whole surface using a mathematical formula and I couldn’t get close to the undulating and overlapping strokes I had in mind. I’m sure there’s a way to create this project with shaders (Camille Roux’s genuary sketches come pretty close!) but it’s not where my skillset lies.

The second idea was to use three.js. I’d used it before on Hexatope and I love how easy it makes realistic-looking physical materials. My plan was to create a series of tubes and use boolean operations to carve them out of a solid box; unfortunately, three.js doesn’t have constructive solid geometry built in, and the recommended three-csg library performed terribly as the number of tubes grew, so this was a no-go as well.

My next idea felt like a long shot from the start — drawing the strokes on a 2D canvas and mapping each pixel’s colour to the z-axis of a point in a mesh. I assumed the performance of this method would be dreadful because calculating the normal (perpendicular vector) of each vertex and lighting/shading it is a lot for the GPU to handle, but it turned out to be reasonable enough for a static image (a couple of seconds for a 1000px square output).

The Canvas API is my happy place, so it was a revelation to discover this method: it means I can have complete control of the composition in JavaScript and fairly easily turn it into a 3D scene.

Here’s some example code of how this method works:

```js
import * as THREE from 'three'

const width = 1000
const height = 1000
const maxDepth = 100

// we need the depth data for one unit wider than the output image
const columns = width + 1
const rows = height + 1

// set up renderer
const renderer = new THREE.WebGLRenderer()
renderer.setSize(width, height)
document.body.appendChild(renderer.domElement)
const scene = new THREE.Scene()
const camera = new THREE.OrthographicCamera(
  -width / 2,
  width / 2,
  height / 2,
  -height / 2,
  0,
  1000
)
camera.position.set(0, 0, 500)

// create canvas
const canvas = document.createElement('canvas')
canvas.width = columns
canvas.height = rows
const c = canvas.getContext('2d')

// draw whatever you'd like on the canvas
c.fillStyle = 'black'
c.fillRect(0, 0, columns, rows)
c.filter = 'blur(100px)'
c.fillStyle = 'white'
c.beginPath()
c.arc(width / 2, width / 2, width / 2, 0, Math.PI * 2)
c.fill()

// construct plane and set the z-axis of each vertex to the pixel's depth
const plane = new THREE.PlaneGeometry(width, height, width, height)
const depthData = c.getImageData(0, 0, columns, rows).data
const positionAttribute = plane.getAttribute('position')
for (let i = 0, count = positionAttribute.count; i < count; i++) {
  // the depthData is an array of RGBA values
  // we're taking the red channel which has the values 0-255
  positionAttribute.setZ(i, (depthData[i * 4] / 255) * maxDepth)
}

// compute the normals of each vertex based on the triangles they're
// connected to, this makes the lighting reflect accurately
positionAttribute.needsUpdate = true
plane.computeVertexNormals()

// add plane to the scene and render
const material = new THREE.MeshNormalMaterial()
const mesh = new THREE.Mesh(plane, material)
scene.add(mesh)
renderer.render(scene, camera)
```

## Drawing smooth strokes

Now that I’d figured out how to make the 3D scene, I needed to work out what to draw on the canvas. I decided to use a modified flow field to guide the shape and pattern of the strokes (which I’ll cover in detail in my next article), but I needed to draw them on the canvas in a way that would translate well into 3D.

Here are the different methods I worked through to figure this out, along with interactive demos.

### Low-opacity circles

At regular points along the stroke, draw a circle with a low opacity. The circles blend together to form a smooth curve.

```js
c.globalAlpha = 0.05
points.forEach(([x, y, thickness]) => {
  c.beginPath()
  c.arc(x, y, thickness / 2, 0, Math.PI * 2)
  c.fill()
})
```

This is a very cheap method of making smooth strokes and is effective when the points are close together but there is no way to control the profile of the stroke – the edge of the stroke is very sharp when the circles are close enough to look smooth.
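As an aside, the saturation behaviour of this method is easy to reason about: each low-opacity circle composites over the previous ones with source-over alpha, so coverage climbs exponentially towards full white. A quick sketch of the math, no canvas required (the function name is mine):

```javascript
// opacity accumulated after n overlapping fills at a given alpha,
// following the standard source-over compositing formula
const coverageAfter = (n, alpha) => 1 - (1 - alpha) ** n
```

At an alpha of 0.05, about 45 overlaps reach 90% coverage and 90 overlaps pass 99%, which is why the centre of a dense stroke pins to full depth so quickly and the profile towards the edge is hard to shape.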

### Low-opacity lines

Connect the points of the stroke on a path and draw progressively thicker low-opacity lines.

```js
// use lighten blend mode so the lightest colour always wins
// the middle of the stroke wouldn't be white if we used alpha instead
c.globalCompositeOperation = 'lighten'

const path = new Path2D()
path.moveTo(points[0][0], points[0][1])
points.slice(1).forEach(([x, y]) => {
  path.lineTo(x, y)
})

for (let t = 0; t < 1; t += 1 / (steps + 0.5)) {
  c.strokeStyle = `rgb(${255 * t}, ${255 * t}, ${255 * t})`
  c.lineWidth = thickness * (1 - t)
  c.stroke(path)
}
```

Again this is a very performant method of drawing the strokes and the profile of the stroke can be controlled by changing the opacity or width of each step. However, the stroke can’t have a variable width and we can’t control the shape of the start and end of each stroke.

### Radial gradients

This method also uses circles along the line, but each is filled with a radial gradient.

```js
c.globalCompositeOperation = 'lighten'

// radial gradient from white at the centre to black at the edge,
// defined at a fixed 100px radius and scaled per point
const gradient = c.createRadialGradient(0, 0, 0, 0, 0, 100)
gradient.addColorStop(0, 'white')
gradient.addColorStop(1, 'black')
c.fillStyle = gradient

points.forEach(([x, y, thickness]) => {
  c.save()
  c.translate(x, y)
  // the gradient is a finite size so we use scale to change its size
  c.scale(thickness / 200, thickness / 200)
  c.beginPath()
  c.arc(0, 0, 100, 0, Math.PI * 2)
  c.fill()
  c.restore()
})
```

This works well when the points are close together but when they’re spaced out the middle of each gradient becomes visible and makes the stroke look ridged. With this method we can control the depth and profile of the stroke by manipulating the gradient.

### Linear gradient segments

The next method I tried was a lot more technically complex. For each point I calculated the perpendicular points at the edges of the stroke, then drew overlapping polygons filled with a linear gradient running perpendicular to the stroke.

```js
const points = inputPoints.map((point, i) => {
  const [x, y, thickness] = point

  // get the angle by averaging the angle of the previous and next point
  const a = i > 0 ? inputPoints[i - 1] : point
  const b = i < inputPoints.length - 1 ? inputPoints[i + 1] : point
  const angle = Math.atan2(b[1] - a[1], b[0] - a[0])

  const left = {
    x: x + Math.cos(angle + Math.PI / 2) * thickness * 0.5,
    y: y + Math.sin(angle + Math.PI / 2) * thickness * 0.5,
  }

  const right = {
    x: x + Math.cos(angle - Math.PI / 2) * thickness * 0.5,
    y: y + Math.sin(angle - Math.PI / 2) * thickness * 0.5,
  }

  return { x, y, thickness, angle, left, right }
})

c.globalCompositeOperation = 'lighten'

// linear gradient perpendicular to the stroke: black at the edges,
// white along the centre line, defined at a fixed 100px size
const gradient = c.createLinearGradient(0, -100, 0, 100)
gradient.addColorStop(0, 'black')
gradient.addColorStop(0.5, 'white')
gradient.addColorStop(1, 'black')
c.fillStyle = gradient

points
  .slice(1, points.length - 1)
  .forEach(({ x, y, thickness, angle, left, right }, i) => {
    const prev = points[i]
    const next = points[i + 2]

    const distLeftPrev = Math.hypot(left.x - prev.left.x, left.y - prev.left.y)
    const distRightPrev = Math.hypot(right.x - prev.right.x, right.y - prev.right.y)
    const distLeftNext = Math.hypot(left.x - next.left.x, left.y - next.left.y)
    const distRightNext = Math.hypot(right.x - next.right.x, right.y - next.right.y)

    c.save()

    // move to the centre of the segment
    c.translate(x, y)
    c.rotate(angle)

    // draw an approximate rectangle covering the area of the segment
    // because we've already translated and rotated (which we need for
    // the gradient to work) we can't use the actual point vectors
    // unless we also translate and rotate them, so for simplicity in
    // this demo I've chosen to approximate it instead
    c.beginPath()
    c.moveTo(-distLeftPrev, thickness)
    c.lineTo(distLeftNext, thickness)
    c.lineTo(distRightNext, -thickness)
    c.lineTo(-distRightPrev, -thickness)
    c.closePath()

    // scaling here resizes the gradient to match the stroke thickness
    c.scale(thickness / 100, thickness / 100)
    c.fill()

    c.restore()
  })
```

Although the code for this method is longer and it uses more processing power to draw, the points can be quite far apart and still create a smooth stroke.

I was considering going with the radial gradient method until I had the idea of a special grooved tool, which would only be possible using linear gradients. My ultimate solution used these segments of linear gradients, but with a lot more math to position and scale the gradients and to ramp the depth down at the start and end of each stroke.
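I’m not going to reproduce the production ramp math here, but the idea can be sketched with a hypothetical taper function; the names, the linear shape, and the 0.15 ramp length below are illustrative rather than what Ceramics actually uses:

```javascript
// hypothetical taper: scale the stroke thickness down linearly near
// both ends; t is progress along the stroke (0-1) and ramp is the
// fraction of the stroke spent ramping at each end
const taper = (t, ramp) => Math.min(1, t / ramp, (1 - t) / ramp)

const thicknessAt = (t, baseThickness, ramp = 0.15) =>
  baseThickness * taper(t, ramp)
```

The same factor can scale the gradient depth as well as the width, so a stroke eases out of the surface rather than ending in a cliff.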

## Tool profiles

The advantage of using gradients is that I could tweak the gradient to change the profile of the stroke. I used custom cubic beziers to control the gradient and make a range of tool shapes. You can experiment with the tool beziers in this demo and see how they affect the quality of the 3D surface.

```ts
import CubicBezier from '@thednp/bezier-easing'

function toolGradient(
  c: CanvasRenderingContext2D,
  bezier: [number, number, number, number]
) {
  const easing = new CubicBezier(...bezier)
  const lightness: number[] = []
  for (let t = 0; t <= 1; t += 0.02) {
    lightness.push(easing._at(t))
  }

  // mirror the eased values so the profile ramps in from both edges
  const gradient = c.createLinearGradient(0, -100, 0, 100)
  lightness.forEach((l, i) => {
    gradient.addColorStop(
      i / (lightness.length - 1) / 2,
      `rgb(${l * 255}, ${l * 255}, ${l * 255})`
    )
    gradient.addColorStop(
      1 - i / (lightness.length - 1) / 2,
      `rgb(${l * 255}, ${l * 255}, ${l * 255})`
    )
  })
  c.fillStyle = gradient
}
```

The grooved tool uses a repeating linear gradient and a fixed depth; the number of grooves depends on the thickness of the stroke.

```ts
import CubicBezier from '@thednp/bezier-easing'

function groovedGradient(
  c: CanvasRenderingContext2D,
  grooves: number
) {
  const easing = new CubicBezier(0.1, 0, 0.6, 1)

  const lightness: number[] = []
  for (var t = 0; t <= 1; t += 0.02) {
    lightness.push(easing._at(t))
  }

  // repeat the mirrored easing curve once per groove across the gradient
  const gradient = c.createLinearGradient(0, -100, 0, 100)
  lightness.forEach((l, i) => {
    for (let grooveI = 0; grooveI < grooves; grooveI++) {
      const through = i / (lightness.length - 1) / 2 / grooves
      gradient.addColorStop(
        grooveI / grooves + through,
        `rgb(${l * 255}, ${l * 255}, ${l * 255})`
      )
      gradient.addColorStop(
        (grooveI + 1) / grooves - through,
        `rgb(${l * 255}, ${l * 255}, ${l * 255})`
      )
    }
  })
  c.fillStyle = gradient
}
```

## The range problem

This was all looking very promising, but I discovered an issue when I exported at high resolution: on big flat strokes you could see ridges in the render. Since we’re mapping canvas pixel colours to depth, there can only be 256 distinct depths (each colour channel ranges from 0 to 255, and we’re only using the red channel), so when a gradient is shallow, contours appear between adjacent depths and are thrown into relief by the dramatic lighting.
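To make the problem concrete: the wider the screen area a depth ramp covers, the wider each of its 256 contour bands becomes. A trivial sketch, with illustrative numbers:

```javascript
// width in output pixels of each visible depth contour, for a ramp
// spanning `spanPx` pixels through all of its depth levels
const stepWidthPx = (spanPx, levels = 256) => spanPx / levels
```

A gentle ramp spread over 2000px leaves each band nearly 8px wide, which raking light picks out easily, while a steep 200px ramp keeps every band under a pixel.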

I thought this would be an easy fix: if I’m only using the red channel, couldn’t I also use the green and blue channels and triple the range from 256 to 768? The trouble is that going from black to white increments every channel at the same rate, so adding the channels together would just go 0, 3, 6, and so on: three times the range but still only 256 distinct steps.
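This is easy to check without a canvas. Here’s a minimal sketch, assuming the browser renders the gradient as roughly a rounded per-channel ramp (the function name is my own):

```javascript
// count distinct summed depths in a simulated black → target gradient
const distinctDepths = ([r, g, b], samples = 200001) => {
  const depths = new Set()
  for (let i = 0; i < samples; i++) {
    const t = i / (samples - 1)
    depths.add(Math.round(r * t) + Math.round(g * t) + Math.round(b * t))
  }
  return depths.size
}
```

With identical channels the sum steps in lockstep and only ever takes 256 values, while an off-white target like `rgb(253, 254, 255)` roughly triples the count because the channels increment at slightly different points.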

I had an idea that maybe I could offset the channels by layering differently coloured gradients, but I quickly shied away from what a headache that would be 😅 Instead, what if I used a colour that wasn’t white, so each channel would increment at a different rate?

Using `rgb(253, 254, 255)` gives a range of 0-762, which is great, but the steps aren’t created equal: at the start and end of the gradient the channel increments are almost in sync, so the steps are still visible in the areas where we’re most likely to see them.

To figure out the ideal colour I made a CodePen which draws a 10,000-pixel-wide gradient and counts the number of distinct steps, as well as how ‘wide’ each step is. My goal was to find a colour with a large range but also relatively evenly-spaced steps, especially at the start and end of the gradient, so the pen also calculates the standard deviation of the step widths, and I tried a few random colours.
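The CodePen itself isn’t reproduced here, but the measurement it performs can be sketched along the same lines: simulate the gradient, group pixels by their summed depth, and report the number of steps plus how uneven their widths are. This is a sketch under the same rounded-per-channel-ramp assumption, with names of my own choosing:

```javascript
// simulate a black → target gradient `pixels` wide, then measure the
// run length (width in pixels) of each distinct summed depth
const stepStats = ([r, g, b], pixels = 10000) => {
  const widths = []
  let prev = -1
  for (let i = 0; i < pixels; i++) {
    const t = i / (pixels - 1)
    const depth = Math.round(r * t) + Math.round(g * t) + Math.round(b * t)
    if (depth !== prev) widths.push(0)
    widths[widths.length - 1]++
    prev = depth
  }
  const mean = pixels / widths.length
  const stdDev = Math.sqrt(
    widths.reduce((sum, w) => sum + (w - mean) ** 2, 0) / widths.length
  )
  return { steps: widths.length, stdDev }
}
```

A standard deviation that is small relative to the mean step width means the contours are evenly spread; comparing candidate colours with a function like this reproduces the search described above.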

I landed on `rgb(50, 250, 255)`, a pleasant aqua. I can’t muster up a defence for why I picked that exact colour, but it has a good spread of steps without sacrificing too much range, and massively reduced the visibility of the steps.

## Final thoughts

I hope you’ve enjoyed this peek into the technical setup behind Ceramics. Working on this project has stretched my problem-solving skills more than anything else I’ve ever done, and it’s wonderful to be able to share some of that process with you!

I’ll be posting more deep dives on the project in the following weeks, and a drop release date announcement is imminent. Follow me on Twitter to stay in the loop.

## Comments

1. ### Varun Vachhar replied 7 months ago:

So good! 👏🏽 Also, I must have missed this one earlier. But it's a fascinating technique. You're essentially creating dynamic bump maps. charlottedann.com/article/cerami…

2. ### Kelly Milligan replied 7 months ago:

Great write up! Love the 2D to 3D toggles. Excited to see these arrive 💪

3. ### Matt McDonnell replied 7 months ago:

Great article and an inspiring project!

4. ### Eric De Giuli (EDG) replied 7 months ago:

Cool! Looking forward to the release. Btw: going from one channel to 3 channels actually increases your range to 256 × 256 × 256, way more than adding them up. Just need to make the float as `c.r + 256 * c.g + 256 * 256 * c.b` etc.