The promise and difficulty of AI-enhanced editing
Several years ago, an executive from Skylum (the makers of the Luminar editing software) told me that the company was aggressively hiring machine learning programmers as part of a campaign to equip Luminar with AI functionality. It was my first insight into the importance of using AI to stand out from other photo editing apps. Now Skylum has just released Luminar Neo, the latest incarnation of its AI-based editor.
One of the new features I most wanted to explore is “Relight AI”, which is emblematic of what AI technologies can do for photo editing. Imagine being able to adjust the lighting in a scene based on elements identified by the software, add light to foreground objects, and control the depth of the adjustment as if the image were rendered in 3D.
To be frank, I’m only focusing on the Relight AI feature, not Luminar Neo as a whole. The app was only recently released, and in my experience so far it still has some rough edges and is missing some basic features.
Much of the photo editing we do involves relighting, from adjusting the overall exposure of an image to dodging and burning specific areas to make them more or less prominent.
But one of the main characteristics of AI-based tools is the ability to analyze a photo and determine what is depicted in it. When the software knows what is in an image, it can act on this knowledge.
If a person is detected in the foreground, but they are in shadow, you can increase the exposure on them to make it look like a strobe or reflector is shining on them. Usually we do this with selective painting, circular or linear gradients, or by making complex selections. These methods are often time-consuming or the effects are too general.
For example, the following photo is not only underexposed, but the tones of the foreground and the background are quite similar; I want more light on the foreground subjects to create separation from the background.
So I can start with the obvious: make the people brighter. In many applications, one option is to paint an exposure adjustment over them. In Luminar Neo, the way to do this is to use the “Develop” tool to increase the Exposure value, then use the “Mask” feature so the change applies only to the subjects.
Another option would be to apply a linear gradient that brightens the bottom half of the image and blends into the top, but the ground on the left side of the frame, which is clearly behind the family, would also be brightened.
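To make that limitation concrete, here’s a minimal sketch of what a linear-gradient exposure adjustment does. This is an illustration of the general technique, not Luminar’s implementation, and the function and variable names are my own: each pixel is brightened by a weight that depends only on its row, so anything near the bottom of the frame gets brighter regardless of its actual distance from the camera.

```python
import numpy as np

def linear_gradient_brighten(image, stops=1.0):
    """Brighten by `stops` at the bottom row, fading to no change at the top.

    `image` is a float array in [0, 1] with shape (height, width, channels).
    """
    h = image.shape[0]
    # Per-row weights: 0.0 at the top row, 1.0 at the bottom row.
    weights = np.linspace(0.0, 1.0, h).reshape(h, 1, 1)
    gain = 2.0 ** (stops * weights)  # exposure expressed in photographic stops
    return np.clip(image * gain, 0.0, 1.0)

image = np.full((4, 4, 3), 0.25)  # a flat, mid-dark test image
result = linear_gradient_brighten(image, stops=1.0)
# Top row stays at 0.25; bottom row doubles to 0.5.
```

The top row is untouched and the bottom row gets the full one-stop boost, which is exactly why distant ground that happens to sit low in the frame gets brightened along with the subject.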
Ideally, you want to be the art director asking for the foreground to be brighter and letting the software figure it out.
How Relight AI works
The Relight AI tool lets you control the brightness of areas close to the camera and areas far from the camera, and also lets you extend the depth of the effect. In our example, increasing the “Near Brightness” slider does indeed brighten the family and the railing, and even adjusts the background a bit to smooth the transition between what Luminar Neo has determined to be the foreground and the background.
The photo is already much closer to what I wanted, and I moved a single slider. I can also lower the “Far Brightness” slider to make the whole background recede. The “Depth” control balances the other two values (I’ll come back to Depth soon).
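As a rough mental model of how the Near Brightness, Far Brightness, and Depth controls might interact, here’s a hedged sketch in NumPy. Luminar’s internals aren’t public, so the per-pixel depth map, the linear falloff, and every name below are assumptions for illustration only: the idea is simply that each pixel’s exposure adjustment blends the near and far values according to its estimated depth, with the Depth slider deciding where "near" ends.

```python
import numpy as np

def relight(image, depth_map, near_stops=0.0, far_stops=0.0, depth_split=0.5):
    """Blend near/far exposure adjustments across an estimated depth map.

    `depth_map` is per-pixel, 0.0 = nearest to the camera, 1.0 = farthest.
    `depth_split` is a stand-in for the Depth slider: where the near
    adjustment has fully faded out.
    """
    # Linear falloff: weight 1.0 at depth 0, reaching 0.0 at depth_split.
    near_weight = np.clip(1.0 - depth_map / depth_split, 0.0, 1.0)
    stops = near_stops * near_weight + far_stops * (1.0 - near_weight)
    gain = (2.0 ** stops)[..., np.newaxis]  # per-pixel exposure gain
    return np.clip(image * gain, 0.0, 1.0)

image = np.full((2, 2, 3), 0.25)          # flat test image
depth = np.array([[0.0, 0.0], [1.0, 1.0]])  # top row near, bottom row far
out = relight(image, depth, near_stops=1.0, far_stops=-1.0, depth_split=0.5)
# Near pixels gain a stop (0.25 -> 0.5); far pixels lose one (0.25 -> 0.125).
```

Under this model, shrinking `depth_split` pulls the brightening closer to the camera, which matches the behavior described above: nearby bushes stay lit while buildings farther back fall toward the Far Brightness value.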
Depending on how the effect is applied, the “Dehalo” control under Advanced Settings can smooth the transition around foreground elements, such as people’s hair. You can also make the near and far areas warmer or cooler using the “Warmth” sliders.
What about photos without people?
OK, photos with people are important, but they’re also low-hanging fruit for the AI. Humans get special treatment because a person detected in the foreground is often going to be the subject of the photo. What if an image doesn’t include a person?
In this next example, I want to keep the color of the sky and the silhouettes of the building but lighten the foreground. I’m going to crank the brightness up to 100 to exaggerate the effect so we can get an idea of where Luminar identifies objects.
We see that the plants in the immediate foreground are lit, as well as the main building. Luminar masked out the sky in the background to the left of the building and did not touch the farthest building on the right. Relight AI clearly detects prominent shapes.
When I reduce the Depth value, the closest bushes are still lit up but the buildings remain in shadow. Increasing the Depth adds an unnatural halo to the main building, but the side building still holds up well.
So overall, Relight AI is not bad. In these two images, it achieved its main goal: letting me quickly and easily adjust near and far brightness.
Where it struggles
This is where I insert a big disclaimer that applies to all photos edited using artificial intelligence tools: the quality of the effect depends a lot on the images themselves and on what the software can detect in them.
In this photo of trees, the software doesn’t really know what it’s looking at. The bushes and groups of trees on the right and left are about the same distance from the camera, and the rest of the trees recede from there. I would expect those side and foreground trees to be lit, with the forest getting darker the farther it is from the lens.
However, when I make dramatic changes to the near and far brightness controls, Relight AI falls back on top-to-bottom gradients, because in many photos the foreground is at the bottom and the background is in the middle and upper areas. It looks like the prominent trees on the right and left have been partially recognized, as they don’t get as dark as the others, but again the effect doesn’t work here.
Sometimes with people, the tool applies the near brightness value to them and sticks to it, even when you adjust the depth parameter. For example, in this photo of a person in a field of sunflowers, darkening the background and brightening the foreground balances the image better, picking up the leaves and sunflowers closest to the camera.
When I set Depth to a low value so that the light appears very close to the camera, the flower on the left (the closest object) darkens, but the lighting on the person remains the same. The tool assumes that a person will be the main subject, regardless of the perceived depth in the image.
Another limitation of the tool is the inability to adjust the mask created by the AI. You can edit a mask for the tool’s overall effect, much as we did when manually painting earlier, but that only affects where in the image the tool’s processing will be visible. You cannot go in and help the AI identify which areas are at which depths. (This also ties into the argument I made in a previous column about not knowing what an AI tool will detect.)
Light up in the future
Luminar Neo’s Relight AI feature is bold, and when it works well, it can produce great results with very little effort, which is the point. Computational photography will continue to advance, and object recognition will certainly improve in the future.
And it’s also important to realize that it’s just a tool. A realistic workflow would involve using features like this, then augmenting them as needed with other tools, like dodging and burning, to get the result you’re looking for.