Writing an X11 Color Picker With GFX

20 Apr 2022 · MIT · 15 min read
Implement a fancy color picker in your ESP32 WROVER or WROOM IoT applications
Learn how to use some of the latest features of GFX to implement an efficient color picker in your projects. In this project, we will use an ESP32 WROVER and an ILI9341 display with touch (provided by an XPT2046 controller).

ESP32 Color Picker


This article almost didn't get written. I came to the hasty conclusion that nobody needs a color picker for IoT, and then stumbled over my own use case for one.

On top of that, this article is an opportunity to explore some of the latest GFX features like user-level dynamic batching and HSV color model support.

In the end, I decided it deserved the full treatment; despite developing it more as a curiosity than anything, it has earned its own article.


GFX is a full-featured graphics library for little widgets. It fills a gap left by other offerings with respect to middleweight IoT devices like the ESP32 and the Atmel SAMD gadgets by providing higher level features like JPEG and TrueType support that these slightly more powerful MCUs can successfully take advantage of.

The ESP32 WROVER is a little 32-bit dual core MCU that can operate at up to 240MHz. It has 512kB of SRAM - about 300kB of that is available to the user. On top of that, it has no less than 4MB of PSRAM attached to an internal 80MHz SPI bus, yielding as much as one fetch every 4 clock cycles, or thereabouts, which is actually fairly reasonable.

It's a bit more than we need for this application, but they are a popular alternative to the less powerful Arduino offerings. It should be noted that this code can easily be ported to an ESP32 WROOM (which lacks the extra 4MB of RAM) by embedding the desired TTF font as a header file instead of loading it into PSRAM. You can use the fontgen tool that ships with GFX to create the header. The other option is to use the small 20+kB font file that ships with the project. That can be loaded into SRAM.

GFX isn't limited to the Arduino framework, but the Arduino framework comes with a lot more device support, and faster SPI communication to TFT devices than the ESP-IDF can currently provide, which is why the Arduino framework with the ESP32 has been my focus.

We'll be using PlatformIO by way of VS Code as our development IDE. GFX will work with some other environments, but with PlatformIO, it's plug and play.

Note: Before running this project the first time, be sure to Upload Filesystem Image (under tasks) in order to put the files under /data on SPIFFS.

Understanding this Mess

In Action

I've provided a brief YouTube video here to give you a better idea of what we're building.

High Level Concepts

To make the color picker easier for humans to use, and easier for us to render, we'll be dealing with the HSV color model instead of RGB. Most of the heavy lifting in terms of what colors to draw where, like the hue bar and the gradient, is handled simply by using HSV instead of RGB.

We'll be rendering our hue selector as a simple horizontal bar along the bottom of the touch display. Most of the display will contain a dual-axis gradient with Saturation being the Y value and Value being the X value.

Rendering the hue bar is simply a matter of increasing the hue channel of a color from 0% to 100% as we move from left to right.

Rendering the gradient for a given hue value means we must render successively increasing saturation and value channels as we move along the Y and X axes, respectively.

Getting the name of a color is a little trickier, but other than a lot of boilerplate nonsense for the 140 different named X11 colors, there's not a lot of head scratching involved. We just take our color and use the palette's nearest() function to match the nearest palette color, which gives us the X11 palette index that we feed into a string table of names.

Making It Perform

As always, rendering text is the lion's share of the work for this little monster. TrueType is not easy for these machines. To speed things along somewhat, and to keep things flexible, our fonts are stored as TTF and OTF files on the SPIFFS partition. Rather than try to use them directly from SPIFFS, which would be terribly slow, we copy the file into PSRAM on startup. Then, when we need to render, we just reconstitute the font from our buffer, which is virtually instantaneous. If we were using a WROOM, we'd have to embed the font file as a header and render from that.

The other thing that can take a lot of time is rendering the gradient. In this application, it's 44,800 pixels in total. It requires 7 SPI transactions to draw a pixel to an ILI9341. There has to be a better way.

One thing we could have done in the past is create a temporary 320x140 bitmap, draw to that, and then write it to the display all at once. That would have worked, but it's a lot of effort. It also requires that you have that much memory available, and if you don't, it can't fall back to a middle ground.

With the most recent version of GFX, you now have access to user-level batching. It allows you to set up a rectangular window and then write pixels to that window in order from top to bottom, left to right, without specifying the coordinates for each pixel. Not only will it use the bitmap technique above for you, but it will fall back to driver-level batching if there isn't enough memory available and the driver supports it, which the ILI9341 happens to. It sounds complicated, but using it is very simple. We'll get to that.

Coding this Mess


We've wired both the display and the touch controller to the same SPI bus. We've used MOSI pin 23, MISO pin 19, and SCLK pin 18. For the LCD CS line, it's pin 5. For the touch CS line, it's pin 15. For the LCD, the DC line is pin 2, RST is 4, and BL/LED is 14. The touch IRQ line is not connected.


On the software end, let's start with the platformio.ini for this project:

[env:node32s]
platform = espressif32
board = node32s
framework = arduino
monitor_speed = 115200
upload_speed = 921600
; the ILI9341 driver (which pulls in GFX) and the XPT2046 touch driver
lib_deps = 
	codewitch-honey-crisis/htcw_ili9341
	codewitch-honey-crisis/htcw_xpt2046
lib_ldf_mode = deep
; GFX requires C++14 or better
build_unflags = -std=gnu++11
build_flags = -std=gnu++14
	; enable access to the PSRAM
	-DBOARD_HAS_PSRAM
	-mfix-esp32-psram-cache-issue

This prepares us for a generic ESP32 WROVER devkit attached to an ILI9341 display with an XPT2046 touch controller on it. It includes my driver for the ILI9341, which also pulls in GFX, and the library I wrote for the touch driver. lib_ldf_mode = deep keeps PlatformIO from getting confused about which dependencies GFX relies on.

The build flags are there because GFX requires C++14 or better to compile, while the Arduino framework environment typically uses GNU C++11. node32s is just a good "generic" board setting for any ESP32 (excepting the S2/S3/C3 lines). The PSRAM lines are necessary in order to enable access to the 4MB of PSRAM.


Now we get to the meat.

// Arduino ESP32 headers
#include <Arduino.h>
#include <SPIFFS.h>
// bus framework header
#include <tft_io.hpp>
// driver header
#include <ili9341.hpp>
// touch header
#include <xpt2046.hpp>
// gfx for C++14 header
#include <gfx_cpp14.hpp>
// our x11 stuff
#include "x11_palette.hpp"
#include "x11_names.hpp"
// import the namespace for the drivers
using namespace arduino;
// and for GFX
using namespace gfx;

// both devices share the SPI bus:
#define HOST VSPI

// wiring is as follows for the touch and display
// MOSI 23
// MISO 19
// SCLK 18
// VCC 3.3v
// see below for additional pins:
#define LCD_CS 5
#define LCD_DC 2
#define LCD_RST 4
#define LCD_BL 14

#define TOUCH_CS 15
// you may need to change this to 1 if your screen is upside down
#define LCD_ROTATION 3
// if you don't see any backlight, or any display 
// try changing this to false
#define LCD_BL_HIGH true

// use the default pins for the SPI bus
using bus_t = tft_spi<HOST,LCD_CS>;
// set up the display
using lcd_t = ili9341<LCD_DC,LCD_RST,LCD_BL,bus_t,LCD_ROTATION,LCD_BL_HIGH>;
// set up the touch driver
using touch_t = xpt2046<TOUCH_CS>;

Pretty much everything here is self-explanatory until the last several lines.

GFX drivers typically use my htcw_tft_io decoupled bus library for better performance and the ability to be agnostic about the actual nature of the bus (whether it's I2C, SPI or parallel, for example). The ILI9341 driver is no exception. Given that, we declare a TFT SPI bus type for it to use by way of the tft_spi<> line, passing in the HOST and the CS line for the attached LCD, in this case the ILI9341.

Finally, we can declare our ILI9341 driver with the various pins and settings it needs, all of which are fed in by way of preprocessor macros.

The last thing is to declare the touch driver, using its CS line. Note that we didn't use a tft_spi<> bus declaration for this driver. Not all drivers use the TFT IO framework, and for the most part, that is limited to displays. Since the bus framework isn't controlling this device's CS line, the driver handles it itself, necessitating that we pass the pin in when we declare it.

You'll note that this code heavily favors templates. It's how GFX-oriented code tends to operate, and it comes with a number of advantages. GFX and the drivers make heavy use of "template instance statics" (I'm not sure what the official name for the concept is): the statics associated with a template class are per instantiation, meaning that if I declare two different ILI9341 device types because I have two displays hooked up to my ESP32, any statics they have will be separate from each other, but still static relative to each instantiation. It's because of this that you can drive multiple displays of any kind at once using this setup, versus the more traditional way of doing things that Adafruit_GFX and TFT_eSPI use.

Anyway, now that we've gotten some of the boilerplate code out of the way, let's move on:

// declare the display
lcd_t lcd;
// declare the touch. The touch takes a reference to an SPIClass
// However, devices that share a bus must share the same instance.
// Always retrieving the SPIClass from spi_container ensures the same
// instance for the same host (requires the htcw_tft_io lib)
// Since the touch and the LCD share a bus, we want to use
// the same instance. spi_container<HOST>::instance() retrieves that
// in a cross platform manner.
touch_t touch(spi_container<HOST>::instance());

The lcd declaration is trivial. The expression passed to the touch constructor could use some explaining. The comments cover it, but I'll reiterate here. Disappointingly, there is no cross-platform way to get an SPI instance for a given "SPI host," even though many devices that run the Arduino framework have multiple hosts. Furthermore, most, if not all, of these platforms require that any devices that share an SPI bus also share an instance of an SPIClass, and therein lies the rub. There's no easy way to retrieve it unless you already know your platform, and even then, with platforms like the ESP32, you still need to hang on to a single global instance of an SPIClass for each of the hosts you need. It's a mess. My htcw_tft_io library contains a solution. spi_container is a template that takes a numeric, zero-based host as an argument and returns a single shared instance of an SPIClass that drives that host. There's an i2c_container template that serves a similar purpose.

Here, we use it to get the same instance of the SPIClass that the tft_spi<> declaration is using internally.

It should be noted that I typically declare my devices as global to my code, because physically they are globally accessible within the circuit, so I like to make the drivers follow suit.

The next two lines just give us a convenient way to access some X11 colors and our X11 color palette which we'll explore later:

// easy access to the X11 colors for our display
using color_t = color<typename lcd_t::pixel_type>;
// easy access to our X11 palette mapped to 24-bit RGB
using x11_t = x11_palette<rgb_pixel<24>>;

The color<> pseudo-enumeration presents 140 named X11 colors in any given color model and pixel format you give it. If you want the color "old lace" as a 24-bit RGB pixel, you'd use color<rgb_pixel<24>>::old_lace. Here we're passing lcd_t::pixel_type in order to use the same pixel format used by the ILI9341 (16-bit RGB).

The palette is a 140 color palette with one entry for each X11 color. In this case, the palette maps each X11 color to a 24-bit pixel with the RGB color model. As I said before, we're primarily operating in HSV in this application, but when computing color distance, you'll get more expected results if you use RGB than if you use HSV.

// you can try one of the other fonts if you like.
const char* font_path =  "/Ubuntu.ttf"; 
//"/Telegrama.otf"; // "/Bungee.otf";
const char* font_name = font_path+1;
uint8_t* font_buffer;
size_t font_buffer_len;

I've shipped three fonts with the project, which include the two that are commented out. You can also download more fonts from the web.

Anyway, these are some globals we use for the font: the file path, the name (just the file without the leading /), a buffer in PSRAM to hold the font, and the length of the buffer.

// holds the currently selected hue value
float current_hue;

Real (floating point) channel values for a pixel are always scaled in the range of 0 to 1, with 0 being 0% and 1 being 100%. It's simply easier to deal with HSV in this manner, so we do. The current_hue value is the H channel for the HSV pixel we're selecting. It is selected using the hue bar at the very bottom of the screen.

Next up is the calibrate() function which is used to present a calibration screen to the user since these cheapo touch displays need to be calibrated before they can be used. This routine optionally writes the calibration data to SPIFFS so you can load it later rather than having to calibrate every time. Let's explore it now:

// calibrates the screen, optionally writing the calibration file to SPIFFS
void calibrate(bool write=true) {
  touch.initialize();
  File file;
  if(write) {
    file = SPIFFS.open("/calibration","wb");
  }
  int16_t values[8];
  uint16_t x,y;
  // teardrop geometry - the circle bounds and the corner square
  srect16 sr(0,0,15,15);
  ssize16 ssr(8,8);
  // reconstitute our font stream from PSRAM
  const_buffer_stream cbs(font_buffer,font_buffer_len);
  open_font fnt;
  // attempt to open the font (already checked in setup)
  open_font::open(&cbs,&fnt);
  float scale = fnt.scale(30);
  const char* text = "Touch the corners\nas indicated";
  ssize16 fsz = fnt.measure_text({32767,32767},{0,0},text,scale).inflate(2,2);
  srect16 tr = fsz.bounds().center((srect16)lcd.bounds());
  // ... fill the screen and draw the instructions at tr
  // top left - draw the teardrop, then wait for a touch:
  while(!touch.calibrate_touch(&x,&y)) delay(1);
  values[0]=(int16_t)x; values[1]=(int16_t)y;
  if(write) {
    // ... write x and y to the file
  }
  delay(1000); // debounce
  // ... repeated for the remaining three corners, then:
  if(write) {
    file.close();
  }
  // calibrate with the collected data
  touch.calibrate(lcd.dimensions().width,lcd.dimensions().height,values);
}

In the interest of brevity, I've omitted some of the repetitive code above. The first thing we do is initialize the touch driver. We don't strictly have to since it initializes on first use, but I just feel better when I do.

Next we open the file if write was specified. Then we declare our calibration point array values which contains two int16_t entries for each x,y coordinate of a corner, specified in clockwise order starting from the top left.

After that, we fill the screen, get a font from our buffer, and write some instructions to the center of the screen at a font height of 30 pixels. Then we draw little teardrops one at a time, one on each corner, waiting for you to touch them, recording the device points retrieved by touch.calibrate_touch(), and then erasing the teardrop and drawing the next corner, until each value is both stored in the array and, if specified, written to the file.

When it's done, we pass the values array to touch.calibrate() to calibrate the screen with the data.

The teardrops are just a circle with a square drawn in one of the corners, overlapping it. It's really simple.

Next we have a function that reads from the calibration file in SPIFFS if it's present, and calibrates the device using those values. It's the exact file we wrote earlier, and we calibrate the display similarly with the data we read from the file instead of prompting for it.

// read the calibration from SPIFFS
bool read_calibration() {
  if(SPIFFS.exists("/calibration")) {
    File file = SPIFFS.open("/calibration","rb");
    int16_t values[8];
    uint16_t x,y;
    for(int i = 0;i<8;i+=2) {
      if(2!=file.readBytes((char*)&x,2)) { file.close(); return false; }
      if(2!=file.readBytes((char*)&y,2)) { file.close(); return false; }
      values[i]=(int16_t)x;
      values[i+1]=(int16_t)y;
    }
    file.close();
    return touch.calibrate(lcd.dimensions().width,lcd.dimensions().height,values);
  }
  return false;
}

Now we get to some of the actual main application graphics, finally:

// draw a 90deg linear gradient from HSV(0%,100%,100%) to HSV(100%,100%,100%)
void draw_hue_bar(rect16 rect) {
  // the width of each hue step - works out to 1 for our display
  // int w = (float)rect.width()/...; (elided)
  for(int x = rect.left();x <= rect.right(); ++x) {
    // scale x to a hue between 0 and 1; S and V stay at 100%
    hsv_pixel<24> px(true,(((float)(x-rect.left()))/(rect.width()-1)),1,1);
    // draw a vertical slice of this hue at x
    draw::line(lcd,srect16(x,rect.top(),x,rect.bottom()),px);
  }
}

This is actually really simple. The most complicated part is getting the width (w) of each hue value. For our display w should wind up being 1.

We loop from the left side of the rectangle to the right. Note that we go from left() to right() rather than x1 to x2, because the rectangle may be flipped horizontally. For the hue, we scale x to a value between 0 and 1 and feed that to the hue channel of our pixel. Note that the pixel's constructor takes four arguments. The first is a dummy boolean value that must be passed when you're specifying real (scaled) numbers. Otherwise, the constructor will expect integer values that aren't scaled. The leading boolean disambiguates the overload.

Next, we have the routine to draw the actual selected color, and the nearest matching X11 color next to it.

// draw the color match bar (exact and nearest x11 color)
void draw_color(hsv_pixel<24> color) {
  x11_t pal;
  typename x11_t::pixel_type px;
  typename x11_t::mapped_pixel_type cpx;
  // ... draw "color" itself, then convert it to RGB (cpx), match it
  // to the nearest palette entry (px) with pal.nearest(), and draw
  // that next to it (elided)
}

What we're doing here is drawing the selected color first. Then we convert the color to RGB and, using the palette we declared earlier, match it to the closest palette color, which in most cases, including this one, uses the Euclidean (Cartesian) distance algorithm to determine which entry most closely matches. We do it in RGB space in order to avoid some less than desirable results we'd get doing so in HSV.

Drawing the name of the color is relatively straightforward. The first part is a little like above, because we map the color to the nearest X11 color in the palette.

// draw the name of the color
void draw_color_name(hsv_pixel<24> color) {
  x11_t pal;
  typename x11_t::pixel_type ipx;
  typename x11_t::mapped_pixel_type cpx;
  // convert "color" to RGB (cpx) and match it to the nearest palette
  // entry, giving us an indexed pixel:
  convert(color,&cpx);
  pal.nearest(cpx,&ipx);
  // the indexed pixel's channel is the index into the name table
  const char* name = x11_names[ipx.template channel<0>()];
  // reconstitute our font stream from PSRAM
  const_buffer_stream cbs(font_buffer,font_buffer_len);
  open_font fnt;
  // attempt to open the font (already checked in setup)
  open_font::open(&cbs,&fnt);
  float scale = fnt.scale(30);
  ssize16 fsz = fnt.measure_text({32767,32767},{0,0},name,scale).inflate(2,2);
  srect16 tr = fsz.bounds().center({0,160,319,208});
  // ... erase the old name's background, then draw "name" at tr (elided)
}

What we do with it next, though, is get an indexed pixel out of the palette and use its index as a lookup into a 140-entry string array full of color names.

Once we have that, it's simply a matter of reconstituting our font from the buffer, measuring the text with it, and then drawing the background followed by the text itself.

Now let's get to the part where the gradient is drawn, as there's an important technique therein:

void draw_frame(float hue) {
  // draw a linear gradient on the HSV axis, where H is fixed at "hue"
  // and S and V are along the Y and X axes, respectively
  hsv_pixel<24> px(true,hue,1,1);
  auto px2 = px;
  // batching is the fastest way
  auto b = draw::batch(lcd,srect16(0,0,319,139));
  for(int y = 0;y<140;++y) {
    px2.template channelr<channel_name::S>(((double)y)/139.0);
    for(int x = 0;x<320;++x) {
      px2.template channelr<channel_name::V>(((double)x)/319.0);
      // write the pixel to the batch (no coordinates needed)
      b.write(px2);
    }
  }
  // commit what we wrote
  b.commit();
  // draw the color bar
  draw_color(px);
  // draw the color name
  draw_color_name(px);
}

What we're doing here primarily is drawing the gradient. We start by creating an HSV pixel at the specified hue. Then we prepare for a batch write using draw::batch<>(), giving it the target rectangle.

As we move along the Y and X axes, we adjust the S and V channels of px2, writing it out to the batch each time.

When we're finally done, we commit the batch before drawing the color bar and name.

Using batching is typically orders of magnitude faster than if we had just used draw::point<>(). How fast depends on the end capabilities of the display controller and how much free SRAM you have.

And now, the good old Arduino setup() function:

void setup() {
  Serial.begin(115200);
  SPIFFS.begin(false);
  touch.initialize();
  File file = SPIFFS.open(font_path,"rb");
  if(!file)  {
    Serial.printf("Asset %s not found. Halting.",font_name);
    while(true) delay(1000);
  }
  // get the file length
  file.seek(0,fs::SeekMode::SeekEnd);
  size_t len = file.position();
  file.seek(0);
  if(len==0) {
    Serial.printf("Asset %s not found. Halting.",font_name);
    while(true) delay(1000);
  }
  // allocate the buffer
  font_buffer = (uint8_t*)ps_malloc(len);
  if(!font_buffer)  {
    Serial.printf("Unable to allocate PSRAM for asset %s. Halting.",font_name);
    while(true) delay(1000);
  }
  // copy the file into the buffer
  file.readBytes((char*)font_buffer,len);
  // don't need the file anymore
  file.close();
  font_buffer_len = len;
  // test the font to make sure it's good (avoiding checks later)
  // first wrap the buffer w/ a stream
  const_buffer_stream cbs(font_buffer,font_buffer_len);
  open_font fnt;
  // attempt to open the font
  gfx_result r=open_font::open(&cbs,&fnt);
  if(r!=gfx_result::success) {
    Serial.printf("Unable to load asset %s. Halting.",font_name);
    while(true) delay(1000);
  }
  // calibrate if we don't have stored calibration data
  if(!read_calibration() || !touch.calibrated()) {
    calibrate(true);
  }
  current_hue = 0;
  // draw the hue bar at the bottom
  draw_hue_bar(rect16(0,210,319,239));
  // draw the initial frame 
  draw_frame(current_hue);
}

Astute readers may notice that in some cases, touch can be initialized more than once. This doesn't hurt anything since it won't reinitialize if already initialized. These initializations are also not necessary but they helped me with my testing and I just left them in. All of my device drivers automatically initialize on first use unless doing so is somehow impossible.

Most of the routine is code to copy the font into PSRAM and load it to make sure it's valid.

After that, we calibrate if necessary, and then set the current hue, draw the hue bar, and draw the frame with the gradient.

And now, loop():

void loop() {
  uint16_t x=0,y=0;
  // touched?
  if(touch.calibrated_xy(&x,&y)) {
    // hue bar?
    if(y>=210) {
      current_hue = ((double)x)/319.0;
      // redraw the whole frame with the new hue
      draw_frame(current_hue);
    } else if(y<140) { // gradient area
      double s = ((double)y)/139.0;
      double v = ((double)x)/319.0;
      // get our HSV pixel
      hsv_pixel<24> px(true,current_hue,s,v);
      // update the screen with it
      draw_color(px);
      draw_color_name(px);
    }
  }
}

This routine is almost trivial. We simply poll for a touch event, and if we've got one, we determine where along the Y axis the touch landed. If it's less than 140, it's the gradient; if it's greater than or equal to 210, it's the hue bar. In the first case, we recompute the color and redraw that portion of the screen. In the latter, we recompute the hue and redraw the whole frame.


Okay, I cheated. I generated most of the palette header, x11_palette.hpp, along with the names header, using a tool I wrote in C# that scrapes System.Drawing.Color for all of the X11 colors and names. The palette provides two functions. One simply returns a color given an indexed pixel. That routine is huge and autogenerated.

The other routine is just boilerplate for doing a nearest color lookup. It compares the distance of each color in the palette with the comparand and finds the one that's closest.


That's it! Hopefully, this gives you some ideas on how to better use GFX in your own projects. Remember, you can keep yourself abreast of the latest documentation and source at the GFX repository.


  • 20th April, 2022 - Initial submission


This article, along with any associated source code and files, is licensed under The MIT License

Written By
United States
Just a shiny lil monster. Casts spells in C++. Mostly harmless.
