March 14, 2021

How to create an Electron App to deploy TensorFlow.js Models

Deploying deep learning models is a tedious task, mainly because models are hard to export due to their size and complexity. TensorFlow offers several tools to deploy models; one of them is TensorFlow.js, which allows us to run models on a web page using JavaScript or on a server using Node.js.

On the other hand, we have Electron, which is used to build native applications for Windows, macOS, and Linux. Electron uses JavaScript and HTML to create these applications, so we can combine TensorFlow.js with Electron to deploy our deep learning models.

In this blog post we will use Electron to create an app that runs the Line Art Colorization Model from this blog post. This model takes several images as input to colorize a line art image.

Creating applications with Electron is relatively easy: we need at least two files, one JavaScript file and one HTML file. The JavaScript file is the Main Process, which is in charge of creating the app window and handling the communication between Renderer Processes. The HTML file is a Renderer Process, and we can have multiple renderer processes. Each HTML file can load more JavaScript files to add dynamism to the app or to enable communication between the main process and the renderer process.

Electron

Let's start coding the app; you can find the source code in this repository.

index.js

const console = require('console');
const { app, dialog, ipcMain, BrowserWindow } = require('electron');
const fs = require("fs");
const path = require("path");

let mainWindow;
let workerWindow;

function createWindow() {
  mainWindow = new BrowserWindow({
    height: 800,
    width: 1200,
    webPreferences: {
      nodeIntegration: true
    }
  });

  mainWindow.loadFile('src/views/index.html');

  mainWindow.on('closed', () => {
    mainWindow = null;
  });
}

function createWorker() {
  // hidden worker
  workerWindow = new BrowserWindow({
    show: false,
    // height: 800,
    // width: 1200,
    webPreferences: {
      nodeIntegration: true,
      enableRemoteModule: true
    }
  });

  workerWindow.loadFile('src/views/worker.html');

  workerWindow.on('closed', () => {
    workerWindow = null;
  });

  // workerWindow.webContents.openDevTools();

  console.log("worker created");
}

app.on('ready', () => {
  createWindow();
  createWorker();
});

app.on('window-all-closed', () => {
  if (process.platform !== 'darwin') {
    app.quit();
  }
});

app.on('activate', () => {
  if (mainWindow === null) {
    createWindow();
  }

  if (workerWindow == null) {
    createWorker();
  }
});

index.js will be our main process, where we import Electron and create two windows, the mainWindow and the workerWindow. The main window renders the web page src/views/index.html and shows the application to the user.

You may notice some commented code in the previous snippet. The worker window won't be shown, since we don't really need to display it; we only need it to create a new thread. The downside is that if the worker's code throws an error, nothing will alert us about it, so during development I recommend always showing the worker window using:

show: true,

so you can open the dev tools and check for errors.

The main window will create the main thread to run the application. However, we are going to load a TensorFlow.js model and use it to colorize a line art image, which could take some time. Consequently, we cannot block the main thread, or our application will become unresponsive. This is why we need the worker window: it will be in charge of loading and running the model in a different thread. Thus, the worker window will also create another renderer process.

To communicate between threads and renderer processes, Electron includes the ipcMain and ipcRenderer modules:

ipcMain.on('get-directories', (event, arg) => {
    dialog.showOpenDialog({
      properties: ['openDirectory']
    }).then( async (data) => {

      // Bail out if the user cancelled the dialog or selected nothing
      if (!Array.isArray(data.filePaths) || data.filePaths.length === 0) {
          return;
      }

      const imagesPath = data.filePaths[0];

      const imagesNames = fs.readdirSync(imagesPath);

      let imagesPaths = [];

      imagesNames.forEach(imageName => {
        const fullPath = path.join(imagesPath, imageName);
        
        // Keep only regular files, skipping hidden files such as .DS_Store
        if (fs.lstatSync(fullPath).isFile() && !imageName.startsWith(".")) {
          imagesPaths.push(fullPath);
        }
      });

      const name = arg.name;

      // The line art and distance folders need 3 images; the color folder needs 2
      if (name == "color") {
        if (imagesPaths.length != 2) {
          return;
        }
      } else if (imagesPaths.length != 3) {
        return;
      }

      let mainPath = path.dirname(imagesPath);

      workerWindow.webContents.send('send-model-paths', {mainPath, imagesPaths, name});

      const pathsDone = true;

      event.sender.send('send-paths', {pathsDone, imagesPaths, name});

    }).catch(err => {
      console.log(err);
    })
});

ipcMain.on('run-model', async (event, arg) => {
  const indexRadio = arg.indexRadio;

  workerWindow.webContents.send('run-model', {indexRadio});
});

ipcMain.on('first-model-done', (event, arg) => {
  const firstFinalImagePath = arg.firstFinalImagePath;
  mainWindow.webContents.send('first-model-done', {firstFinalImagePath});
});

ipcMain.on('second-model-done', (event, arg) => {
  const secondFinalImagePath = arg.secondFinalImagePath;
  mainWindow.webContents.send('second-model-done', {secondFinalImagePath});
});

ipcMain.on('final-model-done', (event, arg) => {
  const finalImagePath = arg.finalImagePath;
  mainWindow.webContents.send('final-model-done', {finalImagePath});
});

All these functions also live in the index.js file. If you have used events in Node.js or JavaScript before, these functions work in a similar way: ipcMain listens for different events. For example, get-directories opens a dialog window so the user can select a folder to load the images from.

We cannot communicate between renderer processes directly; we first need to send the information to the main process, which will then forward it to the required renderer process.

To send information from the Main Process to a renderer process we can use the following function:

workerWindow.webContents.send('send-model-paths', {mainPath, imagesPaths, name});

This function sends the image paths to the worker renderer process.

We can also send information using:

event.sender.send('send-paths', {pathsDone, imagesPaths, name});

In this case, we can only use this function inside an event handler like get-directories; the renderer process that triggered the event will then receive the send-paths event.

To listen for and trigger events from the renderer process, we need a JavaScript file:

front_js.js

const { ipcRenderer } = require('electron');

const lineArtFolderButton = document.getElementById('line-art-folder');
const distanceFolderButton = document.getElementById('distance-folder');
const colorFolderButton = document.getElementById('color-folder');

lineArtFolderButton.addEventListener('click', async () => {
    const name = "lineArt";
    ipcRenderer.send('get-directories', {name});
});

distanceFolderButton.addEventListener('click', async () => {
    const name = "distance";
    ipcRenderer.send('get-directories', {name});
});

colorFolderButton.addEventListener('click', async () => {
    const name = "color";
    ipcRenderer.send('get-directories', {name});
});

const lineFirst = document.getElementById('line-art-1');
const lineSecond = document.getElementById('line-art-2');
const lineThird = document.getElementById('line-art-3');

const distanceFirst = document.getElementById('distance-1');
const distanceSecond = document.getElementById('distance-2');
const distanceThird = document.getElementById('distance-3');

const colorFirst = document.getElementById('color-1');
const colorSecond = document.getElementById('color-3');

const runModelButton = document.getElementById('run-model');
runModelButton.disabled = true;

const imgsElements = {"lineArt": [lineFirst, lineSecond, lineThird],
                      "distance": [distanceFirst, distanceSecond, distanceThird],
                      "color": [colorFirst, colorSecond]}

function renderImgs(imagesPaths, imgsElement) {
    for (const [index, imgE] of imgsElement.entries()) {
        imgE.src = imagesPaths[index];
    }
}

ipcRenderer.on('send-paths', async (event, arg) => {
    const imagesPaths = arg.imagesPaths;
    const pathsDone = arg.pathsDone;
    renderImgs(imagesPaths, imgsElements[arg.name]);

    if (pathsDone) {
        runModelButton.disabled = false;
    }
});

// ipcRenderer.on('send-error', async (event, arg) => {
//     // show/animate error message
// });

const radio1 = document.getElementById('in-1');
const radio2 = document.getElementById('in-2');

const finalImage = document.getElementById('final_image');

const firstP = document.getElementById('first_p');
const secondP = document.getElementById('second_p');
const finalP = document.getElementById('final_p');

const firstImg = document.getElementById('first_img');
const secondImg = document.getElementById('second_img');
const finalImg = document.getElementById('final_img');

function checkRadioOption() {
    if (radio2.checked) {
        return 1;
    } else if (radio1.checked) {
        return 0;
    } else {
        return 2;
    }
}

runModelButton.addEventListener('click', async () => {
    // dont disable button but check if i can run the model
    // or there is an error
    firstP.classList.add("content-model-yet");
    firstImg.src = "assets/loading.gif";

    const indexRadio = checkRadioOption();

    finalImage.src = "assets/preview2.jpg";

    secondImg.src = "";
    finalImg.src = "";

    // finalImage.src = "";

    ipcRenderer.send('run-model', {indexRadio});
    runModelButton.disabled = true;
});

ipcRenderer.on('first-model-done', async (event, arg) => {
    const finalImagePath = arg.firstFinalImagePath;

    firstImg.src = "assets/done.png";
    firstP.classList.remove("content-model-yet");

    secondP.classList.add("content-model-yet");
    secondImg.src = "assets/loading.gif";

    finalImage.src = finalImagePath;
});

ipcRenderer.on('second-model-done', async (event, arg) => {
    const finalImagePath = arg.secondFinalImagePath;

    secondImg.src = "assets/done.png";
    secondP.classList.remove("content-model-yet");

    finalP.classList.add("content-model-yet");
    finalImg.src = "assets/loading.gif";

    finalImage.src = finalImagePath;
});

ipcRenderer.on('final-model-done', async (event, arg) => {
    const finalImagePath = arg.finalImagePath;

    finalImg.src = "assets/done.png";
    finalP.classList.remove("content-model-yet");

    finalImage.src = finalImagePath;

    runModelButton.disabled = false;
});

This file is the renderer process of the main window. Here we use ipcRenderer.on to listen for events and ipcRenderer.send to trigger them. Most of this code is HTML manipulation: showing and hiding images, displaying the progress of the model, and so on.

The important component to understand here is how the main process and the renderer processes communicate. The idea is that the user triggers actions from the front-end (the main window), for example loading images or running the model; the main process receives these actions as events, and if the worker window needs to know about them, the main process sends it an event in turn.

TensorFlow.js

Now that we have covered most of the Electron section, we can start looking at the model code. To make communication with the model easier, we create a new class:

model.js

const tf = require('@tensorflow/tfjs-node');

let Jimp = require('jimp');
let path = require("path");

const fs = require("fs");

async function getTensorImage(filePath, final_height, final_width) {
    return new Promise((resolve, reject) => {
        Jimp.read(filePath, (err, image) => {
            if (err) {
                reject(err);
            } else {
                const height = image.bitmap.height;
                const width = image.bitmap.width;
                const buffer = tf.buffer([1, height, width, 3], 'float32');

                image.scan(0, 0, width, height, function(x, y, index) {
                    buffer.set(image.bitmap.data[index], 0, y, x, 0);
                    buffer.set(image.bitmap.data[index + 1], 0, y, x, 1);
                    buffer.set(image.bitmap.data[index + 2], 0, y, x, 2);
                });

                resolve(tf.tidy(() => tf.image.resizeBilinear(
                    buffer.toTensor(), [final_height, final_width]).div(255)));
            }
        });
    });
}

class ColorModel {
    constructor(appPath) {
        this.first_part_model = null;
        this.second_part_model = null;
        this.third_part_model = null;

        this.color_paths = [];
        this.line_art_paths = [];
        this.distance_paths = [];

        this.models_loaded = false;

        this.main_path = [];
        this.images_ready = false;

        this.appPath = appPath;
    }

    async loadModel() {
        if (this.models_loaded == false) {
            // Model files must live in a separate folder from the js and asset files
            this.first_part_model = await tf.node.loadSavedModel(path.join(this.appPath, 'model/saved_model/first_part_model'));
            this.second_part_model = await tf.node.loadSavedModel(path.join(this.appPath, 'model/saved_model/second_part_model'));
            this.third_part_model = await tf.node.loadSavedModel(path.join(this.appPath, 'model/saved_model/third_part_model'));
            this.models_loaded = true;
            console.log("Models Loaded");
        }
    }

    savePaths(imagePaths, pathType) {
        if (pathType == "color") {
            this.color_paths = imagePaths;
        } else if (pathType == "distance") {
            this.distance_paths = imagePaths;
        } else {
            this.line_art_paths = imagePaths;
        }

        if (this.color_paths.length > 0 && this.line_art_paths.length > 0 && this.distance_paths.length > 0) {
            this.images_ready = true;
        }
    }

    async loadImages(indexRadio) {
        // To do: Choose which image will be colorized
        // Note: tf.tidy cannot wrap an async function, so we await the
        // image loading first and tidy only the synchronous slicing step.
        const [color_frame_1_full, color_frame_3_full] = await Promise.all(
            this.color_paths.map(p => getTensorImage(p, 256, 455)));

        const lineFrames = await Promise.all(
            this.line_art_paths.map(p => getTensorImage(p, 256, 455)));

        const distanceFrames = await Promise.all(
            this.distance_paths.map(p => getTensorImage(p, 256, 455)));

        const input = tf.tidy(() => {
            // Crop a 256x256 window out of each 1x256x455x3 frame
            const crop = t => t.slice([0, 0, 100, 0], [1, 256, 256, 3]);

            const color_frame_1 = crop(color_frame_1_full);
            const color_frame_3 = crop(color_frame_3_full);

            const [line_frame_1, line_frame_2, line_frame_3] = lineFrames.map(crop);
            const [distance_frame_1, distance_frame_2, distance_frame_3] = distanceFrames.map(crop);

            return [line_frame_1, distance_frame_1, color_frame_1,
                    line_frame_2, distance_frame_2, color_frame_3,
                    line_frame_3, distance_frame_3];
        });

        // Free the full-size frames; only the cropped inputs are kept
        tf.dispose([color_frame_1_full, color_frame_3_full, ...lineFrames, ...distanceFrames]);

        return input;
    }

    async predictFirstPart(indexRadio) {
        const input = await this.loadImages(indexRadio);
        
        let [res_input, style_vector, Y_trans_sim] = await this.first_part_model.predict(input);

        let first_output = await tf.node.encodeJpeg(Y_trans_sim.squeeze(0).mul(255).clipByValue(0, 255).cast('int32'), "rgb");

        const finalImagePath = `${this.main_path}/${Date.now()}_first_output.jpg`;
        fs.writeFileSync(finalImagePath, first_output);

        return [style_vector, res_input, finalImagePath];
    }

    async predictSecondPart(style_vector, res_input) {
        let [x, Y_trans_mid] = await this.second_part_model.predict([res_input, style_vector]);

        let second_output = await tf.node.encodeJpeg(Y_trans_mid.squeeze(0).mul(255).clipByValue(0, 255).cast('int32'), "rgb");

        const finalImagePath = `${this.main_path}/${Date.now()}_second_output.jpg`

        fs.writeFileSync(finalImagePath, second_output);

        return [x, finalImagePath];
    }

    async predictFinalPart(x) {
        let y_trans = await this.third_part_model.predict(x);

        let final_output = await tf.node.encodeJpeg(y_trans.squeeze(0).mul(255).clipByValue(0, 255).cast('int32'), "rgb");

        const finalImagePath = `${this.main_path}/${Date.now()}_final_output.jpg`

        fs.writeFileSync(finalImagePath, final_output);

        return finalImagePath;
    }
}

module.exports = ColorModel;

To load images and use them with the model, we can use the Jimp library, as in the getTensorImage function, to transform the images into tensors.
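As a minimal sketch of what getTensorImage does per pixel, without Jimp or TensorFlow: Jimp stores pixels as a flat RGBA buffer, so each pixel's red, green, and blue values sit at index, index + 1, and index + 2, and we normalize by 255 (rgbaToNormalizedRgb below is a hypothetical helper for illustration, not part of the app):

```javascript
// Convert a flat RGBA byte buffer into a nested array of normalized
// [r, g, b] triples, mirroring the image.scan loop in getTensorImage.
function rgbaToNormalizedRgb(data, width, height) {
  const out = [];
  for (let y = 0; y < height; y++) {
    for (let x = 0; x < width; x++) {
      const index = (y * width + x) * 4; // 4 bytes per RGBA pixel
      out.push([data[index] / 255, data[index + 1] / 255, data[index + 2] / 255]);
    }
  }
  return out;
}

// A single opaque red pixel:
// rgbaToNormalizedRgb([255, 0, 0, 255], 1, 1) → [[1, 0, 0]]
```

The alpha channel at index + 3 is simply skipped, since the model expects 3-channel inputs.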

To load the models we need to put the model files in a special folder. Electron will build the app and place this folder inside the resources folder, so we can load the model files once the app is built; the variable this.appPath helps us find the path to the model files. You should check the complete code on GitHub to see the whole app structure. To access the appPath variable, we need to use:

enableRemoteModule: true

when we create the worker window and import:

const appPath = process.resourcesPath;

You may notice that we load 3 models. The original model outputs 3 images from different parts of the network to help the gradient flow; we can take advantage of this to split the model and show each output, giving the user feedback about how the process is going. For example, the predictFirstPart function runs the first model, first_part_model, and generates an image from its output:

let first_output = await tf.node.encodeJpeg(Y_trans_sim.squeeze(0).mul(255).clipByValue(0, 255).cast('int32'), "rgb");

We return the remaining outputs and the generated image's path, so we can feed the outputs to the second model and use the image path in the main window to show the result to the user.

Finally, we have the JavaScript file for the worker window, which is in charge of communicating with the class we have just seen:

const { ipcRenderer } = require('electron');
const ColorModel = require('./model.js');

const { app } = require('electron').remote; // use main modules from the renderer process

const appPath = process.resourcesPath;
const colorModel = new ColorModel(appPath);

(async () => {
    await colorModel.loadModel();
})();

ipcRenderer.on('send-model-paths', (event, arg) => {
    console.log("from worker 1");

    const imagesPaths = arg.imagesPaths;
    const mainPath = arg.mainPath;
    const name = arg.name;

    colorModel.savePaths(imagesPaths, name);
    colorModel.main_path = mainPath;
});

ipcRenderer.on('run-model', async (event, arg) => {
    const indexRadio = arg.indexRadio;

    let [style_vector, res_input, firstFinalImagePath] = await colorModel.predictFirstPart(indexRadio);

    event.sender.send('first-model-done', {firstFinalImagePath});

    let [x, secondFinalImagePath] = await colorModel.predictSecondPart(style_vector, res_input);

    event.sender.send('second-model-done', {secondFinalImagePath});

    let finalImagePath = await colorModel.predictFinalPart(x);

    event.sender.send('final-model-done', {finalImagePath});
});

In this case we have two events. The first one receives the image paths and saves them so the model can load the images when the user clicks the run model button.

The second event starts the model execution. Notice how we execute our 3 models and send each generated image path to the main process, so it can forward them to the main window's renderer process and show the user the model's progress.
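The shape of this handler can be sketched as a generic staged pipeline, where each stage consumes the previous stage's output and reports progress after finishing (runPipeline and the dummy stages below are hypothetical illustrations, not code from the app):

```javascript
// Run stages sequentially: each stage receives the previous result, and
// onProgress fires after every stage, mirroring the three *-model-done events.
async function runPipeline(stages, onProgress) {
  let carry;
  const results = [];
  for (const stage of stages) {
    carry = await stage(carry);
    results.push(carry);
    onProgress(results.length, stages.length);
  }
  return results;
}

// Dummy stages standing in for predictFirstPart, predictSecondPart,
// and predictFinalPart:
const stages = [
  async () => 'first.jpg',
  async (prev) => prev + ' -> second.jpg',
  async (prev) => prev + ' -> final.jpg',
];
```

Because the stages run in the worker's renderer process, the main window stays responsive while each one completes.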

Build The App

To build the app, we need to add the following section to the package.json file:

  "build": {
    "mac": {
      "category": "productivity"
    },
    "files": [
      "index.js",
      "src"
    ],
    "extraResources": ["model/**/*"]
  }

The files section tells Electron which files are needed to run the app; the extraResources section copies all the model files to a separate folder, so we can locate that folder and load the models when the user runs the app.
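For the npm run dist command used below to work, package.json also needs a script entry that invokes the bundler; assuming electron-builder (which this build configuration format matches), it would look something like:

```json
  "scripts": {
    "start": "electron .",
    "dist": "electron-builder"
  }
```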

One important thing to mention is that the source code of the final app can be extracted trivially, so if the app's code contains sensitive information, or you don't want users to see the source code, you should move part of it to a backend, as Discord does. You can also write parts of the code in C or C++, compile them, and ship them as binaries in the final app.

The model files are likewise stored in the final app and can be accessed easily.

To build the final app you can run:

npm run dist

The app still needs some improvements, like warning alerts when the user selects a wrong or empty folder, and better management to know when the models are ready to be executed.
