During the initial development of PerceptiLabs Beta, we distributed our visual modeling tool as native, platform-specific executables for Windows, Mac, and Linux.
We recently switched to a new “browser-based” version that runs a local copy of our kernel on your machine. So, we thought we’d take a few minutes to discuss how this works and why we chose this architecture over traditional, platform-specific executables.
A Two-Part Architecture
Rather than provide developers with a monolithic executable, we have chosen to break our implementation into two components:
- PerceptiLabs “kernel”: a PyPI package that you install with pip. This is a cross-platform Python implementation that provides all of the functionality you will need to work with machine learning (ML) models in PerceptiLabs.
- Front end: when you are ready to build your model, you access our user interface via your web browser by navigating to: https://ml.perceptilabs.com/. The front end uses the kernel installed on your machine for its functionality. You can also access that link using the “Build a Model” button found in various parts of our website.
In a nutshell, you run `pip install perceptilabs` to install our package into your local Python environment. After installation, you start the PerceptiLabs kernel by running `perceptilabs` on the command line. Note that the kernel collects user data, but no data is collected from your dataset. Once the kernel is running, simply navigate to the front-end URL in your browser and start working on your model in PerceptiLabs.
Rationale
So why did we decide on this type of architecture to distribute PerceptiLabs?
Here at PerceptiLabs, we’re big fans of Jupyter Notebook. So when we set out to create the best experience possible, we wanted something with Jupyter Notebook’s interactive, web-based feel, while providing the best performance possible.
PerceptiLabs is inherently coupled with Python, both in its implementation and because it visually exposes the Python code for each component in an ML model. And since Python is prevalent among data scientists and ML model builders, we wanted to make things as simple as possible for you. With our architecture, most, if not all, users will already have the front end (a browser) available, and will only need to install our kernel using `pip install perceptilabs`.
Next, this architecture allows us to run our core in your existing Python environment. This means you can completely customize which Python dependencies you want to use. Prior to this architecture, we had to provide the dependencies with the download, which limited you to using only the libraries that we packaged with the tool. Now you have the freedom to integrate any Python dependency that you see fit for your project in PerceptiLabs. Within PerceptiLabs itself, it’s just a matter of using the import statement when updating code for any component in your model.
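Because the kernel runs in your own Python environment, any package you have installed with pip is importable straight from a component’s code. As a minimal sketch (the function below is a hypothetical preprocessing snippet for illustration, not PerceptiLabs API):

```python
# Hypothetical component code: since the kernel runs in your own
# environment, any installed package (numpy here) can be imported
# directly when editing a component's code.
import numpy as np

def preprocess(sample):
    # Normalize the input to the [0, 1] range -- illustrative only.
    arr = np.asarray(sample, dtype=np.float32)
    return (arr - arr.min()) / (arr.max() - arr.min())

preprocess([0, 5, 10])  # scales the values to 0.0, 0.5, 1.0
```

The point is simply that the `import numpy as np` line resolves against your environment, not against a fixed set of libraries bundled with a download.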
Also, the installation of our kernel into your Python environment brings additional benefits. If you’re using a virtual environment such as Conda, you can install multiple concurrent versions of the kernel if required. You can then start up the environment of your choice, and execute a specific version of the kernel.
Note, too, that the just-in-time tutorial provided in our original app is also built into our browser-based version. So you can get tips to get started, or prompts to assist you while you are building or training your model.
Our PerceptiLabs visual modeling tool is certainly a work in progress as we continue to develop it and bring you more great features and functionality for ML. As such, we’ll update the kernel from time to time and notify you via our community channels. Upgrades are as simple as running `pip install --upgrade perceptilabs`, and of course you have the flexibility to choose which version you want.
Based on the benefits highlighted above, we believe this architecture is the best approach, and will provide the lowest barrier to entry for most ML users. We look forward to your feedback and seeing all of the cool models that you create in PerceptiLabs.
For more information about installing our kernel, see our documentation here.