Ah 2020! From global healthcare issues to revolutions in how technology is being adopted and even repurposed, it has been quite a year.
At the end of each year, it's always fun to pause and think about machine learning (ML) trends which have seen phenomenal growth, especially around tools, resources, and accessibility to information.
As developers of the PerceptiLabs visual modeling tool, we always look at these trends and ask what's next as we continue to enhance our tool's capabilities. Doing so means looking into the crystal ball, or black box (depending on your tool of choice 😏), to see what the future holds for ML. Having recently gone through this exercise, here are our top three predictions for ML in 2021.
1. Availability of Cutting-Edge Models
In the number three spot sits the growth of available cutting-edge models. As ML becomes more widely adopted, we're seeing a parallel trend toward open access to models. One contributing factor is that large ML companies are constantly raising the bar for model performance. They're able to do this because they have large, comprehensive datasets with which to train models, backed by teams of dedicated ML practitioners.
Many small and medium-sized companies and organizations want to leverage these high-performance models but may not be able to build them from scratch. So, many are turning to transfer learning, which lets them build upon or even repurpose models that have already gone through extensive training. Conversely, many of the large enterprises that do have the resources to develop such models have recognized that they can still benefit from outside contributions to them.
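To make the mechanic behind transfer learning concrete, here's a minimal, framework-free sketch: a "pretrained" feature extractor stays frozen while only a small task-specific head is trained on the new data. The random projection below stands in for real pretrained layers (e.g., a frozen convolutional backbone), and all names and numbers are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a pretrained feature extractor: in a real workflow this
# would be something like a frozen convolutional backbone. Its weights
# are never updated below.
W_frozen = rng.normal(size=(4, 8))

def extract_features(x):
    """'Pretrained' layers, kept frozen during fine-tuning."""
    return np.tanh(x @ W_frozen)

# The new task-specific head is the only part we train.
w_head = np.zeros(8)
b_head = 0.0

def predict(x):
    return extract_features(x) @ w_head + b_head

# A tiny synthetic regression task standing in for the new domain.
X = rng.normal(size=(64, 4))
y = X[:, 0] - 2.0 * X[:, 1]

def loss(x, t):
    return float(np.mean((predict(x) - t) ** 2))

loss_before = loss(X, y)
for _ in range(500):  # plain gradient descent, updating only the head
    feats = extract_features(X)
    err = feats @ w_head + b_head - y
    w_head -= 0.1 * (feats.T @ err) / len(y)
    b_head -= 0.1 * float(err.mean())
loss_after = loss(X, y)
```

Because only the small head is trained, adapting to the new task takes a fraction of the data and compute that training the whole network would.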
Open source and public models are also used by students, hobbyists, and other such groups who are experimenting with ML, some of whom use or contribute to these models to enhance their career growth.
2. Better Supporting Tools for ML
In second place for top ML predictions in 2021 sits more comprehensive tool support for ML practitioners.
It's no longer enough to just produce a working ML model that can make fairly good predictions. Today's ML practitioners demand model interpretability: the ability to understand why predictions are being made (to peer into the proverbial black box, if you like) and then decide if a model should go into production. This is particularly important in the enterprise, where predictions are often scrutinized for societal factors including ethics, social justice, and fairness.
Model cards have become a powerful tool for model development, and we expect them to be even more commonplace in 2021. Essentially, these cards, which are more like design documents in practice, formally describe all aspects of a model. Their content can include:
- Detailed Overview: summarizes the model's purpose.
- Specifications: types of layers/neural networks, inputs, and outputs.
- Logistics: authors, date, links to additional documentation, how to cite the model, license.
- Intended Use: applicable use(s), domain constraints, etc.
- Limitations and Considerations: speed/accuracy constraints, ethical and privacy issues, potential for bias, etc.
- Training: data sources, test environments and equipment, etc.
- Target and Actual Performance Metrics: metrics such as expected versus actual accuracy.
For some great examples of model cards, check out this collection from MediaPipe.
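To give a concrete feel for the sections listed above, here is a minimal sketch of a model card. The model name and every figure below are invented purely for illustration:

```markdown
# Model Card: LeafNet (hypothetical)

## Overview
Classifies photos of houseplant leaves into 12 species.

## Specifications
- Architecture: convolutional backbone with a dense softmax head
- Input: 224x224 RGB image. Output: 12-element probability vector

## Logistics
Authors, date, license, links, and citation instructions go here.

## Intended Use
Hobbyist plant identification; not intended for agricultural diagnostics.

## Limitations and Considerations
Trained only on indoor photos; accuracy degrades outdoors. Review the
dataset for collection bias before expanding the intended use.

## Training
10,000 labeled images, 80/20 train/validation split.

## Target and Actual Performance Metrics
Target top-1 accuracy: 90%. Actual: 87.5% on the validation set.
```

Even a short card like this gives auditors and teammates a single place to check whether a deployment matches the model's intended use.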
Another key tool is visualization. The ability to visualize a model during design, training, and even in an audit is invaluable in and of itself. This is where PerceptiLabs shines as it offers both a GUI and visual API for TensorFlow.
These aspects complement model cards because team members can continually evaluate the model against what is specified on a card. For additional insight, check out An Overview of TensorFlow and how PerceptiLabs Makes it Easier.
Here at PerceptiLabs we're also looking at going beyond visualization with functionality to support interpretation. Soon we'll be adding new libraries which not only allow you to see what data is being used, but also provide insight into which parts of that data (e.g., certain regions of an image, certain columns of CSV data) have the most impact on predictions.
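Those libraries are still to come, but the underlying idea can be sketched with a classic technique, permutation importance: shuffle one input column at a time and measure how much predictions degrade. The toy "model" and data below are purely illustrative, not our implementation:

```python
import numpy as np

rng = np.random.default_rng(1)

# A toy "model" whose predictions depend strongly on column 0,
# weakly on column 1, and not at all on column 2.
def model(X):
    return 3.0 * X[:, 0] + 0.5 * X[:, 1]

X = rng.normal(size=(200, 3))
y = model(X)

def permutation_importance(predict, X, y):
    """Score each input column by how much shuffling it degrades predictions."""
    base_err = float(np.mean((predict(X) - y) ** 2))
    scores = []
    for col in range(X.shape[1]):
        X_perm = X.copy()
        X_perm[:, col] = rng.permutation(X_perm[:, col])  # break this column's link to the target
        err = float(np.mean((predict(X_perm) - y) ** 2))
        scores.append(err - base_err)  # larger increase = more impactful column
    return scores

scores = permutation_importance(model, X, y)
```

Running this, column 0 dominates the scores, column 1 registers a small effect, and column 2 scores zero, exactly the kind of insight that tells you which inputs a model actually relies on.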
3. ML at the Edge
And finally, the moment you've been waiting for, our number one ML prediction for 2021 (drum roll): ML at the edge.
We're seeing a growing trend towards inference at the edge, a segment we expect will grow substantially in 2021. There are a number of factors behind this, including the growth of IoT and a greater reliance on devices for remote work. But to appreciate this trend, it's helpful to compare and contrast inference at the edge against "cloud-backed ML", which is found in both enterprise-oriented and consumer devices (e.g., Google Mini).
Cloud-backed ML probably conjures up images of tiny devices with internet access which collect data, send that data to the cloud for inference, and in some cases receive data back on the device (e.g., to perform some action). Such a deployment is necessary in many situations (e.g., banks detecting fraud) and is well suited where longer latency isn't an issue, where third-party cloud hosting is required, and so on. That said, the growth of 5G might make latency a thing of the past.
However, edge devices are rapidly gaining the processing power necessary to perform inference at the edge. Take Coral by Google for example, which features an on-board tensor processing unit (TPU) and can handle numerous IoT use cases (e.g., analyze images and voices). With such technology packed into a small form factor, inference is now possible without the need for an internet connection and cloud back end. This setup also adds security by keeping all of the collected data on the device, an aspect which is further enhanced on air-gapped devices.
From a technical standpoint, such deployments often demand smaller ML models that can be transferred quickly and fit into the limited storage of embedded devices. A popular solution here is quantization: reducing the numeric precision of a model's weights to shrink its size. Of course, the right amount of quantization must be balanced against the inevitable reduction in accuracy. For more information, check out our Coral Sign Language Tutorial, which demonstrates how to use Full Integer Quantization during model export in PerceptiLabs to reduce weights from 32-bit floats to 8-bit fixed-point values, and how to load the model onto a Coral Dev Board.
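While PerceptiLabs handles quantization for you during export, the arithmetic behind it is straightforward. Here's a sketch of asymmetric int8 quantization in the spirit of common full-integer schemes; the specific formulas and sample weights are illustrative, not PerceptiLabs' or TensorFlow Lite's exact implementation:

```python
import numpy as np

def quantize_int8(w):
    """Asymmetric, per-tensor quantization of float weights to int8."""
    lo, hi = float(w.min()), float(w.max())
    scale = (hi - lo) / 255.0                   # 256 int8 levels span the range
    zero_point = int(round(-128 - lo / scale))  # int8 code that represents 0.0
    q = np.clip(np.round(w / scale) + zero_point, -128, 127).astype(np.int8)
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    """Recover approximate floats; the gap to the originals is quantization error."""
    return (q.astype(np.float32) - zero_point) * scale

rng = np.random.default_rng(0)
w = rng.normal(scale=0.1, size=1000).astype(np.float32)  # stand-in layer weights
q, scale, zp = quantize_int8(w)
w_restored = dequantize(q, scale, zp)
max_error = float(np.abs(w - w_restored).max())  # at most about one step (scale)
```

Storing `q` instead of `w` cuts the weight storage to a quarter of its float32 size, which is exactly the trade edge deployments make: a small, bounded loss of precision in exchange for a model that fits on the device.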
2020 was a year for the record books, and it will be remembered as a time when ingenuity overcame even the most daunting of challenges. We like to think this applies not only to global events but also to the evolution and democratization of ML tools, resources, and information.
And while we're on the topic of 2020, one of our primary predictions for THIS year was the rise of MLOps. MLOps can be adopted at different levels as described in MLOps: An "Ops" Just for Machine Learning and we've seen growth in this area as we predicted back in February.
Overall, it's been an interesting year to reflect on, and we look forward to seeing how our predictions for ML unfold in the new year.