How to choose a cloud machine learning platform

12 capabilities every cloud machine learning platform should provide to support the complete machine learning lifecycle

To create effective machine learning and deep learning models, you need copious amounts of data, a way to clean the data and perform feature engineering on it, and a way to train models on your data in a reasonable amount of time.

Then you need a way to deploy your models, monitor them for drift over time, and retrain them as needed.

You can do all of that on-premises if you have invested in compute resources and accelerators such as GPUs, but you may find that resources sized for peak training loads sit idle much of the rest of the time.

On the other hand, it can sometimes be more cost-effective to run the entire pipeline in the cloud, using large amounts of compute resources and accelerators as needed, and then releasing them.

The major cloud providers — and a number of minor clouds too — have put significant effort into building out their machine learning platforms to support the complete machine learning lifecycle, from planning a project to maintaining a model in production. How do you determine which of these clouds will meet your needs? Here are 12 capabilities every end-to-end machine learning platform should provide.

Be close to your data

If you have the large amounts of data needed to build precise models, you don’t want to ship it halfway around the world. The issue here isn’t distance per se, but time: data transmission speed is ultimately limited by the speed of light, even on a perfect network with infinite bandwidth, so long distances mean latency.

The ideal case for very large data sets is to build the model where the data already resides, so that no mass data transmission is needed. Several databases support that to a limited extent.

The next best case is for the data to be on the same high-speed network as the model-building software, which typically means within the same data centre. Even moving the data from one data centre to another within a cloud availability zone can introduce a significant delay if you have terabytes of data or more. You can mitigate this by doing incremental updates, as sketched below.
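For instance, an incremental update can be as simple as copying only the objects that have changed since the last sync. Here is a minimal sketch using boto3; the bucket names and the watermark handling are hypothetical:

```python
from datetime import datetime, timezone

import boto3

s3 = boto3.client("s3")
SRC, DEST = "example-src-bucket", "example-dest-bucket"  # hypothetical buckets

# Watermark from the previous sync; a real pipeline would persist this somewhere.
last_sync = datetime(2020, 1, 1, tzinfo=timezone.utc)

# Walk the source bucket and copy only objects modified since the watermark,
# rather than re-shipping the whole data set.
for page in s3.get_paginator("list_objects_v2").paginate(Bucket=SRC):
    for obj in page.get("Contents", []):
        if obj["LastModified"] > last_sync:
            s3.copy_object(
                Bucket=DEST,
                Key=obj["Key"],
                CopySource={"Bucket": SRC, "Key": obj["Key"]},
            )
```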

The worst case would be if you have to move big data long distances over paths with constrained bandwidth and high latency. The trans-Pacific cables going to Australia are particularly egregious in this respect.

Support an ETL or ELT pipeline

ETL (extract, transform, and load) and ELT (extract, load, and transform) are two data pipeline configurations that are common in the database world. Machine learning and deep learning amplify the need for these, especially the transform portion. ELT gives you more flexibility when your transformations need to change, as the load phase is usually the most time-consuming for big data.

In general, data in the wild is noisy and needs to be filtered. It also has widely varying ranges: one variable might have a maximum in the millions, while another might range from -0.1 to -0.001. For machine learning, variables must be transformed to standardised ranges to keep the ones with large ranges from dominating the model. Exactly which standardised range depends on the algorithm used for the model.
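As a concrete illustration, here is a minimal scaling sketch using scikit-learn, with made-up values standing in for the two ranges described above:

```python
import numpy as np
from sklearn.preprocessing import MinMaxScaler, StandardScaler

# Two made-up features with wildly different ranges: one in the millions,
# the other between -0.1 and -0.001.
X = np.array([
    [2_500_000.0, -0.004],
    [1_000_000.0, -0.090],
    [4_750_000.0, -0.032],
])

# Rescale every column into [0, 1] so neither feature dominates the model.
X_minmax = MinMaxScaler().fit_transform(X)

# Or standardise to zero mean and unit variance, a common choice for
# gradient-based algorithms.
X_standard = StandardScaler().fit_transform(X)

print(X_minmax)
print(X_standard)
```

MinMaxScaler suits algorithms that expect inputs in [0, 1], while StandardScaler’s zero-mean, unit-variance output is a common default for gradient-based methods.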

Support an online environment for model building

The conventional wisdom used to be that you should import your data to your desktop for model building. The sheer quantity of data needed to build good machine learning and deep learning models changes the picture: You can download a small sample of data to your desktop for exploratory data analysis and model building, but for production models you need to have access to the full data.

Web-based development environments such as Jupyter Notebooks, JupyterLab, and Apache Zeppelin are well suited for model building. If your data is in the same cloud as the notebook environment, you can bring the analysis to the data, minimising the time-consuming movement of data.
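For example, if the data happens to live in Amazon S3 (a stand-in here; any cloud object store works similarly), a notebook running in the same region can pull down just a sample for exploration. The path below is hypothetical, and the snippet assumes pandas with s3fs and pyarrow installed:

```python
import pandas as pd

# Hypothetical path; assumes the notebook runs in the same cloud region as
# the bucket, and that s3fs and pyarrow are installed for pandas to use.
SAMPLE_PATH = "s3://example-bucket/training-data/part-0000.parquet"

# Pull a single partition for exploratory analysis; the full data set
# never leaves the cloud.
sample = pd.read_parquet(SAMPLE_PATH)
print(sample.describe())
```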

Support scale-up and scale-out training

The compute and memory requirements of notebooks are generally minimal, except when training models. It helps greatly if a notebook can spawn training jobs that run on multiple large virtual machines or containers, and if the training can access accelerators such as GPUs, TPUs, and FPGAs, which can turn days of training into hours.
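As a small illustration of the scale-up half, here is a sketch in PyTorch that runs one training step on a GPU when the training VM has one attached; the model and data are toy placeholders:

```python
import torch
import torch.nn as nn

# Use an accelerator when the training VM has one attached; fall back to CPU.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Toy model and optimiser standing in for a real architecture.
model = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 1)).to(device)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# One training step on random data, just to show the device handoff;
# a real job would stream batches from cloud storage instead.
x = torch.randn(256, 128, device=device)
y = torch.randn(256, 1, device=device)

optimizer.zero_grad()
loss = loss_fn(model(x), y)
loss.backward()
optimizer.step()
print(f"device={device}, loss={loss.item():.4f}")
```

Scaling out is framework-specific; in PyTorch, for example, the same loop can be spread across multiple machines with torch.nn.parallel.DistributedDataParallel.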

Support AutoML and automatic feature engineering

Not everyone is good at picking machine learning models, selecting features (the variables that are used by the model), and engineering new features from the raw observations. Even if you’re good at those tasks, they are time-consuming and can be automated to a large extent.

AutoML systems often try many models to see which result in the best objective function values, for example the minimum squared error for regression problems. The best AutoML systems can also perform feature engineering, and use their resources effectively to pursue the best possible models with the best possible sets of features.
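A full AutoML system does far more, but the core search loop can be sketched in a few lines of scikit-learn: cross-validate several candidate models and keep the one with the lowest mean squared error. The candidate list and synthetic data here are arbitrary, chosen purely for illustration:

```python
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor, RandomForestRegressor
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

# Synthetic regression data standing in for a real training set.
X, y = make_regression(n_samples=500, n_features=10, noise=0.5, random_state=0)

candidates = {
    "ridge": Ridge(),
    "random_forest": RandomForestRegressor(random_state=0),
    "gradient_boosting": GradientBoostingRegressor(random_state=0),
}

# Cross-validate each candidate and keep the one with the lowest mean
# squared error (scikit-learn reports it negated, hence the sign flip).
scores = {
    name: -cross_val_score(est, X, y, cv=5,
                           scoring="neg_mean_squared_error").mean()
    for name, est in candidates.items()
}
best = min(scores, key=scores.get)
print(scores)
print(f"best model: {best}")
```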

Support the best machine learning and deep learning frameworks

Most data scientists have favourite frameworks and programming languages for machine learning and deep learning. For those who prefer Python, Scikit-learn is often a favourite for machine learning, while TensorFlow, PyTorch, Keras, and MXNet are often top picks for deep learning.

In Scala, Spark MLlib tends to be preferred for machine learning. In R, there are many native machine learning packages, and a good interface to Python. In Java, H2O.ai rates highly, as do Java-ML and Deep Java Library.

The cloud machine learning and deep learning platforms tend to have their own collection of algorithms, and they often support external frameworks in at least one language or as containers with specific entry points. In some cases you can integrate your own algorithms and statistical methods with the platform’s AutoML facilities, which is quite convenient.

Some cloud platforms also offer their own tuned versions of major deep learning frameworks. For example, AWS has an optimised version of TensorFlow that it claims can achieve nearly linear scalability for deep neural network training.
