Wednesday 29 November 2017

AWS Greengrass adds feature for Machine Learning Inference

AWS Greengrass Machine Learning (ML) Inference makes it easy to perform ML inference locally on AWS Greengrass devices using models that are built and trained in the cloud. Until now, building and training ML models and running ML inference were done almost exclusively in the cloud. Training ML models requires massive computing resources, so it is a natural fit for the cloud. With AWS Greengrass ML Inference, your AWS Greengrass devices can make smart decisions quickly as data is being generated, even when they are disconnected.

The capability simplifies each step of deploying ML, including accessing ML models, deploying models to devices, building and deploying ML frameworks, creating inference apps, and utilizing on-device accelerators such as GPUs and FPGAs. For example, you can access a deep learning model built and trained in Amazon SageMaker directly from the AWS Greengrass console and then download it to your device as part of an AWS Greengrass group. AWS Greengrass ML Inference includes a prebuilt Apache MXNet framework to install on AWS Greengrass devices, so you don't have to build the framework from scratch. The prebuilt Apache MXNet package for NVIDIA Jetson, Intel Apollo Lake, and Raspberry Pi devices can be downloaded directly from the cloud or included as part of the software in your AWS Greengrass group.
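Outside the console, attaching a SageMaker-trained model to a group can be sketched with the Greengrass API: the call below registers a training job's output as a machine learning resource that Greengrass downloads to a local path on the core device. The resource names, training-job ARN, and destination path are placeholders for illustration, not values from this announcement.

```python
import boto3

# Assumed placeholders: the training-job ARN and on-device path are illustrative only.
SAGEMAKER_JOB_ARN = 'arn:aws:sagemaker:us-east-1:123456789012:training-job/my-image-classifier'

greengrass = boto3.client('greengrass')

# Register the SageMaker model artifact as an ML resource for the Greengrass group.
# Greengrass downloads and unpacks it to DestinationPath on the core device.
response = greengrass.create_resource_definition(
    InitialVersion={
        'Resources': [
            {
                'Id': 'image-classifier-model',
                'Name': 'ImageClassifierModel',
                'ResourceDataContainer': {
                    'SageMakerMachineLearningModelResourceData': {
                        'SageMakerJobArn': SAGEMAKER_JOB_ARN,
                        'DestinationPath': '/ml/model'
                    }
                }
            }
        ]
    }
)
print(response['Arn'])
```

The resulting resource definition version is then referenced from the group version, alongside the core, Lambda functions, and subscriptions.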

AWS Greengrass ML Inference also includes prebuilt AWS Lambda templates that you can use to create an inference app quickly. The provided Lambda blueprint walks through common tasks such as loading models, importing Apache MXNet, and taking actions based on predictions.
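The blueprint itself is not reproduced in this announcement, but a minimal Greengrass Lambda function along those lines might look like the sketch below, assuming the model checkpoint has been delivered to /ml/model and the incoming MQTT message carries the input image. The checkpoint prefix, input shape, and topic name are all assumptions for illustration.

```python
import json
import greengrasssdk
import mxnet as mx
import numpy as np

# Greengrass Core SDK client used to publish prediction results.
client = greengrasssdk.client('iot-data')

# Assumed local path where the ML resource was unpacked, and checkpoint epoch.
MODEL_PREFIX = '/ml/model/image-classifier'
EPOCH = 0

# Load the symbol and trained parameters once, when the long-lived function starts.
sym, arg_params, aux_params = mx.model.load_checkpoint(MODEL_PREFIX, EPOCH)
mod = mx.mod.Module(symbol=sym, context=mx.cpu(), label_names=None)
mod.bind(for_training=False, data_shapes=[('data', (1, 3, 224, 224))])
mod.set_params(arg_params, aux_params, allow_missing=True)

def predict(image):
    """Run a single forward pass and return class probabilities."""
    batch = mx.io.DataBatch([mx.nd.array(image)])
    mod.forward(batch, is_train=False)
    return mod.get_outputs()[0].asnumpy().squeeze()

def function_handler(event, context):
    # 'image' is assumed to arrive as a nested list in the MQTT payload.
    image = np.asarray(event['image'], dtype=np.float32).reshape(1, 3, 224, 224)
    probs = predict(image)
    top_class = int(np.argmax(probs))
    # Act on the prediction by publishing it on an assumed local topic.
    client.publish(
        topic='ml/predictions',
        payload=json.dumps({'class': top_class, 'confidence': float(probs[top_class])})
    )
```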

In many applications, your ML model will perform better when you fully utilize all the hardware resources available on the device, and AWS Greengrass ML Inference helps with this. To let your application access the hardware resources on your device, you declare them as local resources in your AWS Greengrass group in the AWS Greengrass console.
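As a sketch of what that configuration can look like on a device with an NVIDIA GPU, the group's resource definition could expose the GPU device node as a local device resource and grant the inference function access to it; the device path, IDs, and Lambda ARN below are placeholders, and the same settings can be made from the console as described above.

```python
# Assumed placeholders: device path, resource ID, and Lambda function ARN are illustrative.
gpu_resource = {
    'Id': 'nvidia-gpu',
    'Name': 'NvidiaGpu',
    'ResourceDataContainer': {
        'LocalDeviceResourceData': {
            'SourcePath': '/dev/nvidia0',   # GPU device node on the Greengrass core
            'GroupOwnerSetting': {'AutoAddGroupOwner': True}
        }
    }
}

# In the group's function definition, the inference Lambda is granted access
# to that resource through its ResourceAccessPolicies.
inference_function = {
    'Id': 'inference-function',
    'FunctionArn': 'arn:aws:lambda:us-east-1:123456789012:function:gg-ml-inference:1',
    'FunctionConfiguration': {
        'Pinned': True,        # long-lived function, so the model stays loaded
        'MemorySize': 262144,  # KB
        'Timeout': 30,
        'Environment': {
            'ResourceAccessPolicies': [
                {'ResourceId': 'nvidia-gpu', 'Permission': 'rw'}
            ]
        }
    }
}
```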

To use these features, please sign up for the preview.



from What's New http://ift.tt/2AogUm4
