Appliances
A pre-configured, ready-to-use runtime environment for Udacity's Machine Learning Engineer Nanodegree program (nd009t). It includes Python 2.7, TensorFlow 1.0.0, and Keras 2.0.2. The stack also includes CUDA and cuDNN and is optimized for running on NVIDIA GPUs.
A pre-configured, ready-to-use runtime environment for Udacity's Machine Learning Engineer Nanodegree program (nd009t). It includes Python 2.7, TensorFlow 1.0.0, and Keras 2.0.2. The software stack is optimized for running on the CPU.
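A minimal sanity check that applies to either variant of this appliance, assuming only the bundled Python 2.7 interpreter; it prints the TensorFlow and Keras versions and lists the devices TensorFlow can see (the device listing relies on an internal TensorFlow helper, so treat it as a sketch rather than a guaranteed API).

```python
# Minimal environment check for the nd009t appliance (GPU or CPU variant).
from __future__ import print_function

import tensorflow as tf
import keras
from tensorflow.python.client import device_lib

print("TensorFlow:", tf.__version__)   # expected: 1.0.0
print("Keras:", keras.__version__)     # expected: 2.0.2

# On the GPU variant this list should contain a '/gpu:0' entry;
# on the CPU variant only '/cpu:0' is expected.
for device in device_lib.list_local_devices():
    print(device.name)
```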
A pre-configured, ready-to-use runtime environment for Stanford's CS224n course, Natural Language Processing with Deep Learning. It includes Python 2.7 and TensorFlow 1.4.1. The stack also includes CUDA and cuDNN and is optimized for running on NVIDIA GPUs.
A pre-configured, ready-to-use runtime environment for Stanford's CS224n course, Natural Language Processing with Deep Learning. It includes Python 2.7 and TensorFlow 1.4.1. The software stack is optimized for running on the CPU.
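A short sketch of how the TensorFlow 1.4.1 installation can be exercised on either variant, assuming nothing beyond the bundled interpreter; `log_device_placement` makes TensorFlow report whether each op lands on the GPU or the CPU.

```python
# Quick smoke test for the CS224n appliance.
from __future__ import print_function

import tensorflow as tf

print("TensorFlow:", tf.__version__)  # expected: 1.4.1

# log_device_placement=True prints which device each op is assigned to,
# so the GPU variant should report GPU placement for the matmul below.
a = tf.constant([[1.0, 2.0], [3.0, 4.0]])
b = tf.constant([[1.0, 0.0], [0.0, 1.0]])
product = tf.matmul(a, b)

with tf.Session(config=tf.ConfigProto(log_device_placement=True)) as sess:
    print(sess.run(product))
```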
A pre-configured, ready-to-use runtime environment for MIT's 6.S094 course, Deep Learning for Self-Driving Cars (2017). It includes Python 2.7, TensorFlow 0.12.1, and OpenCV 3.3.0. The stack also includes CUDA and cuDNN and is optimized for running on NVIDIA GPUs.
A pre-configured, ready-to-use runtime environment for MIT's 6.S094 course, Deep Learning for Self-Driving Cars (2017). It includes Python 2.7, TensorFlow 0.12.1, and OpenCV 3.3.0. The software stack is optimized for running on the CPU.
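A minimal check of the bundled libraries for either variant, assuming only the interpreter shipped with the appliance; it confirms the TensorFlow and OpenCV versions and runs one trivial image operation through OpenCV.

```python
# Version and import check for the 6.S094 appliance.
from __future__ import print_function

import numpy as np
import cv2
import tensorflow as tf

print("TensorFlow:", tf.__version__)  # expected: 0.12.1
print("OpenCV:", cv2.__version__)     # expected: 3.3.0

# Trivial OpenCV call: convert a synthetic BGR image to grayscale.
image = np.zeros((64, 64, 3), dtype=np.uint8)
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
print("Grayscale shape:", gray.shape)
```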
A pre-configured and fully integrated software stack with Caffe2, a lightweight, modular, and scalable deep learning framework. It provides a stable, tested execution environment for training, inference, or running as an API service, and can easily be integrated into continuous integration and deployment workflows. It is designed for both short-lived and long-running high-performance tasks and is optimized for running on NVIDIA GPUs.
A pre-configured and fully integrated software stack with Caffe2, a lightweight, modular, and scalable deep learning framework. It provides a stable, tested execution environment for training, inference, or running as an API service, and can easily be integrated into continuous integration and deployment workflows. It is designed for both short-lived and long-running high-performance tasks and is optimized for running on the CPU.
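A small sketch of running a single operator through Caffe2's Python workspace, assuming the appliance's bundled Python environment; the `has_gpu_support` flag is what the Caffe2 Python bindings commonly expose, but treat the check as an illustration rather than a guaranteed interface.

```python
# Minimal Caffe2 smoke test: feed a blob, run a ReLU operator, fetch the result.
from __future__ import print_function

import numpy as np
from caffe2.python import core, workspace

# Expected to be True on the GPU-enabled build, False on the CPU-only build.
print("GPU support:", workspace.has_gpu_support)

# Feed an input blob and run a single ReLU operator on it.
x = np.random.randn(2, 3).astype(np.float32)
workspace.FeedBlob("x", x)
workspace.RunOperatorOnce(core.CreateOperator("Relu", ["x"], ["y"]))
print("Input:\n", x)
print("ReLU output:\n", workspace.FetchBlob("y"))
```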