Appliances
A maximum-performance, one-click install solution for WordPress 4, a free and open-source content management system (CMS), running on a fully integrated, pre-configured and optimized LEMP stack with the latest release of PHP 7.
A pre-configured LEMP environment, optimized for performance, for web applications running on PHP 7. It is similar to the LAMP stack, except that Apache is replaced with the lightweight yet powerful Nginx and PHP runs in `php-fpm` mode.
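For reference, a minimal sketch of how Nginx hands PHP requests to `php-fpm` in a setup like this; the socket path, web root, and PHP version used here are assumptions and may differ on the actual appliance:

```nginx
# Minimal Nginx <-> php-fpm wiring sketch (paths are assumptions).
server {
    listen 80;
    root /var/www/html;
    index index.php index.html;

    # Forward all .php requests to the php-fpm socket instead of Apache/mod_php.
    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_pass unix:/run/php/php7.0-fpm.sock;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    }
}
```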
A pre-configured and fully integrated software stack with TensorFlow, an open-source software library for machine learning, Python 2.7, and Jupyter Notebook, a browser-based interactive notebook for programming, mathematics, and data science. The stack is designed for research and development tasks and optimized for running on NVIDIA GPUs.
A pre-configured and fully integrated software stack with TensorFlow, an open-source software library for machine learning, Python 3.6, and Jupyter Notebook, a browser-based interactive notebook for programming, mathematics, and data science. The stack is designed for research and development tasks and optimized for running on NVIDIA GPUs.
A pre-configured and fully integrated software stack with TensorFlow, an open-source software library for machine learning, Python 3.6, and Jupyter Notebook, a browser-based interactive notebook for programming, mathematics, and data science. The stack is designed for research and development tasks and optimized for running on CPU.
A pre-configured and fully integrated software stack with TensorFlow, an open-source software library for machine learning, Python 2.7, and Jupyter Notebook, a browser-based interactive notebook for programming, mathematics, and data science. The stack is designed for research and development tasks and optimized for running on CPU.
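A quick way to confirm which devices TensorFlow sees on any of the four images above; this is a minimal sketch that assumes a TensorFlow 1.x build, as typically bundled with the Python 2.7 and 3.6 stacks:

```python
# List the devices TensorFlow can use on this stack.
# Assumes a TensorFlow 1.x build; the GPU images should also report GPU devices.
from __future__ import print_function

from tensorflow.python.client import device_lib

for device in device_lib.list_local_devices():
    # Prints e.g. "/device:CPU:0 CPU" and, on the GPU images, "/device:GPU:0 GPU".
    print(device.name, device.device_type)
```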
A pre-configured and fully integrated software stack with Theano, a numerical computation library for Python, and Python 3.6. It provides a stable and tested execution environment for training, inference, or running as an API service. The stack can be easily integrated into continuous integration and deployment workflows. It is designed for short- and long-running high-performance tasks and optimized for running on NVIDIA GPUs.
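A minimal sketch for verifying that Theano compiles and runs on this stack; whether it targets the GPU depends on the image's `device` setting in `.theanorc`, which is an assumption here:

```python
# Compile and run a trivial Theano function, then report the active device.
# Assumes the image configures device=cuda (or device=gpu) in .theanorc;
# on a CPU-only setup theano.config.device will report 'cpu' instead.
import numpy
import theano
import theano.tensor as T

x = T.vector('x')
f = theano.function([x], T.exp(x))

print(f(numpy.asarray([0.0, 1.0, 2.0], dtype=theano.config.floatX)))
print(theano.config.device)
```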