How to download tp | mxnet

Author: How to download tp
2024-03-13 06:15:50

Apache MXNet | A flexible and efficient library for deep learning.


This project has retired. For details please refer to its Attic page.

APACHE MXNET: A FLEXIBLE AND EFFICIENT LIBRARY FOR DEEP LEARNING

A truly open source deep learning framework suited for flexible research prototyping and production.

Get Started

Key Features & Capabilities

All Features ›

Hybrid Front-End

A hybrid front-end seamlessly transitions between Gluon eager imperative mode and symbolic mode to provide both flexibility and speed.

Distributed Training

Scalable distributed training and performance optimization in research and production is enabled by the dual Parameter Server and Horovod support.

8 Language Bindings

Deep integration into Python and support for Scala, Julia, Clojure, Java, C++, R and Perl.

Tools & Libraries

A thriving ecosystem of tools and libraries extends MXNet and enables use cases in computer vision, NLP, time series, and more.

Ecosystem

All Projects ›

Explore a rich ecosystem of libraries, tools, and more to support development.

D2L.ai

An interactive deep learning book with code, math, and discussions. Used at Berkeley, University of Washington and more.

GluonCV

GluonCV is a computer vision toolkit with rich model zoo. From object detection to pose estimation.

GluonNLP

GluonNLP provides state-of-the-art deep learning models in NLP. For engineers and researchers to fast prototype research ideas and products.

GluonTS

Gluon Time Series (GluonTS) is the Gluon toolkit for probabilistic time series modeling, focusing on deep learning-based models.

Community

Join the Apache MXNet scientific community to contribute, learn, and get answers to your questions.

GitHub

Report bugs, request features, discuss issues, and more.

Discuss Forum

Browse and join discussions on deep learning with MXNet and Gluon.

Slack

Discuss advanced topics. Request access by emailing dev@mxnet.apache.org

Resources

Mailing lists

Developer Wiki

Jira Tracker

Github Roadmap

Blog

Forum

Contribute


A flexible and efficient library for deep learning.

Copyright © 2017-2022, The Apache Software Foundation. Licensed under the Apache License, Version 2.0. Apache MXNet, MXNet, Apache, the Apache feather, and the Apache MXNet project logo are either registered trademarks or trademarks of the Apache Software Foundation.

GitHub - apache/mxnet: Lightweight, Portable, Flexible Distributed/Mobile Deep Learning with Dynamic, Mutation-aware Dataflow Dep Scheduler; for Python, R, Julia, Scala, Go, Javascript and more


This repository has been archived by the owner on Nov 17, 2023. It is now read-only.

apache/mxnet (Public archive): mxnet.apache.org. Apache-2.0 license. 20.7k stars, 6.8k forks, 11,896 commits.

Apache MXNet for Deep Learning

Apache MXNet is a deep learning framework designed for both efficiency and flexibility. It allows you to mix symbolic and imperative programming to maximize efficiency and productivity.

At its core, MXNet contains a dynamic dependency scheduler that automatically parallelizes both symbolic and imperative operations on the fly. A graph optimization layer on top of that makes symbolic execution fast and memory efficient. MXNet is portable and lightweight, scalable to many GPUs and machines.

Apache MXNet is more than a deep learning project. It is a community on a mission to democratize AI. It is a collection of blueprints and guidelines for building deep learning systems, and interesting insights into DL systems for hackers.

Licensed under an Apache-2.0 license.


Features

A NumPy-like programming interface, integrated with the new, easy-to-use Gluon 2.0 interface. NumPy users can easily adopt MXNet and get started with deep learning.

Automatic hybridization provides imperative programming with the performance of traditional symbolic programming.

Lightweight, memory-efficient, and portable to smart devices through native cross-compilation support on ARM, and through ecosystem projects such as TVM, TensorRT, OpenVINO.

Scales up to multi-GPU and distributed settings with automatic parallelism through ps-lite, Horovod, and BytePS.

Extensible backend that supports full customization, allowing integration with custom accelerator libraries and in-house hardware without the need to maintain a fork.

Support for Python, Java, C++, R, Scala, Clojure, Go, Javascript, Perl, and Julia.

Cloud-friendly and directly compatible with AWS and Azure.

Contents

Installation

Tutorials

Ecosystem

API Documentation

Examples

Stay Connected

Social Media

What's New

1.9.1 Release - MXNet 1.9.1 Release.

1.8.0 Release - MXNet 1.8.0 Release.

1.7.0 Release - MXNet 1.7.0 Release.

1.6.0 Release - MXNet 1.6.0 Release.

1.5.1 Release - MXNet 1.5.1 Patch Release.

1.5.0 Release - MXNet 1.5.0 Release.

1.4.1 Release - MXNet 1.4.1 Patch Release.

1.4.0 Release - MXNet 1.4.0 Release.

1.3.1 Release - MXNet 1.3.1 Patch Release.

1.3.0 Release - MXNet 1.3.0 Release.

1.2.0 Release - MXNet 1.2.0 Release.

1.1.0 Release - MXNet 1.1.0 Release.

1.0.0 Release - MXNet 1.0.0 Release.

0.12.1 Release - MXNet 0.12.1 Patch Release.

0.12.0 Release - MXNet 0.12.0 Release.

0.11.0 Release - MXNet 0.11.0 Release.

Apache Incubator - We are now an Apache Incubator project.

0.10.0 Release - MXNet 0.10.0 Release.

0.9.3 Release - First 0.9 official release.

0.9.1 Release (NNVM refactor) - NNVM branch is merged into master now. An official release will be made soon.

0.8.0 Release

Ecosystem News

oneDNN for Faster CPU Performance

MXNet Memory Monger, Training Deeper Nets with Sublinear Memory Cost

Tutorial for NVIDIA GTC 2016

MXNet.js: Javascript Package for Deep Learning in Browser (without server)

Guide to Creating New Operators (Layers)

Go binding for inference

Stay Connected

Channel

Purpose

Follow MXNet Development on Github

See what's going on in the MXNet project.

MXNet Confluence Wiki for Developers

MXNet developer wiki for information related to project development, maintained by contributors and developers. To request write access, send a request to the dev list.

dev@mxnet.apache.org mailing list

The "dev list". Discussions about the development of MXNet. To subscribe, send an email to dev-subscribe@mxnet.apache.org .

discuss.mxnet.io

Asking & answering MXNet usage questions.

Apache Slack #mxnet Channel

Connect with MXNet and other Apache developers. To join the MXNet Slack channel, send a request to the dev list.

Follow MXNet on Social Media

Get updates about new features and events.

Social Media

Keep connected with the latest MXNet news and updates.

Apache MXNet on Twitter

Contributor and user blogs about MXNet

Discuss MXNet on r/mxnet

Apache MXNet YouTube channel

Apache MXNet on LinkedIn

History

MXNet emerged from a collaboration by the authors of cxxnet, minerva, and purine2. The project reflects what we have learned from the past projects. MXNet combines aspects of each of these projects to achieve flexibility, speed, and memory efficiency.

Tianqi Chen, Mu Li, Yutian Li, Min Lin, Naiyan Wang, Minjie Wang, Tianjun Xiao, Bing Xu, Chiyuan Zhang, and Zheng Zhang. MXNet: A Flexible and Efficient Machine Learning Library for Heterogeneous Distributed Systems. In Neural Information Processing Systems, Workshop on Machine Learning Systems, 2015.

About

Repository stats: 20.7k stars, 1.1k watchers, 6.8k forks. Latest of 34 releases: Apache MXNet (incubating) 1.9.1 patch release (May 10, 2022). Used by about 7k repositories; 875 contributors. Languages: C++ 48.6%, Python 34.7%, Jupyter Notebook 7.6%, Cuda 6.1%, CMake 0.9%, Shell 0.7%, Other 1.4%.


Apache MXNet - Wikipedia


Apache MXNet
Developer(s): Apache Software Foundation
Stable release: 1.9.1[1] / 10 May 2022
Repository: github.com/apache/incubator-mxnet
Written in: C++, Python, R, Java, Julia, JavaScript, Scala, Go, Perl
Operating system: Windows, macOS, Linux
Type: Library for machine learning and deep learning
License: Apache License 2.0
Website: mxnet.apache.org

Apache MXNet is an open-source deep learning software framework that trains and deploys deep neural networks. It is scalable, allows fast model training, and supports a flexible programming model and multiple programming languages (including C++, Python, Java, Julia, MATLAB, JavaScript, Go, R, Scala, Perl, and Wolfram Language). The MXNet library is portable and can scale to multiple GPUs[2] and machines. It was co-developed by Carlos Guestrin at the University of Washington (along with GraphLab).[3]

As of September 2023, it is no longer actively developed.[4]

Features

Apache MXNet is a scalable deep learning framework that supports deep learning models, such as convolutional neural networks (CNNs) and long short-term memory networks (LSTMs).

Scalability

MXNet can be distributed on dynamic cloud infrastructure using a distributed parameter server (based on research at Carnegie Mellon University, Baidu, and Google[5]). With multiple GPUs or CPUs, the framework approaches linear scale.

Flexibility

MXNet supports both imperative and symbolic programming. The framework allows developers to track, debug, save checkpoints, modify hyperparameters, and perform early stopping.

Multiple languages

MXNet supports Python, R, Scala, Clojure, Julia, Perl, MATLAB, and JavaScript for front-end development and C++ for back-end optimization.

Portability

Supports deployment of a trained model to low-end devices for inference, such as mobile devices (using Amalgamation[6]), Internet of things devices (using AWS Greengrass), serverless computing (using AWS Lambda), or containers. These low-end environments may have only a weaker CPU or limited memory (RAM) and should be able to use models that were trained in a higher-level environment (a GPU-based cluster, for example).

Cloud Support

MXNet is supported by public cloud providers including Amazon Web Services (AWS)[7] and Microsoft Azure.[8] Amazon has chosen MXNet as its deep learning framework of choice at AWS.[9][10] Currently, MXNet is supported by Intel, Baidu, Microsoft, Wolfram Research, and research institutions such as Carnegie Mellon, MIT, the University of Washington, and the Hong Kong University of Science and Technology.[11]

See also

Comparison of deep learning software

Differentiable programming

References

^ "Release 1.9.1". 10 May 2022. Retrieved 30 June 2022.

^ "Building Deep Neural Networks in the Cloud with Azure GPU VMs, MXNet and Microsoft R Server". Microsoft. 15 September 2016. Archived from the original on August 15, 2023. Retrieved 13 May 2017.

^ "Carlos Guestrin". guestrin.su.domains. Archived from the original on September 22, 2023.

^ "Apache MXNet - Apache Attic".

^ "Scaling Distributed Machine Learning with the Parameter Server" (PDF). Archived (PDF) from the original on August 13, 2023. Retrieved 2014-10-08.

^ "Amalgamation". Archived from the original on 2018-08-08. Retrieved 2018-05-08.

^ "Apache MXNet on AWS - Deep Learning on the Cloud". Amazon Web Services, Inc. Retrieved 13 May 2017.

^ "Building Deep Neural Networks in the Cloud with Azure GPU VMs, MXNet and Microsoft R Server". Microsoft TechNet Blogs. 15 September 2016. Retrieved 6 September 2017.

^ "MXNet - Deep Learning Framework of Choice at AWS - All Things Distributed". www.allthingsdistributed.com. 22 November 2016. Retrieved 13 May 2017.

^ "Amazon Has Chosen This Framework to Guide Deep Learning Strategy". Fortune. Retrieved 13 May 2017.

^ "MXNet, Amazon's deep learning framework, gets accepted into Apache Incubator". Retrieved 2017-03-08.


Retrieved from "https://en.wikipedia.org/w/index.php?title=Apache_MXNet&oldid=1188246778"


This page was last edited on 4 December 2023, at 05:22 (UTC).


mxnet · PyPI


mxnet 1.9.1

pip install mxnet


Latest version released: May 17, 2022

Apache MXNet is an ultra-scalable deep learning framework. This version uses openblas and MKLDNN.


Meta

License: Apache Software License (Apache 2.0)
Maintainers: mxnet, mxnet-bln, mxnet-ci
Classifiers: Development Status: 5 - Production/Stable. Intended Audience: Developers, Education, Science/Research. License: OSI Approved :: Apache Software License. Programming Language: C++, Cython, Other, Perl, Python :: 3.5-3.8, Python :: Implementation :: CPython. Topic: Scientific/Engineering (Artificial Intelligence, Mathematics), Software Development (Libraries, Python Modules).


Project description

Apache MXNet (Incubating) Python Package

Apache MXNet is a deep learning framework designed for both efficiency and flexibility.

It allows you to mix the flavours of deep learning programs together to maximize the efficiency and your productivity.

For feature requests on the PyPI package, suggestions, and issue reports, create an issue by clicking here.

Prerequisites

This package supports Linux, macOS, and Windows platforms. You may also want to check:

mxnet-cu112 with CUDA-11.2 support.

mxnet-cu110 with CUDA-11.0 support.

mxnet-cu102 with CUDA-10.2 support.

mxnet-cu101 with CUDA-10.1 support.

mxnet-cu100 with CUDA-10.0 support.

mxnet-native CPU variant without MKLDNN.

To use this package on Linux you need the libquadmath.so.0 shared library. On Debian-based systems, including Ubuntu, run sudo apt install libquadmath0 to install the shared library. On RHEL-based systems, including CentOS, run sudo yum install libquadmath to install the shared library. As libquadmath.so.0 is a GPL library and MXNet is part of the Apache Software Foundation, MXNet must not redistribute libquadmath.so.0 as part of the PyPI package, and users must install it manually.

To install for other platforms (e.g. Windows, Raspberry Pi/ARM) or other versions, check Installing MXNet for instructions on building from source.

Installation

To install, use:

pip install mxnet


Release history

Release notifications |

RSS feed

2.0.0b1

pre-release

Mar 25, 2022

2.0.0a0

pre-release

Mar 31, 2021

This version

1.9.1

May 17, 2022

1.9.0

Dec 18, 2021

1.8.0.post0

Mar 30, 2021

1.8.0

Mar 21, 2021

1.7.0.post2

Feb 7, 2021

1.7.0.post1

Aug 28, 2020

1.6.0

Feb 21, 2020

1.5.1.post0

Oct 1, 2019

1.5.1

Sep 30, 2019

1.5.0

Jul 19, 2019

1.4.1

May 14, 2019

1.4.0.post0

Mar 8, 2019

1.4.0

Feb 28, 2019

1.3.1

Nov 27, 2018

1.3.0.post0

Sep 14, 2018

1.3.0

Sep 12, 2018

1.2.1.post1

Jul 31, 2018

1.2.1.post0

Jul 30, 2018

1.2.1

Jul 17, 2018

1.2.0

May 21, 2018

1.1.0.post0

Apr 7, 2018

1.1.0

Feb 20, 2018

1.0.0.post4

Jan 14, 2018

1.0.0.post3

Jan 6, 2018

1.0.0.post2

Dec 23, 2017

1.0.0.post1

Dec 8, 2017

1.0.0.post0

Dec 5, 2017

1.0.0

Dec 4, 2017

0.12.1

Nov 16, 2017

0.12.0

Oct 30, 2017

0.11.0

Aug 20, 2017

0.10.0.post2

Jul 18, 2017

0.10.0

May 26, 2017

0.9.5.post2

May 24, 2017

0.9.5.post1

May 12, 2017

0.9.5

May 2, 2017

Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distributions

No source distribution files are available for this release. See the tutorial on generating distribution archives.

Built Distributions

mxnet-1.9.1-py3-none-manylinux2014_x86_64.whl (49.1 MB, view hashes). Uploaded May 17, 2022, py3.

mxnet-1.9.1-py3-none-manylinux2014_aarch64.whl (35.8 MB, view hashes). Uploaded May 17, 2022, py3.

mxnet-1.9.1-py3-none-macosx_10_13_x86_64.whl (39.5 MB, view hashes). Uploaded May 17, 2022, py3.

Hashes for mxnet-1.9.1-py3-none-manylinux2014_x86_64.whl
SHA256: 65d5dac162c87a14d138d888b54494d515036d9047ae804ff51f770bd02197a6
MD5: 361ba1b69adc578e3362fca04b162d2a
BLAKE2b-256: 2783b307ad1f6e1ff60f8e418cb87f53646192681b417aadc1f40a345ba64637

Hashes for mxnet-1.9.1-py3-none-manylinux2014_aarch64.whl
SHA256: 5e51a0c05d99f8f1b3b5e7c02170be57af2e6edb3ad9af2cb9551ace3e22942c
MD5: 44716d5e97938206be3a01dc698a91d4
BLAKE2b-256: d8b8584d191e3edd007e80cdeb3cf9f34cdeaeb5919193a66b9164af57a9803f

Hashes for mxnet-1.9.1-py3-none-macosx_10_13_x86_64.whl
SHA256: 73c045f65ad05fe9ca3c4202e10471703b57231f8ac8b05d973ec2ab362178fb
MD5: cbfa68f814f566edee18f4acfc2b1713
BLAKE2b-256: b7e8abb7b59e18e2e856ac71858b49be1b8ca84fe7099146bf98c5f8e867b46a


Get Started | Apache MXNet


Get Started

Apache MXNet Tutorials

Build and install Apache MXNet from source

To build and install Apache MXNet from the official Apache Software Foundation signed source code, please follow our Building From Source guide. The signed source releases are available here.

Platform and use-case specific instructions for using Apache MXNet

Please indicate your preferred configuration below to see specific instructions.

MXNet Version: v1.9.1 (default), v1.8.0, v1.7.0, v1.6.0, v1.5.1, v1.4.1, v1.3.1, v1.2.1, v1.1.0, v1.0.0, v0.12.1, v0.11.0
OS / Platform: Linux, MacOS, Windows, Cloud, Devices
Language: Python, Scala, Java, Clojure, R, Julia, Perl, Cpp
GPU / CPU: GPU, CPU
Device: Raspberry Pi, NVIDIA Jetson
Distribution: Pip, Docker, Build from Source

WARNING: the following PyPI package names are provided for your convenience but they point to packages that are not provided nor endorsed by the Apache Software Foundation. As such, they might contain software components with more restrictive licenses than the Apache License and you'll need to decide whether they are appropriate for your usage. The packages linked here contain GPL GCC Runtime Library components. Like all Apache Releases, the official Apache MXNet releases consist of source code only and are found at the Download page.

Run the following command:

pip install mxnet

pip install mxnet==1.8.0.post0

Starting from the 1.7.0 release, oneDNN (previously known as MKL-DNN/DNNL) is enabled in pip packages by default.

oneAPI Deep Neural Network Library (oneDNN) is an open-source cross-platform performance library of basic building blocks for deep learning applications. The library is optimized for Intel Architecture Processors, Intel Processor Graphics and Xe architecture-based Graphics. Support for other architectures such as Arm* 64-bit Architecture (AArch64) and OpenPOWER* Power ISA (PPC64) is experimental.

oneDNN is intended for deep learning applications and framework developers interested in improving application performance on Intel CPUs and GPUs; more details can be found here.

You can find performance numbers in the MXNet tuning guide.

To install native MXNet without oneDNN, run the following command:

pip install mxnet-native==1.8.0.post0

pip install mxnet==1.7.0.post2

Starting from the 1.7.0 release, oneDNN (previously known as MKL-DNN/DNNL) is enabled in pip packages by default.

oneAPI Deep Neural Network Library (oneDNN) is an open-source cross-platform performance library of basic building blocks for deep learning applications. The library is optimized for Intel Architecture Processors, Intel Processor Graphics and Xe architecture-based Graphics. Support for other architectures such as Arm* 64-bit Architecture (AArch64) and OpenPOWER* Power ISA (PPC64) is experimental.

Please note that the Linux CPU pip wheels for AArch64 platforms are built with oneDNN with Arm Performance Libraries (APL) integrated. Because APL's license is not compatible with the Apache license, you would need to manually install APL on your system.

oneDNN is intended for deep learning applications and framework developers interested in improving application performance on Intel CPUs and GPUs; more details can be found here.

You can find performance numbers in the MXNet tuning guide.

To install native MXNet without oneDNN, run the following command:

pip install mxnet-native==1.7.0

pip install mxnet==1.6.0

MKL-DNN enabled pip packages are optimized for Intel hardware. You can find performance numbers in the MXNet tuning guide.

pip install mxnet-mkl==1.6.0

pip install mxnet==1.5.1

MKL-DNN enabled pip packages are optimized for Intel hardware. You can find performance numbers in the MXNet tuning guide.

pip install mxnet-mkl==1.5.1

pip install mxnet==1.4.1

MKL-DNN enabled pip packages are optimized for Intel hardware. You can find

performance numbers in the

MXNet tuning guide.

pip install mxnet-mkl==1.4.1

pip install mxnet==1.3.1

MKL-DNN enabled pip packages are optimized for Intel hardware. You can find

performance numbers in the

MXNet tuning guide.

pip install mxnet-mkl==1.3.1

pip install mxnet==1.2.1

MKL-DNN enabled pip packages are optimized for Intel hardware. You can find

performance numbers in the

MXNet tuning guide.

pip install mxnet-mkl==1.2.1

pip install mxnet==1.1.0

pip install mxnet==1.0.0

pip install mxnet==0.12.1

For MXNet 0.12.0:

pip install mxnet==0.12.0

pip install mxnet==0.11.0

You can then validate your MXNet installation.
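As a quick sanity check after any of the pip installs above, you can print the installed version and run a tiny NDArray computation; this is a minimal sketch, and the version printed depends on which package you chose:

```shell
# Print the installed MXNet version
python -c "import mxnet as mx; print(mx.__version__)"

# Create a 2x3 array of ones and run a small computation on it
python -c "import mxnet as mx; a = mx.nd.ones((2, 3)); print((a * 2).sum().asscalar())"
```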

NOTES:

mxnet-cu101 means the package is built with CUDA/cuDNN and the CUDA version is

10.1.

All MKL pip packages are experimental prior to version 1.3.0.

WARNING: the following links and names of binary distributions are provided for

your convenience but they point to packages that are not provided nor endorsed

by the Apache Software Foundation. As such, they might contain software

components with more restrictive licenses than the Apache License and you’ll

need to decide whether they are appropriate for your usage. Like all Apache

Releases, the official Apache MXNet releases consist of source code

only and are found at

the Download page.

Docker images with MXNet are available at DockerHub.

After you have installed Docker on your machine, you can pull the image with:

$ docker pull mxnet/python

You can list your Docker images to verify that the mxnet/python image was pulled successfully.

$ docker images # Use sudo if you skip Step 2

REPOSITORY TAG IMAGE ID CREATED SIZE

mxnet/python latest 00d026968b3c 3 weeks ago 1.41 GB

You can then validate the installation.
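One way to validate the image itself is to import MXNet inside a throwaway container; this is a sketch, assuming Docker is set up as in the steps above:

```shell
# Start a disposable container from the pulled image and
# confirm that MXNet imports cleanly inside it
docker run --rm mxnet/python python -c "import mxnet as mx; print(mx.__version__)"
```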

Please follow the build from source instructions linked above.

WARNING: the following PyPI package names are provided for your convenience but

they point to packages that are not provided nor endorsed by the Apache

Software Foundation. As such, they might contain software components with more

restrictive licenses than the Apache License and you’ll need to decide whether

they are appropriate for your usage. The packages linked here contain

proprietary parts of the NVidia CUDA SDK and GPL GCC Runtime Library components.

Like all Apache Releases, the official Apache MXNet releases

consist of source code only and are found at the Download

page.

Run the following command:

$ pip install mxnet-cu102

$ pip install mxnet-cu102==1.8.0.post0

$ pip install mxnet-cu102==1.7.0

$ pip install mxnet-cu102==1.6.0

$ pip install mxnet-cu101==1.5.1

$ pip install mxnet-cu101==1.4.1

$ pip install mxnet-cu92==1.3.1

$ pip install mxnet-cu92==1.2.1

$ pip install mxnet-cu91==1.1.0

$ pip install mxnet-cu90==1.0.0

$ pip install mxnet-cu90==0.12.1

$ pip install mxnet-cu80==0.11.0

You can then validate your MXNet installation.

NOTES:

mxnet-cu101 means the package is built with CUDA/cuDNN and the CUDA version is

10.1.

All MKL pip packages are experimental prior to version 1.3.0.

CUDA should be installed first. Starting from version 1.8.0, cuDNN and NCCL should be installed as well.

Important: Make sure your installed CUDA (and cuDNN/NCCL, if applicable) version matches the CUDA version in the pip package.

Check your CUDA version with the following command:

nvcc --version

You can either upgrade your CUDA install or install the MXNet package that supports your CUDA version.
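The match between toolkit and package suffix can also be derived mechanically; the following is a sketch that parses the nvcc release line into a suffix such as cu102 (the suffix produced is only valid if a matching wheel exists in the list above):

```shell
# Extract the CUDA toolkit version from nvcc output, e.g. "10.2" -> "102",
# and suggest the matching MXNet wheel suffix
cuda=$(nvcc --version | sed -n 's/.*release \([0-9]*\)\.\([0-9]*\).*/\1\2/p')
echo "Install with: pip install mxnet-cu${cuda}"
```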

WARNING: the following links and names of binary distributions are provided for

your convenience but they point to packages that are not provided nor endorsed

by the Apache Software Foundation. As such, they might contain software

components with more restrictive licenses than the Apache License and you’ll

need to decide whether they are appropriate for your usage. Like all Apache

Releases, the official Apache MXNet releases consist of source code

only and are found at

the Download page.

Docker images with MXNet are available at DockerHub.

Please follow the NVIDIA Docker installation instructions to enable the use of GPUs from Docker containers.

After you have installed Docker on your machine, you can pull the image with:

$ docker pull mxnet/python:gpu # Use sudo if you skip Step 2

You can list your Docker images to verify that the mxnet/python image was pulled successfully.

$ docker images # Use sudo if you skip Step 2

REPOSITORY TAG IMAGE ID CREATED SIZE

mxnet/python gpu 493b2683c269 3 weeks ago 4.77 GB

You can then validate the installation.

Please follow the build from source instructions linked above.

You will need R v3.4.4+ and to build MXNet from source. Please follow the instructions linked above.

You can use the Maven packages defined in the following dependency to include MXNet in your Java project. The Java API is provided as a subset of the Scala API and is intended for inference only. Please refer to the MXNet-Java setup guide for a detailed set of instructions to help you with the setup process.

<dependency>
    <groupId>org.apache.mxnet</groupId>
    <artifactId>mxnet-full_2.11-linux-x86_64-cpu</artifactId>
    <version>[1.5.0, )</version>
</dependency>

You can use the Maven packages defined in the following dependency to include MXNet in your Clojure project. To maximize leverage, the Clojure package has been built on the existing Scala package. Please refer to the MXNet-Scala setup guide for a detailed set of instructions to help you with the setup process that is required to use the Clojure dependency.

org.apache.mxnet.contrib.clojure

clojure-mxnet-linux-cpu

Previously available binaries distributed via Maven have been removed as they

redistributed Category-X binaries in violation of Apache Software Foundation

(ASF) policies.

At this point in time, no third-party binary Java packages are available. Please

follow the build from source instructions linked above.

Please follow the build from source instructions linked above.

Please follow the build from source instructions linked above.

To use the C++ package, build from source with the USE_CPP_PACKAGE=1 option. Please refer to the build from source instructions linked above.


MXNet is available on several cloud providers with GPU support. You can also find GPU/CPU-hybrid support for use cases like scalable inference, or even fractional GPU support with AWS Elastic Inference.

WARNING: the following cloud provider packages are provided for your convenience

but they point to packages that are not provided nor endorsed by the Apache

Software Foundation. As such, they might contain software components with more

restrictive licenses than the Apache License and you’ll need to decide whether

they are appropriate for your usage. Like all Apache Releases, the official

Apache MXNet releases consist of source code only and are found at

the Download page.

Alibaba
- NVIDIA VM

Amazon Web Services
- Amazon SageMaker - managed training and deployment of MXNet models
- AWS Deep Learning AMI - preinstalled Conda environments for Python 2 or 3 with MXNet, CUDA, cuDNN, MKL-DNN, and AWS Elastic Inference
- Dynamic Training on AWS - experimental manual EC2 setup or semi-automated CloudFormation setup
- NVIDIA VM

Google Cloud Platform
- NVIDIA VM

Microsoft Azure
- NVIDIA VM

Oracle Cloud
- NVIDIA VM

All NVIDIA VMs use the NVIDIA MXNet Docker container. Follow the container usage instructions found in NVIDIA's container repository.

MXNet should work on any cloud provider’s CPU-only instances. Follow the Python

pip install instructions, Docker instructions, or try the following preinstalled

option.

WARNING: the following cloud provider packages are provided for your convenience

but they point to packages that are not provided nor endorsed by the Apache

Software Foundation. As such, they might contain software components with more

restrictive licenses than the Apache License and you’ll need to decide whether

they are appropriate for your usage. Like all Apache Releases, the official

Apache MXNet releases consist of source code only and are found at

the Download page.

Amazon Web Services
- AWS Deep Learning AMI - preinstalled Conda environments for Python 2 or 3 with MXNet and MKL-DNN.

MXNet supports the Debian-based Raspbian operating system, so you can run MXNet on Raspberry Pi 3B devices. These instructions walk through how to build MXNet for the Raspberry Pi and install the Python bindings for the library.

You can do a dockerized cross compilation build on your local machine or a native build on-device.

The complete MXNet library and its requirements can take almost 200MB of RAM, and loading large models with the library can take over 1GB of RAM. Because of this, we recommend running MXNet on a Raspberry Pi 3 or an equivalent device that has more than 1GB of RAM and a Secure Digital (SD) card with at least 4GB of free space.

Quick installation

You can use this pre-built Python wheel on a Raspberry Pi 3B with Stretch. You will likely need to install several dependencies to get MXNet to work. Refer to the following Build section for details.

Docker installation

Step 1 Install Docker on your machine by following the docker installation instructions.

Note - You can install the Community Edition (CE).

Step 2 [Optional] Post-installation steps to manage Docker as a non-root user. Follow the four steps in this docker documentation to allow managing docker containers without sudo.

Build

This cross compilation build is experimental. Please use a native build with gcc 4 as explained below; higher compiler versions currently cause test failures on ARM.

The following command will build a container with dependencies and tools, and then compile MXNet for ARMv7. You will want to run this on a fast cloud instance or locally on a fast PC to save time. The resulting artifact will be located in build/mxnet-x.x.x-py2.py3-none-any.whl. Copy this file to your Raspberry Pi. The previously mentioned pre-built wheel was created using this method.

ci/build.py -p armv7

Install using a pip wheel

Your Pi will need several dependencies.

Install MXNet dependencies with the following:

sudo apt-get update
sudo apt-get install -y \
    apt-transport-https \
    build-essential \
    ca-certificates \
    cmake \
    curl \
    git \
    libatlas-base-dev \
    libcurl4-openssl-dev \
    libjemalloc-dev \
    liblapack-dev \
    libopenblas-dev \
    libopencv-dev \
    libzmq3-dev \
    ninja-build \
    python-dev \
    python-pip \
    software-properties-common \
    sudo \
    unzip \
    virtualenv \
    wget

Install virtualenv with:

sudo pip install virtualenv

Create a Python 2.7 environment for MXNet with:

virtualenv -p `which python` mxnet_py27

You may use Python 3; however, the wine bottle detection example for the Pi with camera requires Python 2.7.

Activate the environment, then install the wheel we created previously, or install this prebuilt wheel.

source mxnet_py27/bin/activate

pip install mxnet-x.x.x-py2.py3-none-any.whl

Test MXNet with the Python interpreter:

$ python

>>> import mxnet

If there are no errors then you’re ready to start using MXNet on your Pi!

Native Build

Installing MXNet from source is a two-step process:

Build the shared library from the MXNet C++ source code.

Install the supported language-specific packages for MXNet.

Step 1 Build the Shared Library

On Raspbian versions Wheezy and later, you need the following dependencies:

Git (to pull code from GitHub)

libblas (for linear algebraic operations)

libopencv (for computer vision operations; this is optional if you want to save RAM and disk space)

A C++ compiler that supports C++11, which compiles and builds the MXNet source code. Supported compilers include the following:

G++ (4.8 or later). Make sure to use gcc 4 and not 5 or 6, as there are known bugs with these compilers.

Clang (3.9 - 6)

Install these dependencies using the following commands in any directory:

sudo apt-get update

sudo apt-get -y install git cmake ninja-build build-essential g++-4.9 c++-4.9 liblapack* libblas* libopencv* libopenblas* python3-dev python-dev virtualenv

Clone the MXNet source code repository using the following git command in your home directory:

git clone https://github.com/apache/mxnet.git --recursive

cd mxnet

Build:

mkdir -p build && cd build

cmake \
    -DUSE_SSE=OFF \
    -DUSE_CUDA=OFF \
    -DUSE_OPENCV=ON \
    -DUSE_OPENMP=ON \
    -DUSE_MKL_IF_AVAILABLE=OFF \
    -DUSE_SIGNAL_HANDLER=ON \
    -DCMAKE_BUILD_TYPE=Release \
    -GNinja ..

ninja -j$(nproc)

Some compilation units require close to 1GB of memory, so it is recommended that you enable swap as explained below and be cautious about increasing the number of build jobs (-j).

Executing these commands starts the build process, which can take up to a couple of hours, and creates a file called libmxnet.so in the build directory.

If you are getting build errors in which the compiler is being killed, it is likely that the compiler is running out of memory (especially on a Raspberry Pi 1, 2, or Zero, which have less than 1GB of RAM). This can often be rectified by increasing the swapfile size on the Pi: edit the file /etc/dphys-swapfile and change the line CONF_SWAPSIZE=100 to CONF_SWAPSIZE=1024, then run:

sudo /etc/init.d/dphys-swapfile stop

sudo /etc/init.d/dphys-swapfile start

free -m # to verify the swapfile size has been increased
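The swapfile edit itself can also be scripted; the following is a minimal sketch that assumes the stock Raspbian default of CONF_SWAPSIZE=100:

```shell
# Raise the swapfile size from the Raspbian default (100MB) to 1024MB
sudo sed -i 's/^CONF_SWAPSIZE=100$/CONF_SWAPSIZE=1024/' /etc/dphys-swapfile

# Restart the swapfile service and verify the new size
sudo /etc/init.d/dphys-swapfile stop
sudo /etc/init.d/dphys-swapfile start
free -m
```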

Step 2 Build cython modules (optional)

$ pip install Cython

$ make cython  # You can set the python executable with the `PYTHON` flag, e.g., make cython PYTHON=python3

MXNet tries to use the cython modules unless the environment variable MXNET_ENABLE_CYTHON is set to 0. If loading the cython modules fails, the default behavior is to fall back to ctypes without any warning. To raise an exception on failure instead, set the environment variable MXNET_ENFORCE_CYTHON to 1. See here for more details.
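For example, to fail fast rather than silently fall back to ctypes, you can set the variable for a single invocation; this is a sketch, and whether the import succeeds depends on whether the cython modules were actually built in Step 2:

```shell
# Force an exception if the cython modules cannot be loaded,
# instead of the silent ctypes fallback
MXNET_ENFORCE_CYTHON=1 python -c "import mxnet"
```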

Step 3 Install MXNet Python Bindings

To install Python bindings run the following commands in the MXNet directory:

cd python

pip install --upgrade pip

pip install -e .

Note that the -e flag is optional. It is equivalent to --editable and means that if you edit the source files, these changes will be reflected in the installed package.

Alternatively you can create a whl package installable with pip with the following command:

ci/docker/runtime_functions.sh build_wheel python/ $(realpath build)

You are now ready to run MXNet on your Raspberry Pi device. You can get started by following the tutorial on Real-time Object Detection with MXNet on the Raspberry Pi.

Note - Because the complete MXNet library takes up a significant amount of the Raspberry Pi's limited RAM, when loading training data or large models into memory you might have to turn off the GUI and terminate running processes to free RAM.

NVIDIA Jetson Devices

To install MXNet on a Jetson TX or Nano, please refer to the Jetson installation guide.

Resources

Mailing lists

Developer Wiki

Jira Tracker

Github Roadmap

Blog

Forum

Contribute


"Copyright © 2017-2022, The Apache Software Foundation. Licensed under the Apache License, Version 2.0. Apache MXNet, MXNet, Apache, the Apache

feather, and the Apache MXNet project logo are either registered trademarks or trademarks of the

Apache Software Foundation."


Content Library

NVIDIA Research

Developer Blog

Kaggle Grandmasters

Developer Resources

Join the Developer Program

NGC Catalog

NVIDIA NGC

Technical Training

News

Blog

Forums

Open Source Portal

NVIDIA GTC

Startups

Developer Home

Application Frameworks

AI Inference - Triton

Automotive - DRIVE

Cloud-AI Video Streaming - Maxine

Computational Lithography - cuLitho

Cybersecurity - Morpheus

Data Analytics - RAPIDS

Healthcare - Clara

High-Performance Computing

Intelligent Video Analytics - Metropolis

Large Language Models - NeMo Framework

Metaverse Applications - Omniverse

Recommender Systems - Merlin

Robotics - Isaac

Speech AI - Riva

Telecommunications - Aerial

Top SDKs and Libraries

Parallel Programming - CUDA Toolkit

Edge AI applications - Jetpack

BlueField data processing - DOCA

Accelerated Libraries - CUDA-X Libraries

Deep Learning Inference - TensorRT

Deep Learning Training - cuDNN

Deep Learning Frameworks

Conversational AI - NeMo

Intelligent Video Analytics - DeepStream

NVIDIA Unreal Engine 4

Ray Tracing - RTX

Video Decode/Encode

Automotive - DriveWorks SDK

GeForce

Overview

GeForce Graphics Cards

Gaming Laptops

G-SYNC Monitors

RTX Games

GeForce Experience

GeForce Drivers

Forums

Support

Shop

GeForce NOW

Overview

Download

Games

Pricing

FAQs

Forums

Support

SHIELD

Overview

Compare

Shop

FAQs

Knowledge Base

Solutions

Data Center (On-Premises)

Edge Computing

Cloud Computing

Networking

Virtualization

Enterprise IT Solutions

Software

AI Enterprise Suite

Cloud Native Support

Cluster Management

Edge Deployment Management

AI Inference - Triton

IO Acceleration

Networking

Virtual GPU

Apps and Tools

Data Center

GPU Monitoring

NVIDIA Quadro Experience

NVIDIA RTX Desktop Manager

Resources

Data Center & IT Resources

Technical Training and Certification

Enterprise Support

Drivers

Security

Product Documentation

Forums

NVIDIA Research Home

Research Areas

AI Playground

Video Highlights

COVID-19

NGC Catalog

Technical Training

Startups

News

Developer Blog

Open Source Portal

Cambridge-1 Supercomputer

3D Deep Learning Research

Products

AI Training - DGX

Edge Computing - EGX

Embedded Computing - Jetson

Software

Robotics - Isaac ROS

Simulation - Isaac Sim

TAO Toolkit

Vision AI - Deepstream SDK

Edge Deployment Management

Synthetic Data Generation - Replicator

Use Cases

Healthcare and Life Sciences

Manufacturing

Public Sector

Retail

Robotics

More >

Resources

NVIDIA Blog

Robotics Research

Developer Blog

Technical Training

Startups

Shop

Drivers

Support

Log In

Log Out

Skip to main content

0

Login

LogOut

NVIDIA

NVIDIA logo

Main Menu

Products

Hardware

Gaming and Creating

GeForce Graphics Cards

Laptops

G-SYNC Monitors

Studio

SHIELD TV

Laptops and Workstations

Laptops

NVIDIA RTX in Desktop Workstations

NVIDIA RTX in Professional Laptops

NVIDIA RTX-Powered AI Workstations

Cloud and Data Center

Overview

Grace CPU

DGX Platform

EGX Platform

IGX Platform

HGX Platform

NVIDIA MGX

NVIDIA OVX

DRIVE Sim

Networking

Overview

DPUs and SuperNICs

Ethernet

InfiniBand

GPUs

GeForce

NVIDIA RTX / Quadro

Data Center

Embedded Systems

Jetson

DRIVE AGX

Clara AGX

Software

Application Frameworks

AI Inference - Triton

Automotive - DRIVE

Cloud-AI Video Streaming - Maxine

Computational Lithography - cuLitho

Cybersecurity - Morpheus

Data Analytics - RAPIDS

Healthcare - Clara

High-Performance Computing

Intelligent Video Analytics - Metropolis

Large Language Models - NeMo Framework

Metaverse Applications - Omniverse

Recommender Systems - Merlin

Robotics - Isaac

Speech AI - Riva

Telecommunications - Aerial

Apps and Tools

Application Catalog

NGC Catalog

NVIDIA NGC

3D Workflows - Omniverse

Data Center

GPU Monitoring

NVIDIA RTX Experience

NVIDIA RTX Desktop Manager

RTX Accelerated Creative Apps

Video Conferencing

AI Workbench

Gaming and Creating

GeForce NOW Cloud Gaming

GeForce Experience

NVIDIA Broadcast App

Animation - Machinima

Modding - RTX Remix

Studio

Infrastructure

AI Enterprise Suite

Cloud Native Support

Cluster Management

Edge Deployment Management

IO Acceleration

Networking

Virtual GPU

Cloud Services

Omniverse

NeMo

BioNeMo

Picasso

Private Registry

Base Command

Fleet Command

Solutions

AI and Data Science

Overview

AI Inference

AI Workflows

Conversational AI

Data Analytics and Processing

Generative AI

Machine Learning

Prediction and Forecasting

Speech AI

Data Center and Cloud Computing

Overview

Accelerated Computing for Enterprise IT

Cloud Computing

Colocation

Edge Computing

MLOps

Networking

Virtualization

Design and Simulation

Overview

3D Avatars

Augmented and Virtual Reality

Digital Twins

Engineering Simulation

OpenUSD Workflows

Rendering

Robotics and Edge Computing

Overview

Edge Deployment Management

Edge Solutions

Industrial

Intelligent Video Analytics

Robotics

High-Performance Computing

Overview

HPC and AI

Scientific Visualization

Simulation and Modeling

Self-Driving Vehicles

Overview

AI Training

Chauffeur

Concierge

HD Mapping

Safety

Simulation

Industries

Industries

Overview

Architecture, Engineering, Construction & Operations

Automotive

Consumer Internet

Cybersecurity

Energy

Financial Services

Healthcare and Life Sciences

Higher Education

Game Development

Industrial Sector

Manufacturing

Media and Entertainment

US Public Sector

Restaurants

Retail and CPG

Robotics

Smart Cities and Spaces

Supercomputing

Telecommunications

Transportation

For You

Creatives/Designers

NVIDIA Studio

Overview

Accelerated Apps

Products

Compare

Shop

Industries

Media and Entertainment

Manufacturing

Architecture, Engineering, and Construction

All Industries >

Solutions

Data Center/Cloud

Laptops/Desktops

Augmented and Virtual Reality

Multi-Display

Rendering

Metaverse - Omniverse

Graphics Virtualization

Engineering Simulation

Data Scientists

Industries

Financial Services

Consumer Internet

Healthcare

Higher Education

Retail

Public Sector

All Industries >

Solutions

AI Inference

AI Workflows

Conversational AI

Data Analytics

Deep Learning Training

Generative AI

Machine Learning

Prediction and Forecasting

Speech AI

Software

AI Enterprise Suite

AI Inference - Triton

AI Workflows

Avatar - Tokkio

Cybersecurity - Morpheus

Data Analytics - RAPIDS

Apache Spark

AI Workbench

Large Language Models - NeMo Framework

Logistics and Route Optimization - cuOpt

Recommender Systems - Merlin

Speech AI - Riva

NGC Overview

NGC Software Catalog

Open Source Software

Products

PC

Laptops & Workstations

Data Center

Cloud

Resources

Professional Services

Technical Training

Startups

AI Accelerator Program

Content Library

NVIDIA Research

Developer Blog

Kaggle Grandmasters

Developers

Developer Resources

Join the Developer Program

NGC Catalog

NVIDIA NGC

Technical Training

News

Blog

Forums

Open Source Portal

NVIDIA GTC

Startups

Developer Home

Application Frameworks

AI Inference - Triton

Automotive - DRIVE

Cloud-AI Video Streaming - Maxine

Computational Lithography - cuLitho

Cybersecurity - Morpheus

Data Analytics - RAPIDS

Healthcare - Clara

High-Performance Computing

Intelligent Video Analytics - Metropolis

Large Language Models - NeMo Framework

Metaverse Applications - Omniverse

Recommender Systems - Merlin

Robotics - Isaac

Speech AI - Riva

Telecommunications - Aerial

Top SDKs and Libraries

Parallel Programming - CUDA Toolkit

Edge AI applications - Jetpack

BlueField data processing - DOCA

Accelerated Libraries - CUDA-X Libraries

Deep Learning Inference - TensorRT

Deep Learning Training - cuDNN

Deep Learning Frameworks

Conversational AI - NeMo

Intelligent Video Analytics - DeepStream

NVIDIA Unreal Engine 4

Ray Tracing - RTX

Video Decode/Encode

Automotive - DriveWorks SDK

Executives

Gamers

GeForce

Overview

GeForce Graphics Cards

Gaming Laptops

G-SYNC Monitors

RTX Games

GeForce Experience

GeForce Drivers

Forums

Support

Shop

GeForce NOW

Overview

Download

Games

Pricing

FAQs

Forums

Support

SHIELD

Overview

Compare

Shop

FAQs

Knowledge Base

IT Professionals

Solutions

Data Center (On-Premises)

Edge Computing

Cloud Computing

Networking

Virtualization

Enterprise IT Solutions

Software

AI Enterprise Suite

Cloud Native Support

Cluster Management

Edge Deployment Management

AI Inference - Triton

IO Acceleration

Networking

Virtual GPU

Apps and Tools

Data Center

GPU Monitoring

NVIDIA Quadro Experience

NVIDIA RTX Desktop Manager

Resources

Data Center & IT Resources

Technical Training and Certification

Enterprise Support

Drivers

Security

Product Documentation

Forums

Researchers

NVIDIA Research Home

Research Areas

AI Playground

Video Highlights

COVID-19

NGC Catalog

Technical Training

Startups

News

Developer Blog

Open Source Portal

Cambridge-1 Supercomputer

3D Deep Learning Research

Roboticists

Products

AI Training - DGX

Edge Computing - EGX

Embedded Computing - Jetson

Software

Robotics - Isaac ROS

Simulation - Isaac Sim

TAO Toolkit

Vision AI - Deepstream SDK

Edge Deployment Management

Synthetic Data Generation - Replicator

Use Cases

Healthcare and Life Sciences

Manufacturing

Public Sector

Retail

Robotics

More >

Resources

NVIDIA Blog

Robotics Research

Developer Blog

Technical Training

Startups

Startups

Shop

Drivers

Support

This site requires Javascript in order to view all its content. Please enable Javascript in order to access all the functionality of this web site. Here are the instructions how to enable JavaScript in your web browser.

#

A

B

C

D

E

F

G

H

I

J

K

L

M

N

O

P

Q

R

S

T

U

V

W

X

Y

Z

Apache MXNet

Apache MXNet is a flexible and scalable deep learning framework that supports a wide range of deep learning models and programming languages, and features a development interface that's highly regarded for its ease of use.

What is Apache MXNet?

MXNet is an open-source deep learning framework that allows you to define, train, and deploy deep neural networks on a wide array of devices, from cloud infrastructure to mobile devices. It’s highly scalable, allowing for fast model training, and supports a flexible programming model and multiple languages.

MXNet lets you mix symbolic and imperative programming flavors to maximize both efficiency and productivity. It’s built on a dynamic dependency scheduler that automatically parallelizes both symbolic and imperative operations on the fly. A graph optimization layer on top of that makes symbolic execution fast and memory efficient.

The MXNet library is portable and lightweight. It's accelerated with NVIDIA Pascal™ GPUs and scales across multiple GPUs and multiple nodes, allowing you to train models faster.

Why Apache MXNet?

Apache MXNet offers the following key features and benefits:

Hybrid frontend: The imperative/symbolic hybrid Gluon API provides an easy way to prototype, train, and deploy models without sacrificing training speed. Developers need just a few lines of Gluon code to build linear regression, CNN, and recurrent LSTM models for uses such as object detection, speech recognition, and recommendation engines.

Scalability: Designed from the ground up for cloud infrastructure, MXNet uses a distributed parameter server that achieves almost linear scaling across multiple GPUs or CPUs. Deep learning workloads can be distributed across multiple GPUs with near-linear scalability and auto-scaling. Tests run by Amazon Web Services found that MXNet ran 109 times faster on a cluster of 128 GPUs than on a single GPU. This ability to scale to multiple GPUs across multiple hosts, along with development speed and portability, is why AWS adopted MXNet as its deep learning framework of choice over alternatives such as TensorFlow, Theano, and Torch.

Ecosystem: MXNet has toolkits and libraries for computer vision, natural language processing, time series, and more.

Languages: MXNet-supported languages include Python, C++, R, Scala, Julia, MATLAB, and JavaScript. MXNet also compiles to C++, producing a lightweight neural network model representation that can run on everything from low-power devices like the Raspberry Pi to cloud servers.

How does MXNet work?

Created by a consortium of academic institutions and incubated at the Apache Software Foundation, MXNet (or “mix-net”) was designed to blend the advantages of different programming approaches to deep learning model development—imperative, which specifies exactly “how” computation is performed, and declarative or symbolic, which focuses on “what” should be performed.

Image reference: https://www.cs.cmu.edu/~muli/file/mxnet-learning-sys.pdf

Imperative Programming Mode

NDArray, MXNet's imperative programming interface, is its primary tool for storing and transforming data. NDArray is used to represent and manipulate the inputs and outputs of a model as multi-dimensional arrays. NDArrays are similar to NumPy's ndarrays, but they can run on GPUs to accelerate computing.

Imperative programming has the advantage of being familiar to developers with procedural programming backgrounds, and it's more natural for parameter updates and interactive debugging.

Symbolic Programming Mode

Neural networks transform input data by applying layers of nested functions to input parameters. Each layer consists of a linear function followed by a nonlinear transformation. The goal of deep learning is to optimize these parameters (consisting of weights and biases) by computing their partial derivatives (gradients) with respect to a loss metric. In forward propagation, the neural network passes the inputs through successive layers, each feeding its outputs to the nodes of the next layer, until the output layer is reached and the error of the prediction is calculated. In backpropagation, as part of a process called gradient descent, the errors are sent back through the network and the weights are adjusted, improving the model.
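The forward-backward-update loop described above can be sketched framework-agnostically. The snippet below (with illustrative values chosen for this example) performs one gradient-descent step for a single linear unit with a squared-error loss:

```python
import numpy as np

x = np.array([1.0, 2.0])         # input features
w = np.array([0.5, 0.5])         # weights to be learned
target = 3.0                     # desired output
lr = 0.1                         # learning rate

# Forward propagation: prediction and its squared error
pred = x @ w                     # 1*0.5 + 2*0.5 = 1.5
loss = (pred - target) ** 2      # (1.5 - 3)^2 = 2.25

# Backpropagation: partial derivative of the loss w.r.t. each weight
grad = 2 * (pred - target) * x   # [-3.0, -6.0]

# Gradient-descent update: move the weights against the gradient
w = w - lr * grad                # [0.8, 1.1]
```

Repeating this step over many inputs is what "training" amounts to; deep learning frameworks automate the gradient computation across many stacked layers.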

Graphs are data structures consisting of connected nodes (called vertices) and edges. Every modern framework for deep learning is based on the concept of graphs, where neural networks are represented as a graph structure of computations. 

MXNet symbolic programming allows functions to be defined abstractly through computation graphs. With symbolic programming, complex functions are first expressed in terms of placeholder values. These functions can then be executed by binding them to real values. Symbolic programming also provides predefined neural network layers, allowing you to express large models concisely with less repetitive work and better performance.

Image reference: https://www.cs.cmu.edu/~muli/file/mxnet-learning-sys.pdf

Symbolic programming has the following advantages:

The clear boundaries of a computation graph give the backend MXNet executor more opportunities for optimization

It’s easier to specify the computation graph for neural network configurations

Hybrid Programming Mode with the Gluon API

One of the key advantages of MXNet is its included hybrid programming interface, Gluon, which bridges the gap between the imperative and symbolic interfaces while keeping the capabilities and advantages of both. Gluon is an easy-to-learn interface that produces fast, portable models. With the Gluon API, you can prototype your model imperatively using NDArray. Then you can switch to symbolic mode with the hybridize command for faster model training and inference. In symbolic mode, the model runs faster as an optimized graph on the backend MXNet executor and can easily be exported for inference in different language bindings such as Java or C++.

Why MXNet Is Better on GPUs

Architecturally, the CPU is composed of just a few cores with lots of cache memory that can handle a few software threads at a time. In contrast, a GPU is composed of hundreds of cores that can handle thousands of threads simultaneously.

Because neural nets are created from large numbers of identical neurons, they're highly parallel by nature. This parallelism maps naturally to GPUs, providing a significant computation speed-up over CPU-only training. GPUs have become the platform of choice for training large, complex neural network-based systems for this reason. The parallel nature of inference operations also lends itself well to execution on GPUs.

With improved algorithms, bigger datasets, and GPU-accelerated computation, deep learning neural networks have become an indispensable tool for image recognition, speech recognition, language translation, and more in numerous industries. MXNet was developed with the goal of offering developers powerful tools to exploit the full capabilities of GPUs and cloud computing.

Simply stated, the more GPUs you put to work on an MXNet training algorithm, the faster the job completes. The framework is a standout in scalable performance with nearly linear speed improvements as more GPUs are brought to bear. MXNet was also designed to scale automatically according to available GPUs, a plus for performance tuning.

Use Cases

Smartphone Apps

MXNet is well suited for image recognition, and its ability to support models that run on low-power, limited-memory platforms makes it a good choice for mobile phone deployment. Models built with MXNet have been shown to provide highly reliable image recognition results running natively on laptop computers. Combining local and cloud processors could enable powerful distributed applications in areas like augmented reality and object and scene identification.

Voice and image recognition applications also have intriguing possibilities for people with disabilities. For example, mobile apps could help vision-impaired people to better perceive their surroundings and people with hearing impairments to translate voice conversations into text.

Autonomous Vehicles

Self-driving cars and trucks must process an enormous amount of data to make decisions in near-real-time. The complex networks being developed to support fleets of autonomous vehicles use distributed processing to an unprecedented degree, coordinating everything from the braking decisions in a single car to traffic management across an entire city.

TuSimple—which is building an autonomous freight network of mapped routes that allow for autonomous cargo shipments across the southwestern U.S.—chose MXNet as its foundational platform for artificial intelligence model development. The company is bringing self-driving technology to an industry plagued with a chronic driver shortage as well as high overhead due to accidents, shift schedules, and fuel inefficiencies.

TuSimple chose MXNet because of its cross-platform portability, training efficiency, and scalability. One factor was a benchmark that compared MXNet against TensorFlow and found that in an environment with eight GPUs, MXNet was faster, more memory-efficient, and more accurate.

Why MXNet Matters to…

Data scientists

Machine learning is a growing part of the data science landscape. For those who are unfamiliar with the fine points of deep learning model development, MXNet is a good place to start. Its broad language support, Gluon API, and flexibility are well-suited to organizations developing their own deep learning skill sets. Amazon’s endorsement ensures that MXNet will be around for the long term and that the third-party ecosystem will continue to grow. Many experts recommend MXNet as a good starting point for future excursions into more complex deep learning frameworks.

Machine learning researchers

MXNet is often used by researchers for its ability to prototype quickly, which makes it easier to turn research ideas into models and assess results. It also supports imperative programming, which gives researchers much more control over computation. The framework has also shown significant performance gains on certain types of models compared to other frameworks, thanks to its efficient utilization of CPUs and GPUs.

Software developers

Flexibility is a valued commodity in software engineering and MXNet is about as flexible a deep learning framework as can be found. In addition to its broad language support, it works with various data formats, including Amazon S3 cloud storage, and can scale up or down to fit most platforms. In 2019, MXNet added support for Horovod, a distributed learning framework developed by Uber. This offers software engineers even more flexibility in specifying deployment environments, which may include everything from laptops to cloud servers.

MXNet with NVIDIA GPUs

MXNet recommends NVIDIA GPUs to train and deploy neural networks because they offer significantly more computation power than CPUs, providing huge performance boosts in training and inference. Developers can easily get started with MXNet using NGC (NVIDIA GPU Cloud), where they can pull containers with pre-trained models for a variety of tasks, such as computer vision and natural language processing, with all the dependencies and the framework in one container. With NVIDIA TensorRT™, inference performance on MXNet can be improved significantly when using NVIDIA GPUs.

NVIDIA Deep Learning for Developers

GPU-accelerated deep learning frameworks offer the flexibility to design and train custom deep neural networks and provide interfaces to commonly used programming languages such as Python and C/C++. Widely used deep learning frameworks such as MXNet, PyTorch, TensorFlow, and others rely on NVIDIA GPU-accelerated libraries to deliver high-performance, multi-GPU accelerated training.

NVIDIA GPU Accelerated, End-to-End Data Science

The NVIDIA RAPIDS™ suite of open-source software libraries, built on CUDA-X AI, provides the ability to execute end-to-end data science and analytics pipelines entirely on GPUs. It relies on NVIDIA CUDA primitives for low-level compute optimization, but exposes that GPU parallelism and high-bandwidth memory speed through user-friendly Python interfaces.

With the RAPIDS GPU DataFrame, data can be loaded onto GPUs using a Pandas-like interface, and then used for various connected machine learning and graph analytics algorithms without ever leaving the GPU. This level of interoperability is made possible through libraries like Apache Arrow. This allows acceleration for end-to-end pipelines—from data prep to machine learning to deep learning.

RAPIDS supports device memory sharing between many popular data science libraries. This keeps data on the GPU and avoids costly copying back and forth to host memory.

Next Steps

Learn more:

GPU-Accelerated Data Science with RAPIDS | NVIDIA

MXNet Deep Learning Framework and GPU Acceleration | NVIDIA Data Center

NVIDIA deep learning home page.

The NVIDIA Deep Learning Institute

NVIDIA developers site


Installing MXNet

Indicate your preferred configuration. Then, follow the customized commands to install MXNet.


Latest release:

$ pip install mxnet

Previous releases (pin the version you need):

$ pip install mxnet==1.4.1
$ pip install mxnet==1.3.1
$ pip install mxnet==1.2.1
$ pip install mxnet==1.1.0
$ pip install mxnet==1.0.0
$ pip install mxnet==0.12.1
$ pip install mxnet==0.12.0
$ pip install mxnet==0.11.0

Nightly (pre-release) build:

$ pip install mxnet --pre

MKL-DNN enabled pip packages are optimized for Intel hardware; append -mkl to the package name (for example, pip install mxnet-mkl or pip install mxnet-mkl==1.4.1). You can find performance numbers in the MXNet tuning guide.

Check the chart below for other options, refer to PyPI for other MXNet pip packages, or validate your MXNet installation.

NOTES:

mxnet-cu101mkl means the package is built with CUDA/cuDNN and MKL-DNN enabled and the CUDA version is 10.1.

All MKL pip packages are experimental prior to version 1.3.0.

Docker images with MXNet are available at Docker Hub.

Step 1 Install Docker on your machine by following the docker installation instructions.

Note - You can install Community Edition (CE) to get started with MXNet.

Step 2 [Optional] Post installation steps to manage Docker as a non-root user.

Follow the four steps in this docker documentation to allow managing docker containers without sudo.

If you skip this step, you need to use sudo each time you invoke Docker.

Step 3 Pull the MXNet docker image.

$ docker pull mxnet/python # Use sudo if you skip Step 2

You can list Docker images to confirm that the mxnet/python image was pulled successfully.

$ docker images # Use sudo if you skip Step 2

REPOSITORY TAG IMAGE ID CREATED SIZE

mxnet/python latest 00d026968b3c 3 weeks ago 1.41 GB

Using the latest MXNet with Intel MKL-DNN is recommended for the fastest inference speeds with MXNet.

$ docker pull mxnet/python:1.3.0_cpu_mkl # Use sudo if you skip Step 2

$ docker images # Use sudo if you skip Step 2

REPOSITORY TAG IMAGE ID CREATED SIZE

mxnet/python 1.3.0_cpu_mkl deaf9bf61d29 4 days ago 678 MB

Step 4 Validate the installation.

To build from source, refer to the MXNet Ubuntu installation guide.

$ pip install mxnet-cu101

$ pip install mxnet-cu101==1.4.1

$ pip install mxnet-cu92==1.3.1

$ pip install mxnet-cu92==1.2.1

$ pip install mxnet-cu91==1.1.0

$ pip install mxnet-cu90==1.0.0

$ pip install mxnet-cu90==0.12.1

$ pip install mxnet-cu80==0.11.0

$ pip install mxnet-cu101 --pre

MXNet offers MKL pip packages that will be much faster when running on Intel hardware.

Check the chart below for other options, refer to PyPI for other MXNet pip packages, or validate your MXNet installation.

NOTES:

mxnet-cu101mkl means the package is built with CUDA/cuDNN and MKL-DNN enabled and the CUDA version is 10.1.

All MKL pip packages are experimental prior to version 1.3.0.

CUDA should be installed first. Instructions can be found in the CUDA dependencies section of the MXNet Ubuntu installation guide.

Important: Make sure your installed CUDA version matches the CUDA version in the pip package. Check your CUDA version with the following command:

nvcc --version

You can either upgrade your CUDA install or install the MXNet package that supports your CUDA version.

Docker images with MXNet are available at Docker Hub.

Step 1 Install Docker on your machine by following the docker installation instructions.

Note - You can install Community Edition (CE) to get started with MXNet.

Step 2 [Optional] Post installation steps to manage Docker as a non-root user.

Follow the four steps in this docker documentation to allow managing docker containers without sudo.

If you skip this step, you need to use sudo each time you invoke Docker.

Step 3 Install nvidia-docker-plugin following the installation instructions. nvidia-docker-plugin is required to enable the usage of GPUs from the docker containers.

Step 4 Pull the MXNet docker image.

$ docker pull mxnet/python:gpu # Use sudo if you skip Step 2

You can list Docker images to confirm that the mxnet/python image was pulled successfully.

$ docker images # Use sudo if you skip Step 2

REPOSITORY TAG IMAGE ID CREATED SIZE

mxnet/python gpu 493b2683c269 3 weeks ago 4.77 GB

Using the latest MXNet with Intel MKL-DNN is recommended for the fastest inference speeds with MXNet.

$ docker pull mxnet/python:1.3.0_gpu_cu92_mkl # Use sudo if you skip Step 2

$ docker images # Use sudo if you skip Step 2

REPOSITORY TAG IMAGE ID CREATED SIZE

mxnet/python 1.3.0_gpu_cu92_mkl adcb3ab19f50 4 days ago 4.23 GB

Step 5 Validate the installation.

Refer to the MXNet Ubuntu installation guide.

The default version of R that is installed with apt-get is insufficient. You will need to first install R v3.4.4+ and build MXNet from source.

After you have set up R v3.4.4+ and built MXNet, you can build and install the MXNet R bindings as follows, assuming that incubator-mxnet is the source directory you used to build MXNet:

$ cd incubator-mxnet

$ make rpkg


You can use the Maven packages defined in the following dependency to include MXNet in your Scala project. Please refer to the MXNet-Scala setup guide for a detailed set of instructions to help you with the setup process.

<dependency>
    <groupId>org.apache.mxnet</groupId>
    <artifactId>mxnet-full_2.11-linux-x86_64-gpu</artifactId>
</dependency>

You can use the Maven packages defined in the following dependency to include MXNet in your Scala project. Please refer to the MXNet-Scala setup guide for a detailed set of instructions to help you with the setup process.

<dependency>
    <groupId>org.apache.mxnet</groupId>
    <artifactId>mxnet-full_2.11-linux-x86_64-cpu</artifactId>
</dependency>

You can use the Maven packages defined in the following dependency to include MXNet in your Clojure project. To maximize leverage, the Clojure package has been built on the existing Scala package. Please refer to the MXNet-Scala setup guide for a detailed set of instructions to help you with the setup process that is required to use the Clojure dependency.

<dependency>
    <groupId>org.apache.mxnet.contrib.clojure</groupId>
    <artifactId>clojure-mxnet-linux-gpu</artifactId>
</dependency>

You can use the Maven packages defined in the following dependency to include MXNet in your Clojure project. To maximize leverage, the Clojure package has been built on the existing Scala package. Please refer to the MXNet-Scala setup guide for a detailed set of instructions to help you with the setup process that is required to use the Clojure dependency.

<dependency>
    <groupId>org.apache.mxnet.contrib.clojure</groupId>
    <artifactId>clojure-mxnet-linux-cpu</artifactId>
</dependency>

You can use the Maven packages defined in the following dependency to include MXNet in your Java project. The Java API is provided as a subset of the Scala API and is intended for inference only. Please refer to the MXNet-Java setup guide for a detailed set of instructions to help you with the setup process.

<dependency>
    <groupId>org.apache.mxnet</groupId>
    <artifactId>mxnet-full_2.11-linux-x86_64-gpu</artifactId>
    <version>[1.5.0, )</version>
</dependency>

You can use the Maven packages defined in the following dependency to include MXNet in your Java project. The Java API is provided as a subset of the Scala API and is intended for inference only. Please refer to the MXNet-Java setup guide for a detailed set of instructions to help you with the setup process.

<dependency>
    <groupId>org.apache.mxnet</groupId>
    <artifactId>mxnet-full_2.11-linux-x86_64-cpu</artifactId>
    <version>[1.5.0, )</version>
</dependency>

Refer to the Julia section of the MXNet Ubuntu installation guide.

Refer to the Perl section of the MXNet Ubuntu installation guide.

To enable the C++ package, build from source using `make USE_CPP_PACKAGE=1`.

Refer to the MXNet C++ setup guide for more info.

$ pip install mxnet

$ pip install mxnet==1.4.1

$ pip install mxnet==1.3.1

$ pip install mxnet==1.2.1

$ pip install mxnet==1.1.0

$ pip install mxnet==1.0.0

$ pip install mxnet==0.12.1

$ pip install mxnet==0.11.0

$ pip install mxnet --pre

MXNet offers MKL pip packages that will be much faster when running on Intel hardware.

Check the chart below for other options, refer to PyPI for other MXNet pip packages, or validate your MXNet installation.

NOTES:

mxnet-cu101mkl means the package is built with CUDA/cuDNN and MKL-DNN enabled and the CUDA version is 10.1.

All MKL pip packages are experimental prior to version 1.3.0.

Docker images with MXNet are available at Docker Hub.

Step 1 Install Docker on your machine by following the docker installation instructions.

Note - You can install Community Edition (CE) to get started with MXNet.

Step 2 Pull the MXNet docker image.

$ docker pull mxnet/python

You can list docker images to see if mxnet/python docker image pull was successful.

$ docker images

REPOSITORY TAG IMAGE ID CREATED SIZE

mxnet/python latest 00d026968b3c 3 weeks ago 1.41 GB

Using the latest MXNet with Intel MKL-DNN is recommended for the fastest inference speeds with MXNet.

$ docker pull mxnet/python:1.3.0_cpu_mkl

$ docker images

REPOSITORY TAG IMAGE ID CREATED SIZE

mxnet/python 1.3.0_cpu_mkl deaf9bf61d29 4 days ago 678 MB

Step 3 Validate the installation.

To build from source, refer to the MXNet macOS installation guide.

MXNet developers should refer to the MXNet wiki’s Developer Setup on Mac.

This option is only available by building from source. Refer to the MXNet macOS installation guide.

Refer to the MXNet macOS installation guide.

MXNet developers should refer to the MXNet wiki’s Developer Setup on Mac.

To run MXNet you should also have OpenCV and OpenBLAS installed. You can install them with brew as follows:

brew install opencv

brew install openblas

To ensure MXNet R package runs with the version of OpenBLAS installed, create a symbolic link as follows:

ln -sf /usr/local/opt/openblas/lib/libopenblas.dylib /usr/local/opt/openblas/lib/libopenblasp-r0.3.1.dylib

Note: packages for 3.6.x are not yet available.

Install 3.5.x of R from CRAN. The latest is v3.5.3.

You can build MXNet-R from source, or you can use a pre-built binary:

cran <- getOption("repos")

cran["dmlc"] <- "https://apache-mxnet.s3-accelerate.dualstack.amazonaws.com/R/CRAN/"

options(repos = cran)

install.packages("mxnet")

Will be available soon.

You can use the Maven packages defined in the following dependency to include MXNet in your Scala project. Please refer to the MXNet-Scala setup guide for a detailed set of instructions to help you with the setup process.

<dependency>
    <groupId>org.apache.mxnet</groupId>
    <artifactId>mxnet-full_2.11-osx-x86_64-cpu</artifactId>
</dependency>

Not available at this time.

You can use the Maven packages defined in the following dependency to include MXNet in your Clojure project. To maximize leverage, the Clojure package has been built on the existing Scala package. Please refer to the MXNet-Scala setup guide for a detailed set of instructions to help you with the setup process that is required to use the Clojure dependency.

<dependency>
    <groupId>org.apache.mxnet.contrib.clojure</groupId>
    <artifactId>clojure-mxnet-osx-cpu</artifactId>
</dependency>

Not available at this time.

You can use the Maven packages defined in the following dependency to include MXNet in your Java project. The Java API is provided as a subset of the Scala API and is intended for inference only. Please refer to the MXNet-Java setup guide for a detailed set of instructions to help you with the setup process.

<dependency>
    <groupId>org.apache.mxnet</groupId>
    <artifactId>mxnet-full_2.11-linux-x86_64-cpu</artifactId>
    <version>[1.5.0, )</version>
</dependency>

Not available at this time.

Refer to the Julia section of the MXNet macOS installation guide.

Refer to the Perl section of the MXNet macOS installation guide.

To enable the C++ package, build from source using `make USE_CPP_PACKAGE=1`.

Refer to the MXNet C++ setup guide for more info.

For more installation options, refer to the MXNet macOS installation guide.

$ pip install mxnet

$ pip install mxnet==1.4.1

$ pip install mxnet==1.3.1

$ pip install mxnet==1.2.1

$ pip install mxnet==1.1.0

$ pip install mxnet==1.0.0

$ pip install mxnet==0.12.1

$ pip install mxnet==0.11.0

$ pip install mxnet --pre

MXNet offers MKL pip packages that will be much faster when running on Intel hardware.

Check the chart below for other options, refer to PyPI for other MXNet pip packages, or validate your MXNet installation.

NOTES:

mxnet-cu101mkl means the package is built with CUDA/cuDNN and MKL-DNN enabled and the CUDA version is 10.1.

All MKL pip packages are experimental prior to version 1.3.0.

Docker images with MXNet are available at Docker Hub.

Step 1 Install Docker on your machine by following the docker installation instructions.

Note - You can install Community Edition (CE) to get started with MXNet.

Step 2 [Optional] Post installation steps to manage Docker as a non-root user.

Follow the four steps in this docker documentation to allow managing docker containers without sudo.

If you skip this step, you need to use sudo each time you invoke Docker.

Step 3 Pull the MXNet docker image.

$ docker pull mxnet/python # Use sudo if you skip Step 2

You can list docker images to see if mxnet/python docker image pull was successful.

$ docker images # Use sudo if you skip Step 2

REPOSITORY TAG IMAGE ID CREATED SIZE

mxnet/python latest 00d026968b3c 3 weeks ago 1.41 GB

Using the latest MXNet with Intel MKL-DNN is recommended for the fastest inference speeds with MXNet.

$ docker pull mxnet/python:1.3.0_cpu_mkl # Use sudo if you skip Step 2

$ docker images # Use sudo if you skip Step 2

REPOSITORY TAG IMAGE ID CREATED SIZE

mxnet/python 1.3.0_cpu_mkl deaf9bf61d29 4 days ago 678 MB

Step 4 Validate the installation.

Refer to the MXNet Windows installation guide

$ pip install mxnet-cu101

$ pip install mxnet-cu101==1.4.1

$ pip install mxnet-cu92==1.3.1

$ pip install mxnet-cu92==1.2.1

$ pip install mxnet-cu91==1.1.0

$ pip install mxnet-cu90==1.0.0

$ pip install mxnet-cu90==0.12.1

$ pip install mxnet-cu80==0.11.0

$ pip install mxnet-cu101 --pre

MXNet offers MKL pip packages that will be much faster when running on Intel hardware.

Check the chart below for other options, refer to PyPI for other MXNet pip packages, or validate your MXNet installation.

NOTES:

mxnet-cu101mkl means the package is built with CUDA/cuDNN and MKL-DNN enabled and the CUDA version is 10.1.

All MKL pip packages are experimental prior to version 1.3.0.

Anaconda is recommended.

CUDA should be installed first. Instructions can be found in the CUDA dependencies section of the MXNet Ubuntu installation guide.

Important: Make sure your installed CUDA version matches the CUDA version in the pip package. Check your CUDA version with the following command:

nvcc --version

Refer to #8671 for status on CUDA 9.1 support.

You can either upgrade your CUDA install or install the MXNet package that supports your CUDA version.

To build from source, refer to the MXNet Windows installation guide.

Note: packages for 3.6.x are not yet available.

Install 3.5.x of R from CRAN.

You can build MXNet-R from source, or you can use a pre-built binary:

cran <- getOption("repos")

cran["dmlc"] <- "https://apache-mxnet.s3-accelerate.dualstack.amazonaws.com/R/CRAN/"

options(repos = cran)

install.packages("mxnet")

To run MXNet you also should have OpenCV and OpenBLAS installed.

You can build MXNet-R from source, or you can use a pre-built binary:

cran <- getOption("repos")

cran["dmlc"] <- "https://apache-mxnet.s3-accelerate.dualstack.amazonaws.com/R/CRAN/GPU/cu92"

options(repos = cran)

install.packages("mxnet")

Change cu92 to cu90, cu91 or cuda100 based on your CUDA toolkit version. Currently, MXNet supports these versions of CUDA.
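The repository selection above amounts to a small string rule. As a sketch (the helper name is hypothetical), the MXNet-R repository URL for a given CUDA toolkit tag can be built like this, with no tag falling back to the CPU repository shown earlier:

```python
# Base of the MXNet-R CRAN-style repositories used in the snippets above.
BASE = "https://apache-mxnet.s3-accelerate.dualstack.amazonaws.com/R/CRAN/"

def mxnet_r_repo(cuda_tag=None):
    # cu90, cu91, cu92, ... select a GPU repository; None means CPU-only.
    return BASE + ("GPU/" + cuda_tag if cuda_tag else "")

print(mxnet_r_repo("cu92"))  # -> ...R/CRAN/GPU/cu92
```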

Note : You also need to have cuDNN installed on Windows. Check out this guide on the steps for installation.

MXNet-Scala for Windows is not yet available.

MXNet-Clojure for Windows is not yet available.

MXNet-Java for Windows is not yet available.

Refer to the Julia section of the MXNet Windows installation guide.

Refer to the Perl section of the MXNet Windows installation guide.

To enable the C++ package, build from source using `make USE_CPP_PACKAGE=1`.

Refer to the MXNet C++ setup guide for more info.

For more installation options, refer to the MXNet Windows installation guide.

MXNet is available on several cloud providers with GPU support. You can also find GPU/CPU-hybrid support for use cases like scalable inference, or even fractional GPU support with AWS Elastic Inference.

Alibaba

NVIDIA VM

Amazon Web Services

Amazon SageMaker - Managed training and deployment of MXNet models

AWS Deep Learning AMI - Preinstalled Conda environments for Python 2 or 3 with MXNet, CUDA, cuDNN, MKL-DNN, and AWS Elastic Inference

Dynamic Training on AWS - experimental manual EC2 setup or semi-automated CloudFormation setup

NVIDIA VM

Google Cloud Platform

NVIDIA VM

Microsoft Azure

NVIDIA VM

Oracle Cloud

NVIDIA VM

All NVIDIA VMs use the NVIDIA MXNet Docker container.

Follow the container usage instructions found in NVIDIA’s container repository.

MXNet should work on any cloud provider's CPU-only instances. Follow the Python pip install instructions, Docker instructions, or try the following preinstalled option.

Amazon Web Services

AWS Deep Learning AMI - Preinstalled Conda environments for Python 2 or 3 with MXNet and MKL-DNN.

MXNet supports the Debian based Raspbian ARM based operating system so you can run MXNet on Raspberry Pi 3B devices.

These instructions will walk through how to build MXNet for the Raspberry Pi and install the Python bindings for the library.

You can do a dockerized cross compilation build on your local machine or a native build on-device.

The complete MXNet library and its requirements can take almost 200MB of RAM, and loading large models with the library can take over 1GB of RAM. Because of this, we recommend running MXNet on the Raspberry Pi 3 or an equivalent device that has more than 1 GB of RAM and a Secure Digital (SD) card with at least 4 GB of free space.

Quick installation¶

You can use this pre-built Python wheel on a Raspberry Pi 3B with Stretch. You will likely need to install several dependencies to get MXNet to work. Refer to the following Build section for details.

Docker installation¶

Step 1 Install Docker on your machine by following the docker installation instructions.

Note - You can install Community Edition (CE)

Step 2 [Optional] Post installation steps to manage Docker as a non-root user.

Follow the four steps in this docker documentation to allow managing docker containers without sudo.

Build¶

This cross compilation build is experimental.

Please use a native build with GCC 4 as explained below; newer compiler versions currently cause test failures on ARM.

The following command builds a container with the required dependencies and tools, and then compiles MXNet for ARMv7. Run it on a fast cloud instance or a fast local PC to save time. The resulting artifact will be located in build/mxnet-x.x.x-py2.py3-none-any.whl; copy this file to your Raspberry Pi. The previously mentioned pre-built wheel was created using this method.

ci/build.py -p armv7

Install using a pip wheel¶

Your Pi will need several dependencies.

Install MXNet dependencies with the following:

sudo apt-get update

sudo apt-get install -y \

apt-transport-https \

build-essential \

ca-certificates \

cmake \

curl \

git \

libatlas-base-dev \

libcurl4-openssl-dev \

libjemalloc-dev \

liblapack-dev \

libopenblas-dev \

libopencv-dev \

libzmq3-dev \

ninja-build \

python-dev \

python-pip \

software-properties-common \

sudo \

unzip \

virtualenv \

wget

Install virtualenv with:

sudo pip install virtualenv

Create a Python 2.7 environment for MXNet with:

virtualenv -p `which python` mxnet_py27

You may use Python 3; however, the wine bottle detection example for the Pi with camera requires Python 2.7.

Activate the environment, then install the wheel we created previously, or install this prebuilt wheel.

source mxnet_py27/bin/activate

pip install mxnet-x.x.x-py2.py3-none-any.whl

Test MXNet with the Python interpreter:

$ python

>>> import mxnet

If there are no errors then you’re ready to start using MXNet on your Pi!

Native Build¶

Installing MXNet from source is a two-step process:

Build the shared library from the MXNet C++ source code.

Install the supported language-specific packages for MXNet.

Step 1 Build the Shared Library

On Raspbian versions Wheezy and later, you need the following dependencies:

Git (to pull code from GitHub)

libblas (for linear algebraic operations)

libopencv (for computer vision operations. This is optional if you want to save RAM and Disk Space)

A C++ compiler that supports C++ 11. The C++ compiler compiles and builds MXNet source code. Supported compilers include the following:

G++ (4.8 or later). Make sure to use gcc 4 and not 5 or 6 as there are known bugs with these compilers.

Clang (3.9 - 6)

Install these dependencies using the following commands in any directory:

sudo apt-get update

sudo apt-get -y install git cmake ninja-build build-essential g++-4.9 c++-4.9 liblapack* libblas* libopencv* libopenblas* python3-dev python-dev virtualenv

Clone the MXNet source code repository using the following git command in your home directory:

git clone https://github.com/apache/incubator-mxnet.git --recursive

cd incubator-mxnet

Build:

mkdir -p build && cd build

cmake \

-DUSE_SSE=OFF \

-DUSE_CUDA=OFF \

-DUSE_OPENCV=ON \

-DUSE_OPENMP=ON \

-DUSE_MKL_IF_AVAILABLE=OFF \

-DUSE_SIGNAL_HANDLER=ON \

-DCMAKE_BUILD_TYPE=Release \

-GNinja ..

ninja -j$(nproc)

Some compilation units require close to 1GB of memory, so it is recommended that you enable swap as explained below and be cautious about increasing the number of build jobs (-j).

Executing these commands starts the build process, which can take up to a couple of hours, and creates a file called libmxnet.so in the build directory.

If you are getting build errors in which the compiler is being killed, it is likely that the compiler is running out of memory (especially on a Raspberry Pi 1, 2, or Zero, which have less than 1GB of RAM). This can often be rectified by increasing the swap file size: edit /etc/dphys-swapfile, change the line CONF_SWAPSIZE=100 to CONF_SWAPSIZE=1024, and then run:

sudo /etc/init.d/dphys-swapfile stop

sudo /etc/init.d/dphys-swapfile start

free -m # to verify the swapfile size has been increased

Step 2 Build cython modules (optional)

$ pip install Cython

$ make cython # You can set the python executable with `PYTHON` flag, e.g., make cython PYTHON=python3

MXNet tries to use the cython modules unless the environment variable MXNET_ENABLE_CYTHON is set to 0. If loading the cython modules fails, the default behavior is falling back to ctypes without any warning. To raise an exception at the failure, set the environment variable MXNET_ENFORCE_CYTHON to 1. See here for more details.
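As an illustration of the environment variables just described, the following sketch shows how they would be set from Python. Note that they must be set before `import mxnet` for them to take effect:

```python
import os

# Raise an exception if the Cython modules fail to load, instead of the
# default silent fallback to ctypes:
os.environ["MXNET_ENFORCE_CYTHON"] = "1"

# Alternatively, skip the Cython modules entirely and always use ctypes:
# os.environ["MXNET_ENABLE_CYTHON"] = "0"

# ... `import mxnet` would go here, after the variables are set.
```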

Step 3 Install MXNet Python Bindings

To install Python bindings run the following commands in the MXNet directory:

cd python

pip install --upgrade pip

pip install -e .

Note that the -e flag is optional. It is equivalent to --editable and means that if you edit the source files, these changes will be reflected in the package installed.

Alternatively you can create a whl package installable with pip with the following command:

ci/docker/runtime_functions.sh build_wheel python/ $(realpath build)

You are now ready to run MXNet on your Raspberry Pi device. You can get started by following the tutorial on Real-time Object Detection with MXNet On The Raspberry Pi.

Note - Because the complete MXNet library takes up a significant amount of the Raspberry Pi’s limited RAM, when loading training data or large models into memory, you might have to turn off the GUI and terminate running processes to free RAM.

Nvidia Jetson TX family¶

MXNet supports the Ubuntu AArch64 based operating system, so you can run MXNet on NVIDIA Jetson devices.

These instructions will walk through how to build MXNet for the Pascal-based NVIDIA Jetson TX2 and install the corresponding Python language bindings.

For the purposes of this install guide we will assume that CUDA is already installed on your Jetson device.

Install MXNet

Installing MXNet is a two-step process:

Build the shared library from the MXNet C++ source code.

Install the supported language-specific packages for MXNet.

Step 1 Build the Shared Library

You need the following additional dependencies:

Git (to pull code from GitHub)

libatlas (for linear algebraic operations)

libopencv (for computer vision operations)

python pip (to load relevant python packages for our language bindings)

Install these dependencies using the following commands in any directory:

sudo apt-get update

sudo apt-get -y install git build-essential libatlas-base-dev libopencv-dev graphviz python-pip

sudo pip install pip --upgrade

sudo pip install setuptools numpy --upgrade

sudo pip install graphviz==0.8.4 \

jupyter

Clone the MXNet source code repository using the following git command in your home directory:

git clone https://github.com/apache/incubator-mxnet.git --recursive

cd incubator-mxnet

Edit the Makefile to install the MXNet with CUDA bindings to leverage the GPU on the Jetson:

cp make/crosscompile.jetson.mk config.mk

Ensure MXNet builds with Pascal's hardware-level low-precision acceleration by editing the Mshadow Makefile, 3rdparty/mshadow/make/mshadow.mk, and adding the following after line 122:

MSHADOW_CFLAGS += -DMSHADOW_USE_PASCAL=1

Now you can build the complete MXNet library with the following command:

make -j $(nproc)

Executing this command creates a file called libmxnet.so in the mxnet/lib directory.

Step 2 Install MXNet Python Bindings

To install Python bindings run the following commands in the MXNet directory:

cd python

pip install --upgrade pip

pip install -e .

Note that the -e flag is optional. It is equivalent to --editable and means that if you edit the source files, these changes will be reflected in the package installed.

Add the mxnet folder to the path:

cd ..

export MXNET_HOME=$(pwd)

echo "export PYTHONPATH=$MXNET_HOME/python:$PYTHONPATH" >> ~/.rc

source ~/.rc
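The PYTHONPATH export above simply makes the mxnet/python directory importable. The same effect can be sketched from inside Python (the path below is a hypothetical stand-in for your MXNET_HOME):

```python
import os
import sys

# Put the mxnet/python directory on the import path, as the shell
# export of PYTHONPATH does.
mxnet_python = os.path.join(os.path.expanduser("~"), "incubator-mxnet", "python")
sys.path.insert(0, mxnet_python)
print(mxnet_python in sys.path)  # -> True
```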

You are now ready to run MXNet on your NVIDIA Jetson TX2 device.

Source Download¶

Download your required version of MXNet and build from source.

Table Of Contents

Installing MXNet

Quick installation

Docker installation

Build

Install using a pip wheel

Native Build

Nvidia Jetson TX family

Source Download

Apache MXNet is an effort undergoing incubation at The Apache Software Foundation (ASF), sponsored by the Apache Incubator. Incubation is required of all newly accepted projects until a further review indicates that the infrastructure, communications, and decision making process have stabilized in a manner consistent with other successful ASF projects. While incubation status is not necessarily a reflection of the completeness or stability of the code, it does indicate that the project has yet to be fully endorsed by the ASF.

"Copyright © 2017-2018, The Apache Software Foundation

Apache MXNet, MXNet, Apache, the Apache feather, and the Apache MXNet project logo are either registered trademarks or trademarks of the Apache Software Foundation."


MXNet: A Growing Deep Learning Framework

Jeffrey Mellon

November 14, 2022

PUBLISHED IN

Artificial Intelligence Engineering


Deep learning refers to the component of machine learning (ML) that attempts to mimic the mechanisms deployed by the human brain. It consists of forming deep neural networks (DNNs), which have several (hidden) layers. Applications include virtual assistants (such as Alexa and Siri), detecting fraud, predicting election results, medical imaging for detecting and diagnosing diseases, driverless vehicles, and deepfake creation and detection. You may have heard of TensorFlow or PyTorch, which are widely used deep learning frameworks. As this post details, MXNet (pronounced mix-net) is Apache's open-source spin on a deep-learning framework that supports building and training models in multiple languages, including Python, R, Scala, Julia, Java, Perl, and C++.

An Overview of MXNet

Along with the aforementioned languages, trained MXNet models can be used for prediction in MATLAB and JavaScript. Regardless of the model-building language, MXNet calls optimized C++ as the back-end engine. Moreover, it is scalable and runs on systems ranging from mobile devices to distributed graphics processing unit (GPU) clusters. Not only does the MXNet framework enable fast model training, it scales automatically to the number of available GPUs across multiple hosts and machines. MXNet also supports data synchronization over multiple devices with multiple users. MXNet research has been conducted at several universities, including Carnegie Mellon University, and Amazon uses it as its deep-learning framework due to its GPU capabilities and cloud computing integration.

Figure 1: MXNet architecture. Source: https://mxnet.apache.org/versions/1.4.1/architecture/overview.html

Figure 1 describes MXNet's capabilities. The MXNet engine allows for good resource utilization, parallelization, and reproducibility. Its KVStore is a distributed key-value store for data communication and synchronization over multiple devices. A user can push a key-value pair from a device to the store and pull the value on a key from the store.

Imperative programming is a programming paradigm in which the programmer explicitly instructs the machine how to complete a task. It tends to follow a procedural rather than a declarative style, mimicking the way the processor executes machine code; that is, it focuses not on the logic of a computation or what the program will accomplish, but on how to compute it as a series of tasks. Examples of languages that incorporate imperative programming include C, C++, Python, and JavaScript. Within MXNet, imperative programming specifies how to perform a computation (e.g., tensor operations), and it is incorporated through NDArray, which is useful for storing and transforming data, much like NumPy's ndarray. Data is represented as multi-dimensional arrays that can run on GPUs to accelerate computing. Moreover, MXNet contains data iterators that allow users to load images with their labels directly from directories. After retrieving the data, it can be preprocessed into batches of images and iterated over before being fed into a neural network.

Lastly, MXNet provides the ability to blend symbolic and imperative programming. Symbolic programming specifies what computations to perform (e.g., the declaration of a computation graph). Gluon, a hybrid programming interface, combines the imperative and symbolic interfaces while keeping the capabilities and advantages of both. Importantly, Gluon is key to building and training neural networks, largely for image classification, deepfake detection, and similar tasks.
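The KVStore push/pull semantics described above can be mimicked in a few lines of plain Python. This is a conceptual toy, not the MXNet API: updates pushed for a key are merged (here by summation, as gradient updates from multiple devices would be), and any device can pull the merged value.

```python
# Conceptual mimic of KVStore semantics (plain Python, not MXNet).
class ToyKVStore:
    def __init__(self):
        self._store = {}

    def push(self, key, value):
        # Values pushed for the same key are merged by summation.
        self._store[key] = self._store.get(key, 0) + value

    def pull(self, key):
        return self._store[key]

kv = ToyKVStore()
kv.push("weight", 3)      # update from device 0
kv.push("weight", 4)      # update from device 1
print(kv.pull("weight"))  # -> 7
```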
There is also a version of Gluon specifically designed for natural language processing (NLP).

Why MXNet Is Better for GPUs

CPUs are composed of just a few cores with lots of cache memory and can handle a few software threads at a time. In contrast, a GPU is composed of hundreds of cores that can handle thousands of threads simultaneously. The parallel nature of neural networks (created from large numbers of identical neurons) maps naturally to GPUs, providing a significant computation speed-up over CPU-only training. Currently, GPUs are the platform of choice for training large, complex neural-network-based systems, and adding GPUs to an MXNet training run leads to significantly faster completion times thanks to the "embarrassingly parallel" nature of these computations.

By default, MXNet relies on CPUs, but users can specify GPUs. On a CPU, MXNet allocates data in main memory and tries to use as many CPU cores as possible, even across multiple CPU sockets. With multiple GPUs, users must specify which GPU each NDArray is allocated to and must move data between devices explicitly. The only requirement for performing an operation on a particular GPU is that the inputs of the operation are already on that GPU; the output is then allocated on the same GPU as well.

Before delving into why MXNet is of interest to researchers, let's compare the available deep learning software. In Table 1 below, I outline some of the key similarities and differences between MXNet, TensorFlow, and PyTorch.

|                              | MXNet  | TensorFlow                               | PyTorch                        |
|------------------------------|--------|------------------------------------------|--------------------------------|
| Year Released                | 2015   | 2015                                     | 2016                           |
| Creator                      | Apache | Google Brain                             | Facebook members               |
| Deep Learning Priority       | Yes    | Yes                                      | Yes                            |
| Open-source                  | Yes    | Yes                                      | Yes                            |
| CUDA support                 | Yes    | Yes                                      | Yes                            |
| Simple Interactive Debugging | Yes    | No                                       | Yes                            |
| Multi-GPU training           | Yes    | Yes                                      | Yes                            |
| Frequent community updates   | Lesser | Higher                                   | Lesser                         |
| Easy Learning                | Yes    | No (interface changes after every update) | Yes (typically if Python user) |
| Interface for Monitoring     | Yes    | Yes                                      | Yes                            |

Table 1: A Comparison of MXNet, TensorFlow, and PyTorch

Some other features that are hard to put into the chart include the following:

TensorFlow generally does better on CPU than MXNet, but MXNet generally does better (in speed and performance) than PyTorch and TensorFlow on GPUs.

MXNet offers good ease of learning, resource utilization, and computation speed, specifically on GPUs.

Why MXNet Looks Promising

With the rise of disinformation campaigns, such as deepfakes, coupled with new remote work environments brought about by the onslaught of COVID, deep learning is increasingly important in the realm of cybersecurity. Deep learning consists of forming algorithms made up of deep neural networks (DNNs) that have multiple layers, several of which are hidden. These deep learning algorithms are used to create and detect deepfakes. As DarkReading noted in a January 2022 article, malicious actors are deploying increasingly sophisticated impersonation attempts, and organizations must prepare for the increasingly sophisticated threat of deepfakes: "What was a cleverly written phishing email from a C-level email account in 2021 could become a well-crafted video or voice recording attempting to solicit the same sensitive information and resources in 2022 and beyond."

The rise in deepfakes has also brought about a rise in the number of available deep learning frameworks. MXNet appears able to compete with two of the top industry frameworks and could be a suitable candidate for further research or for use in one's own research projects, including deepfake detection, self-driving cars, fraud detection, and even natural language processing applications. Deepfake detection is already being researched here at the SEI by my colleagues Catherine Bernaciak, Shannon Gallagher, Thomas Scanlon, Dominic Ross, and myself.

MXNet has limitations, including a relatively small community of contributors, which limits its ability to fix bugs, improve content, and add new features. MXNet is not as popular as TensorFlow and PyTorch, even though it is actively used by businesses like Amazon. Despite these limitations, MXNet is a computationally efficient, scalable, portable, fast framework that provides a user-friendly experience to users who rely on several different programming languages. Its GPU capabilities and high performance make it a deep-learning framework that deserves to be more widely used and known.

In a future post, I will provide details on an interactive notebook that CERT researchers have developed to give users hands-on experience with MXNet.

Additional Resources

Several employees of the CERT Division presented at SEI Deepfakes Day 2022 on August 30, 2022. Deepfakes Day is a workshop hosted by the SEI that addresses the origins of deepfake media content, its technical underpinnings, and the available methods to detect and mitigate deepfakes. Dr. William Corvey, Program Manager for DARPA's SemaFor program, delivered the keynote.

Official MXNet GitHub site: https://github.com/apache/incubator-mxnet

View the SEI blog post How Easy Is It to Make and Detect a Deepfake?

More information on GANs and other types of networks is available at https://d2l.ai/chapter_generative-adversarial-networks/gan.html, which offers example tutorials that use MXNet.

Also check out the following articles and papers:

MXNet: A Flexible and Efficient Machine Learning Library for Heterogeneous Distributed Systems

The Top 5 Deep Learning Frameworks to Watch in 2021 and Why TensorFlow

Top 10 Deep Learning Frameworks in 2022 You Can't Ignore

The Ultimate Guide to Machine Learning Frameworks

Popular Deep Learning Frameworks: An Overview

Top 8 Deep Learning Frameworks

Written By

Jeffrey Mellon

Author Page

Digital Library Publications

Send a Message

More By The Author

Explainability in Cybersecurity Data Science

November 20, 2023

By

Jeffrey Mellon,

Clarence Worrell

More In Artificial Intelligence Engineering

OpenAI Collaboration Yields 14 Recommendations for Evaluating LLMs for Cybersecurity

February 21, 2024

By

Jeff Gennari,

Shing-hon Lau,

Samuel J. Perl

Using ChatGPT to Analyze Your Code? Not So Fast

February 12, 2024

By

Mark Sherman

Creating a Large Language Model Application Using Gradio

December 4, 2023

By

Tyler Brooks

Generative AI Q&A: Applications in Software Engineering

November 16, 2023

By

John E. Robert,

Douglas Schmidt (Vanderbilt University)

Harnessing the Power of Large Language Models For Economic and Social Good: 4 Case Studies

September 25, 2023

By

Matthew Walsh,

Dominic A. Ross,

Clarence Worrell,

Alejandro Gomez

PUBLISHED IN

Artificial Intelligence Engineering





Carnegie Mellon University

Software Engineering Institute

4500 Fifth Avenue

Pittsburgh, PA 15213-2612

412-268-5800


What Is Apache MXNet? - Apache MXNet on AWS

Apache MXNet (MXNet) is an open source deep learning framework that allows you to define, train, and deploy deep neural networks on a wide array of platforms, from cloud infrastructure to mobile devices. It is highly scalable, which allows for fast model training, and it supports a flexible programming model and multiple languages. The MXNet library is portable and lightweight, and it scales seamlessly across multiple GPUs on multiple machines.

MXNet supports programming in various languages, including Python, R, Scala, Julia, and Perl.

This user guide has been deprecated and is no longer available. For more information on MXNet and related material, see the topics below.

MXNet

MXNet is an Apache open source project. For more information about MXNet, see the following open source documentation:

Getting Started – Provides details for setting up MXNet on various platforms, such as macOS, Windows, and Linux. It also explains how to set up MXNet for use with different front-end languages, such as Python, Scala, R, Julia, and Perl.

Tutorials – Provides step-by-step procedures for various deep learning tasks using MXNet.

API Reference – For more information, see MXNet. At the top of this page, choose a language from the API menu.

For all other information, see MXNet.

Deep Learning AMIs (DLAMI)

AWS provides Deep Learning Amazon Machine Images (DLAMIs) optimized for both CPU and GPU EC2 instances. The DLAMI User's Guide explains how to set up MXNet on AWS using these AMIs.

Amazon SageMaker

You can use MXNet with Amazon SageMaker to train models using your own custom Apache MXNet code. Amazon SageMaker is a fully managed machine learning service. With Amazon SageMaker, data scientists and developers can quickly and easily build and train machine learning models, and then directly deploy them into a production-ready hosted environment. For more information, see the Amazon SageMaker documentation.

Machine Learning Inference Framework - Apache MXNet on AWS - AWS

Apache MXNet on AWS

Build machine learning applications that train quickly and run anywhere

Apache MXNet is a fast and scalable training and inference framework with an easy-to-use, concise API for machine learning.

MXNet includes the Gluon interface that allows developers of all skill levels to get started with deep learning on the cloud, on edge devices, and on mobile apps. In just a few lines of Gluon code, you can build linear regression, convolutional networks and recurrent LSTMs for object detection, speech recognition, recommendation, and personalization.

You can get started with MXNet on AWS through a fully managed experience using Amazon SageMaker, a platform to build, train, and deploy machine learning models at scale. Or you can use the AWS Deep Learning AMIs to build custom environments and workflows with MXNet as well as other frameworks, including TensorFlow, PyTorch, Chainer, Keras, Caffe, Caffe2, and Microsoft Cognitive Toolkit.

Contribute to the Apache MXNet Project

Get involved at GitHub

Grab sample code, notebooks, and tutorial content at the GitHub project page.

Benefits of deep learning using MXNet

Ease-of-Use with Gluon

MXNet’s Gluon library provides a high-level interface that makes it easy to prototype, train, and deploy deep learning models without sacrificing training speed. Gluon offers high-level abstractions for predefined layers, loss functions, and optimizers. It also provides a flexible structure that is intuitive to work with and easy to debug.

Greater Performance

Deep learning workloads can be distributed across multiple GPUs with near-linear scalability, which means that extremely large projects can be handled in less time. In addition, scaling is automatic, based on the number of GPUs in a cluster. Developers also save time and increase productivity by running serverless and batch-based inferencing.

For IoT & the Edge

In addition to handling multi-GPU training and deployment of complex models in the cloud, MXNet produces lightweight neural network model representations that can run on lower-powered edge devices like a Raspberry Pi, smartphone, or laptop and process data remotely in real-time.

Flexibility & Choice

MXNet supports a broad set of programming languages—including C++, JavaScript, Python, R, Matlab, Julia, Scala, Clojure, and Perl—so you can get started with languages that you already know. On the backend, however, all code is compiled in C++ for the greatest performance regardless of what language is used to build the models.

Customer momentum

Case studies

There are over 500 contributors to the MXNet project including developers from Amazon, NVIDIA, Intel, Samsung, and Microsoft. Learn about how customers are using MXNet for deep learning projects. For more case studies, see the AWS machine learning blog and the MXNet blog.

Uses deep learning to lower property damage losses from natural disasters

Celgene advances pharma research and discovery with the help of AI on MXNet

Curalate makes social sell with AI using Apache MXNet on AWS

Amazon SageMaker for machine learning

Learn more about Amazon SageMaker

Amazon SageMaker is a fully-managed service that enables developers and data scientists to quickly and easily build, train, and deploy machine learning models at any scale. Amazon SageMaker removes all the barriers that typically slow down developers who want to use machine learning.
