MLlib

MLlib is Apache Spark's scalable machine learning library.

Ease of Use

Usable in Java, Scala, Python, and R (via SparkR).

MLlib fits into Spark's APIs and interoperates with NumPy in Python (starting in Spark 0.9). You can use any Hadoop data source (e.g. HDFS, HBase, or local files), making it easy to plug into Hadoop workflows.

from pyspark.mllib.clustering import KMeans

# 'spark' is an existing SparkContext; parsePoint is a user-defined parser
points = spark.textFile("hdfs://...") \
              .map(parsePoint)

# Cluster the parsed points into 10 groups
model = KMeans.train(points, k=10)
Calling MLlib in Python
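
The parsePoint helper above is not part of MLlib; as a minimal sketch of the NumPy interoperability mentioned earlier, it could simply return a NumPy array, which MLlib accepts as a dense feature vector:

import numpy as np

def parsePoint(line):
    # Turn a whitespace-separated line of numbers into a NumPy feature vector;
    # MLlib accepts NumPy arrays directly as dense vectors.
    return np.array([float(x) for x in line.split()])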

Performance

High-quality algorithms, 100x faster than MapReduce.

Spark excels at iterative computation, enabling MLlib to run fast. At the same time, we care about algorithmic performance: MLlib contains high-quality algorithms that leverage iteration, and can yield better results than the one-pass approximations sometimes used on MapReduce.

Logistic regression in Hadoop and Spark
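
As a rough sketch of the kind of iterative workload behind this comparison (the input path and parsing are placeholders, not a benchmark setup), logistic regression can be trained with MLlib's RDD-based API in a few lines:

from pyspark.mllib.classification import LogisticRegressionWithLBFGS
from pyspark.mllib.regression import LabeledPoint

# Each input line is assumed to be "<label> <feature1> <feature2> ..."
def parseLabeledLine(line):
    values = [float(x) for x in line.split()]
    return LabeledPoint(values[0], values[1:])

training = spark.textFile("hdfs://...").map(parseLabeledLine)
model = LogisticRegressionWithLBFGS.train(training)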

Easy to Deploy

Runs on existing Hadoop clusters and data.

If you have a Hadoop 2 cluster, you can run Spark and MLlib without any pre-installation. Otherwise, Spark is easy to run standalone or on EC2 or Mesos. You can read from HDFS, HBase, or any Hadoop data source.

Algorithms

MLlib contains the following algorithms and utilities:

  • logistic regression and linear support vector machine (SVM)
  • classification and regression tree
  • random forest and gradient-boosted trees
  • recommendation via alternating least squares (ALS)
  • clustering via k-means, bisecting k-means, Gaussian mixtures (GMM), and power iteration clustering
  • topic modeling via latent Dirichlet allocation (LDA)
  • survival analysis via accelerated failure time model
  • singular value decomposition (SVD) and QR decomposition
  • principal component analysis (PCA)
  • linear regression with L1, L2, and elastic-net regularization
  • isotonic regression
  • multinomial/binomial naive Bayes
  • frequent itemset mining via FP-growth and association rules
  • sequential pattern mining via PrefixSpan
  • summary statistics and hypothesis testing
  • feature transformations
  • model evaluation and hyper-parameter tuning

Refer to the MLlib guide for usage examples.
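
As one small illustration, the ALS recommender listed above can be trained in a few lines with the RDD-based API (the inline ratings here are purely illustrative):

from pyspark.mllib.recommendation import ALS, Rating

# A tiny, made-up set of (user, product, rating) triples
ratings = spark.parallelize([Rating(0, 0, 4.0), Rating(0, 1, 2.0), Rating(1, 1, 3.0)])

# Factorize the ratings matrix with rank 10 over 10 iterations
model = ALS.train(ratings, rank=10, iterations=10)

# Predict how user 1 would rate product 0
prediction = model.predict(1, 0)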

Community

MLlib is developed as part of the Apache Spark project. It thus gets tested and updated with each Spark release.

If you have questions about the library, ask on the Spark mailing lists.

MLlib is still a young project and welcomes contributions. If you'd like to submit an algorithm to MLlib, read how to contribute to Spark and send us a patch!

Getting Started

To get started with MLlib:

  • Download Spark. MLlib is included as a module.
  • Read the MLlib guide, which includes various usage examples.
  • Learn how to deploy Spark on a cluster if you'd like to run in distributed mode. You can also run locally on a multicore machine without any setup, as in the sketch below.
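
A minimal local-mode setup (the application name is illustrative) is just a SparkContext pointed at the local machine:

from pyspark import SparkContext

# Use all local cores; no cluster or Hadoop installation required
sc = SparkContext(master="local[*]", appName="MLlibExample")

# 'sc' can then stand in for 'spark' in the snippets above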