
News: SystemML Release 0.8.0 - Distributed and Declarative Machine Learning

The Spark Technology Center team has just released SystemML 0.8.0.

SystemML 0.8.0 is the first binary release of SystemML since its initial migration to GitHub on August 16, 2015; the project became publicly available there on August 27, 2015. This release represents more than 320 patches from 14 contributors since the migration.

Extensive updates have been made to the project in several areas, including APIs, data ingestion, optimizations, language and runtime operators, new algorithms, testing, and online documentation.

APIs

Improvements to MLContext and to MLPipeline wrappers

Data Ingestion

Data conversion utilities (from RDDs and DataFrames)
Data transformations on raw data sets

Optimizations

Extensions to compilation chain, including IPA
Improvements to parfor
Improved execution of concurrent Apache Spark jobs
New rewrites, including eager RDD caching and repartitioning
Improvements to buffer pool caching
Partitioning-preserving operations
On-demand creation of SparkContext
Efficient use of RDD checkpointing

Language and Runtime Operators

New matrix multiplication operators (e.g., ZipMM)
New multi-threaded readers and operators
Extended aggregation-outer operations for different relational operators
Sample capability
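To give a sense of what the extended aggregation-outer operations do, the sketch below illustrates the general pattern in plain Python: computing the row sums of an outer relational comparison between two vectors without ever materializing the full m-by-n comparison matrix. This is a generic illustration under assumed semantics, not SystemML code — SystemML expresses the same idea in DML over distributed matrices, and the function name here is hypothetical.

```python
import bisect

def row_counts_greater(v1, v2):
    """For each x in v1, count the entries of v2 strictly greater than x.

    Equivalent to rowSums(outer(v1, v2, "<")) in outer-comparison terms,
    but fused: sorting v2 once and binary-searching avoids building the
    len(v1) x len(v2) comparison matrix.
    """
    s = sorted(v2)
    n = len(s)
    return [n - bisect.bisect_right(s, x) for x in v1]

counts = row_counts_greater([1, 5, 3], [2, 2, 4, 6])
# counts -> [4, 1, 2]
```

Fusing the aggregation into the comparison in this way is what makes such operations practical at scale, since the intermediate outer matrix can be far larger than either input vector.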

New Algorithms

Alternating Least Squares (Conjugate Gradient)
Cubic Splines (Conjugate Gradient and Direct Solve)
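For readers unfamiliar with alternating least squares, the following is a minimal rank-1 sketch of the technique in plain Python: fix one factor, solve the least-squares problem for the other in closed form, and alternate. This is an illustration of the general ALS idea only — SystemML's new algorithm is implemented in DML, handles higher ranks and sparse data, and offers a conjugate-gradient solver, none of which is shown here.

```python
def als_rank1(R, iterations=20):
    """Factor a fully observed matrix R (list of lists) as R ~ u * v^T
    by alternating closed-form least-squares updates."""
    m, n = len(R), len(R[0])
    u = [1.0] * m
    v = [1.0] * n
    for _ in range(iterations):
        # Fix v; the least-squares solution for each u[i] is
        # u_i = sum_j R_ij * v_j / sum_j v_j^2
        vv = sum(x * x for x in v)
        u = [sum(R[i][j] * v[j] for j in range(n)) / vv for i in range(m)]
        # Fix u; symmetric update for each v[j]
        uu = sum(x * x for x in u)
        v = [sum(R[i][j] * u[i] for i in range(m)) / uu for j in range(n)]
    return u, v

# A rank-1 matrix is recovered exactly (up to a scaling of the factors).
R = [[2.0, 4.0], [3.0, 6.0], [1.0, 2.0]]
u, v = als_rank1(R)
approx = [[u[i] * v[j] for j in range(len(v))] for i in range(len(u))]
```

In the real algorithm each alternating step solves a regularized linear system per row, which is where the choice between a direct solve and conjugate gradient comes in.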

Testing

PyDML algorithm tests
Test suite refactoring
Improvements to performance tests

Online Documentation

GitHub README
Quick Start Guide
DML and PyDML Programming Guide
MLContext Programming Guide
Algorithms Reference
DML Language Reference
Debugger Guide
Documentation site available at http://sparktc.github.io/systemml/
