
0 to Life-Changing App: We Found Data!

Could it be? Yes!

My team and I have finally found delightful data or, rather, the Goldilocks of data. Whatever you prefer. The important part is that the data is public. The data is big. The data is really big. The data is relevant to changing the world. Score.

Not sure what I'm talking about or what I'm doing? Look here and here.

Now that we have this dazzling data, what do we do with it? Below is an official introduction to my team's life-changing app with Apache SystemML.

An Official Introduction to My Life-Changing App with Apache SystemML:

What is our app about?

A few days ago, one of my mentors came across a competition meant for life-changers everywhere: a competition that challenges researchers, developers, data scientists, and the like to find a better way to predict the grade of breast cancer cells. Our app will try to come up with a solution for this challenge. We will look at various images of breast cancer cells, apply machine learning, and determine the cells' cancer grade.

What data are we using?

We are using several hundred images of breast cancer tissue. The images vary in size, but each hovers around 2 GB.

How will we be using the data?

While I cannot share too much information (this is a competition), I can confirm that we will be using Apache Spark and Apache SystemML! Feeling left without a lot of details? Don't worry, more blogs and tutorials on the technicalities of how I am using Spark and SystemML will come soon so that you can follow along and build your own app, as promised!
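To give a small taste of how SystemML plugs into Spark, here is a minimal sketch using SystemML's MLContext API. This is not our competition code; the DML script is a toy placeholder, and the app/variable names are illustrative assumptions:

```scala
// Minimal sketch: running a SystemML DML script on Spark via MLContext.
// The DML below is a toy placeholder, not our actual competition model.
import org.apache.spark.sql.SparkSession
import org.apache.sysml.api.mlcontext.{MLContext, ScriptFactory}

object SystemMLSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("systemml-sketch")
      .master("local[*]")
      .getOrCreate()

    // MLContext bridges Spark and the SystemML runtime.
    val ml = new MLContext(spark.sparkContext)

    // A trivial DML script: generate a random matrix and sum its entries.
    val script = ScriptFactory.dml(
      """
        |X = rand(rows=100, cols=10, seed=42)
        |s = sum(X)
        |print("sum of X: " + s)
      """.stripMargin)

    ml.execute(script)
    spark.stop()
  }
}
```

The appeal of this setup is that the machine learning logic lives in a declarative DML script, which SystemML compiles and distributes over the Spark cluster for you, so the same script can scale from a laptop to the full image dataset.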

What is our goal?

Change lives, remember? Well, specifically, while we are not helping to diagnose cancer, our goal is to be able to look at an image and determine what grade of cancer it shows. Why is this important? If all goes well, this app will help doctors save time and money by shortening the process of determining the cancer grade. When this process is shortened, patients receive the appropriate treatment and information much faster, which can literally help save lives. It will also, hopefully, be a great example of how machine learning can be applied to real-world problems.

What are the next steps?

Now that we've found our data, we will begin the process of applying machine learning to predict and gain insights. First, however, we must learn the foundations before we get to anything too fancy. In my coming blogs, I'll walk you through learning Scala, learning how to use Spark, learning how to work with image data... and then diving into machine learning.

Stay tuned! We will all be saving lives in no time!

By Madison J. Myers

