Big Data Platform Distributions week – Wrap up

Wrapping up a week of Big Data Platform comparisons. A closer look @ #Cloudera, #MapR and #Hortonworks.

This last week I have been taking a slightly closer look at three of the best-known Big Data Platform Distributions: Cloudera, MapR and Hortonworks. It’s interesting to see how differently the various distributions look at the same data challenge.

Which Big Data Platform Distribution is the best?

The three Big Data Platform Distributions each have a different focus. Here are a few things that make each of the top three vendors stand out from the others:

  • Cloudera – Proven, user-friendly technology.
    • Use Case: Enterprise Data Hub. Let the Hadoop platform serve as a central data repository.
  • MapR – Stable platform with a generic file system and fast processing.
    • Use Case: Integrated platform with a focus on streaming.
  • Hortonworks – 100% open source with minimal investment.
    • Use Case: Modernising your traditional EDW.

There is no easy answer to the question “Which Big Data Platform Distribution is the best?”. My answer would be: “It depends”. It depends on various factors:

  • Performance – MapR has an extra focus on speed and performance, and therefore developed its own file system (MapR-FS) as well as its own NoSQL database, MapR-DB.
  • Scalability – Hadoop is known to scale very well. All three offer software to manage this effectively; Cloudera & MapR go for proprietary tooling here.
  • Reliability – Before Hadoop 2.0, the NameNode was the single point of failure (SPOF) in an HDFS cluster. MapR takes a different, more distributed approach with its file system, MapR-FS.
  • Manageability – Cloudera & MapR add (proprietary) management software to their distributions. Hortonworks opts for the open-source equivalents.
  • Licenses – All three offer downloadable free versions of their software. Cloudera & MapR both offer additional features to their paying customers.
  • Support – All three are part of the Hadoop community as contributors & committers. They contribute and commit (updated) code back to the open source repository.
  • Upgrades – Cloudera & Hortonworks are both known for their quick adoption of new technologies. Hortonworks seems to be the quickest to get things production-ready.
  • OS Support – Hortonworks supports the Microsoft Windows OS. Microsoft packaged Hortonworks into its own HDInsight (both on-premises and in the Azure cloud).
  • Training – It looks like Cloudera offers the most complete and professional training program. This is also reflected in the price.
  • Tutorials – All three offer various tutorials and sandboxes to get started.

Back to the question “Which Big Data Platform Distribution is the best?”: go ahead and find out for yourself. Determine which of the points above are important to your situation and try the distributions out yourself.

If you have anything to contribute, please let me know. I haven’t performed a thorough comparison yet. Maybe Gartner can help out a bit as well.

Thanks for reading.

The Hortonworks Connected Data Platforms

As part of the Big Data Platform Distributions week, I will have a closer look at the Hortonworks distribution.

Hortonworks was founded in 2011, when 24 engineers from the original Hadoop team at Yahoo! formed the company. This included the founders Rob Bearden, Alan Gates, Arun Murthy, Devaraj Das, Mahadev Konar, Owen O’Malley, Sanjay Radia, and Suresh Srinivas. The name Hortonworks refers to Horton the Elephant, which relates to the naming of Hadoop.

“The only way to deliver infrastructure platform technology is completely in open source.”

The Hortonworks solution aims to offer a platform to process and store data-in-motion as well as data-at-rest. This platform is a combination of Hortonworks Data Flow (HDF) and the Hortonworks Data Platform (HDP®). This way, Hortonworks is not only about doing Hadoop (HDP); it also connects data platforms via HDF.

Since its founding, Hortonworks has had a fundamental belief: “The only way to deliver infrastructure platform technology is completely in open source.” Hortonworks is also a member of the Open Data Platform Initiative: “A nonprofit organization committed to simplification & standardization of the Big Data ecosystem with common reference specifications and test suites”.

Hortonworks Data Flow

The Hortonworks Data Flow solution for data-in-motion includes three key components:

  • Data Flow Management Systems – a drag-&-drop visual interface based on Apache NiFi / MiNiFi. Apache NiFi is a robust and secure framework for routing, transforming, and delivering data across a multitude of systems. Apache MiNiFi (a lightweight agent) was created as a subproject of Apache NiFi and focuses on collecting the data at the source.
  • Stream Processing – HDF supports Apache Storm and Kafka (see the sketch after this list). The added value is in the GUI of the Streaming Analytics Manager (SAM), which eliminates the need to code streaming data flows.
  • Enterprise Services – Making sure that everything works together in an enterprise environment. HDF supports Apache Ranger (security) and Ambari (provisioning, management and monitoring). The Schema Registry builds a catalog so data streams can be reused.
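
To give a feel for the streaming side, here is a minimal sketch of publishing events to a Kafka topic, which a Storm topology or SAM flow could then consume. The broker address and the “sensor-events” topic are assumptions for illustration, not HDF defaults.

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class SensorEventProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Assumed broker address; replace with your cluster's Kafka endpoint.
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        // try-with-resources flushes and closes the producer on exit.
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // Publish one event to the hypothetical "sensor-events" topic.
            producer.send(new ProducerRecord<>("sensor-events", "sensor-1", "{\"temp\": 21.5}"));
        }
    }
}
```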

[Figure: the HDF data-in-motion platform]

Streaming Analytics Manager and Schema Registry are both open-source projects. At the time of writing, they are not (yet) part of the Apache Software Foundation.

Hortonworks Data Platform

The Hortonworks solution for data-at-rest is the Hortonworks Data Platform (HDP). HDP consists of the components shown below.

[Figure: the Hortonworks Data Platform components]

Hortonworks is also available in the cloud with two specific products:

  • Azure HDInsight – a collaboration between Microsoft and Hortonworks to offer a Big Data Analytics platform on the Azure Cloud.
  • Hortonworks Data Cloud for AWS – deploy Hortonworks Data Cloud Hadoop clusters on AWS infrastructure.

How to get started?

The best way to get to know the Hortonworks product(s) is by getting your hands dirty. Hortonworks offers Sandboxes on a VM for both HDP and HDF. These VMs come in different flavours, like VMware, VirtualBox and Docker. Go and download a copy here. For questions and other interactions, go to the Hortonworks community.

Thanks for reading.

Big Data Platform Distributions week

There is a lot going on when it comes to Big Data: all kinds of new and improved techniques to use data. Have a look at things like Machine Learning, Deep Learning or Artificial Intelligence. All these techniques use (Big) Data. I will not go into the discussion of what Big Data exactly means. In the end it’s all about data, whether it is structured (e.g. relational tables, spreadsheets), semi-structured (e.g. log files) or unstructured (e.g. pictures, videos).

This blog is the start of a series of blogs taking a closer look at technical implementations of Big Data. I am aware that there is a whole world around Big Data: things like the (full) data architecture or the actual information need are often forgotten. The tension between business and IT also deserves special attention.

What is Hadoop?

If we look at data from a technical perspective, we see one term popping up every time: “Hadoop”. What is Hadoop, and why would I need it?

“The Apache™ Hadoop® project develops open-source software for reliable, scalable, distributed computing.”

Hadoop (/həˈduːp/) is based on work done at Google in the late 1990s/early 2000s. According to the co-founders of Hadoop, Doug Cutting & Mike Cafarella, Hadoop originated from the Google File System paper that was published in October 2003. Doug Cutting named the project after his son’s toy elephant.

Back then, Google had a challenge: they wanted to index the entire web, which required massive amounts of storage and a new approach to processing these large amounts of data. Google found a solution in the Google File System (GFS) and distributed MapReduce (described in a paper released in 2004).

Hadoop was first built as part of the Nutch project. It was meant to serve as an infrastructure to crawl the web and store a search engine index for the crawled pages. HDFS is used as a distributed filesystem that can store data across thousands of servers, while MapReduce runs jobs across those machines, bringing the work close to the data.

According to the project page, Hadoop is built around three core components:


  • Hadoop Distributed File System (HDFS) – stores data
  • Hadoop MapReduce – processes data
  • Hadoop YARN – schedules work
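
To make the division of labour between these components concrete, below is the classic WordCount job, essentially the example from the Apache Hadoop MapReduce tutorial: HDFS serves the input splits, the mapper and reducer do the processing, and YARN schedules the tasks across the cluster.

```java
import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {

  // Map phase: emit (word, 1) for every word in the input split.
  public static class TokenizerMapper extends Mapper<Object, Text, Text, IntWritable> {
    private final static IntWritable ONE = new IntWritable(1);
    private final Text word = new Text();

    @Override
    public void map(Object key, Text value, Context context)
        throws IOException, InterruptedException {
      StringTokenizer itr = new StringTokenizer(value.toString());
      while (itr.hasMoreTokens()) {
        word.set(itr.nextToken());
        context.write(word, ONE);
      }
    }
  }

  // Reduce phase: sum the counts per word.
  public static class IntSumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
    private final IntWritable result = new IntWritable();

    @Override
    public void reduce(Text key, Iterable<IntWritable> values, Context context)
        throws IOException, InterruptedException {
      int sum = 0;
      for (IntWritable val : values) {
        sum += val.get();
      }
      result.set(sum);
      context.write(key, result);
    }
  }

  public static void main(String[] args) throws Exception {
    Job job = Job.getInstance(new Configuration(), "word count");
    job.setJarByClass(WordCount.class);
    job.setMapperClass(TokenizerMapper.class);
    job.setCombinerClass(IntSumReducer.class); // pre-aggregate on the map side
    job.setReducerClass(IntSumReducer.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);
    FileInputFormat.addInputPath(job, new Path(args[0]));   // input dir on HDFS
    FileOutputFormat.setOutputPath(job, new Path(args[1])); // output dir on HDFS
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}
```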

These core Hadoop components are surrounded by a whole ecosystem of Hadoop projects. This open-source ecosystem provides all kinds of projects to solve real data problems and to support the different challenges within a data-driven environment: think of Apache Hive (SQL on Hadoop), HBase (NoSQL storage), Spark (in-memory processing), Oozie (workflow scheduling) and Sqoop (moving data to and from relational databases).

This list is just an impression of the possible Hadoop ecosystem projects. There is a more up-to-date list here, which provides “…a summary to keep the track of Hadoop related projects…”.

Why would I need Hadoop?

There are a few reasons why one would need Hadoop. The most important ones are that the amount of data is growing faster than the ability of e.g. RDBMS systems to store and process it, and that the traditional data storage alternatives are no longer cost-effective. Hadoop offers an approach based on low-cost commodity hardware, which makes it easy to scale up and down when necessary. Data is distributed over this hardware when it is stored, and the processing of this data takes place where it is stored.
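
That last point, processing the data where it is stored, can be made visible with the HDFS client API. The sketch below asks the NameNode which hosts hold the blocks of a given file; schedulers use exactly this information to run tasks close to the data. It assumes a reachable cluster configured via the usual Hadoop configuration files, and the file path is a placeholder.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.BlockLocation;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ShowBlockLocations {
  public static void main(String[] args) throws Exception {
    // Picks up fs.defaultFS from the Hadoop config on the classpath.
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.get(conf);

    // Placeholder path; pass any file that exists on your cluster.
    Path file = new Path(args[0]);
    FileStatus status = fs.getFileStatus(file);

    // Each block is replicated across several DataNodes; these hosts are
    // where a processing task for that block would preferably be scheduled.
    BlockLocation[] blocks = fs.getFileBlockLocations(status, 0, status.getLen());
    for (BlockLocation block : blocks) {
      System.out.println("offset " + block.getOffset()
          + ", length " + block.getLength()
          + ", hosts " + String.join(",", block.getHosts()));
    }
  }
}
```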

One of the big challenges when setting up a Hadoop environment is: “Where to start?” Starting a Single-Node Hadoop Cluster could be a first step, but that is just the start. What to do next? Which projects (and which versions) to include? When to upgrade which project? Is a project already production-ready? And what about things like support (issues, bugs, technical assistance), service level agreements (SLAs), compliance, etc.?
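
For a flavour of that first step: the pseudo-distributed (single-node) setup from the Apache Hadoop documentation essentially comes down to pointing fs.defaultFS at a local NameNode and lowering the replication factor, roughly as follows. Treat this as a sketch; ports and file locations can differ per version and distribution.

```xml
<!-- etc/hadoop/core-site.xml -->
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://localhost:9000</value>
  </property>
</configuration>
```

```xml
<!-- etc/hadoop/hdfs-site.xml -->
<configuration>
  <property>
    <!-- a single node cannot hold the default three replicas -->
    <name>dfs.replication</name>
    <value>1</value>
  </property>
</configuration>
```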

There are several distributions which provide a solution to the questions above. An additional benefit is that the organisations behind these distributions are part of the Hadoop community: they contribute and commit (updated) code back to the open source repository.

For this series I will focus on three of the largest distributions within the community: Cloudera, MapR and Hortonworks. Please check out my findings in the following blogposts:

Thanks for reading.