Big Data Platform Distributions week

There is a lot going on when it comes to Big Data. All kinds of new and improved techniques use data; have a look at things like Machine Learning, Deep Learning or Artificial Intelligence. All these techniques use (Big) Data. I will not go into the discussion of what Big Data exactly means. In the end it’s all about data, whether it is structured (e.g. relational tables, spreadsheets), semi-structured (e.g. log files) or unstructured (e.g. pictures, videos).

This blog is the start of a series of blogs taking a closer look at technical implementations of Big Data. I am aware of the fact that there is a whole world around Big Data; things like a (full) data architecture or the actual request for information are often forgotten. The field of tension between Business and IT also deserves special attention.

What is Hadoop?

If we look at data from a technical perspective, we see one term popping up every time: “Hadoop”. What is Hadoop and why would I need it?

“The Apache™ Hadoop® project develops open-source software for reliable, scalable, distributed computing.”

Hadoop (/həˈduːp/) is based on work done at Google in the late 1990s and early 2000s. According to the co-founders of Hadoop, Doug Cutting and Mike Cafarella, Hadoop originated from the Google File System paper that was published in October 2003. Doug Cutting named the project after his son’s toy elephant.

Back at the time, Google had a challenge: they wanted to index the entire web, which required massive amounts of storage and a new approach to processing such large amounts of data. Google found a solution in the Google File System (GFS) and distributed MapReduce (described in a paper released in 2004).

Hadoop was first built as part of the Nutch project, where it was meant to serve as an infrastructure to crawl the web and store a search-engine index for the crawled pages. HDFS is used as a distributed filesystem that can store data across thousands of servers, while MapReduce runs jobs across those machines, executing the work close to the data.
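To make this a bit more tangible: below is a minimal sketch of how data ends up in HDFS from the command line. It assumes a running cluster; the file name pages.txt and the /user/demo directory are just examples.

    # Copy a local file into HDFS; it is split into blocks and the
    # blocks are replicated across the DataNodes in the cluster.
    hdfs dfs -mkdir -p /user/demo
    hdfs dfs -put pages.txt /user/demo/pages.txt

    # List and read the file back, just like a regular filesystem.
    hdfs dfs -ls /user/demo
    hdfs dfs -cat /user/demo/pages.txt

    # Show which blocks the file consists of and where they live.
    hdfs fsck /user/demo/pages.txt -files -blocks -locations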

According to the project page, Hadoop is built around three core components:


  • Hadoop Distributed File System (HDFS) – Stores data
  • Hadoop MapReduce – Processes data (see the sketch below this list)
  • Hadoop YARN – Schedules work
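
To give an impression of how these components work together, here is a minimal sketch of the canonical word-count job (adapted from the classic MapReduce tutorial example): the map tasks emit (word, 1) pairs, the reduce tasks sum them per word, and YARN schedules the tasks close to the HDFS blocks that hold the input.

    import java.io.IOException;
    import java.util.StringTokenizer;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.Reducer;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

    public class WordCount {

        // Map phase: for every line of input, emit a (word, 1) pair.
        public static class TokenizerMapper
                extends Mapper<Object, Text, Text, IntWritable> {

            private final static IntWritable ONE = new IntWritable(1);
            private final Text word = new Text();

            @Override
            public void map(Object key, Text value, Context context)
                    throws IOException, InterruptedException {
                StringTokenizer itr = new StringTokenizer(value.toString());
                while (itr.hasMoreTokens()) {
                    word.set(itr.nextToken());
                    context.write(word, ONE);
                }
            }
        }

        // Reduce phase: sum all counts that were emitted for a word.
        public static class IntSumReducer
                extends Reducer<Text, IntWritable, Text, IntWritable> {

            private final IntWritable result = new IntWritable();

            @Override
            public void reduce(Text key, Iterable<IntWritable> values, Context context)
                    throws IOException, InterruptedException {
                int sum = 0;
                for (IntWritable val : values) {
                    sum += val.get();
                }
                result.set(sum);
                context.write(key, result);
            }
        }

        // Driver: describe the job; YARN takes care of scheduling the tasks.
        public static void main(String[] args) throws Exception {
            Job job = Job.getInstance(new Configuration(), "word count");
            job.setJarByClass(WordCount.class);
            job.setMapperClass(TokenizerMapper.class);
            job.setCombinerClass(IntSumReducer.class);
            job.setReducerClass(IntSumReducer.class);
            job.setOutputKeyClass(Text.class);
            job.setOutputValueClass(IntWritable.class);
            FileInputFormat.addInputPath(job, new Path(args[0]));   // HDFS input dir
            FileOutputFormat.setOutputPath(job, new Path(args[1])); // HDFS output dir
            System.exit(job.waitForCompletion(true) ? 0 : 1);
        }
    }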

These core Hadoop components are surrounded by a whole ecosystem of Hadoop projects. This open-source ecosystem provides all kinds of projects to solve real data problems, each supporting a different challenge within a data-driven environment. Well-known examples include:

  • Apache Hive – SQL-like querying
  • Apache HBase – random, real-time access to large tables
  • Apache Pig – data-flow scripting
  • Apache Sqoop – bulk transfer between Hadoop and relational databases
  • Apache Flume – log and event ingestion
  • Apache Oozie – workflow scheduling
  • Apache ZooKeeper – distributed coordination

This list is just an impression of the possible Hadoop ecosystem projects. There is a more up-to-date list here, which provides “…a summary to keep the track of Hadoop related projects…”.

Why would I need Hadoop?

There are a few reasons why one would need Hadoop. The most important ones are that the current amount of data is growing faster than the ability of, for example, RDBMS systems to store and process it, and that the traditional data storage alternatives are no longer cost-effective. Hadoop offers an approach based on low-cost commodity hardware, which makes it easy to scale up and down when necessary. Data is distributed over this hardware when it is stored, and the processing of this data takes place where it is stored.

One of the big challenges when setting up a Hadoop environment is: “Where to start?” Starting a Single-Node Hadoop Cluster could be a first step, but that is just the start. What to do next? Which projects (and which versions) to include? When to upgrade which project? Is a project already production-ready? And what about things like support (issues, bugs, technical assistance), service level agreements (SLAs), compliance, etc.?
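As an impression of that very first step, here is a sketch along the lines of the Apache single-node (“pseudo-distributed”) setup documentation; localhost:9000 and a replication factor of 1 are the values the documentation uses for a single machine.

    etc/hadoop/core-site.xml – point clients at a local NameNode:

    <configuration>
      <property>
        <name>fs.defaultFS</name>
        <value>hdfs://localhost:9000</value>
      </property>
    </configuration>

    etc/hadoop/hdfs-site.xml – a single node cannot replicate blocks, so set the replication factor to 1:

    <configuration>
      <property>
        <name>dfs.replication</name>
        <value>1</value>
      </property>
    </configuration>

Then format the filesystem and start HDFS:

    bin/hdfs namenode -format
    sbin/start-dfs.sh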

There are several distributions which provide an answer to the questions above. An additional benefit is that the organisations behind these distributions are part of the Hadoop community: they contribute and commit (updated) code back to the open source repositories.

For this series I will focus on three of the largest distributions within the community: Cloudera, MapR and Hortonworks. Please check out my findings in the follow-up blog posts.

Thanks for reading.


Author: Daan Bakboord

I am an Oracle Big Data Analytics Consultant with a great interest in anything closely related to Oracle Big Data Analytics (OBIEE, BICS, OAC, Big Data, Data Integration, Data Visualization, Data Management, Data Architecture).
