Five things you need to know about Hadoop vs Apache Spark

They're sometimes viewed as competitors in the big-data space, but the growing consensus is that they're better together

Listen in on any conversation about big data, and you'll probably hear mention of Hadoop or Apache Spark. Here's a brief look at what they do and how they compare.

1: They do different things. Hadoop and Apache Spark are both big-data frameworks, but they don't serve the same purpose. Hadoop is essentially a distributed data infrastructure: it spreads massive data collections across multiple nodes in a cluster of commodity servers, which means you don't need to buy and maintain expensive custom hardware. It also keeps track of where that data lives, enabling big-data processing and analytics far more effectively than was previously possible. Spark, on the other hand, is a data-processing tool that operates on those distributed data collections; it doesn't do distributed storage.
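
To make that division of labour concrete, here is a minimal PySpark sketch; the master URL and HDFS path are placeholders for illustration, not anything prescribed by either project. The storage layer holds and tracks the file's blocks, while Spark merely processes them as partitions in parallel:

```python
from pyspark.sql import SparkSession

# "spark://master:7077" and the HDFS path are placeholders for illustration.
spark = (SparkSession.builder
         .master("spark://master:7077")
         .appName("division-of-labour")
         .getOrCreate())

# HDFS stores and tracks the blocks of the file across the cluster's nodes;
# Spark sees them only as partitions of one distributed dataset.
lines = spark.sparkContext.textFile("hdfs:///data/events.log")
print(lines.getNumPartitions())  # roughly one partition per HDFS block

spark.stop()
```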

2: You can use one without the other. Hadoop includes not just a storage component, known as the Hadoop Distributed File System (HDFS), but also a processing component called MapReduce, so you don't need Spark to get your processing done. Conversely, you can use Spark without Hadoop. Spark does not come with its own file management system, though, so it needs to be integrated with one: if not HDFS, then another storage platform such as a cloud object store. Spark was designed with Hadoop in mind, however, so many agree they're better together.
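
As a rough sketch of that flexibility, the same Spark read call works against HDFS, plain local disk or a cloud object store just by changing the path (all three paths below are hypothetical):

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("storage-agnostic").getOrCreate()

# With Hadoop: read from HDFS.
sales = spark.read.csv("hdfs:///warehouse/sales.csv", header=True)

# Without Hadoop: the same API against plain local disk...
sales = spark.read.csv("file:///tmp/sales.csv", header=True)

# ...or a cloud object store (the s3a scheme additionally needs the
# hadoop-aws connector on the classpath and credentials configured).
sales = spark.read.csv("s3a://my-bucket/sales.csv", header=True)
```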

3: Spark is speedier. Spark is generally much faster than MapReduce because of the way it processes data. While MapReduce operates in steps, Spark operates on the whole data set in one fell swoop. "The MapReduce workflow looks like this: read data from the cluster, perform an operation, write results to the cluster, read updated data from the cluster, perform next operation, write next results to the cluster, etc.," explained Kirk Borne, principal data scientist at Booz Allen Hamilton. Spark, by contrast, completes the full set of data-analytics operations in memory and in near real time: "Read data from the cluster, perform all of the requisite analytic operations, write results to the cluster, done," Borne said. Spark can be as much as 10 times faster than MapReduce for batch processing and up to 100 times faster for in-memory analytics, he said.
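
Borne's description maps directly onto Spark code. In this minimal word-count sketch (the input and output paths are placeholders), every transformation is chained in memory and nothing touches the cluster between the initial read and the final write:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("one-pass-wordcount").getOrCreate()
sc = spark.sparkContext

# Read once, chain every operation in memory, write once at the end;
# Spark materialises nothing until the final action runs.
counts = (sc.textFile("hdfs:///data/articles")      # read data from the cluster
            .flatMap(lambda line: line.split())     # operation 1: split into words
            .map(lambda word: (word, 1))            # operation 2: pair each word with 1
            .reduceByKey(lambda a, b: a + b))       # operation 3: sum the counts
counts.saveAsTextFile("hdfs:///data/word-counts")   # write results to the cluster, done
```

An equivalent multi-stage MapReduce pipeline would write its intermediate results to disk between each stage, which is where much of the speed difference comes from.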

4: You may not need Spark's speed. MapReduce's processing style can be just fine if your data operations and reporting requirements are mostly static and you can wait for batch-mode processing. But if you need to do analytics on streaming data, such as readings from sensors on a factory floor, or have applications that require multiple operations, you probably want to go with Spark. Most machine-learning algorithms, for example, are iterative and make multiple passes over the same data. Common applications for Spark include real-time marketing campaigns, online product recommendations, cybersecurity analytics and machine log monitoring.
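
For the streaming case, here is a hedged Structured Streaming sketch; the built-in "rate" source is used as a stand-in for a real sensor feed, which in practice would more likely arrive via something like a Kafka topic:

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import window

spark = SparkSession.builder.appName("sensor-stream").getOrCreate()

# The built-in "rate" source emits (timestamp, value) rows and stands in
# here for a real sensor feed.
readings = spark.readStream.format("rate").option("rowsPerSecond", 10).load()

# Average the readings over ten-second windows as they arrive,
# rather than waiting for a batch job.
averages = readings.groupBy(window(readings.timestamp, "10 seconds")).avg("value")

query = (averages.writeStream
         .outputMode("complete")
         .format("console")
         .start())
query.awaitTermination()
```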

5: Failure recovery: different, but still good. Hadoop is naturally resilient to system faults or failures because data is written to disk after every operation. Spark offers similar built-in resiliency: its data objects are stored in resilient distributed datasets (RDDs) spread across the cluster, and each RDD can be rebuilt from the lineage of transformations that produced it. "These data objects can be stored in memory or on disks, and RDD provides full recovery from faults or failures," Borne pointed out.
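
A small illustration of that mechanism, assuming a hypothetical log file on HDFS: every RDD carries its lineage, the recipe of transformations that produced it, and Spark replays that recipe on surviving nodes if a partition is lost:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("lineage-demo").getOrCreate()
sc = spark.sparkContext

# cache() keeps the filtered partitions in memory; if a node dies,
# Spark rebuilds the lost partitions by replaying the lineage below.
errors = (sc.textFile("hdfs:///logs/app.log")
            .filter(lambda line: "ERROR" in line)
            .cache())

# toDebugString() prints the lineage Spark would replay after a failure.
print(errors.toDebugString().decode())
```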
