Facebook upgrades its Hadoop clusters with the Prism project

November 7th, 2012

Facebook's Hadoop clusters are ready to scale across distinct data centres, thanks to a recent upgrade that enables better delay tolerance. In the near future, Facebook plans to open source Prism as well.



With more than a billion users on board, Facebook is set to expand the capacity of its already gigantic 'big data' infrastructure through its Prism project. The social networking giant has grown what it describes as the world's largest Hadoop cluster, pushing it past an outrageous 100 petabytes of capacity. Beyond raising raw capacity, Prism defines a new way to handle huge amounts of data across data centres that are physically distant from one another.

Facebook uses Hadoop together with Hive to process its information efficiently. With a continually growing user base, more demanding ad rendering, and an ever-increasing volume of generated data, Facebook has to scale fast, efficiently and continuously. It has been quite successful in doing so over the years, and now operates the largest Hadoop cluster in the world. Moreover, it is out to push the limits even further.

For those who aren't aware of it, Hadoop is a framework that allows for the distributed processing of large data sets across clusters of computers using simple programming models. It enables distributed parallel processing of huge amounts of data across inexpensive, industry-standard servers that both store and process the data, and it can scale efficiently. The main limit on Hadoop's scalability is that all the data involved must reside at the same data centre location, because Hadoop doesn't tolerate more than a few milliseconds of delay among servers in a cluster. The Prism project blurs this limitation by adding a logical abstraction layer that allows a Hadoop cluster to run across distinct data centres.

The kind of information Facebook deals with is by its nature different from others': highly 'personalised' sets of data that require a great amount of computation to process, analyse and render.
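Hadoop itself is written in Java, but the "simple programming model" it offers is MapReduce: a map step emits key-value pairs, a shuffle step groups them by key, and a reduce step combines each group. As a rough illustration only (this is a plain-Python simulation of the model, not Hadoop code), the classic word-count job looks like this:

```python
from collections import defaultdict

# Map step: each input record is turned into (key, value) pairs.
def map_words(line):
    for word in line.split():
        yield (word.lower(), 1)

# Shuffle step: group all emitted values by key, as Hadoop does
# between the map and reduce phases.
def shuffle(pairs):
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

# Reduce step: combine the values for each key into a final result.
def reduce_counts(key, values):
    return key, sum(values)

def word_count(lines):
    pairs = [p for line in lines for p in map_words(line)]
    return dict(reduce_counts(k, v) for k, v in shuffle(pairs).items())

print(word_count(["big data big clusters", "big scale"]))
# → {'big': 3, 'data': 1, 'clusters': 1, 'scale': 1}
```

In real Hadoop, the map and reduce functions run in parallel on the servers that hold the data, and the shuffle moves intermediate pairs between them over the network, which is exactly why low inter-server latency matters.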
With the introduction of the Prism project, Facebook's Hadoop clusters can leverage the processing power of multiple data centres while becoming more fault-tolerant and scalable. Prism allows data to be easily replicated and distributed across a large number of data centres situated around the world, removing the physical restrictions on the scalability of Hadoop clusters. In other news, Facebook is believed to be planning to open source Prism.

Ketan Singh
Ketan is passionate about computer science, physics and music. Currently, he's busy trying to get his hands dirty with the latest developments in both physics and computer science. He tweets