Gone are the days when an RDBMS (Relational Database Management System) was the appropriate solution for every database need. Why? Let’s discuss two scenarios in this post.
Consider the “three V’s of Big Data”: volume, variety, and velocity.
One of the biggest problems with RDBMSs is that they are not yet up to the demands of Big Data. The volume of data that must be handled today is skyrocketing: Facebook houses 1.5 PB (petabytes) of uploaded photos, and Google processes 20 PB of data each day. Every 60 seconds, over 204 million emails are exchanged, 3,600 photos are shared on Instagram, and 2 million search queries are processed by Google. RDBMSs struggle in the face of such huge data volumes, and the RDBMS solutions capable of handling them are extremely expensive.
Big Data also demands collection of an extremely wide variety of data types, but RDBMSs have inflexible schemas. The problem is that Big Data largely comprises unstructured and semi-structured data, such as social media posts, free text for sentiment analysis and text mining, weblogs, and sensor readings, while RDBMSs are best suited to structured data with a fixed shape, such as financial transaction records.
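To make the schema-rigidity point concrete, here is a minimal sketch in Python, using only the standard library. The table name, field names, and sample records are purely illustrative: adding a new field to a relational table requires a schema migration up front, whereas document-style (JSON) storage lets records of different shapes coexist with no migration at all.

```python
import json
import sqlite3

# Rigid relational schema: every row must fit the fixed columns.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tweets (id INTEGER PRIMARY KEY, user TEXT, text TEXT)")
conn.execute("INSERT INTO tweets VALUES (1, 'alice', 'hello')")

# A new attribute (e.g. geo coordinates) requires altering the schema
# before any row can carry it:
conn.execute("ALTER TABLE tweets ADD COLUMN lat REAL")
conn.execute("ALTER TABLE tweets ADD COLUMN lon REAL")

# Document-style storage: records with different shapes coexist
# without any migration step.
documents = [
    {"id": 1, "user": "alice", "text": "hello"},
    {"id": 2, "user": "bob", "text": "hi", "geo": {"lat": 51.5, "lon": -0.1}},
    {"id": 3, "user": "carol", "text": "hey", "hashtags": ["bigdata"]},
]
serialized = [json.dumps(d) for d in documents]   # store each record as-is
restored = [json.loads(s) for s in serialized]
print(restored[1]["geo"]["lat"])  # fields can vary per record: prints 51.5
```

This is, of course, a toy: real document stores add indexing, sharding, and query languages on top, but the core trade-off (upfront schema vs. per-record flexibility) is exactly the one shown here.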
In addition, Big Data accumulates at a very high velocity. Since RDBMSs are designed for steady data retention rather than rapid growth, scaling them to ingest data at such rates is prohibitively expensive.
Finally, many modern-day applications don’t require the strong-but-expensive guarantees offered by RDBMSs. A growing number of web applications can tolerate weak consistency, but they do require low and predictable response times, high availability, effective scalability, flexible schemas, and geographically distributed data centers.
Read the full post to learn more about why traditional database systems fail to support Big Data.
Please share your thoughts in the comments section.