A 2001 Gartner research article by Doug Laney, titled “3-D Data Management: Controlling Data Volume, Velocity, and Variety,” now serves as the standard framing for big data. Fast data is closely tied to data velocity, that is, the rate at which data is generated and must be processed. Big data models historical trends and patterns, allowing businesses to find opportunities that are not otherwise obvious. Fast data lets businesses apply these models in real time, influencing outcomes with insights derived as the data is generated. Fast data is becoming increasingly crucial for modern businesses in their constant pursuit of a competitive edge. Some common use cases are regulatory reporting, fraud detection, and surveillance.
Pivotal’s core components help tackle both big data and fast data use cases in business enterprises: GemFire for big data and SQLFire for fast data. GemFire is used to run map-reduce-style jobs on huge data sets; its scatter-gather semantics provide the ability to analyze big data in place. Pivotal SQLFire helps businesses move compute directly into the data fabric, which can yield as much as a 75x speed-up for simple operations such as pricing and risk calculations. It can also be used to shorten the time to detect patterns and anti-patterns in use cases such as compliance and fraud detection. GemFire WAN gateways give businesses local access to global analysis in the form of micro-cubes stored in “edge caches,” so that they can be sliced and diced locally. When big data works together with the eXtreme Transaction Processing (XTP) of fast data, it paves the way for new business models that are both robust and accurate.
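The scatter-gather pattern mentioned above can be sketched in plain Java. This is an illustrative simulation, not the actual GemFire API: a query function is “scattered” to hypothetical data partitions, each computes a partial result in parallel where its shard lives, and the caller “gathers” and combines the partials. The class and method names here are invented for the example.

```java
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.ArrayList;

// Sketch of scatter-gather over partitioned data, the pattern a data
// fabric like GemFire uses when executing a function on a partitioned
// region. Names (ScatterGatherSketch, scatterGatherSum) are hypothetical.
public class ScatterGatherSketch {

    // The "function" each partition runs locally on its own shard.
    static long sumPartition(List<Long> partition) {
        long sum = 0;
        for (long v : partition) sum += v;
        return sum;
    }

    // Scatter: submit the function to every partition in parallel.
    // Gather: collect and combine the partial results on the caller.
    public static long scatterGatherSum(List<List<Long>> partitions) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(partitions.size());
        try {
            List<Future<Long>> partials = new ArrayList<>();
            for (List<Long> p : partitions) {
                partials.add(pool.submit(() -> sumPartition(p)));
            }
            long total = 0;
            for (Future<Long> f : partials) {
                total += f.get(); // gather phase
            }
            return total;
        } finally {
            pool.shutdown();
        }
    }

    public static void main(String[] args) throws Exception {
        // Three hypothetical partitions of one logical data set.
        List<List<Long>> partitions = List.of(
                List.of(1L, 2L, 3L),
                List.of(10L, 20L),
                List.of(100L));
        System.out.println(scatterGatherSum(partitions)); // prints 136
    }
}
```

Because the per-partition work runs where the data resides and only small partial results travel to the caller, this is the same compute-to-data idea behind SQLFire's speed-ups for operations like pricing and risk.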