Today, the pharmaceutical industry, like many others, has a foot in both camps when it comes to Big Data. Some parts of the industry, such as genomics and drug discovery, were early adopters and today couldn't imagine life without Big Data technologies and approaches. Others are pushing their current approaches to near their limits, and are beginning to ask "what's next?"
Currently, the Big Data space is somewhat in flux. Large, established vendors now have their own Big Data offerings, which, while attractive in some ways (technologies you know, a vendor you have experience with, processes similar to those you've used before), are no universal remedy (the cost alone for many of these can be truly eye-watering). On the other hand, many of the original Open Source offerings for Big Data now have well-established ecosystems around them, with plenty of (largely VC-backed) organisations offering support, training, certification, and validation.
Technology does not stand still, and new projects, products and approaches are launched all the time. Some of these new offerings build upon older ones, delivering more complete solutions by filling in the technology stack, while others take the latest research from academia to revisit the core of the current Big Data offerings and replace them with faster, more flexible, and more scalable solutions.
For the industry's IT leaders, this presents a dilemma. Your existing vendors are constantly pitching Big Data solutions to you, normally requiring another Big Data system to calculate the price! Newer Big Data companies are now pitching supported and validated offerings for more reasonable fees, albeit with steeper learning curves, while your technologists are off exploring the latest solutions, which do more but often lack the support and validation that our industry requires.