MapR Launches Apache Drill v1.6 to Converge SQL and JSON
SAN JOSE, CA: MapR Technologies, a provider of a converged data platform, announces Apache Drill v1.6, which offers a new MapR-DB document database plugin, enhanced performance and scaling, and an optimized experience for Tableau and other BI tools.
Drill has followed a path of rapid, iterative releases. Over 6,000 BI analysts and developers worldwide have completed Drill training courses provided by MapR's free On Demand training program.
Flexible and operational analytics on NoSQL – The new MapR-DB document database plugin allows analysts to perform SQL queries directly on JSON data stored in MapR-DB tables. The plugin offers a variety of pushdown capabilities to provide an optimal interactive experience.
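With the plugin, nested JSON fields can be addressed directly in SQL using dotted notation. A minimal sketch of such a query — the table path and field names below are hypothetical, not from the announcement:

```sql
-- Query a MapR-DB JSON table through Drill; `/tables/orders` and the
-- fields (_id, customer.name, total) are illustrative placeholders.
SELECT t._id,
       t.customer.name AS customer_name,
       t.total
FROM dfs.`/tables/orders` t
WHERE t.total > 100
LIMIT 10;
```

Filters like the `WHERE` clause above are the kind of predicate the plugin can push down to MapR-DB rather than evaluating in Drill.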
Enhanced query performance – Drill v1.6 provides better query performance on data in Hadoop and NoSQL systems via numerous query planning improvements, such as partition pruning, metadata caching, and other optimizations, delivering 10-60x gains in query planning time compared with previous Drill releases.
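Two of the named planning features can be exercised directly from SQL. A sketch, assuming a hypothetical directory of Parquet data at `/data/events` laid out in year subdirectories:

```sql
-- Build or refresh the Parquet metadata cache for a directory,
-- which speeds up query planning on large file sets.
REFRESH TABLE METADATA dfs.`/data/events`;

-- dir0 refers to the first-level subdirectory name; a filter on it
-- lets the planner prune partitions (skip whole directories) at plan time.
SELECT COUNT(*)
FROM dfs.`/data/events`
WHERE dir0 = '2016';
```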
Better memory management – Drill v1.6 delivers greater stability and scale, enabling customers to run not only larger but also more concurrent SQL workloads on a MapR cluster.
Improved integration with visualization tools such as Tableau – The latest version offers metadata query performance improvements and introduces client impersonation for end-to-end security from the visualization tool to data in Hadoop. Version 1.6 also provides enhanced SQL window functions.
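Window functions let a BI tool compute rankings and running aggregates without self-joins. A minimal sketch — the table and columns here are hypothetical:

```sql
-- Rank claims by amount within each provider using a window function;
-- /data/claims, provider_id, claim_id, and amount are placeholders.
SELECT provider_id,
       claim_id,
       amount,
       RANK() OVER (PARTITION BY provider_id ORDER BY amount DESC) AS amount_rank
FROM dfs.`/data/claims`;
```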
Drill is used in media companies where content delivery network (CDN) files can be analyzed without requiring data transformations.
“Apache Drill is a game changer for us,” says Edmon Begoli, CTO of PYA Analytics. “Most recently, we have been able to query, in less than 60 seconds, two years' worth of flat PSV files of claims, billing, and clinical data from commercial and government entities, such as the Centers for Medicare and Medicaid Services. Drill has allowed us to bypass the traditional approach of ETL and data warehousing, convert flat files into efficient formats such as Parquet for improved performance, and use plain SQL against very large volumes of files.”
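The flat-file-to-Parquet conversion described in the quote is typically done in Drill with CREATE TABLE AS (CTAS). A sketch, assuming hypothetical pipe-separated files at `/data/claims_psv` with three columns:

```sql
-- Write the new table in Parquet format for this session.
ALTER SESSION SET `store.format` = 'parquet';

-- Delimited files are exposed as a `columns` array; project and cast
-- the fields into typed Parquet columns. Paths and names are illustrative.
CREATE TABLE dfs.tmp.claims_parquet AS
SELECT columns[0] AS claim_id,
       columns[1] AS member_id,
       CAST(columns[2] AS DOUBLE) AS amount
FROM dfs.`/data/claims_psv`;
```

Subsequent queries against `dfs.tmp.claims_parquet` then read columnar Parquet instead of rescanning the raw text files.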