Imply announces automatic schema discovery for Apache Druid and more milestone initiatives

Imply has unveiled the third milestone in Project Shapeshift, an initiative designed to evolve Apache Druid and solve the most pressing issues developers face when building real-time analytics applications.

This milestone introduces:

  • Schema auto-discovery: the ability for Druid to discover data fields and data types and continuously update tables automatically as they change
  • Shuffle joins: the ability to join large distributed tables without impacting query performance, powered by the new multi-stage query engine
  • Global expansion and new enhancements to Imply Polaris, the cloud database service for Apache Druid

Apache Druid is a popular open-source database and a 2022 Datanami Readers' Choice award winner, used by developers at thousands of companies including Confluent, Salesforce, and Target.

Because of its performance at scale and under load – along with its comprehensive features for analyzing streaming data – Druid is relied on for operational visibility, rapid data exploration, customer-facing analytics and real-time decisioning. 

Project Shapeshift, announced at Druid Summit 2021, is a strategic initiative from Imply to transform the developer experience for Druid across three pillars: cloud-native, simple, and complete.

In March 2022, Imply announced the first milestone with the introduction of Imply Polaris, a cloud database service for Druid.

In September 2022, Imply announced the largest architectural expansion of Druid in its history with the addition of a multi-stage query engine. 

“Druid has always been engineered for speed, scale and streaming data. It’s why developers at Confluent, Netflix, Reddit and thousands of other companies choose Druid over other database alternatives,” said FJ Yang, Co-Founder and CEO of Imply.

“For the past year, the community has come together to bring new levels of operational ease of use and expanded functionality. This makes Druid not only a powerful database – but one developers love to use too.”

Companies including Atlassian, Reddit, and PayTM utilize Imply for Druid because its commercial distribution, software, and services simplify operations, eliminate production risks, and lower the overall cost of running Druid.

As a value-add for existing open-source users, Imply guarantees a reduction in the cost of running Druid through its Total Value Guarantee.

Project Shapeshift Milestone 3 includes major contributions to Apache Druid and new features for Imply Polaris.

Schema definition plays an essential role in query performance, as a strongly-typed data structure makes it possible to columnarize, index, and optimize compression. But defining the schema when loading data places an operational burden on engineering teams, especially with ever-changing event data flowing through Apache Kafka and Amazon Kinesis. Databases such as MongoDB use a schemaless data structure because it offers developer flexibility and ease of ingestion – but at a cost to query performance.

With this milestone, Imply has announced a new capability that makes Druid the first analytics database to combine the performance of a strongly-typed data structure with the flexibility of a schemaless one.

Schema auto-discovery, now available in Druid 26.0, enables Druid to automatically discover data fields and data types and update tables to match changing data, without an administrator.

  • Auto detection of new tables: Druid can now auto-discover column names and data types during ingestion. For example, Druid will look at the ingested data and identify what dimensions need to be created and the data type for each dimension’s column.
  • Maintenance of existing tables: As schemas change – dimensions or data types are added, dropped, or changed in the source data – Druid will automatically discover the change and adjust its tables to match the new schema, without requiring the existing data to be reprocessed.
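As a sketch of how the capability above is switched on, a trimmed Kafka ingestion spec is shown below. The `useSchemaDiscovery` flag in `dimensionsSpec` is the property introduced for this feature in Druid 26.0; the topic, datasource, and timestamp column names here are invented for illustration:

```json
{
  "type": "kafka",
  "spec": {
    "ioConfig": {
      "topic": "clickstream-events",
      "inputFormat": { "type": "json" }
    },
    "dataSchema": {
      "dataSource": "clickstream",
      "timestampSpec": { "column": "timestamp", "format": "iso" },
      "dimensionsSpec": {
        "useSchemaDiscovery": true
      }
    }
  }
}
```

With the flag set, no dimension list is declared: Druid infers column names and types from the arriving events and keeps the table in step as they evolve.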

“Now with Apache Druid you can have a schemaless experience in a high-performance, real-time analytics database,” said Gian Merlino, PMC Chair for Apache Druid and CTO of Imply.

“You don’t have to give up having strongly-typed data in favor of flexibility as schema auto-discovery can do it for you.”

Anand Venugopal, Director of ISV Alliances at Confluent, said: “Druid handling real-time schema changes is a big step forward for the streaming ecosystem.

“We see streaming data typically ingested in real-time and often coming from a variety of sources, which can lead to more frequent changes in data structure. Imply has now made Apache Druid simple and scalable to deliver real-time insights on those streams – even as data evolves.”

In Druid 26.0, Apache Druid has expanded its join capabilities and now supports large, complex joins.

While Druid has supported joins since version 0.18, earlier join capabilities were limited in order to maintain high CPU efficiency for query performance. When queries required joining large data sets, external ETL tools had to be used to pre-join the data.

Now Druid supports large joins at ingestion time, implemented architecturally as shuffle joins.

This simplifies data preparation, minimizes reliance on external tools, and adds to Druid’s capabilities for in-database data transformation.

The new shuffle joins are powered by Druid’s multi-stage query engine.

In the future, the community plans to extend shuffle joins to join large data sets at query time in addition to ingestion time.
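An ingestion-time shuffle join can be expressed through the multi-stage query engine's SQL-based ingestion, as in the sketch below. The `orders` and `customers` datasources and their columns are hypothetical; `REPLACE INTO … PARTITIONED BY` is the SQL ingestion form the engine supports:

```sql
-- Rebuild the enriched_orders datasource by joining two large datasources
-- at ingestion time; the multi-stage query engine shuffles both sides.
REPLACE INTO enriched_orders OVERWRITE ALL
SELECT
  o.__time,
  o.order_id,
  o.amount,
  c.customer_name,
  c.region
FROM orders o
JOIN customers c
  ON o.customer_id = c.customer_id
PARTITIONED BY DAY
```

Because both sides of the join are shuffled across stages rather than held in memory on each data server, neither table has to be small enough to broadcast – which is what previously pushed large pre-joins out to external ETL tools.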

Imply Polaris, the cloud database service for Apache Druid, is the easiest deployment model for developers, delivering all of Druid’s speed and performance without requiring expertise, management, or configuration of Druid or the underlying infrastructure. 

This cloud database was built to do more than cloudify Druid; it also optimizes data operations and delivers an end-to-end service from stream ingestion to data visualization. 

Imply has also announced a series of product updates to Polaris that enhance the developer experience, including:

  • Global Expansion – In addition to the US region, Polaris is now available in Europe, enabling customers to run across multiple availability zones and multiple regions for improved fault tolerance.
  • Enhanced Security – Polaris adds private networking options by ingesting data over AWS PrivateLink from customers’ Kafka or Confluent clusters in AWS. Customers who want to lower their data transfer costs can also choose VPC Peering for ingestion with Polaris.
  • Expanded integrations – In addition to native, connectorless support for Confluent Cloud, Polaris adds the same native support for Apache Kafka and Amazon Kinesis to easily ingest streaming data from anywhere. Polaris also now provides an API to export performance metrics to observability tools including Datadog, Prometheus, Elastic and more.
Intelligent CIO North America