

A compiler for a SQL dialect and mathematical meta-system with support for heterogeneous structured data


Quasar is a purely-functional compiler and optimizing planner for queries expressed in terms of the Multidimensional Relational Algebra (MRA). Quasar has support for arbitrary backends, both heavyweight (full evaluation engines) and lightweight (simple reads with optional pushdown of structural operations and columnar predicates), including full classpath isolation for lightweight backends.

It's important to note that Quasar is not, in and of itself, a runnable application. It is a library used by the broader Precog product, much of which is closed-source. Contributions are very welcome, as are feedback, questions, and general conversation. Join the Discord!

Building and Testing

Quasar builds with SBT:

$ ./sbt
> test:compile
> test

If running on Windows, you may use the SBT batch file instead of the shell script.
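To iterate on a single suite during development, sbt's standard testOnly task can be used from the same console (the spec name below is a placeholder, not an actual Quasar test):

```shell
$ ./sbt
> testOnly quasar.SomeSpec
```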

Code Organization

Probably the most interesting part of the codebase is the optimizing query planner, which is implemented in the qsu submodule and based on data structures defined in qscript. A good starting point is the class that defines a kleisli composition laying out, in order, all of the phases of the compiler. The core data structure used by the compiler is a purely functional representation of a directed acyclic graph, which in turn represents data flow in a query.

The query plan itself is formulated as a fixed-point data structure built from several pattern functors composed via coproducts. You would generally deconstruct and interpret this query plan using the general folds provided by matryoshka.
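To make the fixed-point idea concrete, here is a minimal, hand-rolled sketch. These are not Quasar's actual types: the real plan composes several pattern functors via coproducts and uses matryoshka's type classes, whereas this sketch uses a single toy functor and writes the generic fold (what matryoshka calls `cata`) by hand.

```scala
// The fixed point of a functor F: ties the recursive knot.
final case class Fix[F[_]](unfix: F[Fix[F]])

// A toy pattern functor for arithmetic "plans"; `A` marks recursive positions.
sealed trait ExprF[A]
final case class Lit[A](n: Int)     extends ExprF[A]
final case class Add[A](l: A, r: A) extends ExprF[A]

object Demo {
  // fmap for ExprF, written out by hand instead of via a Functor instance.
  def map[A, B](fa: ExprF[A])(f: A => B): ExprF[B] = fa match {
    case Lit(n)    => Lit(n)
    case Add(l, r) => Add(f(l), f(r))
  }

  // The generic fold: collapse a Fix[ExprF] bottom-up using an algebra.
  def cata[B](e: Fix[ExprF])(algebra: ExprF[B] => B): B =
    algebra(map(e.unfix)(cata(_)(algebra)))

  def main(args: Array[String]): Unit = {
    // (1 + 2) + 3
    val plan: Fix[ExprF] =
      Fix(Add(Fix(Add(Fix(Lit(1)), Fix(Lit(2)))), Fix(Lit(3))))

    // One algebra evaluates, another pretty-prints: same plan, two folds.
    val eval: ExprF[Int] => Int = { case Lit(n) => n; case Add(l, r) => l + r }
    val show: ExprF[String] => String =
      { case Lit(n) => n.toString; case Add(l, r) => s"($l + $r)" }

    println(cata(plan)(eval)) // 6
    println(cata(plan)(show)) // ((1 + 2) + 3)
  }
}
```

The payoff of this encoding is that interpretation logic lives entirely in small algebras, which compose and test independently of the recursive structure itself.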

Query operations that are pushed down to the underlying data source are represented and carried by dedicated structures in the plan. Data sources are always free to implement only a subset of the pushdown functionality.
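The "subset of pushdown" idea can be sketched as follows. All names here are hypothetical, not Quasar's API: the point is only that a source reports which operations it handled, and the engine evaluates the remainder itself.

```scala
// Structural operations a source *may* push down.
sealed trait PushdownOp
final case class ProjectField(name: String)              extends PushdownOp
final case class FilterEquals(field: String, v: String)  extends PushdownOp

// The source reports the prefix of ops it handled; the engine applies the rest.
final case class PushdownResult(
    handled: List[PushdownOp],
    remainder: List[PushdownOp])

object CsvLikeSource {
  // Suppose this lightweight source can push down projections but not predicates.
  def plan(ops: List[PushdownOp]): PushdownResult = {
    val (handled, rest) = ops.span {
      case ProjectField(_) => true
      case _               => false
    }
    PushdownResult(handled, rest)
  }

  def main(args: Array[String]): Unit = {
    val ops = List(ProjectField("name"), FilterEquals("country", "DE"))
    val res = plan(ops)
    println(res.handled)   // List(ProjectField(name))
    println(res.remainder) // List(FilterEquals(country,DE))
  }
}
```

Splitting at the first unsupported operation keeps the contract simple: the source never has to reorder work, and correctness is preserved because the engine can always fall back to evaluating the remainder in memory.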

The codebase makes extremely heavy use of Scalaz and Cats throughout (using shims to bridge the impedance mismatch between them), and many high-level operations (such as datasets) are represented as fs2 streams.

Local Datasource

A datasource implementation providing access to filesystems local to the JVM.

Configuration for the local datasource has the following JSON format:

{
  "rootDir": String,
  "format": {
    "type": "json",
    "variant": "array-wrapped" | "line-delimited",
    ["precise": Boolean]
  },
  ["readChunkSizeBytes": Number,]
  ["compressionScheme": "gzip"]
}
  • rootDir
    An absolute path to a local directory at which to root the datasource. All paths handled by the datasource are interpreted relative to this physical directory.
  • format
    The format of all resources in the datasource. Currently JSON is supported, in both array-wrapped and line-delimited variants.
  • readChunkSizeBytes
    (optional) An integer indicating the chunk size, in bytes, to use when reading local files; the default is 1MB. Different values may yield higher throughput depending on the filesystem.
  • compressionScheme
    (optional) Whether to expect resources to be compressed; currently "gzip" is the only supported compression scheme. Omitting this option indicates uncompressed resources.
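A concrete configuration might look like the following (the rootDir path is a placeholder; 1048576 bytes is 1MB):

```json
{
  "rootDir": "/var/data/quasar",
  "format": {
    "type": "json",
    "variant": "line-delimited",
    "precise": false
  },
  "readChunkSizeBytes": 1048576,
  "compressionScheme": "gzip"
}
```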


Copyright © 2020 Precog Data

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.
