Working with Parquet in ClickHouse
Parquet is an efficient, column-oriented file format for storing data. ClickHouse supports both reading and writing Parquet files.
When you reference a file path in a query, where ClickHouse attempts to read from will depend on the variant of ClickHouse that you're using:
- If you're using clickhouse-local, it will read from a location relative to where you launched ClickHouse Local.
- If you're using ClickHouse Server or ClickHouse Cloud via clickhouse client, it will read from a location relative to the /var/lib/clickhouse/user_files/ directory on the server.
Importing from Parquet
Before loading data, we can use the file() function to explore the structure of an example Parquet file.
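A minimal sketch, assuming a file named data.parquet in the location described above (the file name is just a placeholder):

```sql
DESCRIBE TABLE file('data.parquet', Parquet);
```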
We've used Parquet as the second argument, so ClickHouse knows the file format. This will print the columns along with their types.
We can also explore files before actually importing data, using the full power of SQL.
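For instance, we could peek at a few rows or run an aggregate directly over the file. The query below is illustrative and assumes the placeholder data.parquet has a hits column:

```sql
-- Preview a few rows of the file
SELECT *
FROM file('data.parquet', Parquet)
LIMIT 3;

-- Aggregate over the file without importing it
-- (the hits column is an assumption about the example file)
SELECT count(), avg(hits)
FROM file('data.parquet', Parquet);
```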
We can skip the explicit format setting for file(), INFILE, and OUTFILE. In that case, ClickHouse will automatically detect the format based on the file extension.
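For example, with the placeholder data.parquet file the format argument can simply be dropped:

```sql
-- ClickHouse infers the Parquet format from the .parquet extension
DESCRIBE TABLE file('data.parquet');
```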
Importing to an existing table
Let's create a table into which we'll import Parquet data:
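The schema below is only a sketch; the table name (sometable) and the path/date/hits columns are illustrative and should match the Parquet data you plan to load:

```sql
CREATE TABLE sometable
(
    `path` String,
    `date` Date,
    `hits` UInt32
)
ENGINE = MergeTree
ORDER BY (date, path);
```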
Now we can import data using the FROM INFILE clause.
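Continuing with the illustrative sometable and data.parquet names from above:

```sql
-- The client reads the local file and streams it to the server
INSERT INTO sometable
FROM INFILE 'data.parquet'
FORMAT Parquet;
```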
Note how ClickHouse automatically converted Parquet strings (in the date column) to the Date type. This is because ClickHouse does the typecast automatically, based on the types in the target table.
Inserting a local file to remote server
If you want to insert a local Parquet file into a remote ClickHouse server, you can do this by piping the contents of the file into clickhouse-client, as shown below.
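A sketch of the shell invocation, reusing the illustrative sometable and data.parquet names; the host and credentials are placeholders and will depend on your setup:

```bash
# Stream the local Parquet file to the remote server via clickhouse-client
cat data.parquet | clickhouse-client --host remote.example.com --secure \
    --password '<password>' \
    --query="INSERT INTO sometable FORMAT Parquet"
```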
Creating new tables from Parquet files
Since ClickHouse reads the Parquet file schema, we can create tables on the fly.
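For example (the table name is illustrative):

```sql
-- Table structure is inferred from the Parquet file's schema;
-- ORDER BY tuple() means no particular sort key
CREATE TABLE imported_from_parquet
ENGINE = MergeTree
ORDER BY tuple()
AS SELECT * FROM file('data.parquet', Parquet);
```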
This will automatically create and populate the table from the given Parquet file.
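We can then check the inferred schema and the loaded row count (again using the illustrative table name from above):

```sql
DESCRIBE TABLE imported_from_parquet;

SELECT count() FROM imported_from_parquet;
```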
By default, ClickHouse is strict with column names, types, and values. Sometimes, though, we may want to skip nonexistent columns or unsupported values during import. This can be managed with Parquet settings.
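As a sketch, two of the relevant Parquet settings look like this; check the documentation for your ClickHouse version for the full list and defaults:

```sql
-- Allow Parquet files that are missing some of the table's columns;
-- missing values are filled with defaults
SET input_format_parquet_allow_missing_columns = 1;

-- Skip columns whose types cannot be mapped during schema inference
SET input_format_parquet_skip_columns_with_unsupported_types_in_schema_inference = 1;
```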
Exporting to Parquet format
When using INTO OUTFILE with ClickHouse Cloud, you will need to run the commands in clickhouse client on the machine where the file will be written.
To export any table or query result to a Parquet file, we can use the INTO OUTFILE clause.
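Using the illustrative sometable from earlier:

```sql
-- Write the full table to a Parquet file on the client side
SELECT *
FROM sometable
INTO OUTFILE 'export.parquet'
FORMAT Parquet;
```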
This will create the export.parquet file in the working directory.
ClickHouse and Parquet data types
ClickHouse and Parquet data types are mostly identical, but they still differ a bit. For example, ClickHouse will export the DateTime type as Parquet's int64. If we then import that back into ClickHouse, we're going to see numbers instead of timestamps (in the time.parquet file).
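For example, reading such a round-tripped time.parquet file directly:

```sql
-- The original DateTime values come back as plain integers
SELECT * FROM file('time.parquet', Parquet) LIMIT 3;
```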
In this case, a type conversion can be used.
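A sketch of such a conversion, assuming the exported column is called time (the column name is illustrative):

```sql
-- Convert the int64 Unix timestamps back into DateTime values
SELECT toDateTime(time) AS time
FROM file('time.parquet', Parquet)
LIMIT 3;
```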
Further reading
ClickHouse supports many formats, both text and binary, to cover various scenarios and platforms. Explore more formats and ways to work with them in the following articles:
- CSV and TSV formats
- Avro, Arrow and ORC
- JSON formats
- Regex and templates
- Native and binary formats
- SQL formats
Also check out clickhouse-local, a portable, full-featured tool for working with local and remote files without the need for a ClickHouse server.