Client (0.7.x and earlier)
Java client library to communicate with a DB server through its protocols. The current implementation supports only the HTTP interface. The library provides its own API to send requests to a server.
This library will be deprecated soon. Use the latest Java Client for new projects.
Setup
- Maven
- Gradle (Kotlin)
- Gradle
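For instance, a Maven dependency might look like the following sketch, assuming the com.clickhouse:clickhouse-http-client artifact; the version shown is only illustrative, so use the latest 0.7.x release:
```xml
<!-- Example only: legacy HTTP client artifact; adjust the version to the latest 0.7.x release -->
<dependency>
    <groupId>com.clickhouse</groupId>
    <artifactId>clickhouse-http-client</artifactId>
    <version>0.7.2</version>
</dependency>
```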
Since version 0.5.0, the driver uses a new client HTTP library that needs to be added as a dependency.
- Maven
- Gradle (Kotlin)
- Gradle
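A sketch of the corresponding Maven dependency, assuming the new client HTTP library is Apache HttpClient 5 (org.apache.httpcomponents.client5:httpclient5); the version is illustrative:
```xml
<!-- Example only: Apache HttpClient 5 dependency; adjust the version as needed -->
<dependency>
    <groupId>org.apache.httpcomponents.client5</groupId>
    <artifactId>httpclient5</artifactId>
    <version>5.3.1</version>
</dependency>
```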
Initialization
Connection URL Format: protocol://host[:port][/database][?param[=value][&param[=value]]][#tag[,tag]], for example:
http://localhost:8443?ssl=true&sslmode=NONE
https://explorer@play.clickhouse.com:443
Connect to a single node:
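A minimal sketch; the host, port, and database are placeholders:
```java
import com.clickhouse.client.ClickHouseNode;

// Example only: a single ClickHouse node reachable over HTTP (placeholder host/port/database)
ClickHouseNode server = ClickHouseNode.of("http://localhost:8123/default?compress=0");
```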
Connect to a cluster with multiple nodes:
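A sketch with three placeholder hosts sharing one set of connection options:
```java
import com.clickhouse.client.ClickHouseNodes;

// Example only: several nodes listed in one connection URL (placeholder hosts and database)
ClickHouseNodes servers = ClickHouseNodes.of(
    "http://server1.domain,server2.domain,server3.domain/my_db"
        + "?load_balancing_policy=random&health_check_interval=5000&failover=2");
```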
Query API
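A sketch of a blocking query, reusing the servers value from the initialization example above; the query and limit are placeholders:
```java
import com.clickhouse.client.ClickHouseClient;
import com.clickhouse.client.ClickHouseProtocol;
import com.clickhouse.client.ClickHouseResponse;
import com.clickhouse.client.ClickHouseResponseSummary;
import com.clickhouse.data.ClickHouseFormat;

// Example only: run a parameterized query and read the response summary
try (ClickHouseClient client = ClickHouseClient.newInstance(ClickHouseProtocol.HTTP);
     ClickHouseResponse response = client.read(servers)
             .format(ClickHouseFormat.RowBinaryWithNamesAndTypes)
             .query("select * from numbers(:limit)")
             .params(1000)
             .executeAndWait()) {
    ClickHouseResponseSummary summary = response.getSummary();
    long totalRows = summary.getTotalRowsToRead();
}
```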
Streaming Query API
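A sketch of iterating over records as they arrive instead of materializing the whole result; servers and the query are again placeholders:
```java
import java.time.LocalDate;

import com.clickhouse.client.ClickHouseClient;
import com.clickhouse.client.ClickHouseProtocol;
import com.clickhouse.client.ClickHouseResponse;
import com.clickhouse.data.ClickHouseFormat;
import com.clickhouse.data.ClickHouseRecord;

// Example only: stream records and convert values on demand
try (ClickHouseClient client = ClickHouseClient.newInstance(ClickHouseProtocol.HTTP);
     ClickHouseResponse response = client.read(servers)
             .format(ClickHouseFormat.RowBinaryWithNamesAndTypes)
             .query("select * from numbers(:limit)")
             .params(1000)
             .executeAndWait()) {
    for (ClickHouseRecord r : response.records()) {
        int num = r.getValue(0).asInteger();
        // the same value can be converted to other types on demand
        String str = r.getValue(0).asString();
        LocalDate date = r.getValue(0).asDate();
    }
}
```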
See the complete code example in the repo.
Insert API
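A sketch of writing data through a mutation request, assuming a hypothetical table my_table(s String), an in-memory TSV payload, and a data(InputStream) overload on the write request:
```java
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;

import com.clickhouse.client.ClickHouseClient;
import com.clickhouse.client.ClickHouseProtocol;
import com.clickhouse.client.ClickHouseResponse;
import com.clickhouse.data.ClickHouseFormat;

// Example only: insert two rows from an in-memory TabSeparated payload
byte[] payload = "value1\nvalue2\n".getBytes(StandardCharsets.UTF_8);
try (ClickHouseClient client = ClickHouseClient.newInstance(ClickHouseProtocol.HTTP);
     ClickHouseResponse response = client.read(servers).write()
             .table("my_table")
             .format(ClickHouseFormat.TabSeparated)
             .data(new ByteArrayInputStream(payload))
             .executeAndWait()) {
    // response.getSummary() carries the written-rows statistics
}
```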
See the complete code example in the repo.
RowBinary Encoding
The RowBinary format is described on its page, and there is an example of code.
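As an illustration of the encoding itself, a hand-written sketch that serializes one (UInt32, String) row; the column layout is a placeholder, and production code would normally rely on the library's serialization helpers instead:
```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.charset.StandardCharsets;

// Example only: encode one row (UInt32 id, String name) in RowBinary.
// Numbers are little-endian; String is a LEB128 varint length followed by UTF-8 bytes.
static byte[] encodeRow(long id, String name) throws IOException {
    ByteArrayOutputStream out = new ByteArrayOutputStream();
    // UInt32 -> 4 bytes, little-endian
    out.write(ByteBuffer.allocate(4).order(ByteOrder.LITTLE_ENDIAN).putInt((int) id).array());
    // String -> varint length + raw bytes
    byte[] bytes = name.getBytes(StandardCharsets.UTF_8);
    int len = bytes.length;
    while ((len & ~0x7F) != 0) {      // LEB128 unsigned varint
        out.write((len & 0x7F) | 0x80);
        len >>>= 7;
    }
    out.write(len);
    out.write(bytes);
    return out.toByteArray();
}
```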
Features
Compression
The client will by default use LZ4 compression, which requires this dependency:
- Maven
- Gradle (Kotlin)
- Gradle
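A sketch of the Maven dependency, assuming the org.lz4:lz4-java artifact; the version is illustrative:
```xml
<!-- Example only: LZ4 support for the client; adjust the version as needed -->
<dependency>
    <groupId>org.lz4</groupId>
    <artifactId>lz4-java</artifactId>
    <version>1.8.0</version>
</dependency>
```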
You can choose to use gzip instead by setting compress_algorithm=gzip in the connection URL.
Alternatively, you can disable compression in a few ways:
- Disable it by setting compress=0 in the connection URL: http://localhost:8123/default?compress=0
- Disable it via the client configuration:
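A sketch of the programmatic option, assuming the ClickHouseClientOption.COMPRESS option and the builder calls shown here:
```java
import java.util.Map;

import com.clickhouse.client.ClickHouseClient;
import com.clickhouse.client.ClickHouseConfig;
import com.clickhouse.client.ClickHouseNodeSelector;
import com.clickhouse.client.ClickHouseProtocol;
import com.clickhouse.client.config.ClickHouseClientOption;

// Example only: build a client with compression turned off
ClickHouseClient client = ClickHouseClient.builder()
        .config(new ClickHouseConfig(Map.of(ClickHouseClientOption.COMPRESS, false)))
        .nodeSelector(ClickHouseNodeSelector.of(ClickHouseProtocol.HTTP))
        .build();
```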
See the compression documentation to learn more about different compression options.
Multiple queries
Execute multiple queries in a worker thread, one after another, within the same session:
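A sketch, reusing the servers value from the initialization example; the database and table names are placeholders:
```java
import java.util.List;
import java.util.concurrent.CompletableFuture;

import com.clickhouse.client.ClickHouseClient;
import com.clickhouse.client.ClickHouseResponseSummary;

// Example only: several statements executed sequentially in one session
CompletableFuture<List<ClickHouseResponseSummary>> future = ClickHouseClient.send(
        servers.apply(servers.getNodeSelector()),
        "create database if not exists my_base",
        "use my_base",
        "create table if not exists test_table(s String) engine=Memory",
        "insert into test_table values('1')('2')('3')",
        "select * from test_table limit 1",
        "truncate table test_table",
        "drop table if exists test_table");
List<ClickHouseResponseSummary> results = future.get();
```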
Named Parameters
You can pass parameters by name rather than relying solely on their position in the parameter list. This capability is available via the params function.
All params signatures involving the String type (String, String[], Map<String, String>) assume the keys being passed are valid ClickHouse SQL strings. For instance:
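A sketch against a hypothetical table my_table with a String column name; note that the value is passed as an already-quoted SQL string:
```java
import java.util.Map;

import com.clickhouse.client.ClickHouseClient;
import com.clickhouse.client.ClickHouseProtocol;
import com.clickhouse.client.ClickHouseRequest;
import com.clickhouse.data.ClickHouseFormat;

try (ClickHouseClient client = ClickHouseClient.newInstance(ClickHouseProtocol.HTTP)) {
    // Example only: the parameter values are already valid ClickHouse SQL expressions
    ClickHouseRequest<?> request = client.read(servers)
            .format(ClickHouseFormat.RowBinaryWithNamesAndTypes)
            .query("select * from my_table where name = :name limit :limit")
            .params(Map.of("name", "'Alice'", "limit", "100"));
    // request.executeAndWait() would run the query with the substituted parameters
}
```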
If you prefer not to parse String objects to ClickHouse SQL manually, you can use the helper function ClickHouseValues.convertToSqlExpression, located at com.clickhouse.data:
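A sketch of the same query using the helper; the value deliberately contains a single quote to show the escaping described below:
```java
import java.util.Map;

import com.clickhouse.client.ClickHouseClient;
import com.clickhouse.client.ClickHouseProtocol;
import com.clickhouse.client.ClickHouseRequest;
import com.clickhouse.data.ClickHouseFormat;
import com.clickhouse.data.ClickHouseValues;

try (ClickHouseClient client = ClickHouseClient.newInstance(ClickHouseProtocol.HTTP)) {
    // Example only: convertToSqlExpression quotes and escapes the raw Java string
    ClickHouseRequest<?> request = client.read(servers)
            .format(ClickHouseFormat.RowBinaryWithNamesAndTypes)
            .query("select * from my_table where name = :name limit :limit")
            .params(Map.of(
                    "name", ClickHouseValues.convertToSqlExpression("O'Brien"),
                    "limit", "100"));
}
```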
In the example above, ClickHouseValues.convertToSqlExpression escapes the inner single quote and surrounds the value with valid single quotes.
Other types, such as Integer, UUID, Array and Enum, will be converted automatically inside params.
Node Discovery
The Java client provides the ability to discover ClickHouse nodes automatically. Auto-discovery is disabled by default. To manually enable it, set auto_discovery to true:
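A sketch using java.util.Properties as the configuration object handed to the client; the property name comes from the table below:
```java
import java.util.Properties;

// Example only: enable node auto-discovery via a configuration property
Properties props = new Properties();
props.setProperty("auto_discovery", "true");
```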
Or in the connection URL:
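For example (placeholder host, port, and database): http://localhost:8123/default?auto_discovery=true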
If auto-discovery is enabled, there is no need to specify all ClickHouse nodes in the connection URL. Nodes specified in the URL will be treated as seeds, and the Java client will automatically discover more nodes from system tables and/or clickhouse-keeper or zookeeper.
The following options are responsible for auto-discovery configuration:
Property | Default | Description |
---|---|---|
auto_discovery | false | Whether the client should discover more nodes from system tables and/or clickhouse-keeper/zookeeper. |
node_discovery_interval | 0 | Node discovery interval in milliseconds, zero or negative value means one-time discovery. |
node_discovery_limit | 100 | Maximum number of nodes that can be discovered at a time; zero or negative value means no limit. |
Load Balancing
The Java client chooses a ClickHouse node to send requests to, according to the load-balancing policy. In general, the load-balancing policy is responsible for the following things:
- Getting a node from the managed node list.
- Managing node status.
- Optionally scheduling a background process for node discovery (if auto-discovery is enabled) and running health checks.
Here is a list of options to configure load balancing:
Property | Default | Description |
---|---|---|
load_balancing_policy | "" | The load-balancing policy can be one of: firstAlive - request is sent to the first healthy node from the managed node list; random - request is sent to a random node from the managed node list; roundRobin - request is sent to each node from the managed node list, in turn; ClickHouseLoadBalancingPolicy - custom load-balancing policy |
load_balancing_tags | "" | Load balancing tags for filtering out nodes. Requests are sent only to nodes that have the specified tags |
health_check_interval | 0 | Health check interval in milliseconds, zero or negative value means one-time. |
health_check_method | ClickHouseHealthCheckMethod.SELECT_ONE | Health check method. Can be one of: ClickHouseHealthCheckMethod.SELECT_ONE - check with a select 1 query; ClickHouseHealthCheckMethod.PING - protocol-specific check, which is generally faster |
node_check_interval | 0 | Node check interval in milliseconds; a negative value is treated as zero. The node status is checked if the specified amount of time has passed since the last check. The difference between health_check_interval and node_check_interval is that health_check_interval schedules a background job that checks the status of a list of nodes (all or faulty), while node_check_interval specifies how much time must have passed since the last check of a particular node |
check_all_nodes | false | Whether to perform a health check against all nodes or just faulty ones. |
Failover and retry
Java client provides configuration options to set up failover and retry behavior for failed queries:
Property | Default | Description |
---|---|---|
failover | 0 | Maximum number of times a failover can happen for a request. Zero or a negative value means no failover. Failover sends the failed request to a different node (according to the load-balancing policy) in order to recover from the failure. |
retry | 0 | Maximum number of times a retry can happen for a request. Zero or a negative value means no retry. A retry sends the request to the same node, and only if the ClickHouse server returns the NETWORK_ERROR error code. |
repeat_on_session_lock | true | Whether to repeat execution when the session is locked, until it times out (according to session_timeout or connect_timeout). The failed request is repeated if the ClickHouse server returns the SESSION_IS_LOCKED error code. |
Adding custom HTTP headers
The Java client supports adding custom HTTP headers to requests when the HTTP/S transport layer is used.
Use the custom_http_headers property; headers must be comma-separated, and each header's key and value must be separated with =.
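A sketch of setting the property, assuming it is passed through the client's configuration options; the header names and values are placeholders:
```java
import java.util.Properties;

// Example only: two custom headers, comma-separated, key and value divided by '='
Properties props = new Properties();
props.setProperty("custom_http_headers", "X-Custom-Header-1=value1,X-Custom-Header-2=value2");
```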