Redis Connector

Overview

This connector allows the use of Redis in openLooKeng. Each Redis key/value pair is presented as a single row in openLooKeng.

Note

In Redis, key/value pairs can only be mapped to the string or hash value types. Keys can be stored in a zset; the keys can then be split into multiple slices.

Redis 2.8.0 or higher is supported.

Configuration

To configure the Redis connector, create a catalog properties file etc/catalog/redis.properties with the following contents, replacing the properties as appropriate:

connector.name=redis
redis.table-names=schema1.table1,schema1.table2
redis.nodes=host1:port

Multiple Redis Servers

You can have as many catalogs as you need. If you have additional Redis servers, simply add another properties file to etc/catalog with a different name, making sure it ends in .properties. For example, if you name the property file sales.properties, openLooKeng will create a catalog named sales using the configured connector.

Configuration properties

The following configuration properties are available:

Property Name                       Description
redis.table-names                   List of all tables provided by the catalog
redis.default-schema                Default schema name for tables (default: default)
redis.nodes                         List of nodes in the Redis server
redis.connect-timeout               Timeout for connecting to the Redis server, in ms (default: 2000)
redis.scan-count                    Number of keys obtained from each scan for string and hash value types (default: 100)
redis.key-prefix-schema-table       Whether Redis keys have a schema-name:table-name prefix (default: false)
redis.key-delimiter                 Delimiter separating schema_name and table_name if redis.key-prefix-schema-table is used (default: :)
redis.table-description-dir         Directory containing table description files (default: etc/redis/)
redis.hide-internal-columns         Whether internal columns are hidden from table metadata (default: true)
redis.database-index                Redis database index (default: 0)
redis.password                      Redis server password (default: null)
redis.table-description-interval    Interval for flushing table description files, in ms (by default there is no flush; table descriptions are memoized without expiration)
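When redis.key-prefix-schema-table is enabled, only keys carrying the schema-name:table-name prefix belong to a table. A minimal Python sketch of this prefix matching (not connector source; the function name is ours):

```python
# Sketch of redis.key-prefix-schema-table matching: when enabled, a key
# belongs to a table only if it starts with "schema<delimiter>table<delimiter>"
# (the delimiter is redis.key-delimiter, ":" by default).

def key_belongs_to_table(key, schema, table, delimiter=":"):
    """Return True if a Redis key carries the schema/table prefix."""
    prefix = schema + delimiter + table + delimiter
    return key.startswith(prefix)

print(key_belongs_to_table("tpch:nation:1", "tpch", "nation"))  # True
print(key_belongs_to_table("tpch:region:1", "tpch", "nation"))  # False
```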

Internal columns

Column name      Type      Description
_key             VARCHAR   Redis key.
_value           VARCHAR   Redis value corresponding to the key.
_key_length      BIGINT    Number of bytes in the key.
_key_corrupt     BOOLEAN   True if the decoder could not decode the key for this row. When true, data columns mapped from the key should be treated as invalid.
_value_corrupt   BOOLEAN   True if the decoder could not decode the value for this row. When true, data columns mapped from the value should be treated as invalid.
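The internal columns can be used to inspect raw data or to skip rows the decoder could not parse. An illustrative query (the catalog and table names assume the nation example below):

    lk> SELECT _key, _value FROM redis.tpch.nation WHERE _value_corrupt = false;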

Table Definition Files

For openLooKeng, every key/value pair must be mapped into columns to allow queries against the data. This is similar to the Kafka connector, so you can refer to the kafka-tutorial.

A table definition file consists of a JSON definition for a table. The name of the file can be arbitrary but must end in .json.

For example, here is a nation.json:

{
    "tableName": "nation",
    "schemaName": "tpch",
    "key": {
        "dataFormat": "raw",
        "fields": [
            {
                "name": "redis_key",
                "type": "VARCHAR(64)",
                "hidden": "true"
            }
        ]
    },
    "value": {
        "dataFormat": "json",
        "fields": [
            {
                "name": "nationkey",
                "mapping": "nationkey",
                "type": "BIGINT"
            },
            {
                "name": "name",
                "mapping": "name",
                "type": "VARCHAR(25)"
            },
            {
                "name": "regionkey",
                "mapping": "regionkey",
                "type": "BIGINT"
            },
            {
                "name": "comment",
                "mapping": "comment",
                "type": "VARCHAR(152)"
            }
       ]
    }
}

In Redis, data such as the following exists:

127.0.0.1:6379> keys tpch:nation:*
 1) "tpch:nation:2"
 2) "tpch:nation:4"
 3) "tpch:nation:16"
 4) "tpch:nation:18"
 5) "tpch:nation:10"
 6) "tpch:nation:17"
 7) "tpch:nation:1"
127.0.0.1:6379> get tpch:nation:1
"{\"nationkey\":1,\"name\":\"ARGENTINA\",\"regionkey\":1,\"comment\":\"al foxes promise slyly according to the regular accounts. bold requests alon\"}"
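The stored value is a plain JSON document; its top-level field names must match the "mapping" entries in the table definition file. A quick sketch of how the json dataFormat reads such a value (not connector code):

```python
import json

# The value stored under each Redis key is a JSON document; the decoder
# looks up each column's "mapping" as a top-level field name.
value = ('{"nationkey":1,"name":"ARGENTINA","regionkey":1,'
         '"comment":"al foxes promise slyly according to the regular accounts. bold requests alon"}')
row = json.loads(value)
print(row["nationkey"], row["name"])  # 1 ARGENTINA
```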

Now we can use the Redis connector to get data from Redis. (redis_key is not shown, because we set "hidden": "true".)

lk> select * from redis.tpch.nation;
 nationkey |      name      | regionkey |                                                      comment                                                       
-----------+----------------+-----------+--------------------------------------------------------------------------------------------------------------------
         3 | CANADA         |         1 | eas hang ironic, silent packages. slyly regular packages are furiously over the tithes. fluffily bold              
         9 | INDONESIA      |         2 |  slyly express asymptotes. regular deposits haggle slyly. carefully ironic hockey players sleep blithely. carefull 
        19 | ROMANIA        |         3 | ular asymptotes are about the furious multipliers. express dependencies nag above the ironically ironic account    
         2 | BRAZIL         |         1 | y alongside of the pending deposits. carefully special packages are about the ironic forges. slyly special   

Note

If redis.key-prefix-schema-table is false (the default), all keys in Redis will be mapped to the table's key, and no prefix matching occurs.

Please refer to the kafka-tutorial for the description of the dataFormat as well as various available decoders.

In addition to the above Kafka types, the Redis connector supports the hash type for the value field, which represents data stored in a Redis hash. The Redis connector uses hgetall key to fetch the data.

    {
        "tableName": ...,
        "schemaName": ...,
        "value": {
            "dataFormat": "hash",
            "fields": [
                ...
            ]
        }
    }
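As an illustration (the key and field values here are made up), such a row could be loaded and inspected with redis-cli, with the hash field names matching the column mappings:

    127.0.0.1:6379> hmset tpch:nation:1 nationkey 1 name ARGENTINA regionkey 1
    OK
    127.0.0.1:6379> hgetall tpch:nation:1
    1) "nationkey"
    2) "1"
    3) "name"
    4) "ARGENTINA"
    5) "regionkey"
    6) "1"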

The Redis connector supports the zset type for the key field, which represents keys stored in a Redis zset. Splits are truly supported if and only if zset is used as the key dataFormat, because the connector can use zrange zsetkey split.start split.end to get the keys of a split.

    {
        "tableName": ...,
        "schemaName": ...,
        "key": {
            "dataFormat": "zset",
            "name": "zsetkey", //zadd zsetkey score member
            "fields": [
                ...
            ]
        }
    }
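For example (the zset key and members are illustrative), keys can be loaded with zadd and then fetched per split with zrange, using the split's start and end as the index range:

    127.0.0.1:6379> zadd zsetkey 0 tpch:nation:1 1 tpch:nation:2
    (integer) 2
    127.0.0.1:6379> zrange zsetkey 0 1
    1) "tpch:nation:1"
    2) "tpch:nation:2"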

Redis Connector Limitations

Only read operations are supported; write operations are not supported.
