path: root/utils
author     guybe7 <guy.benoish@redislabs.com>  2023-03-11 09:14:16 +0100
committer  GitHub <noreply@github.com>  2023-03-11 10:14:16 +0200
commit     4ba47d2d2163ea77aacc9f719db91af2d7298905 (patch)
tree       1290c23d28b91fbd237506faf31878918826a40c /utils
parent     c46d68d6d273e7c86fd1f1d10caca4e47a3294f8 (diff)
download   redis-4ba47d2d2163ea77aacc9f719db91af2d7298905.tar.gz
Add reply_schema to command json files (internal for now) (#10273)
Work in progress towards implementing a reply schema as part of COMMAND DOCS, see #9845

Since ironing out the details of the reply schema of each and every command can take a long time, we would like to merge this PR when the infrastructure is ready, and let this mature in the unstable branch. Meanwhile the changes of this PR are internal: they are part of the repo, but do not affect the produced build.

### Background

In #9656 we added a lot of information about Redis commands, but we are missing information about the replies.

### Motivation

1. Documentation. This is the primary goal.
2. It should be possible, based on the output of COMMAND, to generate client code in typed languages. In order to do that, we need Redis to tell us, in detail, what each reply looks like.
3. We would like to build a fuzzer that verifies the reply structure (for now we use the existing testsuite, see the "Testing" section).

### Schema

The idea is to supply some sort of schema for the various replies of each command. The schema will describe the conceptual structure of the reply (for generated clients), as defined in RESP3. Note that the reply structure itself may change depending on the arguments (e.g. `XINFO STREAM`, with and without the `FULL` modifier).

We decided to use the standard json-schema (see https://json-schema.org/) as the reply-schema.

Example for `BZPOPMIN`:

```
"reply_schema": {
    "oneOf": [
        {
            "description": "Timeout reached and no elements were popped.",
            "type": "null"
        },
        {
            "description": "The keyname, popped member, and its score.",
            "type": "array",
            "minItems": 3,
            "maxItems": 3,
            "items": [
                {
                    "description": "Keyname",
                    "type": "string"
                },
                {
                    "description": "Member",
                    "type": "string"
                },
                {
                    "description": "Score",
                    "type": "number"
                }
            ]
        }
    ]
}
```

#### Notes

1. It is ok that some commands' reply structure depends on the arguments and it's the caller's responsibility to know which is the relevant one. This comes after looking at other request-reply systems like OpenAPI, where the reply schema can also be oneOf and the caller is responsible to know which schema is the relevant one.
2. The reply schemas will describe RESP3 replies only. Even though RESP3 is structured, we want to use the reply schema for documentation (and possibly to create a fuzzer that validates the replies).
3. For documentation, the description field will include an explanation of the scenario in which the reply is sent, including any relation to arguments. For example, for `ZRANGE`'s two schemas we will need to state that one is with `WITHSCORES` and the other is without.
4. For documentation, there will be another optional field "notes" in which we will add a short description of the representation in RESP2, in case it's not trivial (RESP3's `ZRANGE`'s nested array vs. RESP2's flat array, for example).

Given the above:

1. We can generate the "return" section of all commands in [redis-doc](https://redis.io/commands/) (given that "description" and "notes" are comprehensive enough).
2. We can generate a client in a strongly typed language (but the return type could be a conceptual `union` and the caller needs to know which schema is relevant). See the section below for RESP2 support.
3. We can create a fuzzer for RESP3.

### Limitations (because we are using the standard json-schema)

The problem is that Redis' replies are more diverse than what the json format allows. This means that, when we convert the reply to a json (in order to validate the schema against it), we lose information (see the "Testing" section below). The other option would have been to extend the standard json-schema (and json format) to include stuff like sets, bulk-strings, error-strings, etc., but that would mean also extending the schema-validator, and that seemed like too much work, so we decided to compromise.

Examples:

1. We cannot tell the difference between an "array" and a "set".
2. We cannot tell the difference between a simple-string and a bulk-string.
3. We cannot verify true uniqueness of items in commands like ZRANGE: json-schema doesn't cover the case of two identical members with different scores (e.g. `[["m1",6],["m1",7]]`) because `uniqueItems` compares (member,score) tuples and not just the member name.

### Testing

This commit includes some changes inside Redis in order to verify the schemas (existing and future ones) are indeed correct (i.e. describe the actual response of Redis).

To do that, we added a debugging feature to Redis that causes it to produce a log of all the commands it executed and their replies. For that, Redis needs to be compiled with `-DLOG_REQ_RES` and run with `--req-res-logfile <file> --client-default-resp 3` (the testsuite already does that if you run it with `--log-req-res --force-resp3`).

You should run the testsuite with the above args (and `--dont-clean`) in order to make Redis generate `.reqres` files (same dir as the `stdout` files), which contain request-response pairs. These files are later on processed by `./utils/req-res-log-validator.py`, which:

1. Goes over the req-res files, generated by redis-servers, spawned by the testsuite (see logreqres.c)
2. For each request-response pair, validates the response against the request's reply_schema (obtained from the extended COMMAND DOCS)

In order to get good coverage of the Redis commands, and all their different replies, we chose to use the existing redis test suite, rather than attempt to write a fuzzer.

#### Notes about RESP2

1. We will not be able to use the testing tool to verify RESP2 replies (we are ok with that, it's time to accept RESP3 as the future RESP).
2. Since the majority of the test suite is using RESP2, and we want the server to reply with RESP3 so that we can validate it, we will need to know how to convert the actual reply to the one expected:
   - number and boolean are always strings in RESP2, so the conversion is easy
   - objects (maps) are always a flat array in RESP2
   - others (nested array in RESP3's `ZRANGE` and others) will need some special per-command handling (so the client will not be totally auto-generated)

Example for ZRANGE:

```
"reply_schema": {
    "anyOf": [
        {
            "description": "A list of member elements",
            "type": "array",
            "uniqueItems": true,
            "items": {
                "type": "string"
            }
        },
        {
            "description": "Members and their scores. Returned in case `WITHSCORES` was used.",
            "notes": "In RESP2 this is returned as a flat array",
            "type": "array",
            "uniqueItems": true,
            "items": {
                "type": "array",
                "minItems": 2,
                "maxItems": 2,
                "items": [
                    {
                        "description": "Member",
                        "type": "string"
                    },
                    {
                        "description": "Score",
                        "type": "number"
                    }
                ]
            }
        }
    ]
}
```

### Other changes

1. Some tests that behave differently depending on the RESP are now being tested for both RESPs, regardless of the special log-req-res mode ("Pub/Sub PING" for example).
2. Update the history field of CLIENT LIST.
3. Added basic tests for commands that were not covered at all by the testsuite.

### TODO

- [x] (maybe a different PR) add a "condition" field to anyOf/oneOf schemas that refers to args. e.g. when `SET` returns NULL, the condition is `arguments.get||arguments.condition`, for `OK` the condition is `!arguments.get`, and for `string` the condition is `arguments.get` - https://github.com/redis/redis/issues/11896
- [x] (maybe a different PR) also run `runtest-cluster` in the req-res logging mode
- [x] add the new tests to GH actions (i.e. compile with `-DLOG_REQ_RES`, run the tests, and run the validator)
- [x] (maybe a different PR) figure out a way to warn about (sub)schemas that are uncovered by the output of the tests - https://github.com/redis/redis/issues/11897
- [x] (probably a separate PR) add all missing schemas
- [x] check why "SDOWN is triggered by misconfigured instance replying with errors" fails with --log-req-res
- [x] move the response transformers to their own file (run both regular, cluster, and sentinel tests - need to fight with the tcl including mechanism a bit)
- [x] issue: module API - https://github.com/redis/redis/issues/11898
- [x] (probably a separate PR): improve schemas: add `required` to `object`s - https://github.com/redis/redis/issues/11899

Co-authored-by: Ozan Tezcan <ozantezcan@gmail.com>
Co-authored-by: Hanna Fadida <hanna.fadida@redislabs.com>
Co-authored-by: Oran Agra <oran@redislabs.com>
Co-authored-by: Shaya Potter <shaya@redislabs.com>
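The validation the schemas enable can be tried directly with the `jsonschema` package pinned by this commit's requirements.txt. A minimal sketch, using the `BZPOPMIN` reply_schema quoted above (the sample replies are made up for illustration):

```python
# Validate RESP3 replies (already converted to plain JSON values) against a
# reply_schema, the same way req-res-log-validator.py does.
try:
    from jsonschema import Draft201909Validator as schema_validator
except ImportError:
    from jsonschema import Draft7Validator as schema_validator

# The BZPOPMIN reply_schema from the commit message, verbatim.
bzpopmin_schema = {
    "oneOf": [
        {"description": "Timeout reached and no elements were popped.",
         "type": "null"},
        {"description": "The keyname, popped member, and its score.",
         "type": "array", "minItems": 3, "maxItems": 3,
         "items": [
             {"description": "Keyname", "type": "string"},
             {"description": "Member", "type": "string"},
             {"description": "Score", "type": "number"},
         ]},
    ]
}

def is_valid(reply):
    """True if the reply matches one of the schema's oneOf branches."""
    return not list(schema_validator(bzpopmin_schema).iter_errors(reply))

print(is_valid(None))                    # timeout branch -> True
print(is_valid(["myzset", "m1", 1.5]))   # popped-member branch -> True
print(is_valid(["myzset", "m1"]))        # minItems violated -> False
```

Note that the array-of-schemas form of `items` used here is valid through draft 2019-09 (which the validator script targets); draft 2020-12 renamed it to `prefixItems`.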
Diffstat (limited to 'utils')
-rwxr-xr-x  utils/generate-command-code.py             118
-rw-r--r--  utils/reply_schema_linter.js                31
-rwxr-xr-x  utils/req-res-log-validator.py             349
-rw-r--r--  utils/req-res-validator/requirements.txt     2
4 files changed, 472 insertions, 28 deletions
diff --git a/utils/generate-command-code.py b/utils/generate-command-code.py
index 24ecaef3e..b5847c469 100755
--- a/utils/generate-command-code.py
+++ b/utils/generate-command-code.py
@@ -2,6 +2,7 @@
import glob
import json
import os
+import argparse
ARG_TYPES = {
"string": "ARG_TYPE_STRING",
@@ -35,29 +36,6 @@ GROUPS = {
"bitmap": "COMMAND_GROUP_BITMAP",
}
-RESP2_TYPES = {
- "simple-string": "RESP2_SIMPLE_STRING",
- "error": "RESP2_ERROR",
- "integer": "RESP2_INTEGER",
- "bulk-string": "RESP2_BULK_STRING",
- "null-bulk-string": "RESP2_NULL_BULK_STRING",
- "array": "RESP2_ARRAY",
- "null-array": "RESP2_NULL_ARRAY",
-}
-
-RESP3_TYPES = {
- "simple-string": "RESP3_SIMPLE_STRING",
- "error": "RESP3_ERROR",
- "integer": "RESP3_INTEGER",
- "double": "RESP3_DOUBLE",
- "bulk-string": "RESP3_BULK_STRING",
- "array": "RESP3_ARRAY",
- "map": "RESP3_MAP",
- "set": "RESP3_SET",
- "bool": "RESP3_BOOL",
- "null": "RESP3_NULL",
-}
-
def get_optional_desc_string(desc, field, force_uppercase=False):
v = desc.get(field, None)
@@ -194,7 +172,6 @@ class Argument(object):
self.type = self.desc["type"]
self.key_spec_index = self.desc.get("key_spec_index", None)
self.subargs = []
- self.subargs_name = None
if self.type in ["oneof", "block"]:
self.display = None
for subdesc in self.desc["arguments"]:
@@ -264,6 +241,75 @@ class Argument(object):
f.write("};\n\n")
+def to_c_name(str):
+ return str.replace(":", "").replace(".", "_").replace("$", "_")\
+ .replace("^", "_").replace("*", "_").replace("-", "_")
+
+
+class ReplySchema(object):
+ def __init__(self, name, desc):
+ self.name = to_c_name(name)
+ self.schema = {}
+ if desc.get("type") == "object":
+ if desc.get("properties") and desc.get("additionalProperties") is None:
+ print(f"{self.name}: Any object that has properties should have the additionalProperties field")
+ exit(1)
+ elif desc.get("type") == "array":
+ if desc.get("items") and isinstance(desc["items"], list) and any([desc.get(k) is None for k in ["minItems", "maxItems"]]):
+ print(f"{self.name}: Any array that has items should have the minItems and maxItems fields")
+ exit(1)
+ for k, v in desc.items():
+ if isinstance(v, dict):
+ self.schema[k] = ReplySchema("%s_%s" % (self.name, k), v)
+ elif isinstance(v, list):
+ self.schema[k] = []
+ for i, subdesc in enumerate(v):
+ self.schema[k].append(ReplySchema("%s_%s_%i" % (self.name, k,i), subdesc))
+ else:
+ self.schema[k] = v
+
+ def write(self, f):
+ def struct_code(name, k, v):
+ if isinstance(v, ReplySchema):
+ t = "JSON_TYPE_OBJECT"
+ vstr = ".value.object=&%s" % name
+ elif isinstance(v, list):
+ t = "JSON_TYPE_ARRAY"
+ vstr = ".value.array={.objects=%s,.length=%d}" % (name, len(v))
+ elif isinstance(v, bool):
+ t = "JSON_TYPE_BOOLEAN"
+ vstr = ".value.boolean=%d" % int(v)
+ elif isinstance(v, str):
+ t = "JSON_TYPE_STRING"
+ vstr = ".value.string=\"%s\"" % v
+ elif isinstance(v, int):
+ t = "JSON_TYPE_INTEGER"
+ vstr = ".value.integer=%d" % v
+
+ return "%s,\"%s\",%s" % (t, k, vstr)
+
+ for k, v in self.schema.items():
+ if isinstance(v, ReplySchema):
+ v.write(f)
+ elif isinstance(v, list):
+ for i, schema in enumerate(v):
+ schema.write(f)
+ name = to_c_name("%s_%s" % (self.name, k))
+ f.write("/* %s array reply schema */\n" % name)
+ f.write("struct jsonObject *%s[] = {\n" % name)
+ for i, schema in enumerate(v):
+ f.write("&%s,\n" % schema.name)
+ f.write("};\n\n")
+
+ f.write("/* %s reply schema */\n" % self.name)
+ f.write("struct jsonObjectElement %s_elements[] = {\n" % self.name)
+ for k, v in self.schema.items():
+ name = to_c_name("%s_%s" % (self.name, k))
+ f.write("{%s},\n" % struct_code(name, k, v))
+ f.write("};\n\n")
+ f.write("struct jsonObject %s = {%s_elements,.length=%d};\n\n" % (self.name, self.name, len(self.schema)))
+
+
class Command(object):
def __init__(self, name, desc):
self.name = name.upper()
@@ -273,9 +319,11 @@ class Command(object):
self.subcommands = []
self.args = []
for arg_desc in self.desc.get("arguments", []):
- arg = Argument(self.fullname(), arg_desc)
- self.args.append(arg)
+ self.args.append(Argument(self.fullname(), arg_desc))
verify_no_dup_names(self.fullname(), self.args)
+ self.reply_schema = None
+ if "reply_schema" in self.desc:
+ self.reply_schema = ReplySchema(self.reply_schema_name(), self.desc["reply_schema"])
def fullname(self):
return self.name.replace("-", "_").replace(":", "")
@@ -296,6 +344,9 @@ class Command(object):
def arg_table_name(self):
return "%s_Args" % (self.fullname().replace(" ", "_"))
+ def reply_schema_name(self):
+ return "%s_ReplySchema" % (self.fullname().replace(" ", "_"))
+
def struct_name(self):
return "%s_Command" % (self.fullname().replace(" ", "_"))
@@ -377,6 +428,9 @@ class Command(object):
if self.args:
s += ".args=%s," % self.arg_table_name()
+ if self.reply_schema and args.with_reply_schema:
+ s += ".reply_schema=&%s," % self.reply_schema_name()
+
return s[:-1]
def write_internal_structs(self, f):
@@ -423,6 +477,9 @@ class Command(object):
f.write("{0}\n")
f.write("};\n\n")
+ if self.reply_schema and args.with_reply_schema:
+ self.reply_schema.write(f)
+
class Subcommand(Command):
def __init__(self, name, desc):
@@ -447,6 +504,10 @@ def create_command(name, desc):
# Figure out where the sources are
srcdir = os.path.abspath(os.path.dirname(os.path.abspath(__file__)) + "/../src")
+parser = argparse.ArgumentParser()
+parser.add_argument('--with-reply-schema', action='store_true')
+args = parser.parse_args()
+
# Create all command objects
print("Processing json files...")
for filename in glob.glob('%s/commands/*.json' % srcdir):
@@ -481,8 +542,9 @@ if check_command_error_counter != 0:
print("Error: There are errors in the commands check, please check the above logs.")
exit(1)
-print("Generating commands.c...")
-with open("%s/commands.c" % srcdir, "w") as f:
+commands_filename = "commands_with_reply_schema" if args.with_reply_schema else "commands"
+print(f"Generating {commands_filename}.c...")
+with open(f"{srcdir}/{commands_filename}.c", "w") as f:
f.write("/* Automatically generated by %s, do not edit. */\n\n" % os.path.basename(__file__))
f.write("#include \"server.h\"\n")
f.write(
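The `to_c_name` helper added above turns JSON property names (which may contain `:`, `.`, `$`, `^`, `*`, `-`) into legal C identifiers for the generated structs; nested `ReplySchema` nodes then get names of the form `<parent>_<key>` or `<parent>_<key>_<index>`. A standalone sketch of the same replacement rules (the example input is hypothetical):

```python
def to_c_name(s):
    # Strip ':' and map '.', '$', '^', '*', '-' to '_', matching the
    # replacement chain in generate-command-code.py above.
    return (s.replace(":", "").replace(".", "_").replace("$", "_")
             .replace("^", "_").replace("*", "_").replace("-", "_"))

# A hypothetical nested-schema name, as ReplySchema would derive it:
print(to_c_name("XINFO_STREAM_ReplySchema_properties_max-deleted-entry-id"))
# -> XINFO_STREAM_ReplySchema_properties_max_deleted_entry_id
```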
diff --git a/utils/reply_schema_linter.js b/utils/reply_schema_linter.js
new file mode 100644
index 000000000..e2358d4b9
--- /dev/null
+++ b/utils/reply_schema_linter.js
@@ -0,0 +1,31 @@
+function validate_schema(command_schema) {
+ var error_status = false
+ const Ajv = require("ajv/dist/2019")
+ const ajv = new Ajv({strict: true, strictTuples: false})
+ let json = require('../src/commands/'+ command_schema);
+ for (var item in json) {
+ const schema = json[item].reply_schema
+ if (schema == undefined)
+ continue;
+ try {
+ ajv.compile(schema)
+ } catch (error) {
+ console.error(command_schema + " : " + error.toString())
+ error_status = true
+ }
+ }
+ return error_status
+}
+
+const schema_directory_path = './src/commands'
+const path = require('path')
+var fs = require('fs');
+var files = fs.readdirSync(schema_directory_path);
+jsonFiles = files.filter(el => path.extname(el) === '.json')
+var error_status = false
+jsonFiles.forEach(function(file){
+ if (validate_schema(file))
+ error_status = true
+})
+if (error_status)
+ process.exit(1)
diff --git a/utils/req-res-log-validator.py b/utils/req-res-log-validator.py
new file mode 100755
index 000000000..e2b9d4f8d
--- /dev/null
+++ b/utils/req-res-log-validator.py
@@ -0,0 +1,349 @@
+#!/usr/bin/env python3
+import os
+import glob
+import json
+import sys
+
+import jsonschema
+import subprocess
+import redis
+import time
+import argparse
+import multiprocessing
+import collections
+import io
+import signal
+import traceback
+from datetime import timedelta
+from functools import partial
+try:
+ from jsonschema import Draft201909Validator as schema_validator
+except ImportError:
+ from jsonschema import Draft7Validator as schema_validator
+
+"""
+The purpose of this file is to validate the reply_schema values of COMMAND DOCS.
+Basically, this is what it does:
+1. Goes over req-res files, generated by redis-servers, spawned by the testsuite (see logreqres.c)
+2. For each request-response pair, it validates the response against the request's reply_schema (obtained from COMMAND DOCS)
+
+This script spins up a redis-server and a redis-cli in order to obtain COMMAND DOCS.
+
+In order to use this file you must run the redis testsuite with the following flags:
+./runtest --dont-clean --force-resp3 --log-req-res
+
+And then:
+./utils/req-res-log-validator.py
+
+The script will fail only if:
+1. One or more of the replies doesn't comply with its schema.
+2. One or more of the commands in COMMAND DOCS doesn't have the reply_schema field (with --fail-missing-reply-schemas)
+3. The testsuite didn't execute all of the commands (with --fail-commands-not-all-hit)
+
+Future validations:
+1. Fail the script if one or more of the branches of the reply schema (e.g. oneOf, anyOf) was not hit.
+"""
+
+IGNORED_COMMANDS = [
+ "sync",
+ "psync",
+ "monitor",
+ "subscribe",
+ "unsubscribe",
+ "ssubscribe",
+ "sunsubscribe",
+ "psubscribe",
+ "punsubscribe",
+ "debug",
+ "pfdebug",
+ "lolwut",
+]
+
+
+class Request(object):
+ """
+ This class represents a Redis request (AKA command, argv)
+ """
+ def __init__(self, f, docs, line_counter):
+ """
+ Read lines from `f` (generated by logreqres.c) and populates the argv array
+ """
+ self.command = None
+ self.schema = None
+ self.argv = []
+
+ while True:
+ line = f.readline()
+ line_counter[0] += 1
+ if not line:
+ break
+ length = int(line)
+ arg = str(f.read(length))
+ f.read(2) # read \r\n
+ line_counter[0] += 1
+ if arg == "__argv_end__":
+ break
+ self.argv.append(arg)
+
+ if not self.argv:
+ return
+
+ self.command = self.argv[0].lower()
+ doc = docs.get(self.command, {})
+ if not doc and len(self.argv) > 1:
+ self.command = f"{self.argv[0].lower()}|{self.argv[1].lower()}"
+ doc = docs.get(self.command, {})
+
+ if not doc:
+ self.command = None
+ return
+
+ self.schema = doc.get("reply_schema")
+
+ def __str__(self):
+ return json.dumps(self.argv)
+
+
+class Response(object):
+ """
+ This class represents a Redis response in RESP3
+ """
+ def __init__(self, f, line_counter):
+ """
+ Read lines from `f` (generated by logreqres.c) and build the JSON representing the response in RESP3
+ """
+ self.error = False
+ self.queued = False
+ self.json = None
+
+ line = f.readline()[:-2]
+ line_counter[0] += 1
+ if line[0] == '+':
+ self.json = line[1:]
+ if self.json == "QUEUED":
+ self.queued = True
+ elif line[0] == '-':
+ self.json = line[1:]
+ self.error = True
+ elif line[0] == '$':
+ self.json = str(f.read(int(line[1:])))
+ f.read(2) # read \r\n
+ line_counter[0] += 1
+ elif line[0] == ':':
+ self.json = int(line[1:])
+ elif line[0] == ',':
+ self.json = float(line[1:])
+ elif line[0] == '_':
+ self.json = None
+ elif line[0] == '#':
+ self.json = line[1] == 't'
+ elif line[0] == '!':
+ self.json = str(f.read(int(line[1:])))
+ f.read(2) # read \r\n
+ line_counter[0] += 1
+ self.error = True
+ elif line[0] == '=':
+ self.json = str(f.read(int(line[1:])))[4:] # skip "txt:" or "mkd:"
+ f.read(2) # read \r\n
+ line_counter[0] += 1 + self.json.count("\r\n")
+ elif line[0] == '(':
+ self.json = line[1:] # big-number is actually a string
+ elif line[0] in ['*', '~', '>']: # unfortunately JSON doesn't tell the difference between a list and a set
+ self.json = []
+ count = int(line[1:])
+ for i in range(count):
+ ele = Response(f, line_counter)
+ self.json.append(ele.json)
+ elif line[0] in ['%', '|']:
+ self.json = {}
+ count = int(line[1:])
+ for i in range(count):
+ field = Response(f, line_counter)
+ # Redis allows fields to be non-strings but JSON doesn't.
+ # Luckily, for any kind of response we can validate, the fields are
+ # always strings (example: XINFO STREAM)
+ # The reason we can't always convert to string is because of DEBUG PROTOCOL MAP
+ # which anyway doesn't have a schema
+ if isinstance(field.json, str):
+ field = field.json
+ value = Response(f, line_counter)
+ self.json[field] = value.json
+ if line[0] == '|':
+                # We don't care about the attributes, read the real response
+ real_res = Response(f, line_counter)
+ self.__dict__.update(real_res.__dict__)
+
+
+ def __str__(self):
+ return json.dumps(self.json)
+
+
+def process_file(docs, path):
+ """
+    This function processes a single file generated by logreqres.c
+ """
+ line_counter = [0] # A list with one integer: to force python to pass it by reference
+ command_counter = dict()
+
+ print(f"Processing {path} ...")
+
+ # Convert file to StringIO in order to minimize IO operations
+ with open(path, "r", newline="\r\n", encoding="latin-1") as f:
+ content = f.read()
+
+ with io.StringIO(content) as fakefile:
+ while True:
+ try:
+ req = Request(fakefile, docs, line_counter)
+ if not req.argv:
+ # EOF
+ break
+ res = Response(fakefile, line_counter)
+ except json.decoder.JSONDecodeError as err:
+ print(f"JSON decoder error while processing {path}:{line_counter[0]}: {err}")
+ print(traceback.format_exc())
+ raise
+ except Exception as err:
+ print(f"General error while processing {path}:{line_counter[0]}: {err}")
+ print(traceback.format_exc())
+ raise
+
+ if not req.command:
+ # Unknown command
+ continue
+
+ command_counter[req.command] = command_counter.get(req.command, 0) + 1
+
+ if res.error or res.queued:
+ continue
+
+ try:
+ jsonschema.validate(instance=res.json, schema=req.schema, cls=schema_validator)
+ except (jsonschema.ValidationError, jsonschema.exceptions.SchemaError) as err:
+ print(f"JSON schema validation error on {path}: {err}")
+ print(f"argv: {req.argv}")
+ try:
+ print(f"Response: {res}")
+ except UnicodeDecodeError as err:
+ print("Response: (unprintable)")
+ print(f"Schema: {json.dumps(req.schema, indent=2)}")
+ print(traceback.format_exc())
+ raise
+
+ return command_counter
+
+
+def fetch_schemas(cli, port, args, docs):
+ redis_proc = subprocess.Popen(args, stdout=subprocess.PIPE)
+
+ while True:
+ try:
+ print('Connecting to Redis...')
+ r = redis.Redis(port=port)
+ r.ping()
+ break
+ except Exception as e:
+ time.sleep(0.1)
+ pass
+ print('Connected')
+
+ cli_proc = subprocess.Popen([cli, '-p', str(port), '--json', 'command', 'docs'], stdout=subprocess.PIPE)
+ stdout, stderr = cli_proc.communicate()
+ docs_response = json.loads(stdout)
+
+ for name, doc in docs_response.items():
+ if "subcommands" in doc:
+ for subname, subdoc in doc["subcommands"].items():
+ docs[subname] = subdoc
+ else:
+ docs[name] = doc
+
+ redis_proc.terminate()
+ redis_proc.wait()
+
+
+if __name__ == '__main__':
+ # Figure out where the sources are
+ srcdir = os.path.abspath(os.path.dirname(os.path.abspath(__file__)) + "/../src")
+ testdir = os.path.abspath(os.path.dirname(os.path.abspath(__file__)) + "/../tests")
+
+ parser = argparse.ArgumentParser()
+ parser.add_argument('--server', type=str, default='%s/redis-server' % srcdir)
+ parser.add_argument('--port', type=int, default=6534)
+ parser.add_argument('--cli', type=str, default='%s/redis-cli' % srcdir)
+ parser.add_argument('--module', type=str, action='append', default=[])
+ parser.add_argument('--verbose', action='store_true')
+ parser.add_argument('--fail-commands-not-all-hit', action='store_true')
+ parser.add_argument('--fail-missing-reply-schemas', action='store_true')
+ args = parser.parse_args()
+
+ docs = dict()
+
+ # Fetch schemas from a Redis instance
+ print('Starting Redis server')
+ redis_args = [args.server, '--port', str(args.port)]
+ for module in args.module:
+ redis_args += ['--loadmodule', 'tests/modules/%s.so' % module]
+
+ fetch_schemas(args.cli, args.port, redis_args, docs)
+
+ missing_schema = [k for k, v in docs.items()
+ if "reply_schema" not in v and k not in IGNORED_COMMANDS]
+ if missing_schema:
+ print("WARNING! The following commands are missing a reply_schema:")
+ for k in sorted(missing_schema):
+ print(f" {k}")
+ if args.fail_missing_reply_schemas:
+ print("ERROR! at least one command does not have a reply_schema")
+ sys.exit(1)
+
+ # Fetch schemas from a sentinel
+ print('Starting Redis sentinel')
+
+ # Sentinel needs a config file to start
+ config_file = "tmpsentinel.conf"
+ open(config_file, 'a').close()
+
+ sentinel_args = [args.server, config_file, '--port', str(args.port), "--sentinel"]
+ fetch_schemas(args.cli, args.port, sentinel_args, docs)
+ os.unlink(config_file)
+
+ start = time.time()
+
+    # Obtain all the files to process
+ paths = []
+ for path in glob.glob('%s/tmp/*/*.reqres' % testdir):
+ paths.append(path)
+
+ for path in glob.glob('%s/cluster/tmp/*/*.reqres' % testdir):
+ paths.append(path)
+
+ for path in glob.glob('%s/sentinel/tmp/*/*.reqres' % testdir):
+ paths.append(path)
+
+ counter = collections.Counter()
+ # Spin several processes to handle the files in parallel
+ with multiprocessing.Pool(multiprocessing.cpu_count()) as pool:
+ func = partial(process_file, docs)
+ # pool.map blocks until all the files have been processed
+ for result in pool.map(func, paths):
+ counter.update(result)
+ command_counter = dict(counter)
+
+ elapsed = time.time() - start
+ print(f"Done. ({timedelta(seconds=elapsed)})")
+ print("Hits per command:")
+ for k, v in sorted(command_counter.items()):
+ print(f" {k}: {v}")
+ # We don't care about SENTINEL commands
+ not_hit = set(filter(lambda x: not x.startswith("sentinel"),
+ set(docs.keys()) - set(command_counter.keys()) - set(IGNORED_COMMANDS)))
+ if not_hit:
+ if args.verbose:
+ print("WARNING! The following commands were not hit at all:")
+ for k in sorted(not_hit):
+ print(f" {k}")
+ if args.fail_commands_not_all_hit:
+ print("ERROR! at least one command was not hit by the tests")
+ sys.exit(1)
+
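The `Response` class above dispatches on the RESP3 type marker (the first byte of each line) and recurses for aggregates. The core of that walk can be sketched in isolation with the stdlib only, handling a few of the markers (the input fragments are made up):

```python
import io

def parse_resp3(f):
    """Parse one RESP3 reply from a file-like object into plain Python values.

    Handles a subset of the markers used by the Response class above:
    ':' integer, ',' double, '+' simple string, '_' null,
    '#' boolean, '*' array, '%' map.
    """
    line = f.readline()[:-2]  # strip the trailing \r\n
    marker, rest = line[0], line[1:]
    if marker == ':':
        return int(rest)
    if marker == ',':
        return float(rest)
    if marker == '+':
        return rest
    if marker == '_':
        return None
    if marker == '#':
        return rest == 't'
    if marker == '*':
        return [parse_resp3(f) for _ in range(int(rest))]
    if marker == '%':
        result = {}
        for _ in range(int(rest)):
            key = parse_resp3(f)       # read the field before the value
            result[key] = parse_resp3(f)
        return result
    raise ValueError(f"unhandled RESP3 marker: {marker!r}")

reply = parse_resp3(io.StringIO("*3\r\n+myzset\r\n+m1\r\n,1.5\r\n"))
print(reply)  # ['myzset', 'm1', 1.5]
```

This is exactly the JSON-shaped value that gets handed to the schema validator; as the commit notes, the list/set and simple/bulk-string distinctions are lost in the conversion.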
diff --git a/utils/req-res-validator/requirements.txt b/utils/req-res-validator/requirements.txt
new file mode 100644
index 000000000..0e3024b86
--- /dev/null
+++ b/utils/req-res-validator/requirements.txt
@@ -0,0 +1,2 @@
+jsonschema==4.17.3
+redis==4.5.1
\ No newline at end of file